
Microbiome and research reproducibility

This week the journal Nature published a long-overdue perspective in which the authors argued that one of the critical but frequently unaccounted-for reasons for preclinical research irreproducibility is the role of the microbiome in shaping the physiology and phenotype of laboratory animals.

Written by Thaddeus Stappenbeck & Herbert Virgin, the article explains that when designing experiments and analyzing research data we need to consider the effect of the “metagenome,” defined “as the sum of all host genes plus all organism genes of the microbiome.”

The term ‘microbiome’ refers not only to endogenous bacteria, the authors clarified, but “also the virome, archaea, the mycobiome (fungi) and meiofauna (for example, protists and helminths)”.

I am going to highlight some of the important messages from this article.

First, the authors start by acknowledging that “nearly all aspects of human physiology, as well as model organisms such as mice, are influenced by the microbiome and metagenome”. This is especially true for conditions in which the immune system is heavily implicated, but even organs such as the lung, pancreas, and brain are directly influenced by the microbiome.

A gold standard for working with gene-modified organisms, the authors argue, is to use “littermate controls” to control for the effects of the microbiome and metagenome. For example, the authors correctly point out that “the common practice of comparing wild type mice to mice with a mutation when the wild-type and mutant mice are bred separately or one group is purchased from an outside facility” is fundamentally flawed, and no self-respecting research investigator or science journal should publish results obtained from such sloppy experiments. The use of littermate controls is not a new concept, and many publications specifically mention such controls, but it should become a mandatory requirement for gene-modified animal experimentation going forward.

Another important recommendation, the authors say, would be to conduct key experiments “in multiple animal facilities in order to draw firm conclusions about the generality of a role of host and/or microbiome genes in a phenotype.” This is akin to clinical research on humans, which “relies on multi-centre trials as its gold standard for treatment efficacy”.

I personally fully support the recommendations proposed in this article, because they are sound observations derived from the analysis of decades of scientific experimentation.

Of course, implementing these rules would be expensive and time-consuming. But without them, a vast number of experiments are wasted from the start, not to mention the millions of lab animals sacrificed for no good cause. I will go even further and say that it is a moral and ethical obligation for every scientist to do his or her best to minimize undue experimentation on live animals.

Finally, one way to accomplish such a transformation would be for funding agencies and scientific journals to demand higher standards for “practices and controls for mouse experiments”.

David Usharauli

My view on “An incentive-based approach for improving data reproducibility”

This week Science Translational Medicine published a commentary by Michael Rosenblatt, Chief Medical Officer at the big pharma company Merck & Co., addressing the problem of research data reproducibility.

He correctly pointed out that “The problem of [research] data that cannot be replicated has many potential origins: pressures to publish papers or secure grants, criteria for career advancement, deficiencies in training, and nonrigorous reviews and journal practices, with fraud an infrequent cause”.

He proposed that one way to improve the confidence in and reliability of translational research would be “if universities stand behind the research data that lead to collaborative agreements with industry”, and “In the instance of failure [i.e. irreproducible data]” “what if universities offered some form of full or partial money-back guarantee [to industry partner]?”

The main starting point for this proposal is the fact that currently “industry expends and universities collect funding, even when the original data cannot be reproduced.” The “compensation clause” proposed by Michael Rosenblatt is an attempt to place additional “accountability” [call it an “incentive” if you prefer] on universities’ shoulders.

Would such an arrangement work? Unlikely, in my opinion. Why? Accepting such an arrangement would naturally imply that academic scientists are less than “good” at their work. It would suggest that a university does not have confidence in its own scientists. It would ultimately impinge on academic freedom by dividing scientists into reliable and non-reliable ones (in my view, simple double-blind peer review of scientific manuscripts would greatly improve the quality of academic research).

In addition, this proposal somehow seeks to unnecessarily ease the burden on industry leaders, who are ultimately responsible for selecting the best academic projects for commercial purposes.

These are a few examples of why this would be an extremely sensitive policy to implement. I have no illusion that the data reproducibility issue has a simple solution. One aspect not even mentioned in this commentary is whether our heavy reliance on animal [mouse] models is misplaced and is actually one of the main causes of the failure to “translate” into human research.

posted by David Usharauli

Me on Twitter 1 year later

I joined Twitter a little over a year ago. Prior to this, I was only on Blogger, where I used to write my analyses of new research articles, specifically in immunology.

Two reasons why I chose Twitter as my primary social site:

(a) Even though I was blogging about science and writing unique, quality content, very few people visited my site, and even fewer left comments. On average, my site had around 10 pageviews per day. That went up slightly to 30 pageviews when I started to post my analyses regularly (2-3 per week). Still, in my view, the blog hadn’t attracted enough visitors interested in immunology. On Google one finds plenty of advice on how to make a site more visible, and the number-one piece of advice is usually to have current, novel, and original content; yet even though I was writing original content, it wasn’t working as I expected. So I thought that maybe sharing links to my posts via Twitter would increase their visibility (it did, as discussed below).

(b) I have a quick mind, and it is quite easy for me to come up with quick, short titles (at least I believe such things about myself). So I thought Twitter could be a good venue to express my thoughts as “idea bursts”.

So I joined Twitter and began learning how to use it to popularize my immunology blog. However, I immediately encountered a major hurdle: it appeared that URL links from my Blogger posts, when attached to my tweets, were not going “public” on Twitter but were visible only to my “Followers” — and I had basically none at this stage.

I searched Google to see whether anyone had reported a similar situation. Indeed, a few discussion sites mentioned that only Twitter accounts that were popular, had many followers, or were long-standing were permitted “public” URL visibility.

Basically, it was a catch-22 situation for me: on the one hand, to gain popularity and followers, I needed to attach my blog post URLs to my tweets, but such tweets were not visible in the Twitterverse. On the other hand, my Twitter account would not become visible in the Twitterverse unless I had some followers.

So, for some time I had no idea how to solve this dilemma. Then, a few days later, I came across an online discussion which mentioned that not all URL links are “equal” and that some URLs rank better with Twitter’s algorithm. Specifically, the names of sites such as the BBC and the NYT were mentioned. After reading this, I had an “epiphany”: what would happen if I attached to my tweet a prestigious [but random] URL alongside my “non-prestigious” blog post URL? Would such a prestigious URL “carry”/“boost” my unpopular link and make it visible in the Twitterverse?

It did. For a long time (2-3 months), I attached such a so-called “booster” URL to my tweets whenever I needed to share my blog post links. As the “booster” URL I used a home page link, and it worked wonderfully.

This is how I made my Twitter account visible to the Twitterverse at that stage. A few months later, my account “graduated” in the eyes of Twitter’s algorithm, and I was able to share my blog post links on their own, without “booster” URLs. I also found that attaching any photo to a tweet had the same “booster” effect.

After being an active Twitter user for more than a year, my experience has been mostly positive. For me, Twitter is one of the best places to go to find news.

However, there are a few things that still puzzle me about how people use Twitter.

Right now I have around 185 Followers, and I myself follow around 25 people so far. My immunology blog has reached ~100 views per day since I joined Twitter, sometimes more.

On Twitter I prefer to follow people who (a) are active users, (b) write their own blogs, (c) don’t rely too heavily on retweets, and (d) tweet and share links about topics that are not yet worldwide “common knowledge”.

I especially try not to follow people who retweet too much. It suggests that they have nothing unique to say themselves and depend on others to fill the void. I also find it very puzzling when people start to follow me and then, a few days later, unfollow because I did not follow them back. The fact is that I specifically state in my profile that I tweet mostly about immunology. If you are interested in immunology, you can follow me for that purpose, not out of an expectation of follow-for-follow, especially when you don’t tweet about immunology or science-related topics.

I also have a strong opinion about what to tweet, retweet, and even whom to follow. Since Twitter is a public social site, we need to exercise some socially oriented judgment. When I tweet or retweet anything, I do so because I find the information either positive, or negative but valuable enough to share. This also means that I have my own opinion about my tweets and retweets. In other words, you need to be able to “defend” your tweets and retweets. I disapprove when people blindly retweet something and, when asked to explain, have nothing to say and no idea or opinion about why they retweeted it. This is not right, in my view.

Of course, I don’t like people on Twitter who ignore direct questions. This is especially true of people who have lots of followers and wrongly assume that someone with fewer followers does not merit an answer. This is a mistake and shows a lack of culture.

I am also curious how people who follow hundreds or even thousands of accounts manage their Twitter feeds. Right now, I follow 25 Twitter accounts, and my feed has a dozen tweets per hour. Imagine following hundreds or more “active” Twitter accounts and getting hundreds of tweets per hour. It would be very demanding to navigate, sort out, and respond to.

posted by David Usharauli

Rise of autocratic science research

Advances in science come when there is a free exchange of accumulating knowledge. Thus science, by definition, should be a democratic institution by nature.

Just as the organization of family units is a foundation of the modern state, the organization of research laboratory units is a foundation of modern science. So analysis of laboratory units can give us a clue as to how science advances or stumbles.

Contrary to popular belief, the organization of modern research laboratory units is clearly and unequivocally autocratic rather than democratic. Simply put, the absolute majority of research laboratories in both academia and industry do not and cannot contribute to the advancement of science, period. Of course, such labs do publish research articles at the end of the calendar year or submit quarterly reports to justify their existence, but that is all.

So a natural question is: why is this the case? Almost everyone starts their science journey as an idealistic democrat and ends up a fearful autocrat. Why? Because there is no separation between laboratory research and laboratory management. When the same Principal Investigator (PI) is required both to conduct high-quality scientific research and to procure the funds for that research, there is little tolerance for different ideas and opinions. Fear of losing funding prevents PIs from being courageous in science and following their gut instincts. In the end, fear reduces diversity and the chances of great discoveries, and such PIs become career scientists with more knowledge of bureaucracy than of science.

This was one reason why the US Government originally created intramural research labs with secure funding, where scientists were simply asked to focus on science. The same idea was behind HHMI funding. When PIs are released from the fear of losing financial support, and the only requirement is that they produce high-quality research publishable in journals like Nature or Science (or at least in the top five journals of their subject field), PIs become open-minded and more democratic, because high-quality research needs a democratic environment to become a reality.

Indeed, HHMI Investigators do publish in top journals. However, intramural biomedical research funded by the US Government has not fared as well, since the Government did not require that the research be of high quality, rather than just any kind of research conducted at leisure. Without such a balancing approach, the system easily becomes distorted.

posted by David Usharauli