Category Archives: my opinion

Microbiome and research reproducibility

This week the journal Nature published a long-overdue perspective in which the authors argued that one of the critical but frequently unaccounted-for reasons for pre-clinical research irreproducibility is the role of the microbiome in shaping the physiology and phenotype of laboratory animals.

Written by Thaddeus Stappenbeck and Herbert Virgin, the article explains that when designing experiments and analyzing research data we need to consider the effect of the “metagenome,” defined “as the sum of all host genes plus all organism genes of the microbiome.”

The term ‘microbiome’ refers not only to endogenous bacteria, the authors clarify, but “also the virome, archaea, the mycobiome (fungi) and meiofauna (for example, protists and helminths)”.

I am going to highlight some of the important messages from this article.

First, the authors acknowledge that “nearly all aspects of human physiology, as well as model organisms such as mice, are influenced by the microbiome and metagenome”. This is especially true for conditions in which the immune system is heavily implicated, but even organs such as the lung, pancreas, and brain are directly influenced by the microbiome.

The gold standard for working with gene-modified organisms, the authors argue, is to use “littermate controls” to account for the effects of the microbiome and metagenome. For example, they correctly point out that “the common practice of comparing wild type mice to mice with a mutation when the wild-type and mutant mice are bred separately or one group is purchased from an outside facility” is fundamentally flawed; no self-respecting investigator or journal should publish results obtained from such sloppy experiments. The use of littermate controls is not a new concept, and many publications specifically mention such controls, but it should become a mandatory requirement for gene-modified animal experimentation going forward.

Another important recommendation, the authors say, is to conduct key experiments “in multiple animal facilities in order to draw firm conclusions about the generality of a role of host and/or microbiome genes in a phenotype.” This is akin to clinical research on humans, which “relies on multi-centre trials as its gold standard for treatment efficacy”.

I fully support the recommendations proposed in this article because they are sound observations derived from the analysis of decades of scientific experimentation.

Of course, implementing these rules would be expensive and time-consuming. But without them, a vast number of experiments are wasted from the start, not to mention the millions of lab animals sacrificed for no good cause. I will go even further and say that it is the moral and ethical obligation of every scientist to do his or her best to minimize undue experimentation on live animals.

Finally, one way to accomplish such a transformation is for funding agencies and scientific journals to demand higher standards for “practices and controls for mouse experiments”.

David Usharauli


My view on “An incentive-based approach for improving data reproducibility”

This week Science Translational Medicine published a commentary by Michael Rosenblatt, Chief Medical Officer at the pharmaceutical company Merck & Co., addressing the problem of research data reproducibility.

He correctly pointed out that “The problem of [research] data that cannot be replicated has many potential origins: pressures to publish papers or secure grants, criteria for career advancement, deficiencies in training, and nonrigorous reviews and journal practices, with fraud an infrequent cause”.

He proposed that one way to improve the confidence in and reliability of translational research would be “if universities stand behind the research data that lead to collaborative agreements with industry”, and, “in the instance of failure [i.e., irreproducible data]”, “what if universities offered some form of full or partial money-back guarantee [to the industry partner]?”

The main starting point for this proposal is the fact that currently “industry expends and universities collect funding, even when the original data cannot be reproduced.” The “compensation clause” proposed by Michael Rosenblatt is an attempt to place additional “accountability” [call it an “incentive” if you prefer] on universities’ shoulders.

Would such an arrangement work? Unlikely, in my opinion. Why? Accepting such an arrangement would naturally imply that academic scientists are less than “good” at their work. It would suggest that a university does not have confidence in its own scientists. It would ultimately impinge on academic freedom by dividing scientists into reliable and non-reliable ones (in my view, simple double-blind peer review of scientific manuscripts would greatly improve the quality of academic research).

In addition, this proposal would unnecessarily ease the burden on industry leaders, who are ultimately responsible for selecting the best academic projects for commercial development.

These are a few examples of why such a policy would be an extremely sensitive one to implement. I have no illusion that the data reproducibility issue has a simple solution. One aspect not even mentioned in this commentary is whether our heavy reliance on animal [mouse] models is misplaced and is actually one of the main causes of the failure to “translate” findings into human research.

posted by David Usharauli

Me on Twitter 1 year later

I joined Twitter a little over a year ago. Before that I was only on Blogger, where I wrote analyses of new research articles, specifically in immunology.

There were two reasons why I chose Twitter as my primary social site:

(a) Even though I was blogging about science and writing unique, quality content, very few people visited my site and even fewer left comments. On average, my site had around 10 pageviews per day. That went up slightly, to about 30 pageviews, when I started to post my analyses regularly (2-3 per week). Still, in my view it hadn’t attracted enough visitors interested in immunology. On Google one finds plenty of advice on how to make a site more visible, and the number one piece of advice is usually to have current, novel, and original content; yet even though I was writing original content, it wasn’t working as I expected. So I thought that sharing links to my posts via Twitter might increase their visibility (it did, as discussed below).

(b) I have a quick mind and it is quite easy for me to come up with quick, short titles (at least I believe that about myself). So I thought Twitter could be a good venue to express my thoughts as “idea bursts”.

So I joined Twitter and began learning how to use it to popularize my immunology blog. However, I immediately encountered a major hurdle: URL links to my Blogger posts were not going “public” on Twitter when attached to my tweets, but were visible only to my followers, of whom I had basically none at that stage.

I searched Google to see if anyone had reported a similar situation. Indeed, a few discussion sites mentioned that only Twitter accounts that were popular, had many followers, or were long-standing were allowed “public” URL visibility.

Basically it was a catch-22 for me: on one hand, to gain popularity and followers I needed to attach my blog post URLs to my tweets, but such tweets were not visible to the Twitterverse; on the other hand, my Twitter account would not become visible to the Twitterverse unless I had some followers.

For some time I had no idea how to solve this dilemma. Then, a few days later, I came across an online discussion which mentioned that not all URLs are “equal”: some URLs carry more weight with Twitter’s algorithm. Sites such as the BBC and the NYT were mentioned specifically. After reading this I had an “epiphany”: what would happen if I attached a prestigious [but random] URL to my tweet alongside my “non-prestigious” blog post URL? Would the prestigious URL “carry” or “boost” my unpopular link and make it visible to the Twitterverse?

It did. For quite a while (2-3 months) I attached this so-called “booster” URL to my tweets whenever I needed to share my blog post links. As the “booster” URL I used the Nature.com home page link, and it worked wonderfully.

This is how I made my Twitter account visible to the Twitterverse at that stage. A few months later, my account “graduated” in the eyes of Twitter’s algorithm and I was able to share my blog post links on their own, without “booster” URLs. I also found that attaching any photo to a tweet had the same “booster” effect.

After being an active Twitter user for more than a year, my experience has been mostly positive. For me, Twitter is one of the best places to go to find news.

However, there are a few things that still puzzle me about how people use Twitter.

Right now I have around 185 followers and follow around 25 people. Since I joined Twitter, my immunology blog has reached ~100 views per day, sometimes more.

On Twitter I prefer to follow people who (a) are active users, (b) write their own blog, (c) don’t rely too heavily on retweets, and (d) tweet and share links about topics that are not yet worldwide “common knowledge”.

I especially try not to follow people who retweet too much. It suggests that they have nothing unique to say themselves and depend on others to fill the void. I also find it very puzzling when people start following me and then unfollow a few days later because I did not follow them back. I specifically state in my profile that I tweet mostly about immunology. If you are interested in immunology, you can follow me for that purpose, not because you expect a follow-for-follow, especially when you don’t tweet about immunology or science-related topics.

I also have a strong opinion about what to tweet, what to retweet, and even whom to follow. Since Twitter is a public social site, we need to exercise some socially oriented judgment. When I tweet or retweet anything, I do so because I find the information, whether positive or negative, valuable enough to share. But this also means that I have my own opinion about my tweets and retweets. In other words, you need to be able to “defend” your tweets and retweets. I disapprove of people who blindly retweet something and, when asked to explain, have nothing to say and no idea or opinion about why they retweeted it. That is not right, in my view.

Of course, I don’t like it when people on Twitter ignore direct questions. This is especially true of people who have a lot of followers and wrongly assume that someone with fewer followers does not merit an answer. That is a mistake and shows a lack of manners.

I am also curious how people who follow hundreds or even thousands of accounts manage their Twitter feeds. Right now I follow 25 accounts and my feed has a dozen tweets per hour. Imagine following hundreds of “active” accounts and getting hundreds of tweets per hour. It would be very demanding to navigate, sort through, and respond to.

posted by David Usharauli

My path to vegetarianism

It has now been almost 8 months since I became a vegetarian 🍅 🍆🍴. This transformation did not happen overnight, or even in a year. In fact, it took me 11 years to turn the idea of being a vegetarian into practice.

From my point of view, there are a few criteria that should be met to allow one’s transition from eating meat 🍗 to a mostly plant-based diet [plus some animal-derived foods, such as unfertilized eggs].

The 1st criterion is a basic interest in and feeling for animals. A person must enjoy the presence of animals and should be willing to take care of them (if need be). I don’t mean just dogs or cats, but any small 🐹, farm 🐔 🐖 🐑 🐮, or wild animal 🐗 [including birds and fish]. A person must accept that any living animal has the right to exist free of pain. We should accept that we are not a “superior” species, but simply a species with a specific set of talents, as other species are with their own different sets of talents.

The 2nd criterion is an association or relationship with individuals who are vegetarians. I am fortunate to be married to a woman who is a lifelong vegetarian 👫. Such an association clearly helps in various ways. Constant information sharing builds familiarity with the concept, helping to overcome preconceived prejudices and ignorance, if any.

The 3rd criterion is a desire to learn to cook vegetarian meals 🍝 and to feel good about it 👍. Only by experiencing food preparation and cooking can one truly embrace vegetarianism at the dinner table. Transitioning to vegetarianism is a commitment, and to carry out such a commitment one needs a steady source of incentive and reinforcement. There is no better incentive than eating food one has spent energy preparing. I would advise against buying prepared vegetarian food in the early stage of the transition, at least until one’s vegetarian instincts become “fixed”. Also, since vegetarian cooking is more nuanced in its flavors [compared with meat-based food], it is more difficult to achieve consistent, reproducible results. By making your own food, you can avoid the initial disappointment with vegetarian food bought outside [which could drive “converts” away].

Of these 3 criteria, #3 was the hardest for me to meet. I never liked “kitchen work”; I felt it was too burdensome. We used to constantly buy food or eat out and then constantly complain about its quality. But now, after starting vegetarian home cooking, I feel quite comfortable with the time I spend in the kitchen and with the quality of my food. I have no craving or yearning for meat, as many meat-eaters assume vegetarian “converts” must. Finally, it makes me feel better that I can contribute, even if just a little, to animal welfare and to the fight against global warming ✌.

posted by David Usharauli

How I stumbled upon selfies back in 2004

Sometimes social culture changes and something becomes acceptable so rapidly that it’s just remarkable. I experienced at least one such event in 2004.

In early 2004, I arrived in the US to begin postdoctoral studies. At that time, I still believed that hard work was necessary and sufficient for success, so I was basically busy working in the lab. I never really developed good social skills, so the standard postdoctoral work life in the US was “ideal” for me. In our lab, postdocs worked alone on individual projects and we did not have much of a “party” culture.

But there was one problem. I wanted to send photos to my family back in Georgia showing my “life” at the workplace and at home. But how could I do it? I had a cell phone with a silly, next-to-no-resolution camera and an ordinary 1.2-megapixel photo camera. But who was going to take my photos? I felt too “embarrassed” to take photos of myself, since it would have implied that I was alone and did not feel comfortable asking anyone around me to take them for me. Basically, the “selfie” was an anti-social concept for me then (and probably for the majority of people in Georgia as well).

But I needed photos. So I set my camera to its 10-second timer and started taking my selfies and sending them to my family via e-mail, hoping that no one on the other side would ask why I was alone in all the photos. Thankfully, no one did. I guess my family was “kind” enough not to make me feel “low”.

However, within a few years the selfie became so mainstream that no one would now question a photo in which someone is pictured alone. It has become the norm. Whether that is ultimately good or not, I am not sure. It does give a person some “freedom” on certain occasions, and sometimes I feel “proud” that I inadvertently became a “selfie pioneer” back in 2004 🙂

posted by David Usharauli

Two simple reasons why experiments on lab mice do not adequately recreate physiologically relevant human disease models

These days we frequently come across discussions about the relevance, or lack thereof, of laboratory mouse models for studying human diseases. Mouse models have their supporters and their opponents.

Usually, those in favor say that it is the cheapest model at hand. However, this is not exactly true. Genetically modified mice, which represent the main attraction of mouse models, can easily cost $200 per male-female pair, and that is for the simplest genetic modifications. Fancier mutants can cost $500 or more apiece.

In the US, the FDA’s position is important to consider. The FDA’s Investigational New Drug (IND) application requires the inclusion of pre-clinical studies (for proof of concept and toxicology). However, except for laboratory mice and rats (and birds), all other warm-blooded animals are USDA-covered species and hence fall under more complex guidelines; USDA-covered species include non-human primates, dogs, cats, guinea pigs, hamsters, and rabbits. So USDA and FDA regulations naturally make the mouse the main experimental animal for drug testing and academic laboratory manipulation.

Now, those who oppose mouse experimentation say that mouse models are wasteful and inhumane because they do not adequately represent human diseases. There are sufficient data to support that conclusion. Without going into a detailed analysis, I want to suggest two simple reasons that many people haven’t even heard of.

1. Laboratory mice in universities and research organizations are kept in plastic cages at room temperature, around 22-25℃ (72-77℉). However, this is a sub-optimal temperature for mice, whose preferred (thermoneutral) temperature is closer to 30℃. Basically, lab mice constantly feel a little “cold”, and we know that this can affect their physiology: for example, the development of insulin sensitivity, obesity, type 2 immunity, and the overall inflammatory response.

2. Laboratory mice are fed only specially formulated dry pellets and water, nothing else. In their natural environment, however, mice eat almost everything, including vegetables and fruits. This unnatural diet affects the lab mouse’s GI tract physiology and, of course, its microbiota. Research articles are full of protocols in which mice are treated with antibiotics or combinations of different antibiotics. It appears that such antibiotic treatment does not affect lab mouse behavior, such as appetite, and does not induce weight loss or the other GI tract disturbances one would expect from antibiotic treatment in ordinary mammals, including humans. Would you still think that testing the safety profile of a new antibiotic in mice could predict its side effects in humans?

The real issue is not whether the mouse model is “representative”; the question is: if not the mouse, what else should we use for pre-clinical studies?

posted by David Usharauli

Precision medicine will be expensive and prejudicial, unless…

These days we frequently hear that precision medicine will change how patients are treated. It is true that precision medicine, also called personalized medicine, will change how diseases are diagnosed and treatments prescribed. However, such reports rarely mention that this near-future medicine will be so prohibitively expensive that only a limited number of individuals will have sufficient financial resources to benefit from it.

Let me explain my point. In the 20th century, pharmacological drugs consisted mainly of chemically synthesized small molecules. These drugs typically target small, conserved regions of enzymes or receptors to mediate their pharmacological effects (aspirin, β-blockers, etc.). Luckily, a large proportion of the human population was responsive to such small-molecule drugs, though the existence of protein variants (alleles) among ethnic groups, and sometimes simply differences between male and female metabolism, affected their relative effectiveness.

One area of medicine that really suffered from this generic approach to drug activity was cancer therapy. Since tumors and healthy cells frequently express the same molecules targeted by small chemical drugs, cancer therapy has been associated with a high level of side effects.

One way to overcome drug side effects is to make drugs more selective, more precise. Small-molecule chemistry is of limited use here, since the smaller the region a drug targets, the less selective its effect. Selectivity can, however, be achieved by increasing the size of the region the drug targets. For example, monoclonal antibodies can interact with a larger area of the target protein, so their use could allow more precise discrimination between tumor cells and healthy cells expressing versions of the same target protein that differ in just one (or a few) amino acids.

Since every human potentially expresses a unique combination of alleles, the application of precision medicine would require pharmacogenomic data for each patient. No matter how easy host genotyping becomes in the future, it will still be more expensive than the current generic approach. In addition, genomic data will reveal allele variants selectively expressed by certain ethnic groups. If those ethnic groups lack political or socio-economic resources, for whatever reason, big pharma will not spend money to develop allele-selective medicines for them; and even if the government provided financial incentives along the lines of orphan drug legislation, such drugs would still be very expensive (as orphan drugs are today, costing on average between $10,000 and $350,000 per year).

Precision medicine will bring better disease management, but the current healthcare system will not be able to afford it. However, I think that if the following conditions are met, precision medicine could work as intended, at least in the US.

1. Introduction of an upper limit on the cost of each class of drug (based on overall performance and development cost).

2. Introduction of an upper limit on profits for each class of drug, taking effect after the development cost is fully recovered. For example, a 100% profit rule would mean that if drug development cost $1 billion, the company would be allowed to make $2 billion on the free market; afterwards, the drug’s price would decrease step-wise (by 10% each subsequent year) until it reached the manufacturing cost (a small illustrative sketch of this rule follows the list below).

3. Increase in Medicare/Medicaid contribution and coverage.
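
To make the arithmetic of point 2 concrete, here is a minimal Python sketch. The dollar figures, the function name, and the reading of “10% each subsequent year” (taken here as 10% of the previous year’s price) are my own illustrative assumptions, not a worked-out policy.

```python
# Minimal sketch of the hypothetical "100% profit rule" described above.
# All figures and the 10%-per-year interpretation are illustrative assumptions.

def post_cap_price_schedule(list_price, manufacturing_cost, yearly_cut=0.10):
    """Yield the yearly price after the profit cap is reached: the price
    drops by `yearly_cut` each year until it hits the manufacturing cost."""
    price = list_price
    while price > manufacturing_cost:
        yield round(price, 2)
        price = max(price * (1 - yearly_cut), manufacturing_cost)
    yield round(manufacturing_cost, 2)

development_cost = 1_000_000_000      # e.g. $1B spent on development
revenue_cap = 2 * development_cost    # 100% profit rule: sales capped at $2B

# Once cumulative sales reach `revenue_cap`, the per-unit price starts declining:
for year, price in enumerate(post_cap_price_schedule(100.0, 20.0), start=1):
    print(f"Year {year}: ${price}")
```

The printed prices are arbitrary per-unit figures; the point is only that the decline is bounded below by the manufacturing cost.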

posted by David Usharauli

B cells: from black and white to color set

B cells are part of the adaptive immune system. They secrete antibodies (IgM, IgG, IgA, IgE) and can present processed antigens to CD4 T cells. However, beyond these functions, not much is known about B cells.

Studies of CD4 helper T cells, on the other hand, have uncovered and characterized multiple individual subsets, ranging from Th1 to Th17.

Still, occasionally one comes across a new paper in a top journal (weird!) describing some new role of B cells that does not fit the current paradigm. One such example is GM-CSF-producing IRA B cells. In addition, there are plenty of reports of IL-10-producing B cells involved in T cell suppression. Usually, immunologists say “that’s weird” and then forget about it.

So why is research on B cells lagging so far behind T cell studies?

One explanation has to do with the fact that, unlike T cells, B cells do not survive well in in vitro culture. One can re-stimulate T cells in vitro over and over again and they will continue to expand (this is how the original Th1 and Th2 clones were derived). However, the culture conditions developed for B cells so far drive them to develop into short-lived antibody-secreting cells (plasmablasts or plasma cells), which eventually die, usually within 2 weeks of the start of the culture. There is one report describing culture conditions in which the authors were able to maintain plasma cells for 60 days in vitro, but plasma cells are not B cells.

The real issue with B cell studies is that we actually do not know how B cells survive in vivo. We don’t know which combinations of stromal cells or cytokines provide tonic survival signals to B cells. We don’t even know the role of surface IgM receptors in B cell survival in vivo.

Basically, we have an incomplete view of B cell biology. I think the time has come to go back and re-evaluate our models of B cells. Without conceptual progress in B cell biology, we will just keep scratching our heads every time we hear about new research describing a novel, non-conventional B cell function.

posted by David Usharauli

How T cells interpret signaling delivered through chimeric antigen receptor (CAR)

Recently, chimeric antigen receptor (CAR)-transduced T cell-based immunotherapy made headlines around the globe. Human trials, conducted mostly against B cell-derived malignancies, showed extraordinary medical benefit in a large percentage of treated patients, extending their disease-free periods by several years.

Below is a diagram of a CAR molecule borrowed from Juno Therapeutics’ home page. Juno is one of several biotech/pharma companies leading this field. As shown there, the current CAR construct contains several signaling modules (CD3zeta, CD28, etc.).

 

[Figure: diagram of the CAR construct, from Juno Therapeutics’ home page]

Now, people who are familiar with basic immunology will notice that the CAR construct artificially combines two principally distinct signaling pathways, commonly known as signal 1 (CD3zeta) and signal 2 (CD28, etc.). Such a combination of the two pathways is believed to increase CAR T cell vitality and effector differentiation.

I would like to remind readers that the two-signal model of T cell activation was introduced to account for tolerance toward self-antigens. It is thought to operate as a kind of fail-safe mechanism for discriminating between self and nonself antigens. Since the vast majority of nonself antigens carry other attributes of “foreignness”, the 2nd signal would be selectively available to nonself-specific T cells, but rarely to self-specific T cells.

The separation of signal 1 and signal 2 in T cells must have some biological significance for how T cells interpret incoming signals. This is especially critical for the generation of long-term memory, since, unlike cells of a short-term immune response, memory cells with self-renewal potential could clearly cause long-term damage to the host’s tissue if inappropriately activated.

In my opinion, this is why innate cells, including NK cells, for the most part lack truly long-term memory potential. Innate cells are typically signal-1-only cells, meaning they can become fully activated after receiving a specific signal 1 alone. The absence of memory formation among innate cells, however, ensures that they will not go on damaging the host’s tissue once they have been activated.

So I see similarities between how innate cells interpret activation signals and how T cells interpret CAR signaling. The absence of spatio-temporal separation between the two signaling pathways in CAR-T cells may affect their memory differentiation potential, essentially turning CAR-T cells into signal-1-only cells. If this model is correct, the persistence and self-renewal potential of CAR-T cells in hosts will be drastically reduced compared with normal memory T cells. In practice, this could translate into diminished long-term medical benefit.

posted by David Usharauli

Paradox of post-publication citation

Publication records are one of the objective criteria one can use to assess a scientist’s value. Of course, every scientist dreams of and desires publishing in top journals such as Nature or Science, with impact factors of 25 and higher.

However, the vast majority of research articles end up in society-sponsored journals. For example, the Journal of Immunology, with an impact factor between 3 and 5, is considered a “staple” journal in immunology. A scientist can publish his or her research article in the Journal of Immunology and still feel “proud” of it.

Sure, there is an enormous difference between Nature and Science on the one hand and the Journal of Immunology on the other. First of all, it has to do with branding. Both Nature and Science have huge reputations. People naturally assume that articles published there are of higher quality and more reliable, and on average that is absolutely true.

However, there is a paradox with regard to post-publication citations and referencing. I have frequently witnessed that when scientists present their work at research seminars or scientific conferences, they consistently cite or refer to earlier studies as if they were all equal, i.e., regardless of whether the cited studies were originally published in Nature, Science, or the Journal of Immunology (or PNAS).

In other words, their confidence in the reliability and quality of research results from Nature and from the Journal of Immunology is equal.

Logically, if you cite articles from the Journal of Immunology and from Nature as if they were equal, then you accept that these journals publish equally valuable research results.

If so, then why are scientists still so eager to publish in Nature or Science? I don’t know; that’s why it is a paradox.

posted by David Usharauli