Tuesday, April 22, 2014

Where are the STAP cells?

Signals Blog just posted a short commentary I wrote on STAP cells and why I think they're too good to be true:
Esophageal cancer is the end point on a spectrum of diseases. At the beginning, chronic acid reflux exposes the esophagus to the low pH levels of stomach acid, which is a risk factor for Barrett’s Esophagus. Barrett’s, in turn, is a risk factor for developing this type of cancer, which may take several decades to appear.

My original thoughts when STAP cells were first reported were that there would be a clear link between acidic conditions and stemness. ... But there’s no overwhelming evidence to suggest that that happens.
Acid reflux seems like the ideal natural experiment to prove that STAP cells exist, yet the esophagus doesn't turn into a mess of iPS cells every time someone gets heartburn.

Read on at Signals.

Monday, April 14, 2014

Called It: Artificial Blood from iPS Cells

Genetic Engineering & Biotechnology News reports that the Scottish National Blood Transfusion Service is looking into the safety of stem cell derived blood:
A team of scientists led by SNBTS director Marc Turner, M.D., Ph.D. is heading the project, which reflects the combined efforts of the BloodPharma consortium, which has benefited from a Strategic Award, provided by the Wellcome Trust, in the amount of £5 million (approximately $8.4 million). 

The research funded by the award involves multiplying and converting pluripotent stem cells into red blood cells for use in humans, with the aim of making the process scalable for manufacture on a commercial scale.
It's a study to test transfusions using small amounts of blood (5 mL), but it will nevertheless be real stem cell-derived blood.

I called it back in 2010.

Friday, April 11, 2014

Answer the Why of Your Work

All too often, scientists (and other research-minded people) are drawn into a never-ending spiral of questions.  Answers lead to questions, which lead to more answers, until someone inevitably describes the next line of inquiry and caps off their thoughts with "We need to do experiments to answer these questions."

The problem that usually arises is that no one objects.

Why not? It's easier to let someone go ahead and do their work than it is to stop and think about other things that can be done.

But assuming they've already decided that the questions are worthy of work, it should be easy for them to articulate why those questions need to be answered and why now is a good time to answer them.  Is it because there's a key conundrum in your field of specialization?  Will the answer tell us something useful about a disease, about how cells work, or about a physical process?  On an extremely practical level, will your answer contribute to a publishable paper or to getting a grant?

Or, most commonly, will your answer tell you that Gene X, amongst 20,000 genes, goes up or down because you poked a particular cell the right way?  That, too, may be important, but you need to state why.

The reality is that not all questions need to be answered, at least not immediately.  Unanswered questions can simmer for a little while longer.

Monday, April 7, 2014

Big Data Sets, Multiple Hypothesis Testing, and Choices

Jason McDermott, at The Mad Scientist's Confectioners Club writes:
Here’s where the problem of a false dichotomy occurs. Many researchers who analyze large amounts of data believe that utilizing a hypothesis-based approach mitigates the effect of multiple hypothesis testing on their results. That is, they believe that they can focus their investigation of the data to a subset constrained by a model/hypothesis and thus reduce the effect that multiple hypothesis testing has on their analysis. Instead of looking at 10,000 proteins in a study they now look at only the 25 proteins that are thought to be present in a particular pathway of interest (where the pathway here represent the model based on existing knowledge). ... All well and good EXCEPT for the fact that the actual chance of detecting something by random chance HASN’T changed.
The article in its entirety is a good read, especially in describing the use of big data sets as a balance between hypothesis-driven projects and discovery-driven ones.  The former can loosely be described as "research" in its classical sense, while the latter is sometimes derided as "a fishing expedition".  Both approaches can be useful, as long as you're honest with yourself and know what you're dealing with.

But the quote above isn't exactly accurate.  In the hypothetical 10,000-protein experiment, the chance of any one protein appearing significant by chance is the same whether you're looking at a subset of 25 or 250.  Given that constant per-test probability, though, the chance of finding anything significant is much greater across the whole set of 10,000 than across 25.  That 10,000-protein data set is where multiple testing correction is drastically needed.  You still need correction with 25, but simple methods are usually adequate.  Picking the right way to correct your results is tricky: I've seen large experiments designed as fishing expeditions fail to detect known, real effects in the data set with statistical significance, even after multiple testing correction was done.
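The arithmetic is easy to see in a quick simulation (my own sketch, not from McDermott's post): draw uniform p-values for true-null "proteins" and count how many clear p < 0.05 in a 25-test subset versus a 10,000-test screen, with and without a simple Bonferroni correction.

```python
import random

random.seed(0)

def count_hits(n_tests, alpha, correct=False):
    # Simulate n_tests true-null hypotheses: under the null, each
    # p-value is uniform on [0, 1], so without correction we expect
    # about alpha * n_tests false positives.
    threshold = alpha / n_tests if correct else alpha  # Bonferroni
    pvals = [random.random() for _ in range(n_tests)]
    return sum(p < threshold for p in pvals)

for n in (25, 10_000):
    raw = count_hits(n, 0.05)
    bonf = count_hits(n, 0.05, correct=True)
    print(f"{n:>6} tests: {raw} uncorrected 'hits', {bonf} after Bonferroni")
```

On a typical run, the 10,000-test screen throws up roughly 500 false positives at p < 0.05 while the 25-test subset throws up one or two, and Bonferroni correction knocks both counts down to around zero.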

So if you know what you're looking for and have a specific question in mind, you can make multiple hypothesis testing work for you.  You won't have your big data set dilute away all your interesting observations.

Having something very specific to act on also means you're less likely to be fooled by chance and drawn down a path that's merely "significant".  You're free to restrict your observations to a more specific set of data, choosing which measurements to look at based on the question at hand, and not the other way around.

Of course, the decision to ask a specific question should be made before seeing the data in its entirety, not after the fact when something "looks good", but that's another issue altogether.

Wednesday, March 12, 2014

Important Advice from Peter Gluckman, NZ Chief Science Advisor

Peter Gluckman offers ten points of advice at Nature for those interested in advising government on science policy.  Among the ten, this one is probably the most critical:
Distinguish science for policy from policy for science. Science advising is distinct from the role of administering the system of public funding for science. There is potential for perceived conflict of interest and consequent loss of influence if the science adviser has both roles. There is a risk that the adviser comes to be perceived as a lobbyist for resources [Emphasis mine].

Tuesday, February 25, 2014

There's Something to Learn from Hero Worship

Kevin Hughes, writing for The Guardian and arguing against the idea of worshiping big name people in academia, centers his position around the following statements:
The truth of the matter is that heroic academics are just regular academics with two uninspiring credentials: good connections and a healthy dose of luck. A hard work ethic and an agile mind – which is to say a normative talent set at the graduate level – sets almost no one apart.
This is a sweeping assumption that dismisses the value of identifying people who are unusually successful (here in academics, but in principle in any industry).  Separating the unusually successful from the merely excellent is actually hard work, and a 'big name' is supposed to help people sift out those worthy of emulation.

There are heroic individuals who got lucky, were in the right place at the right time, or are simply egocentric, but if you can figure out who is heroic for what reason, you can find the people who can teach you something useful.  In the comments, 'fluffybunnywabbits' describes this kind of viewpoint very well:
I also think it's telling that in the anecdote recounted here the 'worshippers' are at a more advanced stage than the author (PhDs to his masters). I think the further through you get, the more you can track your own improvement, and that makes you realize how valuable experience is. I'm a couple of years into (what I hope will become) an academic career, and revising my doctoral thesis for publication. Re-reading stuff I wrote just a year or two ago makes me cringe, and makes me realize how much I've developed. If I think my ideas now are worth much more than my ideas were two years ago, why wouldn't I respect someone with ten times that extra experience?

Tongue in Cheek Look at the $1000 Genome

This post at AllSeq has a list of seven important things to consider if you want to deliver $1000 genomes using Illumina's HiSeq X platform, most of which can be realistically met (for a megaproject), except for these two points:
  • You don’t need to pay for the building you’re in and you can work in the dark. The budget doesn’t include overhead.
  • You don’t really want to analyze or store the data. The $1000 might get you a basic alignment, but nothing else.
Besides overhead, analysis of data has become, and probably will remain for the foreseeable future, the more expensive part of doing genomics.

Friday, February 21, 2014

OICR: Perhaps One of the Best Canadian Employers Ever

I haven't posted anything in the past several weeks as I've been incredibly busy with a whole series of projects.  That's despite going to AGBT 2014 and seeing the GenapSys presentation (Wow.  Just wow.).

Today, everyone at OICR received the following email, which must be either the most awesome or the most Canadian thing I've ever seen at any place I've worked.


Thursday, February 6, 2014

Three Different Ways of Reading a Scientific Article

Nature News reports:
In 2012, US scientists and social scientists estimated that they read, on average, 22 scholarly articles per month (or 264 per year). That is, statistically, not different from what they reported in an identical survey last conducted in 2005. It is the first time since the reading-habit questionnaire began in 1977 that manuscript consumption has not increased.
And further on:
Aside from the levelling out of article readings, the latest survey of 800 scholars, which is due to appear in the journal Learned Publishing, also finds that the time taken per article seems to have bottomed out at just over half an hour.
Anecdotally, I'd have to say this study hits the trend bang-on.  22 articles per month at half an hour each is actually a pretty low commitment (about 11 hours a month), if you consider how articles are being read by many people.

I doubt many of the people 'reading' over 22 articles actually have the time to fully absorb every little bit of information within; most people don't really have that luxury of time (or perhaps terrific reading comprehension).  Some of the other thoughts mentioned at Nature capture this very well:
When articles were only available in print, it was implicitly assumed by communication analysts that researchers always read manuscripts in their entirety, as if a ‘scholarly article’ was an object to be consumed as a whole. That may never have been true, he says: most of the time, scholars were likely scanning for particular snippets of information.
Below are a few approaches and reasons why someone would want to read a scientific article.  This list isn't exhaustive by any means:
  1. To understand a new idea.  This is the real learning, and learning takes effort.  This is also where you really have to study the article in depth to avoid missing details that don't seem relevant at first glance.  If you're out of your usual area of expertise, you need to understand the context of why the final product is scientifically important, what the assumptions or facts in the report are, and how and why the experiments were done (at a technical level).  You might also have to re-read the article a second time to really 'get it'.  Time allotted: Up to several hours.
  2. To stay up to date in your field.  Here, you're really just skimming the results and references while still reading the paper.  You don't have to study the technical aspects of the report because you're already familiar with them.  Were the experiments actually risky enough to show something daring?  Is the result worth citing in the future, or does the paper point you to other new papers?  Time allotted: About 30 minutes.
  3. To replicate or adapt some published experiment.  You're only interested in the one figure in the paper that shows the data you'd like, or think you'd like, to show in your own work.  The end result of the paper doesn't matter to you, but the methods, software, and reagents used do.  Just look up the information you need and file the paper away for a rainy day.  Time allotted: 10 minutes.
There are many other ways of approaching a paper.  If you have another way, send in your comments or add them below.

Monday, February 3, 2014

Getting Prestigious Awards Gets You Noticed

Especially if you're a scientist. 

Bioscience Technology covered a recent paper in Management Science which points out that life sciences investigators see a 12 percent increase in citation rates, on average, after becoming Howard Hughes Medical Institute investigators.  Being associated with HHMI is considered prestigious by most accounts.

Among the main things the authors observed: big gains in citation rates post-award were seen for people working in new areas of research, for those publishing in lower-impact journals, and for younger researchers.  However, the effect of a new prize isn't very significant for people who were already publishing in big-name journals.

Since citations are usually given freely, the study does seem to support the idea of prizes as a mark of quality: a signal that reading work from that particular person is more likely to be worthwhile.  In general, awards build up a personal brand similar to that of a big-name journal.  Hot journals generally contain quality work, so work from someone who's been recognized with an award should also be interesting (though it's not hard to find lukewarm papers in hot journals, and reading work from HHMI investigators is no guarantee that it will be hot).

See the original paper here.  Unfortunately, it's paywalled unless you're at an institution that gives you access.

Monday, January 27, 2014

Five Short Facts About Fat Cell Biology

Cell recently posted a huge review of fat and adipose biology written by Evan Rosen and Bruce Spiegelman.

There are currently three known types of fat: white, brown, and beige.  If you're interested in where fat tissue comes from and how it behaves, skip ahead to the section titled "The Developmental Origins of Adipose Tissue: A Bloody Mess", a title that means both that the tissue is literally bloody and that it's just bloody confusing to understand how all the genes involved relate to each other.  There you'll find a handful of good factoids:
  • The total number of fat cells humans carry as adults is set by adolescence.
  • Humans turn over about 8% of their fat cells per year.
  • Mice turn over about 0.6% of their fat cells every day.
  • Fat cells can be derived from stem cells that can also create blood cells.
  • Brown fat cells are derived from stem cells that actually reside in muscles, not fatty tissue.  A single gene controls the switch between the two.
Besides giving you the 50,000 ft view of fat biology, another key take-home message in this review is that having basic stakes in the ground to frame research questions is a necessary catalyst before triggering a lot of research down the road.  This may be obvious, but it's worth repeating when good examples arise.

In this review, it's best shown in the first figure: interest in the field grew slowly from a baseline spanning more than two decades, and it wasn't until several fat-related cytokines were identified in the mid-1990s, with milestones like the discovery of leptin and adiponectin, that both the raw and relative numbers of papers (blue and red in the figure, respectively) shot up.

Monday, January 20, 2014

Franklin's List: Helping Scientists Become Politicians

GEN has an excellent and timely article on an emerging political group, Franklin's List, that's helping scientists get involved in politics in the United States by directly helping them become politicians.  Though the group is new, they already acknowledge several obstacles that need attention.  The largest ones are human issues and have little to do with a need for money:
One key roadblock those recruits, and Franklin’s List, will need to surmount is cultural: Until lately, investigators and other STEM professionals have balked at going into politics. [Shane] Trimmer (Franklin's List Executive Director) says that’s starting to change following years of flat or reduced spending on NIH. ... “They’re seeing how the decisions made in Congress by politicians are directly affecting their ability to do research. Now they’re seeing that if they do not get more involved, then these things will just keep on happening,” he added.
It's almost as if that imaginary world of scientists cloistered in their labs ignoring reality is real, and represents a major liability to the research enterprise.  You just can't get tenure and skive off from the rest of the world to do research until your retirement at the age of 79.

The whole idea of scientists forming a lobby group reminds me of a conversation I had with another trainee long ago at a Stem Cell Network conference.  He was a postdoc and I was a PhD student, and he took the position that scientists, being paid to manage government funds, couldn't use those same government funds to lobby the government for more money.

I argued that that wasn't true; once grant money was paid to people (researchers, technicians, students, etc.), they could do whatever they wanted.  That included spending it on professional bodies that, like those for teachers and physicians, spend a lot of time and energy on negotiating better terms for their members.  Why scientists aren't very good at doing this puzzles me to this day.

But Franklin's List seems like it can partly fill this need for a scientific lobby group, at least in the United States.  Interestingly, it looks like it'll focus on gathering scientists at local levels to try and grow out candidates for higher political levels.  Kind of like running farm teams.
“The STEM candidates we’ll be searching for who have been in the lab or in academic circles, their idea was always to be in academia as a biologist or a physicist. They don’t have the network that somebody might have who has been a businessperson or an attorney in the community and might always have, in the back of their mind, thought about politics as an option,” Trimmer said. “It will be much easier for them to work their way up and to build that grassroots support.”
The GEN article is worth the few minutes to read, and it definitely portrays Franklin's List as a movement to watch.

Tuesday, January 14, 2014

Stunning Protein Animations by Nanobotmodels Studio

Yuriy Svidinenko, head of Nanobotmodels, is running a crowdfunding campaign to produce more jaw-dropping animations like this one of nanoparticles delivering drugs to cancer cells:

He's proposing to use the crowdfunding proceeds to produce an animated video about cancer biology and proteins involved in the process, and his IndieGoGo pitch video can be seen below.

Watch for the cool renderings of human IgG at 1:00, what appears to be a protein encapsulated in a lipid nanoparticle at 1:12, and a translucent cell (a neuron?) starting at 1:38.

Apparently rendering costs are a significant fraction (~40%) of making these videos, which he estimates at about $65-85 per second.  The campaign runs until February 25th, 2014.

Best of luck Yuriy!

Friday, January 10, 2014

Science Transforms War, Transforming Science

At Nature, David Kaiser, an MIT Professor and Head of the Program in Science, Technology, and Society, writes about how the Second World War's need for physicists to run huge research programs transformed the model of science:
Until the war, most scientific research in the United States had been supported by private foundations, local industries and undergraduate tuition fees. After the war, scientists experienced a continuity — even an expansion — of the wartime funding model. Almost all support for basic, unclassified research (as well as for mission-oriented defense projects) came from the federal government.
While the main point here is that government became the major funder of research, the point that's more important to remember is Kaiser's description of research, pre-WW2, as being paid for by (and probably driven by) foundations, industry, and tuition. 

But in the context of changing government funding, these are the same sources of money that seem to be becoming more and more important today.  Could it be that the model of running science for the last 60 to 70 years has been 'abnormal'?

Part way through, Kaiser throws in another interesting historical quip:
Veterans of the intense, multidisciplinary wartime projects came to speak of a new type of scientist. They touted the war-forged 'radar philosophy' and the quintessential 'Los Alamos man': a pragmatist who could collaborate with everyone (emphasis is mine) from ballistics experts to metallurgists, and who had a gut feeling for the relevant phenomena without getting lost in philosophical niceties.
Learning to work in collaborations and to do collaborative science is probably one of the more important and useful skills to pick up during a PhD, and the idea of a pragmatic 'serial collaborator' who manages to identify common ground with people in other disciplines seems to have originated in the post-war period as well.

Thursday, January 9, 2014

Frozen Human Brains, Stem Cells, and Ice Cream

Signals just posted a short summary I wrote on this paper, where a team at Columbia University and the New York Stem Cell Foundation managed to create iPS cells from human brain tissue that was frozen for 11 years. 

The paper itself is actually a neat example that human cells are pretty resilient, as the team specifically used tissue samples that weren't protected against freezing with glycerol or DMSO, common additives to prevent ice crystals from forming and damaging the cells.

And since stem cells were created from patients with four different neurological diseases, it also means that other kinds of poorly stored samples, not protected with an antifreeze, might be used to make model cells as well. 

Since Signals has the summary of the paper, I'll digress on the topic of antifreezes here.  You might also know that antifreezes aren't just useful for storing biological samples in freezers; many organisms protect themselves from freezing with antifreeze compounds or antifreeze proteins in their own bodies.  

Several antifreeze protein structures.  From RCSB.

Besides keeping organisms alive, antifreeze proteins also have a variety of useful applications for humans (who obviously don't have any of their own), with the best one I know of being the use of fish antifreeze protein as an additive to ice cream.

I first heard of this in a talk by Peter Davies, a scientist at Queen's University, who described how antifreeze proteins were identified in mealworms. He describes some of that work in this short interview at NPR, where he also adds that the proteins were quickly rebranded by companies using the proteins in food:
DAVIES: Unilever, which is a big company in Europe, who make frozen foods like ice cream for example, they have for some time now been putting the antifreeze proteins into especially low-fat ice cream. Now they don't call them antifreeze proteins because the public would, the consumers would be perhaps nervous about the idea of antifreeze being in food. So they actually call them ice structuring proteins.
By whatever name you call them, the proteins are yet another example of something very useful that came out of purely academic research.