Thursday, January 31, 2013

Amazon as an example of the havoc caused by unbridled competition

Matthew Yglesias at Slate comments on Amazon's drop in Q4 profit:
It's a truly remarkable American success story. But if you own a competing firm, you should be terrified. Competition is always scary, but competition against a juggernaut that seems to have permission from its shareholders to not turn any profits is really frightening.
This quote applies to so many contexts that I find it thoroughly remarkable.  Generalizing: competition is good, but competition against any company (or anyone) that doesn't keep an eye on its bottom line can be terrifying.  In that case, you either drive the competition from the market or you pack your bags and leave - waiting it out just drags you into a war of attrition.

On the details of storing data in DNA

This Economist article explains the DNA data storage scheme from this paper:
Previous approaches have often mapped the binary 1s and 0s used by computers directly onto these bases. For instance, A and C might represent 0, while G and T signify 1. The problem is that sequences of 1s or 0s in the source code can generate repetition of a single base in the DNA (say, TTTT). Such repetitions are more likely to be misread by DNA-sequencing machines, leading to errors when reading the information back.  The team’s solution was to translate the binary computer information into ternary (a system that uses three numerals: 0, 1 and 2) and then encode that information into the DNA. Instead of a direct link between a given number and a particular base, the encoding scheme depends on which base has been used most recently.
Very clever.  Besides the 17 bases of indexing and parity-check information per fragment, the dependence on the previous base is supposed to work like that damned SOLiD colourspace sequencing format.  Anyone who's had the pleasure of working in colourspace knows how frustrating it is to read.

But the difficulty in decoding this format is a quirk that actually works in your favour: if a base is altered or misread when you're recovering the information, you can't decode anything beyond it.  It also means that if you can read a sequence completely, you can be more confident the data are accurate.  When reading data back from DNA, I suspect you'd simply throw away a misread fragment, which doesn't really matter since you've sequenced many redundant copies of it in the first place.
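To make the idea concrete, here's a minimal sketch in Python of how a rotating code like this could work. The trit-to-base table is my own invention for illustration, not the exact encoding from the paper, but it captures the two properties that matter: the base written for each ternary digit depends on the previous base, so runs like TTTT can never occur, and a single misread base scrambles everything decoded after it.

# A minimal sketch of the "no repeated base" idea. The rotation table below
# is my own illustration, not the exact encoding from the Goldman et al. paper.
NEXT_BASE = {
    'A': ['C', 'G', 'T'],
    'C': ['G', 'T', 'A'],
    'G': ['T', 'A', 'C'],
    'T': ['A', 'C', 'G'],
}

def trits_to_dna(trits, start='A'):
    """Encode ternary digits (0, 1, 2) as DNA; each base depends on the last one."""
    prev, out = start, []
    for t in trits:
        base = NEXT_BASE[prev][t]   # always differs from the previous base
        out.append(base)
        prev = base
    return ''.join(out)

def dna_to_trits(dna, start='A'):
    """Decode; a single misread base corrupts everything downstream of it."""
    prev, trits = start, []
    for base in dna:
        trits.append(NEXT_BASE[prev].index(base))
        prev = base
    return trits

print(trits_to_dna([0, 1, 2, 0, 1]))                # CTGTC, never a repeated base
print(dna_to_trits(trits_to_dna([0, 1, 2, 0, 1])))  # [0, 1, 2, 0, 1]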




How to detect ghostwriting in medical journals


Gary Schwitzer, from Health News Review, explains:
In our view there are three simple steps for finding ghosts. Step one is to examine the internal company documents to see who made significant contributions to the paper. Step two is to open up the published paper and see who is on the byline. Step three is to see if the byline has omitted some of those individuals who made significant contributions. 
These steps are necessary, but they're incredibly labour-intensive, if not impossible, for an average person to carry out!
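If you did somehow get your hands on both lists, the comparison itself is trivial. Here's a toy sketch of step three as a simple set difference (all names invented):

# Toy illustration of step three: contributors named in internal documents
# versus authors on the published byline (all names invented).
internal_contributors = {"A. Writer", "B. Statistician", "C. KeyOpinionLeader"}
published_byline = {"C. KeyOpinionLeader", "D. Academic"}

ghosts = internal_contributors - published_byline
print("Possible ghostwriters:", sorted(ghosts))
# Possible ghostwriters: ['A. Writer', 'B. Statistician']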

One of the examples given in a subsequent post walks through InFuse, a bone-growth-promoting product sold by Medtronic, which was the target of a huge lawsuit that, in the process, made a pile of internal company documents available.  Those documents are the critical piece of evidence, but they only come to light in the most grievous ghostwriting cases.



Wednesday, January 30, 2013

Independence of Scotland, Catalonia, and Quebec cuts both ways

Colin Macilwain in Nature:
Many scientists in Scotland are apprehensive at the prospect of constitutional change. Hugh Pennington, a prominent bacteriologist at the University of Aberdeen, has said that Scotland’s researchers should reject independence in the referendum, lest they lose their right to compete for grants from the UK research councils.
In Canada, Quebec has long fought for its independence from the rest of Canada, affectionately referred to as the ROC to avoid implying that two countries already exist.  But I digress.

Macilwain raises a very good point: Would an independent Scotland be locked out of funding from the UK, and would the same happen to an independent Quebec, if it were to separate from the ROC? Would Quebec's funding from federal sources like the CIHR and Genome Canada slow to a trickle?  If this were so, it obviously wouldn't serve what's now a vibrant biotech and pharma community in Montreal, and its leading academic institution, McGill.  Just another thing to consider.

Tuesday, January 29, 2013

Nature Genetics on postdoc career development

Here's a very interesting Nature Genetics editorial on developments in postdoctoral fellow training:
We offer a draft community standard for a postdoctoral career workshop that can be used as a template and resource for career development at any institution. This document addresses the issue of what can be standardized to improve the lot of postdoctoral researchers, given that this is a professional group with diverse objectives that is motivated mainly by scientific curiosity.
The rest of the editorial describes many areas in which postdoctoral training can be improved, but it's nice to see that things are improving, albeit slowly.

Genomics will make cancer a chronic disease

So says Alan Ashworth, in this National Post article:
All patients will soon have their tumour’s DNA, its genetic code, sequenced, enabling doctors to ensure they give exactly the right drugs to keep the disease at bay.  Doctors hope it will be an important step towards transforming cancer into a chronic rather than fatal disease.
Yes. DNA-based prediction of tumour drug responses is a necessary step beyond tissue-centric assessments of cancer pathology.
The Institute of Cancer Research, London wants to build a DNA database to identify lots of genes responsible for cancers. It is launching a three-year, £3-million project called the Tumour Profiling Unit to advance knowledge of the genetic profile of cancer.
Prof Ashworth said: “None of this is science fiction. One would think in five or 10 years this will be absolutely routine practice for every cancer patient.”
Though the 5-10 year horizon for this kind of practice is optimistic for 'every' cancer patient, it's definitely possible for some cancers and some subtypes within individual cancers. 
But one of the huge challenges, which goes unspoken, is actually moving this kind of technology from a stage where it works in well-equipped labs to efficient, cheap tests that physicians use to make treatment decisions.
We're at a stage where a handful of mutation-drug response combinations have been proven, and it's widely considered plausible that the idea will extend to many more.  The next decade will be much more welcoming to new predicted mutation-drug responses coming out of this kind of research (which is being done at all major cancer centres, by the way).
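To be clear about what "routine practice" would boil down to at the point of care, here's a purely hypothetical sketch: a lookup from a tumour's detected mutations to candidate therapies. The gene-drug pairs below are illustrative examples only, not treatment guidance, and real decision support involves far more than a dictionary.

# Purely hypothetical sketch: match a tumour's detected variants against a
# small table of known mutation-drug response pairs (illustrative only).
ACTIONABLE = {
    ("BRAF", "V600E"): ["vemurafenib"],
    ("EGFR", "L858R"): ["gefitinib", "erlotinib"],
    ("ERBB2", "amplification"): ["trastuzumab"],
}

def suggest_therapies(tumour_variants):
    """tumour_variants: iterable of (gene, variant) tuples from sequencing."""
    suggestions = {}
    for gene, variant in tumour_variants:
        drugs = ACTIONABLE.get((gene, variant))
        if drugs:
            suggestions[(gene, variant)] = drugs
    return suggestions

print(suggest_therapies([("BRAF", "V600E"), ("TP53", "R175H")]))
# {('BRAF', 'V600E'): ['vemurafenib']}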

Monday, January 28, 2013

Data scientists are in demand: but sexy?

The Harvard Business Review claims as much.  'Data scientist' seems like a catch-all marketable title that many quantitative researchers can fall under if you're speaking to someone outside of academia.  As a computational biologist, I will vouch for HBR and tell you they definitely hit home with several observations in this article.  Here's a bit of what 'data scientists' can do:
More than anything, what data scientists do is make discoveries while swimming in data. It’s their preferred method of navigating the world around them. At ease in the digital realm, they are able to bring structure to large quantities of formless data and make analysis possible. They identify rich data sources, join them with other, potentially incomplete data sources, and clean the resulting set.
Why they do it:
The data scientists we’ve spoken with say they want to build things, not just give advice to a decision maker. One described being a consultant as “the dead zone—all you get to do is tell someone else what the analyses say they should do.” By creating solutions that work, they can have more impact and leave their marks as pioneers of their profession.
And how they like to do things:
Data scientists don’t do well on a short leash. They should have the freedom to experiment and explore possibilities. That said, they need close relationships with the rest of the business. The most important ties for them to forge are with executives in charge of products and services rather than with people overseeing business functions. As the story of Jonathan Goldman illustrates, their greatest opportunity to add value is not in creating reports or presentations for senior executives but in innovating with customer-facing products and processes.
 Link

Saturday, January 26, 2013

Lessons learned at Roche

Here's an interesting interview in Nature with Luca Santarelli, Senior Vice President and Head of Neuroscience Research and Early Development at Roche.  It's behind a paywall, so you need to have institutional access, but in essence he's optimistic about drug development for neurological diseases:
We think that neuroscience is now where oncology was about 15 years ago, at the brink of a revolution that will allow a deep understanding of brain function and a true comprehension of disease mechanisms.
and
What we have learned from cancer is that you can get more efficacy and less toxicity if you target the disease biology directly. This shift was made possible in cancer in the 1990s thanks to the discovery of the molecular pathways that lead to tumour formation. I think we're doing the same in neuroscience now by focusing on those conditions where we have a better understanding of disease biology.
Further in, he describes progress in the development of Alzheimer's disease drugs, including their Phase II/III SCarlet RoAD programme testing gantenerumab in patients with prodromal Alzheimer's disease (the clinical trial is here).  Santarelli claims Roche can identify patients developing Alzheimer's disease with a test on cerebrospinal fluid.

Friday, January 25, 2013

Carl Zimmer just wrote a wonderful article in Wired describing the use of microbiology, DNA sequencing, bioinformatics, and phylogenetic analysis to explain a carbapenem-resistant Klebsiella pneumoniae outbreak:
On September 19, 2011, Evan Snitkin sat staring at a computer monitor, its screen cluttered with Perl script and row after row of 0s sprinkled with the occasional 1. To Snitkin, a bioinformatician at the National Institutes of Health, it read like a medical thriller. In this raw genetic-sequencing data, he could see the hidden history of a deadly outbreak that was raging just a few hundred yards from where he sat. 
And further on:
As word of the outbreak circulated among the NIH staff, Snitkin and his boss, Julie Segre, approached the Clinical Center with an unusual offer. In their jobs at the NIH’s National Human Genome Research Institute, the two scientists had previously sequenced genomes from a bacterial outbreak long after it had died out. But today, sequencing technology had become so fast and so cheap. Why not analyze the bacteria in the middle of an outbreak? By tracking the bug’s transmission route through the hospital, they might be able to isolate it and stop its lethal spread. 
Zimmer did an excellent job in this article of translating an overwhelming amount of lab technique and jargon into easily understandable explanations.  This story of Bethesda, Maryland's Klebsiella outbreak is a long but excellent read for anyone curious about what "real science" is all about.

I only have one constructive disagreement with Zimmer, and it's on the following.  He claims:
It’s unlikely that most US hospitals will be able to fight their superbug outbreaks the way that Tara Palmore, Julie Segre, and Evan Snitkin fought theirs—at least not anytime soon. The NIH Clinical Center had access to a scientific brain trust and a massive genome sequencing center to go with it. For now, smaller hospitals don’t have a labful of sequencing equipment, let alone the necessary expertise.
He emphasizes a lack of equipment, but machines aren't the limiting factor: it's having the people capable of doing Snitkin's kind of molecular detective work.  I probably know over a hundred people (I'm not kidding) who could be taught how to do this in a few weeks.  Genomics-based public health approaches are easily within reach, almost anywhere in North America, today.
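For a sense of how approachable the core analysis is: the "row after row of 0s sprinkled with the occasional 1" Zimmer describes is essentially a variant presence/absence matrix, one row per bacterial isolate. The sketch below (invented data, and certainly not the NIH team's actual pipeline) shows how simply counting SNP differences between isolates already starts to suggest who infected whom:

# Illustrative only; invented data, not the NIH team's actual pipeline.
# Each isolate is a row of 0/1 calls at a set of variant positions, and the
# pairwise SNP distances hint at which isolates are most closely related.
from itertools import combinations

isolates = {
    "patient_1": [0, 1, 0, 0, 1, 0],
    "patient_2": [0, 1, 0, 1, 1, 0],
    "patient_3": [1, 0, 1, 0, 0, 1],
}

def snp_distance(a, b):
    """Number of variant positions at which two isolates differ."""
    return sum(x != y for x, y in zip(a, b))

for (name_a, calls_a), (name_b, calls_b) in combinations(isolates.items(), 2):
    print(name_a, name_b, snp_distance(calls_a, calls_b))
# patient_1 and patient_2 differ at a single site (a plausible direct
# transmission link), while patient_3 is much more distant.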

DNA samples from future outbreaks in small hospitals can simply be couriered to sequencing centers and data can be put into a scientist's hands in a few days, if need be. But ultimately, scientifically minded people must be readily available to do something with it!

Thursday, January 24, 2013

Publishing and Perishing in a Connected World

David Dobbs, at The Guardian writes this about science communication:
Here's the essential fact: science has no importance or value until it enters the outside world. That's where it takes on meaning and value. And that's where its meaning and value must be explained.

Scientists implicitly recognize this at a limited scale: They want their colleagues to understand their work, so they go to conferences and explain it. But that's not enough. They need to go explain it at the Big Conference — the one outside of academe. They need to offer the larger world not just a paper meaningful only to peers, but a friendly account of the work's relevance and connections to the rest of life. That means getting lucid with letters columns or op-ed pages or science writers or science cafes or schoolchildren or blog readers.
Dobbs wrote this in 2010 and it will probably resonate for years to come.   Scientific papers are supposed to share new data with other researchers and the public at large, but he suggests that they've become a horribly slow and expensive vehicle for doing so.

It's true: it can easily take years to collect enough information for a single paper, and many more months just to put that information together into a report and "make a story".  These data and the story can then be pitched, for lack of a better word, to journals that might publish it.  There's your basic unit of publishing currency.

The problem is that small fragments of information that don't fit in a huge story are a hard sell to journals, because of the relatively low impact of incremental work.

These fragments end up in a huge number of biological papers indexed in PubMed.  Small reports of a few figures here and there make for difficult reading, and their conclusions are usually valid but often predictable.  As a result, if you're not specifically looking for something, you won't read those papers.

In fact, the impact of many of these papers is equivalent to the bits of interim data I see from colleagues on a regular basis.  Usually they are part of a bigger project, but sometimes they fall into the category of "interesting, but not what we're investigating".  Wouldn't it be better to simply post these incremental advances - they truly are small advances - to blogs or a similar electronic resource?

The big problem with blogging scientific results is that they usually aren't credited as publications, and Dobbs correctly points out that researchers need these publications for credit.

But it seems that electronic publishing will soon give credit where credit is due through a growing movement to support article metrics, or altmetrics.

If it became more acceptable to advance one's career by publishing small bits of research electronically, more information that's otherwise overlooked in labs could be used to engage the public in scientific activities.

Wednesday, January 23, 2013

Louis Gerstner: New Chairman of the Board at the Broad

Straight from the Broad's news page:
Renowned business leader Louis V. Gerstner, Jr. will become the next chairman of the Broad Institute’s Board of Directors. Mr. Gerstner served as CEO and chairman of the board of IBM Corporation from 1993 to 2002. He is widely recognized as the chief architect of the company’s transformation, reinventing both its organization and its culture.
Going by IBM's experience, the Broad's choice of Louis Gerstner as the new chairman of the board looks like it's going to have some impact.  At IBM, Gerstner was transformative in major decisions that in hindsight set up IBM to become the robust company it is today: Abandoning the money-losing OS/2 operating system, ditching IBM's position in a commoditized PC hardware market, and shifting IBM's employees away from a lifetime employment mindset to one of lifetime employability.

Gerstner's experience as the former Chairman of The Carlyle Group will probably play out to the Broad's advantage, giving the institute a tailwind in research commercialization.  Carlyle is an asset management company overseeing $157 billion in investments across 11 industries, including healthcare.

So it seems that in retaining Gerstner, the Broad Institute has moved to raise the bar that research administrators will be measured against.

Tuesday, January 22, 2013

Working in stealth mode will kill your projects

Adrian Crook writes this in Techvibes:
Just last week I was on a call with a potential investor when he mentioned a colleague whose startup was in "stealth mode."  Which instantly reminded me of one of my favourite quotes. It came from Howard Aiken, an early pioneer of computing:  "Don't worry about people stealing an idea.  If it's original, you will have to ram it down their throats."
Though the article was directed toward CEOs of startup companies, the advice is relevant for anyone conducting a scientific research project.  'Collaboration' has been a buzzword for as long as I can remember, which in research usually doesn't directly translate into 'teamwork', as green researchers are apt to believe.  Rather, good 'collaborative' efforts mean situations with 'multiple complementary people working towards a common goal'.
To make these take off, scientists should steal, er, adopt strategies that sometimes become mantras among businessfolk.  Get in the mindset that the project in your care is a small startup and don't fall into these stealth-mode traps mentioned on Techvibes:
  1. Nobody wants to steal your idea.
  2. Most people will never even hear of your startup.
  3. You leave stones unturned.
  4. Opportunity often takes a long time to knock.
  5. Get honest feedback when you can use it.
Link to original article

Monday, January 21, 2013

Creativity is literally in your mind

Jeffrey Paul Baumgartner at TAXI writes about the thought processes occurring during creative thinking:
The dorsolateral prefrontal region of the brain is responsible for, among other things, intellectual regulation. It includes the brain’s censorship bureau; the bit of the brain that prevents us from saying or doing inappropriate things. It allows us to control impulses and to choose appropriate courses of behavior according to circumstances. It seems that in highly creative people, this part of the brain becomes much less active than normal during the period of creation. This makes sense. If you can reduce the level of thought regulation when generating creative work (whether ideas, music, artwork), then fewer ideas will be filtered out as inappropriate and more will be developed and shared.
Among other things, the dorsolateral prefrontal region is also involved in deception.  Which, I guess, can fall under the umbrella of "preventing us from saying or doing inappropriate things".

Flickr as your new tour guide

Henry Grabar at The Atlantic Cities, writes this about using Flickr geotags:
What do people photograph when they visit Fort Mason, an army base-turned-cultural center on the San Francisco waterfront?
Instead of heading down with a clipboard to do interviews, UC-Berkeley researcher Alexander Dunkel analyzed data from Flickr. Using geotags, which relay the exact location of the photographer, he was able to place over 125,000 photos on a map of the area, with expanding colored disks indicating the popularity of a certain viewpoints.
Using tags, with which users describe the content of the photos, he presented popular subjects as word clouds, located at the weighted center of frequency. We can see, for example, that visitors are photographing the Golden Gate Bridge mostly from two places: the three Fort Mason piers, and halfway out on the Van Ness pier. For photographs of Alcatraz, the legendary island-prison, one viewpoint, at the end of the jetty, is predominant. 
The maps created by Alexander Dunkel are a thing of beauty, but to view the connections between Flickr's geolocations and photograph subjects (via metatags) you have to view the 'Sightlines' layers, currently only available for the San Francisco Bay Area and Yosemite Valley.
That said, this use of Flickr data is a great example of how straightforward data aggregation and mining reveal how people behave.  Though the article mentions the use of these data for city planning, other sources of data can be used to find out where people have been, like cell phone data (though not without controversy).  The unique value of combining geotags with metatags is in visualizing views that should not be blocked by, say, condo developments, especially in cities like Toronto that seem to have an addiction to new glass buildings.  I'm not sure how much information geotagged photos could provide on a smaller scale, for example to ask whether people tend to take pictures from the east or west side of a baseball stadium.  Regardless of the limitations, it seems that we'll see more of Dunkel's style of tag analysis on Flickr or Google sites within the next year.
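The "weighted center of frequency" idea is simple enough to sketch yourself: for each tag, average the coordinates of the photos carrying it. Here's a rough illustration in Python; the coordinates and tags are invented, and this approximates the idea rather than reproducing Dunkel's actual method.

# Rough sketch: place each tag at the average position of the photos that
# carry it (invented coordinates and tags).
from collections import defaultdict

photos = [
    {"lat": 37.807, "lon": -122.432, "tags": {"goldengatebridge", "fortmason"}},
    {"lat": 37.806, "lon": -122.425, "tags": {"goldengatebridge"}},
    {"lat": 37.827, "lon": -122.423, "tags": {"alcatraz"}},
]

tag_points = defaultdict(list)
for photo in photos:
    for tag in photo["tags"]:
        tag_points[tag].append((photo["lat"], photo["lon"]))

for tag, points in tag_points.items():
    lat = sum(p[0] for p in points) / len(points)
    lon = sum(p[1] for p in points) / len(points)
    print(f"{tag}: centre ({lat:.3f}, {lon:.3f}) from {len(points)} photos")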


Friday, January 18, 2013

What's in Foundation Medicine's "Secret Sauce"?

Forbes reports on Bill Gates' recent investment in Foundation Medicine:
Founded in April 2010 by Third Rock Ventures, Foundation is one of the first companies to aim to take advantage of the explosion in DNA sequencing technology. ... It now costs as little as $1,000 to get a fairly accurate readout of the 6 billion letters of DNA code for any single person.
In cancer, the approach right now is usually not to sequence all a patient’s DNA or that of his tumor, but instead to focus on particular genetic mutations in the tumor that might provide clues as to what medicines to try. Major cancer centers are using this approach with patients for whom it’s not obvious which medicine represents the best bet. Foundation’s approach has been to provide that kind of testing to a larger audience.
Without a doubt, Foundation Medicine is one company that's translating basic research in a big way.  Here's how:

Wednesday, January 16, 2013

Big Data firm nets Big Bucks

Ayasdi, a company that developed a suite of network-based analysis and visualization tools, has netted a big investment, writes The Guardian:
A US big data firm is set to establish algebraic topology as the gold standard of data science with the launch of the world's leading topological data analysis (TDA) platform.
Ayasdi, whose co-founders include renowned mathematics professor Gunnar Carlsson, launched today in Palo Alto, California, having secured $10.25m from investors including Khosla Ventures in the first round of funding.
The article cites health care, and specifically cancer research, as the primary beneficiary of topological data analysis, otherwise commonly known as network analysis in biological circles.  Plenty of resources like Reactome, Cytoscape, and GeneMANIA are well known to researchers interested in biological networks.

Among other things, network-based approaches have also been used to redefine basketball player positions and move the number of player types from five to thirteen.  There's a good video from the Sloan Sports Conference on the approach here.



Tuesday, January 15, 2013

Are you using "Ethical Antibodies" in your research?

Helen Shen at Nature reports on some trouble that Santa Cruz Biotechnology, a leading provider of research antibodies, has run into concerning an "unreported antibody production facility" involving 841 animals:
“The existence of the site was denied even when directly asked” of employees during previous inspections, according to a US Department of Agriculture (USDA) report finalised on 7 December, 2012. ...  The USDA, which typically inspects research and commercial animal facilities once or twice a year, inspected Santa Cruz Biotechnology operations at least nine times in 2012. A tip-off led inspectors on 31 October to the remote barn, where they found the additional goats, some of which were lame, anaemic or had protruding bones.
Several researchers I know were shocked when they heard the news about Santa Cruz, though apparently only 12 animals were truly sick, requiring either veterinary care or euthanasia.  I'm betting that scientists will start considering, if not requiring, proof that the antibodies they order and use in their work (at upwards of several hundred dollars each) are produced humanely.
 


Monday, January 14, 2013

Illumina still holds the sequencing crown, for now

Keith Robinson at Omics! Omics! writes about last year's developments in genomics sequencing technologies, specifically about several advances brought to market by Illumina:
All of these should enable Illumina to maintain its dominance of the sequencing research market, while also helping set up the bigger payoff clinical applications such as those developed by Verinata.  The speed of HiSeq rapid mode should largely hold off Ion Proton (when it finally reaches a human genome per run), and may well make Complete Genomics technology nearly obsolete in the clinical space, given that "actionable medical result" and "needed stat" routinely show up in the same sentence.
The prediction of Complete Genomics' obsolescence, at least for clinical sequencing, is partly due to their business model of sequencing as a service.  In reality, rapid clinical sequencing can (should?) be performed by a number of distributed laboratories near the sites requesting analysis, i.e. local hospitals.  Labs like these can analyze samples in near real time, as compared to Complete Genomics, which still requests that samples be sent to a centralized service.  This is in addition to the challenges created by their odd fragmented read format and short (35 bp) read lengths, which make using any software except that provided by Complete Genomics difficult.  With a constantly sliding stock price, I don't think I'm the only one with questions about Complete Genomics' long-term value.

Sunday, January 13, 2013

Data-driven scientists are lazy

Ouch.  So says petrkiel at R-bloggers, commenting on Hans Rosling's claims in the BBC's Joy of Stats.  Maybe not as concisely as 'lazy', but 'failure of imagination' when one is expected to be imaginative is pretty damn close:

Data-driven scientists (data miners) ... believe that data can tell a story, that observation equals information, that the best way towards scientific progress is to collect data, visualize them and analyze them (data miners are not specific about what analyze means exactly). When you listen to Rosling carefully he sometimes makes data equivalent to statistics: a scientist collects statistics. He also claims that “if we can uncover the patterns in the data then we can understand“. I know this attitude: there are massive initiatives to mobilize data, integrate data, there are methods for data assimilation and data mining, and there is an enormous field of scientific data visualization. Data-driven scientists sometimes call themselves informaticians or data scientists. And they are all excited about big data: the larger is the number of observations (N) the better.
And the punchline:
Emphasisizing data at the expense of hypothesis means that we ignore the actual thinking and we end up with trivial or arbitrary statements, spurious relationships emerging by chance, maybe even with plenty of publications, but with no real understanding. This is the ultimate and unfortunate fate of all data miners. I shall note that the opposite is similarly dangerous: Putting emphasis on hypotheses (the extreme case of hypothesis-driven science) can lead to a lunatic abstractions disconnected from what we observe. Good science keeps in mind both the empirical observations (data) and theory (hypotheses, models).

Statistics is not Mathematics

Rafael Irizarry posted a thoughtful argument on why work focused on Statistics should live within its own department, separate from the Division of Mathematical Sciences at the National Science Foundation, under which Statistics currently falls.
Statistics is analogous to other disciplines that use mathematics as a fundamental language, like Physics, Engineering, and Computer Science. But like those disciplines, Statistics contributes separate and fundamental scientific knowledge. While the field of applied mathematics tries to explain the world with deterministic equations, Statistics takes a dramatically different approach. In highly complex systems, such as the weather, Mathematicians battle LaPlace’s demon and struggle to explain nature using mathematics derived from first principles. Statisticians accept  that deterministic approaches are not always useful and instead develop and rely on random models. These two approaches are both important as demonstrated by the improvements in meteorological predictions  achieved once data-driven statistical models were used to compliment deterministic mathematical models.
Given the huge importance of statistics in genome sciences and other big data sciences, new tools need to be created by statisticians wholly dedicated to solving problems created by new technologies in the biosciences.

Aaron Swartz found dead prior to trial for "theft" of electronic journal articles

Many readers are already aware that Aaron Swartz, one of Reddit's co-founders, committed suicide a few weeks prior to his trial for "theft" of millions of articles from JSTOR via his institutional access while at MIT.

These two posts at Boing Boing describe Aaron Swartz's purported crimes (downloading most if not all of the articles in JSTOR) and post a statement from Swartz's family.  His family blames an overly aggressive legal system, which was seeking penalties that would probably have placed severe restrictions on Aaron Swartz's ability to freely pursue his passions and irrevocably changed his life.  This terrible tragedy could have been avoided, and my deepest condolences go out to all those who personally knew Aaron.


Saturday, January 12, 2013

Protecting Innovations

I recently wrote a piece for Signals Blog on how awareness of intellectual property protection is essential.  It describes some of the problems I've seen with people in science not being trained to be "IP aware".
Many examples indicate that such [people], arguably the next generation of innovators, aren’t being taught to consider the wider implications of what they are studying, outside of academic applications.
An example I cited in the article describes a whole class of PhD students, none of whom was aware of essential terms used to discuss the patentability of inventions.
Even if they were all ex-patent agents, it would still be difficult to do much with a patented invention without knowing how to commercialize it.
Fast forward to 2020 and imagine that you’re an IP-savvy postdoc or researcher and have patented a new invention: companies generally won’t bang on your door to use it. There are huge costs in making your patent work, such as identifying potential buyers or users of the IP and establishing the value and terms for licensing agreements.
This can whittle away at the economics of your invention until no options are good enough to pursue.
Thankfully, there are a couple of options for scientists with patents, which I describe in the article.

Friday, January 11, 2013

More on the flu from New Scientist

New Scientist: Is the US facing Flu-maggedon?

Don't forget to check out Google Flu Trends and this old article on predicting flu outbreaks using Twitter feeds:
"The Centers for Disease Control produces weekly estimates," he added, "but those reports typically lag a week or two behind. This approach produces estimates daily."
[Aron Culotta, assistant professor of computer science] and two student assistants analyzed more than 500 million Twitter messages over the eight-month period of August 2009 to May 2010, collected using Twitter's application programming interface (API). By using a small number of keywords to track rates of influenza-related messages on Twitter, the team was able to forecast future influenza rates.
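The core of this approach is surprisingly plain: count the fraction of tweets per day that match a handful of flu-related keywords, then compare that signal to official surveillance numbers. Here's a minimal sketch; the keywords and tweets are invented, and this is far simpler than Culotta's full model.

# Minimal sketch of keyword-based flu tracking from tweets. The keywords and
# tweets are invented; Culotta's actual model also filtered out tweets that
# mention flu without describing an infection.
import re
from collections import Counter

FLU_KEYWORDS = re.compile(r"\b(flu|influenza|fever|cough)\b", re.IGNORECASE)

tweets = [
    ("2013-01-10", "Home sick with the flu, again"),
    ("2013-01-10", "Great game last night!"),
    ("2013-01-11", "This cough will not quit"),
]

daily_counts = Counter(date for date, text in tweets if FLU_KEYWORDS.search(text))
for date in sorted(daily_counts):
    print(date, daily_counts[date])
# The daily rate of keyword-matching tweets is the signal that gets compared
# against official influenza-like-illness estimates.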

This year's flu is probably peaking

The New York Times reports that the U.S. (and probably Canadian) flu season is past its peak, even though this year's vaccines have only middling effectiveness:
A preliminary study rated this year’s vaccine as 62 percent effective, even though it is a good match for the most worrisome virus circulating. That is considered “moderately” effective — the vaccine typically ranges from 50 percent to 70 percent effective.

A note to whoever's curious: You can get real-time influenza virus reports from the World Health Organization's FluNet, which provides links to a huge number of charts describing flu subtypes detected worldwide.  Have a look, it's an interesting diversion.

Thursday, January 10, 2013

Anticancer snake venom

Snake venom contains tons of useful molecules that can be used to fight cancer, explains The Economist.  The first example covers eristostatin, a molecule from the Asian sand viper that nature has evolved as an anti-coagulant:
Eristostatin’s day job is to stop victims’ blood clotting and thus plugging up damaged blood vessels after a bite. By increasing blood loss, it weakens victims. The molecule does this by glomming onto cellular fragments called platelets that are crucial to the process of clotting, thus disabling them. Dr Hailey hopes to make use of this tendency to encourage the immune system to attack melanoma cells. His idea requires eristostatin to be as attracted to cancer cells as it is to platelets.
Eristostatin goes one further and attracts T-lymphocytes, immune system cells that attack whatever cells they're targeted against.  Hacking, er, harnessing the immune system to fight cancer is a clever idea, and one that's been bounced around before.

Thomas Kuhn's "Scientific Revolutions"

The New Atlantis has a great essay commemorating Thomas Kuhn's book The Structure of Scientific Revolutions.
Kuhn held that the historical process of science is divided into three stages: a “normal” stage, followed by “crisis” and then “revolutionary” stages. The normal stage is characterized by a strong agreement among scientists on what is and is not scientific practice. In this stage, scientists largely agree on what are the questions that need answers. 
This is probably where most research lies.  A lot of work follows the same outline as previous studies.  Much of it is great, important work, but the study designs follow an established template.
A crisis occurs when an existing theory involves so many unsolved puzzles, or “anomalies,” that its explanatory ability becomes questionable. Scientists begin to consider entirely new ways of examining the data, and there is a lack of consensus on which questions are important scientifically.
The catalyst for this crisis can be a new discovery, like RNA interference, for example.  Or the discovery of RNA splicing.
Eventually, a new exemplary solution emerges. This new solution will be “incommensurable” — another key term in Kuhn’s thesis — with the former paradigm, meaning not only that the two paradigms are mutually conflicting, but that they are asking different questions, and to some extent speaking different scientific languages. Such a revolution inaugurates a new period of normal science. Thus normal science can be understood as a period of “puzzle-solving” or “mopping-up” after the discovery or elucidation of a paradigm-shifting theory. ...  But since every paradigm has its flaws, progress in normal science is always toward the point of another crisis.
 Taking the RNAi example further, recent years have seen many RNA interference papers that follow the same general design.  Now that the "RNAi paradigm" has been firmly established, what will the next big crisis point in biosciences be? 

Link

Wednesday, January 9, 2013

Brilliant Wired piece on Stephen Hawking

Helene Mialet writes about Stephen Hawking in Wired.  It was his birthday yesterday.  She writes:
Today, January 8, is Hawking’s birthday, yet on this day it’s worth examining just who and what we are really celebrating: the man, the mind or … the machines?
To understand Hawking, you [have] to understand the people and the machines without whom he would be unable to act and think; you [have] to understand the ways in which these entities augment and amplify Hawking’s competencies. For example: The specialties of his students, which are spread across very different research fields, enable him to integrate diverse information and the different facets of a problem in a way that others cannot. His secretary provides him with a mental assistant many of us would never have, by sorting and arranging his data according to his interests and what he is able to process.

"Alternative" products of research gaining recognition

The US National Science Foundation is changing the way it evaluates scientists, writes Nature.
What a difference a word makes. For all new grant applications from 14 January, the US National Science Foundation (NSF) asks a principal investigator to list his or her research “products” rather than “publications” in the biographical sketch section. This means that, according to the NSF, a scientist's worth is not dependent solely on publications. Data sets, software and other non-traditional research products will count too.
This is great news for anyone concerned about how credit accrues to academics, for good reasons or poor ones.  Don't despair: credit won't be handed out to just anyone for just anything, and the usefulness of academic contributions must still be measurable:
The new NSF policy states: “Acceptable products must be citable and accessible including but not limited to publications, data sets, software, patents, and copyrights.”
The new framework better recognizes people who are good at making "infrastructural" contributions to the research community: those who create high-quality data that others use; wet-lab tools or constructs that are usually shared (and end up with only an acknowledgement); and bioinformatic tools that aren't part of a major piece of software (again, probably only acknowledged).

See the original story at Nature (Subscription required).

Tuesday, January 8, 2013

GEN's 7 biotech trends for 2013

A good summary by Alex Philippidis over at GEN:

FINANCING & FUNDING: Warmer Climate, but VCs Still Wary

A resolution to the political tug-of-war over the federal budget and agency funding can be expected to warm the long chilly climate for financing biopharma and other life sciences companies in the U.S., G. Steven Burrill, CEO of Burrill & Company, predicted late last month. He expects the industry to raise $100 billion in capital in 2013, “with financings heavily weighted to the large companies and to the use of debt.”
Link

Fast-growing transgenic salmon coming soon

AquaBounty is a company looking to have its transgenic salmon approved for human consumption, and the FDA appears to be moving the fish through its approval process, says New Scientist.  Since the fish contain genes from other fish, they're not really as modified as something like, say, Bt cotton:
Since 1995, a company called AquaBounty, based in Maynard, Massachusetts, has been seeking approval from the US government to sell its AquAdvantage fish. These Pacific salmon have been modified with a growth hormone gene from Chinook salmon, which causes them to grow twice as fast as normal fish.
The degree of genetic modification seems to be enough to prevent escape and spread of the fish in the wild:
The company plans to engineer its eggs in highly secure tanks in Canada, then ship them to Panama to mature. As a precaution, the fish are all female and contain three copies of each chromosome rather than two, rendering them sterile.
Link

Monday, January 7, 2013

NIH nips at publisher's business models

In what many consider an inevitable move, the NIH has announced that it will withhold funds from NIH-funded researchers who don't make their papers freely available, reports Nature.  It's a bold move toward Open Access.

Yes, it's a good move for the public, but is it a bad one for publishers that own flagship journals like Nature (Holtzbrinck), Science (the AAAS, a not-for-profit), and Cell (Reed Elsevier, a public company)?  I don't think so.  The NIH states that articles must be made freely available no later than twelve months after publication, which may seem quick but is an eternity as far as new science is concerned.  There's still huge value for a university or research institute in immediate access to new papers, so high-impact publishers will still be able to place articles behind paywalls, charge their fees, and allow researchers to comply with the NIH within the twelve-month window.  Publishers can still adapt their policies to the NIH's changes without alienating academic authors.