Thursday, September 18, 2014

Academic Conference Networking Tips

There's a nice article on networking at scientific conferences over at Cheeky Scientist. The best point of advice, which I unfortunately learned the hard way:
1. Skip the scientific talks.
You love science. I get it. Science is why we all went to graduate school. But you shouldn’t go to a conference to learn the science. Not if you want to get an industry job. ... Everything in the talk is either published or in an abstract in the conference booklet. Plus, you can always seek out the conference speakers (or their posters) later.
Point taken.  If you're watching presentations, you're not meeting anyone new.  Conferences are not about taking supercharged doses of PowerPoint slides over three days; conferences are about conferring with people.

As I found out through experience, my best contacts were always made when I walked out of talks that didn't interest me or were just plain boring and tried to find people I wanted to talk to.  If you happen to run into someone walking out of the same talk, you at least have something in common to start a conversation with.

Skipping conference talks brings me to a digression about how departments dole out travel funds for students.  Some places require students to return to headquarters and give a 'conference presentation', usually intended to inform people back home of interesting news from the conference.

If this applies to you, try to balance your news-gathering efforts with networking efforts.  You're not obligated to attend every single talk, and if you come back and bring people up to speed with 'what was hot' at the conference, you've probably done your job.

Back After A Long Hiatus

It's been a long while since I last posted, and there's a good reason why.  I'm currently putting together a post to describe the additional project that I've taken up, which required a lot of time away from blogging to tie up loose ends and prepare, not to mention take a decent vacation beforehand.

So in short, I expect to be contributing posts more regularly going forward.  The easiest ways to follow new content are still @CheckmateSci and RSS.

Cheers,
Paul


Tuesday, July 8, 2014

Five Tips on Doing Business in Silicon Valley. Actually, Five Tips on Doing Business Anywhere.

The folks at MaRS just released this little video highlighting five tips for doing business in Silicon Valley.  The advice is applicable anywhere.
1. To succeed, first understand the area’s history.
Whenever you're working with people outside of your area, be it geographical or outside of your area of expertise, you need to be able to relate to where they came from.  How do their values differ from yours?  What is important to them?  Is there something about that location or field that attracts a certain type of person, or encourages a particular kind of behaviour (think entrepreneurship, research excellence, etc.)?
2. Spend your time there legally and intelligently.
Plan ahead to get the biggest return on your time investment.
3. Be open to collaboration.
Share ideas with your potential partners.  Help them develop their ideas and they will help do the same for yours. 

I've written before about how operating in stealth mode stifles research projects and exposes scientists to several traps.  Collaboration takes effort, but can pay off in spades when you find good partners, especially in high risk, pre-commercial (i.e. basic) research.
4. Steer clear of the myths about Silicon Valley.
Not sure I fully agree with this one.

Myths exist about every place and every institution.  However, there are bad myths and good myths. 

Bad ones will usually serve to drive you to inaction.  They're the ones about cutthroat competition, backstabbing, politics, and favoritism.

Good myths, on the contrary, will encourage you to make connections and build on your ideas.  The good myths may turn out to be false, but at least they've led you to break that inertia of doing nothing.
5. Recognize that San Francisco is not Silicon Valley.
Aron Solomon's point is that they may be only a 45-minute drive apart, but they are not the same kind of place.  The same is true of the many organizations that may exist in a technology cluster, even if they're within a 45-minute walk of each other. 

Universities are different from research institutes, and independent research institutes are different from those associated with hospitals. 

A Big Cash Prize is a Great Motivator

A business plan competition for 'young' (under 36) scientists by Oxford Biotech Roundtable and GSK figures out how to motivate scientists to come out to bat:
Our fundamental challenge was to generate enthusiasm for a biotech business plan competition and get people excited about entrepreneurship in a sector and region not known for its risk-taking culture. But we also knew that the caliber of researchers and students we sought to engage would need an attractive value proposition to incentivize them to invest their time and energy. In this respect, the grand prize (£100,000 or about $180,000) provided an attractive reason for entrants to engage with the competition rather than pursue more established career trajectories.
The rest of the article includes many other bits of useful knowledge, like the main obstacles young researchers face when considering entrepreneurship (think weak networks and poor mentors), but the importance of setting the value of a prize, grant, or fellowship is clear: if you want quality applicants, the chance of winning must be worth their while.

Thursday, July 3, 2014

More Data Doesn't Mean More Interesting Data

David Beer, at Adaptive Computing, writes:
One of the keys to winning at Big Data will be ignoring the noise. As the amount of data increases exponentially, the amount of interesting data doesn’t.
He describes the problem of predicting what online video a user is going to watch next, and how an analysis can quickly run the number of predictions up into thousands of possible 'next steps' to evaluate.
These are then compared with all of the other empirical data from all other customers to determine the likelihood that you might also want to watch the sequel, other work by the director, other work from the stars in the movie, things from the same genre, etc. As I perform these calculations, how much data should be ignored? How many people aren’t using the multiple user profiles and therefore don’t represent what one person’s interests might be? How many data points aren’t related to other data points and therefore shouldn’t be evaluated as a valid permutation the same as another point?
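Beer's questions about which data to ignore can be made concrete with a toy sketch (my own illustration, not Adaptive Computing's actual system): before scoring "next video" candidates, prune accounts whose histories are unlikely to reflect one person's taste, such as probable shared profiles or histories too sparse to relate one view to another.  The function name, field names, and thresholds here are all hypothetical.

```python
# Hypothetical sketch: prune noisy accounts before evaluating
# "next video" predictions against the remaining empirical data.

def prune_watch_data(accounts, min_related=2, max_genre_spread=5):
    """Keep only accounts whose history plausibly reflects one viewer."""
    kept = []
    for acct in accounts:
        genres = {view["genre"] for view in acct["history"]}
        # One profile spanning many genres may be a shared account,
        # so it doesn't represent a single person's interests.
        if len(genres) > max_genre_spread:
            continue
        # Histories too sparse to relate one data point to another
        # shouldn't be evaluated as valid permutations.
        if len(acct["history"]) < min_related:
            continue
        kept.append(acct)
    return kept

accounts = [
    {"user": "a", "history": [{"genre": "scifi"}, {"genre": "scifi"}]},
    {"user": "b", "history": [{"genre": "drama"}]},  # too sparse: dropped
]
print([a["user"] for a in prune_watch_data(accounts)])  # prints ['a']
```

The interesting judgment calls, of course, are in choosing the thresholds, which is exactly where the experience described below comes in.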
These points highlight probably the biggest value an experienced scientist can bring to data problems of this scale.  Such a person has at least several years of work experience in a hypothesis-driven research environment and is able to solve problems using incomplete data.  They probably have a PhD to go with that quantitative experience.

The first point, working in a hypothesis-driven environment, suggests that the person should be able to devise a strategy to prove or disprove a hypothesis (I hypothesize that this customer will watch video Y after video X), and figure out how to do that efficiently without getting stuck in the weeds, or the irrelevant data Beer describes.  Unfortunately, it does take some skill to interview a person and determine whether they can actually do this, especially when there are differences in background between yourself and the interviewee.

The second point, being able to use incomplete data, is something that seems to come from experience.  Most people trained in research fields start off trying to collect as much data as possible, and don't make a decision until 'more data is collected'.  It's easy to get stuck in a data collection rut, but eventually most people realize that it's actually OK to come to a conclusion before seeing the whole picture.

Collecting a lot of extra data costs time and resources, and puts a demand on your attention span until that elusive point of having 'enough data' is reached.  Sometimes that data is worth it, but many times it's not: it just sits there because no one has time to do anything with it, so it remains idle and risks becoming stale.  Unless it's actually your job to do so, be careful about making data for the sake of making data.
 
ASIDE: One of the neatest things I find about the customer analytics field (as compared with genomics or computational biology) is that data is basically being generated by the study population itself, for what is essentially free.

Tuesday, July 1, 2014

Snowflakes Visualize Wind Turbine Effects on Airflow

Oh yeah, by the way, 'Here we use snowflakes from a winter snowstorm as flow tracers to obtain velocity fields downwind of a 2.5-MW wind turbine'
say the authors of a really neat article at Nature Communications.

Checkmate Scientist is Closing Comments

Regular readers (there are about 200) may be sad to find out that I'll be closing comments going forward.

I've become much busier over the last six months (as you may have noticed by the decrease in posts) and I've unfortunately been moderating an increasing number of comments that are clearly from spammers.  I'd rather spend time reading and writing than deleting spam, and I think you'll agree.

As always, you can send in comments to comments@checkmatescientist.net or via Twitter to @pmkrzyzanowski and I'll do my best to respond.

Thursday, June 26, 2014

Ties in Science

First to market.

First to the finish line.

First to know.

First to file.

First to climb Mount Everest.

First country to land someone on the moon.

Human achievement is defined by one group out-competing another.  When the release of multiple research papers is coordinated, it may look like a tie but in reality one of two things has happened: Journal editors synchronized the release of papers to create a bigger impact, or two research groups shared enough information to synchronize their submissions.

If it's driven by editors, one of the groups is still first to submit.  In comparison, if it's driven by the groups, they've acted as one larger collective that's first to publish over all their competitors.

A long time ago, Andrew Carnegie quipped that "The first man gets the oyster, the second man gets the shell". 

There are no ties.

Wednesday, June 25, 2014

Hey TTC, we need Tax Credits, not Low-Income Transit Fares

Tess Kalinowski, at The Toronto Star, writes that the Toronto Transit Commission is considering implementing special fares for low-income riders:
The issue of income-based fares has been raised at the TTC and other city departments individually. Now, however, a report before Toronto’s executive committee July 2 recommends that staff from social development, the TTC, public health, planning and others develop joint guidelines for affordable fares. The policy would come back to council in early 2015.
The article later points out that six dollars per day on transit fares is a lot of money for lower-income people, like the unemployed, a group that arguably includes students as well.

Instead of rolling out a special Low-Income Metropass (I'm not holding my breath for the rollout of Presto just yet) and creating yet another class of fares, the TTC should work with the Ontario government to provide refundable tax credits on a single class of transit passes.

That's right, get rid of Post-Secondary and Senior Metropasses.  One class of TTC Metropass would probably simplify the TTC's operations to a small extent.

The best feature of this scheme is that since most post-secondary students are low income, and arguably some seniors are also low income, everyone the existing policies are intended to cover is still covered.  It would basically work like the Federal Transit Credit, except that if you earn less than $20k per year, you get 50% of your transit costs refunded, as an example.
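The refund scheme above can be sketched in a few lines.  To be clear, the $20k threshold and 50% rate are just the example numbers from this post, not an actual tax rule, and the function name is my own.

```python
# Minimal sketch of a refundable transit tax credit, using the
# post's example figures: riders earning under $20k/year get 50%
# of their transit costs back; everyone else gets nothing extra.

def transit_refund(annual_income, transit_costs,
                   threshold=20_000, rate=0.50):
    """Refund a fraction of transit costs to low-income riders."""
    return rate * transit_costs if annual_income < threshold else 0.0

# A student paying roughly $6/day (about $2,190/year) on a $15k income:
print(transit_refund(15_000, 2_190))  # prints 1095.0
```

Because eligibility is computed from reported income at tax time, there's no need for a separate fare class or a card that flags the rider as low income.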

As an aside, note that I said 'some seniors' as it's not really fair for me to be subsidizing people with pensions that exceed my income.  The same with the rare students getting by on dividends from inherited stock market investments.  Hey, it's not just me saying so: “It’s not necessarily fair to ask other customers to pay more [to subsidize low income fares],” says TTC chief customer officer Chris Upfold.

Running this scheme through the income tax system keeps everyone's income information more or less private.  It's hard enough for people to live on a low-income, and giving them a special card to identify them as such isn't really the hand up that they need.

Thursday, June 5, 2014

Learning to Start Businesses, in the Ivory Tower?

Over at Entrepreneur, Isaiah Hankel wrote what is, overall, just another article criticizing academic culture.  But if you read it with an open mind, you'll find a paragraph that's probably the most optimistic description I've ever read of what a research community can pull off, especially since Hankel is describing the academic community:
The ivory tower shouldn’t be perceived as a safe haven or a place for professionals to bide their time when the economy goes south. Rather it should be considered the best place to learn how to start and run a business [emphasis mine]. Academia’s sole purpose should be developing people not to just be professors, doctors and lawyers but ones who innovate and invent products and services.
Agreed, 100%.

Thursday, May 15, 2014

Yes, Science Can Save Government Money

Mark Strauss, at io9, writes this about the abolished U.S. Office of Technology Assessment:
Last week, Rep. Rush Holt (D-NJ) tried to reopen the agency with minimal funding.

He failed.

The legislation—a proposed amendment to the Legislative Branch Appropriations Act that would have provided $2.5 million for the Office of Technology Assessment (OTA) —was defeated in the House by a 248-164 vote, with 217 Republicans opposing and 155 Democrats supporting.
It's amazing that $2.5 million can't be spared for objective science advice - by a government.  The NIH alone spends a thousand times that amount each year.  In the absence of a strong pro-science lobby group, similar to what Franklin's List is trying to become, an OTA is a much needed government service.

Maybe the OTA was disbanded because it didn't offer good value for money.  If only the benefits of something as intangible as science could be accounted for:
OTA had always saved taxpayers far more money than it cost. An OTA study on Agent Orange, for instance, helped save the government $10 million. Another report recommended changes in computer systems at the Social Security Administration that saved more than $350 million.
OK, maybe they can.  All governments should take notice.

The rationale for dissolving an office like the OTA is even more perplexing when you consider how Newt Gingrich suggested he could compensate for the lack of one:
"Gingrich's view was always, 'I'll set up one-on-one interactions between members of Congress and key members of the scientific community,'" recalls Bob Palmer, former Democratic staff director of the House Committee on Science. "Which I thought was completely bizarre. I mean, who comes up with these people, and who decides they're experts, and what member of Congress really wants to do that?"
That's a lot of running around chasing busy members of the scientific community.  I'd say that would cost more than $2.5 million worth of politicians' time alone, never mind the additional burden and cost of all the meetings stemming from that.  If you don't pay for an office like the OTA, the costs are just spread around like a giant game of hide the umbrella.  Either way, taxpayers pay.

Similar efforts to create a Parliamentary Science Officer in Canada have been set in motion by Kennedy Stewart, the MP for Burnaby-Douglas in BC.  Bill C-558 is the one to keep an eye on.

Monday, May 5, 2014

Paper Pusher, PhD

GEN reports on a recent report from the National Science Board (NSB) in the United States:
"Regulation and oversight of research are needed to ensure accountability, transparency, and safety," said Arthur Bienenstock, chair of the NSB task force that examined the issue. "But excessive and ineffective requirements take scientists away from the bench unnecessarily and divert taxpayer dollars from research to superfluous grant administration."
The rest of GEN's article can be found here and here's the original NSB report.  Here's how big the problem is, as summarized by the NSB:
A 2005 Federal Demonstration Partnership (FDP) survey of investigators found that PIs of federally sponsored research projects spend, on average, 42 percent of their time on associated administrative tasks. Seven years later, and despite collective Federal reform efforts, a 2012 FDP survey found the average remained at 42 percent.
That's right: 42%.  If you're thinking of going the research route, you'll spend, on average, two hours on administration for every three hours of actual research.  This depends on what you actually consider 'administration', but I'd say 2 out of 5 hours is a good estimate: you might spend less if you have a small lab or are a postdoc, more if you run a big lab or have a fragmented mix of funding greasing the wheels.

Scientists' reactions are mixed.  At one extreme, you'll find scientists who claim that anything non-research-related is a waste of their time.  I don't take this position; reporting and auditing are necessary to ensure that the buyer of research is getting a good return on their research dollars.  There are many questions that reporting documents should answer.

Some are obvious: Is the project making effective use of funding, and is time and money being spent on things that contribute to the research goals?  Are funds being spent efficiently, or can the researcher be helped to spend better?

Some are more subtle: Are the right students and postdocs being hired to do the work, and are they actually being trained to do something the funder values?  Is the project fulfilling the goals set out by the sponsor, or has the scope drifted?

But regardless of the purpose, the problem with 'excessive' paperwork isn't that it's utterly pointless; it's that it detracts from the value scientists are best able to produce.  Scientists, in general, want to discover things, build things, create things, and write about all that experience.  Administrative tasks don't generally fall under what scientists would consider cool, and if the tasks truly enter 'excessive' territory, you take away the cool part of the job.

Tuesday, April 22, 2014

Where are the STAP cells?

Signals Blog just posted a short commentary I wrote on STAP cells and why I think they're too good to be true:
Esophageal cancer is the end point on a spectrum of diseases. At the beginning, chronic acid reflux exposes the esophagus to the low pH levels of stomach acid, which is a risk factor for Barrett’s Esophagus. Barrett’s, in turn, is a risk factor for developing this type of cancer, which may take several decades to appear.

My original thoughts when STAP cells were first reported were that there would be a clear link between acidic conditions and stemness. ... But there’s no overwhelming evidence to suggest that that happens.
Acid reflux seems like the ideal natural experiment to prove that STAP cells exist, yet the esophagus doesn't turn into a mess of iPS cells every time someone gets heartburn.

Read on at Signals.

Monday, April 14, 2014

Called It: Artificial Blood from iPS Cells

Genetic Engineering & Biotechnology News reports that the Scottish National Blood Transfusion Service is looking into the safety of stem cell derived blood:
A team of scientists led by SNBTS director Marc Turner, M.D., Ph.D. is heading the project, which reflects the combined efforts of the BloodPharma consortium, which has benefited from a Strategic Award, provided by the Wellcome Trust, in the amount of £5 million (approximately $8.4 million). 

The research funded by the award involves multiplying and converting pluripotent stem cells into red blood cells for use in humans, with the aim of making the process scalable for manufacture on a commercial scale.
It's a study to test transfusions using small amounts of blood (5 mL), but nevertheless it will be real stem cell derived blood.

I called it back in 2010.

Friday, April 11, 2014

Answer the Why of Your Work

All too often, scientists (and other research minded people) are drawn into a never ending spiral of questions.  Answers lead to questions which lead to answers, leading someone to inevitably describe the next line of inquiry and cap off their thoughts with "We need to do experiments to answer these questions."

The problem that usually arises is that no one objects.

Why not? It's easier to let someone go ahead and do their work than it is to stop and think about other things that can be done.

But assuming they've already decided that the questions are worthy of work, it should be easy for them to articulate why those questions need to be answered and why now is a good time to answer them.  Is it because there's a key conundrum in your field of specialization?  Will the answer tell us something useful about a disease, about how cells work, or about a physical process?  On an extremely practical level, will your answer contribute to a publishable paper or to getting a grant?

Or, most commonly, will your answer tell you that Gene X amongst 20,000 genes goes up or down because you poked a particular cell the right way?  That, too, may be important but you need to state why.

The reality is that not all questions need to be answered, at least not immediately.  Unanswered questions can simmer for a little while longer.