Category: Bio-Business

U.S. Office of Science and Technology Policy soliciting your feedback on “Improving Public Access to Results of Federally Funded Research” until Dec 20, 2009

Posted by – December 12, 2009

The U.S. Office of Science and Technology Policy, under a directive from the Obama administration, is soliciting public feedback. Note the deadline! (Dec. 10th-20th)

Policy Forum on Public Access to Federally Funded Research: Implementation

Thursday, December 10th, 2009 at 7:25 pm by Public Interest Declassification Forum

By Diane DiEuliis and Robynn Sturm

Yesterday we announced the launch of the Public Access Forum, sponsored by the White House Office of Science and Technology Policy.  Beginning with today’s post, we look forward to a productive online discussion.

One of our nation’s most important assets is the trove of data produced by federally funded scientists and published in scholarly journals. The question that this Forum will address is: To what extent and under what circumstances should such research articles—funded by taxpayers but with value added by scholarly publishers—be made freely available on the Internet?

The Forum is set to run through Jan. 7, 2010, during which time we will focus sequentially on three broad themes (you can access the full schedule here). In the first phase of this forum (Dec. 10th-20th) we want to focus on the topic of Implementation.   Among the questions we’d like to have you, the public and various stakeholders, consider are:

  • Who should enact public access policies? Many agencies fund research the results of which ultimately appear in scholarly journals. The National Institutes of Health requires that research funded by its grants be made available to the public online at no charge within 12 months after publication. Which other Federal agencies may be good candidates to adopt public access policies? Are there objective reasons why some should promulgate public access policies and others not? What criteria are appropriate to consider when an agency weighs the potential costs (including administrative and management burdens) and benefits of increased public access?
  • How should a public access policy be designed?
    1. Timing. At what point in time should peer-reviewed papers be made public via a public access policy relative to the date a publisher releases the final version? Are there empirical data to support an optimal length of time?  Different fields of science advance at different rates—a factor that can influence the short- and long-term value of new findings to scientists, publishers and others. Should the delay period be the same or vary across disciplines? If it should vary, what should be the minimum or maximum length of time between publication and public release for various disciplines? Should the delay period be the same or vary for levels of access (e.g. final peer reviewed manuscript or final published article, access under fair use versus alternative license)?
    2. Version. What version of the paper should be made public under a public access policy (e.g., the author’s peer-reviewed manuscript or the final published version)?  What are the relative advantages and disadvantages of different versions of a scientific paper?
    3. Mandatory v. Voluntary. The NIH mandatory policy was enacted after a voluntary policy at the agency failed to generate high levels of participation. Are there other approaches to increasing participation that would have advantages over mandatory participation?
    4. Other. What other structural characteristics of a public access policy ought to be taken into account to best accommodate the needs and interests of authors, primary and secondary publishers, libraries, universities, the federal government, users of scientific literature and the public?

We invite your comments […]

Give government your feedback on how to release data and publications from publicly funded research.

More information is available in the U.S. Office of Science and Technology Policy video.


“Meat 2.0”

Posted by – December 1, 2009

At synthetic biology conferences, engineered improvement of food is listed among the top three applications of the new technology. As an example, George Church's lab developed a genetic engineering technology specifically aimed at evolving super-tomatoes containing high amounts of the antioxidant lycopene, as a proof of concept.  Frequent "what could syn bio do?" brainstorming sessions include the idea of growing thick beef steaks without the cow: in essence, this is presumed to be an improvement in quality, cleanliness, nutrition, and animal rights over today's factory-farming method of bringing steak to the table.

What if there is already a better "steak"?  Let's call it Meat 2.0.  How about modifying Rhizopus oligosporus, the fungus used in making tempeh, to create new tastes or additional vitamins?  Note that the article below states, "cost of preparing 1.5 kg of tempeh was less than US$1."

Nutritional and sensory evaluation of tempeh products made with soybean, ground-nut, and sunflower-seed combinations

M. P. Vaidehi, M. L. Annapurna, and N. R. Vishwanath
Department of Rural Home Science and Department of Agricultural Microbiology, University of Agricultural Sciences, Bangalore, India

INTRODUCTION

Tempeh products made from soybeans and from combinations of soybeans with ground-nuts and sunflower seed at ratios of 52:48 and 46:54 respectively were tested for their appearance, texture, aroma, flavour, and over-all acceptability. In addition, tempeh was prepared with and without the addition of bakla (Vicia faba) to soybeans in various ratios to obtain a tempeh of acceptable quality and nutritional value (1). Bakla tempeh at a 1:1 ratio was found to be crisper and more palatable than plain soybean tempeh, but at 3:1 the tempeh had a mushroom odour.

EXPERIMENTS

Materials

Tempeh culture (Rhizopus oligosporus) was obtained from the New Age Food Study Center, Lafayette, California, USA. It was grown on a rice medium and inoculated while different blended tempehs were prepared. A 2.5 g packet of culture was used for 250 g of substrate on a dry weight basis.

Soybeans (Hardee), ground-nuts (TMV-30), and sunflower seed (Mordon) were obtained from the University of Agricultural Sciences, Bangalore. Three varieties of tempeh - 100 per cent soy, soy-ground-nut (52:48), and soy-sunflower seed (46:54) - were prepared under identical conditions.

Preparation of Tempeh and Products


Battery-powered, Pocket-sized PCR Thermocycler

Posted by – November 1, 2009

A few years ago, some bright students at Texas A&M improved upon the most basic tool of molecular biology: the thermocycler.   Thermocyclers are typically large tabletop instruments which require a large sample, a lot of electrical power, and a lot of time to heat and cool.  Alternatively, the process can be done by hand with a pot of boiling water, a bucket of cold water, a stopwatch, and a lot of free time and patience.   To "amplify" the desired material, the sample is run through many iterations of heat/cool cycling, such as the following protocol (a rough code sketch follows the listing):

Initialization

  • Denature: 95°C, 15 mins

Thermocycling

  • No. of cycles: 39
  • Denature: 94°C, 30 secs
  • Anneal: 62°C, 30 secs
  • Elongate: 68°C, 3.5 mins

Termination

  • Elongate: 68°C, 20 mins
  • Hold: 4°C, until removed from machine
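
To see why thermocycling dominates bench time, here's a minimal sketch in Python that just tallies the hold times from the listing above. The tuple representation is purely illustrative (it isn't any instrument's actual programming format), and ramp time between temperatures is ignored.

        # Illustrative only: the PCR program above as (step, temp_C, minutes) tuples.
        # Ramp time between temperatures is ignored, so this is a lower bound.
        initialization = [("denature", 95, 15.0)]

        cycle = [
            ("denature", 94, 0.5),
            ("anneal",   62, 0.5),
            ("elongate", 68, 3.5),
        ]
        num_cycles = 39

        termination = [("elongate", 68, 20.0)]  # the 4°C hold at the end is open-ended

        total_minutes = (
            sum(m for _, _, m in initialization)
            + num_cycles * sum(m for _, _, m in cycle)
            + sum(m for _, _, m in termination)
        )
        print(f"hold time: {total_minutes:.1f} min (~{total_minutes / 60:.1f} h)")
        # -> roughly 210 minutes, about 3.5 hours, before any heating/cooling ramps

Roughly three and a half hours of hold time alone, which is why faster heat/cool times in a smaller device matter so much.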

The Texas team created a pocket-sized version which could also run on batteries and, most notably, was novel enough to earn a new patent.  By creating a pocket-sized, battery-powered device, the team achieved several important advantages:

  • The device can be used easily at the point-of-care, as a field unit;
  • The device has a much lower engineering cost and a much lower patent royalty cost: from thousands of dollars down to hundreds of dollars;
  • The device uses a much smaller sample, and has faster heat/cool times, thus reducing the experimental cost and experimental time.

This new thermocycler has generated excitement in the bio community for some time; however, it still took over a year to sort out patent issues.  The university owned the patent; it requested royalties and up-front option fees, and meanwhile the device itself remained in limbo.  The great news is that a manufacturing prototype has been announced (see below).  The bad news is that such patent hassles are typical, and this was a simple case in which a single university owned the patent in whole, rather than multiple holders owning it in part.

From the business angle, the thermocycler market is a billion-dollar market, since it is a fundamental tool for all microbiology or genetic engineering labs.

Some of the original papers and articles on the "$5 thermocycler" are:

From Rob Carlson’s synthesis.cc blog:

This week Biodesic shipped an engineering prototype of the LavaAmp PCR thermocycler to Gahaga Biosciences.  Joseph Jackson and Guido Nunez-Mujica will be showing it off on a road trip through California this week, starting this weekend at BilPil.  The intended initial customers are hobbyists and schools.  The price point for new LavaAmps should be well underneath the several thousand dollars charged for educational thermocyclers that use heater blocks powered by peltier chips.

The LavaAmp is based on the convective PCR thermocycler demonstrated by Agrawal et al, which has been licensed from Texas A&M University to Gahaga.  Under contract from Gahaga, Biodesic reduced the material costs and power consumption of the device.  We started by switching from the aluminum block heaters in the original device (expensive) to thin film heaters printed on plastic.  A photo of the engineering prototype is below (inset shows a cell phone for scale).  PCR reagents, as in the original demonstration, are contained in a PTFE loop slid over the heater core.  Only one loop is shown for demonstration purposes, though clearly the capacity is much larger.

[Photo: LavaAmp engineering prototype, with a cell phone inset for scale]

The existing prototype has three independently controllable heating zones that can reach 100°C.  The device can be powered either by a USB connection or an AC adapter (or batteries, if desired).  The USB connection is primarily used for power, but is also used to program the temperature setpoints for each zone.  The design is intended to accommodate additional measurement capability such as real-time fluorescence monitoring.
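
The post doesn't document the LavaAmp's actual programming interface, so here is a purely hypothetical sketch of what setting the three zone setpoints over the USB serial link might look like; the port name, baud rate, and "SET" command format are all invented for illustration (using the pyserial library).

        # Hypothetical sketch: the real LavaAmp command protocol is not published in the
        # post, so the serial port name, baud rate, and "SET <zone> <temp_C>" command
        # format below are all invented for illustration.
        import serial  # pyserial

        ZONE_SETPOINTS_C = {1: 95, 2: 68, 3: 60}  # e.g. denature / elongate / anneal zones

        with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as port:  # placeholder port
            for zone, temp_c in sorted(ZONE_SETPOINTS_C.items()):
                port.write(f"SET {zone} {temp_c}\n".encode("ascii"))
                reply = port.readline().decode("ascii").strip()
                print(f"zone {zone} set to {temp_c} C; device replied {reply!r}")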

We searched hard for the right materials to form the heaters and thin film conductive inks are a definite win.  They heat very quickly and have almost zero thermal mass.  The prototype, for example, uses approximately 2W whereas the battery-operated device in the original publication used around 6W.

What we have produced is an engineering prototype to demonstrate materials and controls — the form factor will certainly be different in production.  It may look something like a soda can, though I think we could probably fit the whole thing inside a 100ml centrifuge tube.

If I get my hands on one myself, I’ll post a review.

Don’t Always Trust Open Source Software. Why Trust Open Source Biology?

Posted by – August 7, 2009

The software you are happily using may be.. unnecessarily brittle. Recently I’ve been developing a little bit of high-level software using open source libraries.  Sometimes it amazes me that open source software works at all.  Here’s an excerpt from the internals I found in the open source library when I looked at why it might not be working properly:

        if (Pipe){
                while(iFlag){
                        vpData = Pipe->Read(&dLen);
                        iFlag = 0;
                                //      If we have more data to read then for God's sake, do it!

                                //      I don't know if this will work ... it would return an
                                //      array. This may not be good. Hmmmm.
                        if(!vpData && GetLastError() == ERROR_MORE_DATA){
                                iFlag = 1;
                        }

                        if(dLen){
                                XPUSHs(sv_2mortal(newSVpv((char *)vpData, dLen)));
                        }else{
                                sv_setsv(ST(0), (SV*) &PL_sv_undef);
                        }
                }

The standard responses from the “open source rah-rah crowd” are something like the following:

  • “Yeah, that’s a crazy comment, but at least you can see it!  In proprietary software, there’s the same problems, it’s just hidden!”
  • “At least you’re given the source code so you can fix it!  In proprietary software, you’re never given access to the source code so you couldn’t fix it if you wanted to!”

These responses miss the big point that commercial software is often much more fully tested for its specific environment, and undergoes a much more rigorous design process.  (Beyond the designed-for environment, things might break.  However, the environment is usually described.)

Having something that works, even if it isn't "great" software, is better than not having anything at all; so on the whole, we can't complain too much.  Open source is expected to evolve, over the long term (meaning, decades), into a better system: it's assumed that eventually, most of the oddities will be ironed out.  The Linux kernel itself contains similar comments (I've seen them while debugging the UDP/IP stack), which is astounding considering that non-professionals consider Linux to be "stable".  Kernel hackers know the truth: it "mostly" works (with "mostly" being better than "nothing")..   Next time someone offers you "free software", take a moment to think:  how much do I have to trust that software to work in a situation which may be different from the author's original working environment?  How much of the code's architecture might have comments such as "Hmm.. This isn't supposed to work or might not work.."?  How much is it going to cost ($$$) to find the oddities and dig into the internals to fix them?

The connection to biology here is that crazy design comments like "Hmm.. It really isn't proper design to build it this way.. but it seems to work" in synthetic life will be too small to ever read.  (In the RNA or DNA.)   At least with open source software, there's a big anti-warranty statement: don't use the software if there is liability involved.  As I posted last year, the "Open Biology License" hasn't touched on liability issues at all, only patent issues.  How much can Open Biology be trusted, and how much might it cost ($$$) to dig in, find the strange biological behaviors, and attempt to fix them?   Debugging biology is much, much harder than debugging software.

Commercial Development of Synthetic Biology Products

Posted by – July 20, 2009

BIO hosted a round-table discussion with leading-edge companies on technical and commercial advances in applications of synthetic biology. Speakers in the session represented leading firms in the field: Amyris, the BioBricks Foundation, Verdezyne, and Codexis.

The Progress in Commercial Development of Synthetic Biology Applications podcast can be listened to at this link.

BIO is a biotechnology advocacy, business development and communications service organization for research and development companies in the health care, agricultural, industrial and environmental industries, including state and regional biotech associations.

Below are my notes and summary from the conference call.  (Disclaimer: all quotes should be taken as terse paraphrases and see the official transcript, if any, for direct quotes.)

BIO:

“BIO sees synthetic biology as natural progression of what we’ve been doing all along [previous biology and biotech commercial research]. […] Industrial biotechnology gives us tools to selectively add genes to microbes, to allow us to engineer those microbes for the purposes of [biofuels] or production of other useful products.  Synthetic biology is another tool which allows us to do this, and is an evolutionary technology, not a revolutionary technology.  It grows out of what our companies have always been doing with metabolic shuffling or gene shuffling, etc.  [Synthetic biology] has become so efficient that new ways of thinking about this field are necessary.  We are beginning to build custom genomes from the ground up, a logical extension of the technologies [biotech companies] have developed. […] “

Industrial biotechnology’s phases:

1. Agriculture (previous phase)
2. Healthcare (previous phase)
3. and today’s phase: biofuel production, food [enrichment], environmental cleanup

Challenges in today's world are energy and the environment (greenhouse gases, manufacturing processes, … how to also develop these in the developing world); synthetic biology can help to address these problems.

“Every year the development times [of modifying organisms for specific tasks] are shortened [due to availability of more genomic information].”

“There is unpredictability in synthetic biology [however] this is still very manageable.”

This comment was a response to a ‘fluffy’ question about the ‘risks/dangers’ of the technology.

“[This technology is accessible because as we have heard in the news] there are now home hobbyists experimenting with this in their garage laboratories.”

Hmm; I wonder who they are talking about..

Amyris:

“We have been moving genes around for quite a while.  [The difference today which yields Synthetic Biology is that] we can do things easily, rapidly and at small [measurement] scale.” Synthetic biology allows scientists to integrate all the useful [genomic, bioinformatics] data into a usable product [much more rapidly than before].  Previously it would take months to modify a microorganism; now we are down to 2-3 weeks, [which is] limited only by the time required for yeast to grow [and we aren't looking to speed that part up]; this is a rapid increase in the ability to test ideas and [measure] outputs.  We view synthetic biology as very predictable [in the sense that un-intended consequences are inherently reduced].  We engineer microorganisms to grow in a [synthetic environment for fermentation in a] steel tank which reduces their ability to grow in a natural environment; [thus] the organism loses out against environmental yeast [so modified organisms won't cause problems in the environment since they will die].   We need more people who can understand complete pathways, complete metabolisms.”

Verdezyne:

“Synthetic Biology is a toolset to create renewable fuels and chemicals.  […] The benefits of synthetic biology are: 1. profitability, as sugar is a lower-cost source of carbon; 2. efficiency, from use of [standard high-efficiency] fermentation processes; 3. improved margins, from those efficiency gains; 4. decreased capital costs; 5. use of the bio-economy, using local crops [for biomass] or local photosynthetic energy to yield [chemicals for local use].    Now we can explore entire pathways in microorganisms [compared to previously, when we could only look at single genes].  Traditionally, chemical engineering is the addition of chemicals to create a functionality [whereas in microbial engineering the microorganism directly creates the outputs desired].  We retooled for synthetic biology very easily [from originally building chemical engineering systems].”

Codexis:

“Biocatalysts [are] enzymes or microbes with novel properties [for commercial use].  Green alternatives to classic manufacturing routes.  Biocatalysts require fewer steps and fewer harmful chemicals.  Synthetic biology is one tool towards this, [to] quickly create genes and pathways [using the massive amounts of genomic information now available].  [Use of] public [genome] databases [allows us to] chop months off the [R&D] timeline.  [One desire] of scientists in synthetic biology is making microorganisms [predictable, as in engineering]; however, in commercial environments we can make variants very quickly [so we can deal with variants].  There are many companies which focus on commodification of biological synthesis, and we use a variety of suppliers.  The analysis [the R&D] required for designing new pathways is [what is lacking in the skillsets of today's biologists].”

Drew Endy:

Patent costs are drastically higher than the cost of the technology itself.  The technology of the iGEM competition costs $3-4 million per year for all international teams, whereas the cost of patenting all submitted BioBricks every year would be $25k per part for 1,500 parts, a total of over $37 million; thus the patent costs are far more expensive than the technology itself, so this is an area which is being worked on.  The next generation of biotech is hoped to “run” on an open “operating system” made from an open foundation [where new researchers can use existing genetic parts as open technology rather than having to build everything from scratch].
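
A quick back-of-the-envelope check of the figures quoted above (the numbers are the ones from the talk, not independent estimates):

        # Back-of-the-envelope check of the figures quoted above.
        parts_per_year = 1_500
        patent_cost_per_part = 25_000                 # USD
        igem_cost_per_year = (3_000_000, 4_000_000)   # USD, all international teams

        patent_total = parts_per_year * patent_cost_per_part
        print(f"patenting every part: ${patent_total / 1e6:.1f}M per year")    # $37.5M
        print(f"running the competition: ${igem_cost_per_year[0] / 1e6:.0f}M-"
              f"${igem_cost_per_year[1] / 1e6:.0f}M per year")
        # Patent filings would cost roughly ten times the technology itself.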

There was an additional analogy on the call which related synthetic biology to the emergence of vacuum tubes for electrical engineering, which ushered in incredible tools for the advancement of technology and the creation of new products.  I'm on the fence about these analogies, because vacuum tubes were well defined and characterized, and the shapes of their mechanical parts were well known (glass, wire, heater filaments, gas fillers, contact length, arc potentials, etc.); whereas the shapes (thus, the functions) and characteristics of biological "parts" are still mostly unknown (microbiology is more than the "software strings" of nucleic acid's A-C-G-T; it is mechanical micro-machines which interact in various ways depending on chemical context, and the mechanical shapes or fittings of many of the parts are not well understood yet).

There you have it. Synthetic biology is the leaner, meaner biotech for the future.

BioMOO, the biologists’ (biohackers) virtual meeting place; in 1994

Posted by – July 2, 2009

Sometime in 1994, a university obtained some funding and set up BioMOO:

BioMOO is a virtual meeting place for biologists, connected to the Globewide Network Academy. The main physical part of BioMOO is located at the BioInformatics Unit of the Weizmann Institute of Science, Israel.

BioMOO is a professional community of Biology researchers. It is a place to come meet colleagues in Biology studies and related fields and brainstorm, to hold colloquia and conferences, to explore the serious side of this new medium.


Comments Re: Woodrow Wilson International Center’s Talk on Synthetic Biology: Feasibility of the Open Source Movement

Posted by – June 26, 2009

The Woodrow Wilson International Center for Scholars hosted a recent talk on Synthetic Biology, Patents, and Open Source.  This talk is now available via the web; link below.  I’ve written some comments on viewing the talk, also below.

WASHINGTON – Wednesday, June 17, 2009 – Synthetic biology is developing into one of the most exciting fields in science and technology and is receiving increased attention from venture capitalists, government and university laboratories, major corporations, and startup companies. This emerging technology promises not only to enable cheap, lifesaving new drugs, but also to yield innovative biofuels that can help address the world’s energy problems.

Today, advances in synthetic biology are still largely confined to the laboratory, but it is evident from early successes that the industrial potential is high. For instance, estimates by the independent research and advisory firm Lux Research indicate that one-fifth of the chemical industry (now estimated at $1.8 trillion) could be dependent on synthetic biology by 2015.

In an attempt to enable the technology’s potential, some synthetic biologists are building their own brand of open source science. But as these researchers develop the necessary technological tools to realize synthetic biology’s promises, there is as yet no legal framework to regulate the use and ownership of the information being created.

Will this open source movement succeed? Are life sciences companies ready for open source? What level of intellectual property (IP) protection is necessary to secure industry and venture capital involvement and promote innovation? And does open source raise broader social issues? On June 17, a panel of representatives from various sectors will discuss the major challenges to future IP developments related to synthetic biology, identify key steps to addressing these challenges, and examine a number of current tensions surrounding issues of use and ownership.

________________________________
Synthetic Biology: Feasibility of the Open Source Movement
Presenters:

  • Arti K. Rai, Elvin R. Latty Professor of Law, Duke Law School
  • Mark Bünger, Director of Research, Lux Research
  • Pat Mooney, Executive Director, ETC Group
  • David Rejeski, Moderator, Director, Synthetic Biology Project

Synthetic Biology: Feasibility of the Open Source Movement

While viewing the webcast (which we are all lucky to have viewable online), I wrote some comments.  Since others were interested in the comments, I’ll post ’em here.

Apple iPhone 3.0 as next generation Biomedical device

Posted by – March 17, 2009

Apple’s developer preview today of iPhone 3.0 software included the interesting news of support for external accessories, connected either through the physical dock connector or through Bluetooth wireless.


A spokesman from Johnson & Johnson announced an iPhone-blood-pressure-monitor accessory, which provides health biometrics and allows the biometrics to be sent over the iPhone’s network connection as an emergency alert.  Their goal is to make diabetes monitoring easier.

The details of the new iPhone interface are in a thin draft document, External Accessory Framework Reference. This doesn’t include the hardware details necessary to connect arbitrary devices, though once it does, I’ll be hooking lots of different devices to the “iPhone-smart-phone-turned-general-purpose-minicomputer”.

I’m sure the game companies already have external joysticks in the works. A recent interview with the owner of Pangea Software revealed earnings of $1.5 million from a single iPhone game (Enigmo), with over 800,000 downloads. His biggest complaint: “no D-pad game controller.” Rest assured, that will be solved soon.

Games aside, the iPhone (or iTouch) offers a solid software environment which includes graphical presentation, ease of data entry, network support, wireless roaming, audio support, and now external device data accessories. This is exactly the kind of tool that medicine and bioscience need to help with a deluge of patients.

Synthetic Biology Conference 4.0 videos now online

Posted by – March 15, 2009

Videos of the Synthetic Biology Conference 4.0 from Hong Kong are now available.

One of the best all-around talks, as an introduction both to synthetic biology and to the biotech business aspects of syn bio, is the lecture by Amyris Technologies on an antimalarial made possible by synthesis of the precursors to artemisinin; watch the video below.


Amyris’s Artemisinin Project is completely not-for-profit. The company received a large grant from the Gates Foundation for this commercializable research.

The talk also includes a discussion regarding biofuel breakthroughs now possible through syn bio techniques; their project is currently ramping up to make biodiesel from sugarcane in bioreactors in Brazil.

Everyone Needs a PCR Machine

Posted by – March 9, 2009

In the mid-1970s, groups of nerdy engineers with hacked-up electronics would meet at “homebrew computing clubs” to share technology and a shared vision of a world where “everyone has a home computer for running personal software.”  A couple of these guys, like Steve Wozniak and Steve Jobs, were part of that tornado; look around today and marvel at the innovation created. Neither Steve anticipated that the first personal spreadsheet program, Visicalc, meant for small business and personal finance management, would serve as a catalyst for the rapid rise in adoption of personal computers.

Today, groups of nerdy bioengineers with hacked-up hardware are meeting at “DIYBio” clubs to share technology and share the vision of a world where “everyone has biotech tools for making personal biology.” Mark the calendar: the wave has just begun.

We Make the News Headlines: “Amateurs are trying genetic engineering at home”

Posted by – December 25, 2008

As a nice holiday surprise for me this week, my project (Melaminometer) & a team member (Meredith L. Patterson) made it into Associated Press science news: “Amateurs are trying genetic engineering at home”.  The article is accurate, and quoted below.  For the melaminometer project, we are also collaborating with Taipei National Yang Ming University.

http://news.yahoo.com/s/ap/20081225/ap_on_sc/do_it_yourself_dna

Amateurs are trying genetic engineering at home

Meredith L. Patterson, a computer programmer by day, conducts an experiment in the dining room of her San Francisco apartment on Thursday, Dec. 18, 2008. Patterson is among a new breed of techno rebels who want to put genetic engineering tools in the hands of anyone with a smart idea. Using homemade lab equipment and the wealth of scientific knowledge available online, these hobbyists are trying to create new life forms through genetic engineering – a field long dominated by Ph.D.s toiling in university and corporate laboratories.
(AP Photo/Noah Berger)

SAN FRANCISCO – The Apple computer was invented in a garage. Same with the Google search engine. Now, tinkerers are working at home with the basic building blocks of life itself.

Using homemade lab equipment and the wealth of scientific knowledge available online, these hobbyists are trying to create new life forms through genetic engineering — a field long dominated by Ph.D.s toiling in university and corporate laboratories.

In her San Francisco dining room lab, for example, 31-year-old computer programmer Meredith L. Patterson is trying to develop genetically altered yogurt bacteria that will glow green to signal the presence of melamine, the chemical that turned Chinese-made baby formula and pet food deadly.


The Fundamental Problems in Open Source — What’s the Bio Fix?

Posted by – December 3, 2008

Synthetic biology aims to create biological parts which can be connected together to form larger functional devices, and many hope the most popular library of parts will be “Open Source”.  Openly publishing large collections of biological parts is great, as it would rapidly accelerate engineering progress and rapidly disseminate the technology.

There’s one big drawback to open source, though: where do you go when it doesn’t work? This is called the support issue. Presumably, there’s a “community of experts” who monitor problems and provide fixes for others. More often, though, the users themselves have to become experts, or they abandon the project.   (A secondary question, which I posed in my licensing discussion, is: whom do you sue when it does something wrong?)

I recently ran across the following blog article from a popular web hosting company (bluehost.com) describing their use of Linux (properly called GNU/Linux, since Linux is only a small part of the operating system, and a tapestry of GNU software makes up more than 90% of a “Linux system”).  This web hosting company is very popular with many individuals and small companies, and its profitable existence owes much to open source software (although it’s reported that their servers experience unhealthy downtime).  Without open source software, the company couldn’t exist; the cost of their software would make their service very unprofitable.

The following quote is telling [1]:

“Whenever we see ANY bottleneck in the system whether it be CPU, I/O Block Device, Network Block Device, Memory, and so on we find out EXACTLY what is causing the problem. When I say we find the problem, I mean we go down to the actual code in the kernel and see exactly where the issue is. Sometimes that gives us the answer we need to solve the problem and other times it is a bug in the kernel itself that we need to create a patch for.” (The full article is quoted below)


Average Americans are Scared of “Synthetic Biology”

Posted by – November 20, 2008

Yes, believe it: non-synthetic biologists have poor, even fearful, associations when synthetic biology is described to them:

Q: How do the descriptions of these technologies [synthetic biology] make you feel?

Female Respondent: I really thought of sci-fi movies, where, um, something is created in a laboratory, and it always seems great in the beginning, um, but, down the line, something goes wrong because they didn’t think about this particular situation or things turning this way.

Male Respondent: The “Jurassic Park” movie came to mind.

Female Respondent: It’s scary, why do we need to have new organisms? Why do we need to have, you know, you know, genetic engineering? Does it really help with anything? It’s really, it’s not going to help a common person like us. I don’t think, it’s not going to be for helping any of us.

Watch the video for yourself — promise, though, that you won’t throw your mouse at your screen:
Nanotech and Synbio: Americans Don’t Know What’s Coming: “This survey was informed by two focus groups (video – focus groups) conducted in August [2008] in suburban Baltimore [by The Project on Emerging Nanotechnologies Synbio Poll]. This is the first time—to the pollsters’ knowledge—that synthetic biology has been the subject of a representative national telephone survey.”

One of the men states he’s a biologist, and later says, “Who’s playing god here? Who are we as humans to think we can design or redesign life? It’s nice to be able to do it but is it right?”

While watching the video, keep in mind the benefits and limitations of focus groups (wikipedia: Focus groups).

Skunkworks Bioengineering — Prerequisites to Success?

Posted by – November 13, 2008

“Despite all the support and money evident in the projects, there is absolutely no reason this work could not be done in a garage. And all of the parts for these projects are now available from the Registry.” Rob Carlson, iGEM 2008: Surprise — The Future is Here Already, Nov 2008.

The question which should be posed is:

  • What does it really take to actually do this in a garage?

Of course I’m interested in the answer.  I actually want to do this in my garage.

(Let’s ignore for a moment the fact that many of the iGEM competition projects don’t generate experimental results due to lack of time in the schedule, so actual project results don’t mirror the project prospectus.)

Here is my short list of what is required:

  • Education (all at university level)
  • Experience
    • 1 year of industry or grad-level engineering lab research & design
    • 1 year of wet lab in synthesis
    • 2 more years of wet lab in synthesis if it’s desired to have a high probability of success on the project (see my SB4.0 notes for where this came from)
  • Equipment
    • Most lab equipment is generally unnecessary, since significant work can be outsourced.
    • Thermocycler
    • Incubator
    • Centrifuge
    • Glassware
    • Example setup: See Making a Biological Counter, Katherine Aull, 2008 (Home bio-lab created for under $500.)
    • Laptop or desktop computer
    • Internet connection
  • Capital
    • About $10k to $20k cash (?) to throw at a problem for outsourced labor, materials, and equipment (this cost decreases on a yearly basis).
  • Time (Work effort)
    • Depends on experience, on the scope of the problem, on project feasibility — of course.
    • 4 to 7 man-months to either obtain a working prototype or scrap the project.

Although some student members of iGEM teams come from unrelated majors such as economics or music, somehow I’m not sure they qualify as support for the “anyone can do this” mantra.  Of the iGEM competition teams who placed well for their work, all of the members were 3rd-year or 4th-year undergrads or higher.  The issue isn’t the equipment or the ability to outsource; it’s the human capital, the mind-matter, that counts: education and experience.  (Which, in the “I want to DIY my Bio!” crowd, is a rare find.)

With all that covered, it seems anyone can have their very own glowing bacteria.

“Biology is hard, and expensive, and most people trained enough to make a go of it have a lab already — one that pays them to work.”   — Katherine Aull (see above ref.)

Share-Alike Genetic Engineering Intellectual Property Licenses

Posted by – November 9, 2008

A draft legal license for BioBricks was created early in 2008, though as far as I know, it has not been “tested” by industry use of the intellectual property (anyone know?).  Surprisingly, to me, the draft BioBrick license doesn’t contain any liability statements.  The BioBrick license attempts to solidify the “open source”ness of biological components.

Compare the BioBrick license to the original open source software license from MIT, below.

MIT License for Software (circa 1992?)

Copyright (c) [year] [copyright holders]

Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

The BioBricks license is simple and mandates only an intent to share.  Whereas the original MIT and GNU copyleft licenses contain a significant liability-limitation statement (which, as far as I know, hasn’t actually been tested in court, though it is generally accepted), the BioBricks discussions don’t seem to mention liability at all.

Draft of the BioBricks Legal Scheme (January 2008)

1. You are free to modify, improve, and use all BioBrick parts, in systems with other BioBricks parts or non-BioBrick genetic material.
2. If you release a product, commercially or otherwise, that contains BioBrick parts or was produced using BioBrick parts, then you must make freely available the information about all BioBrick parts used in the product, or in producing the product, both for preexisting BioBrick parts and any new or improved BioBrick parts. You do not need to release information about any non-BioBrick material used in the system.
3. By using BioBrick parts, you agree to not encumber the use of BioBrick parts, individually or in combination, by others.

The BioBrick license seems similar to the Creative Commons Share-Alike license.  The legal scheme is based on the latest legal meetings organized by the BioBrick Foundation:

BioBrick Foundation / Samuelson Clinic Materials from March 2008 UCSF Workshop

  1. Legal Options Backgrounder & Draft BBF Legal Scheme: PDF
  2. Executive Summary of Findings: PDF
  3. Slides from March UCSF Workshop: PPT

Further BioBrick-related legal documents are at Open Wet Ware: http://openwetware.org/wiki/The_BioBricks_Foundation:Legal

Open questions as of this writing:

  • Has liability been addressed in Biobricks?  (Especially considering the implications of biosafety that surrounds the field.)   By liability, this means a license term which boils down to: “The author of this BioBrick is not responsible if anything bad happens when/if anyone creates/clones/uses/modifies it.”
  • Has industry brought BioBrick technology to market which would “test” the BioBrick license?
  • What is the roadmap for future license drafts/official versions?

2008’s Thinking on Biological Engineering Business

Posted by – November 8, 2008

One set of perspectives on systems biology startup business for 2008.

Institute of Biological Engineering’s

Bio-Business Nexus 2008

From OpenWetWare

Presenter – Title (Presentation slides):

  • Dr. Rob Whitehead – North Carolina State University Office of Technology Transfer - putting ideas to work (Media:1.Whitehead – IBE NCSU March2008.pdf)
  • Michael Batalia, Ph.D. – Avant-Garde Technology Transfer: Leading Innovation at Wake Forest University Health Sciences (Media:2. Batalia – 2008 IBE BioBusiness Nexus_MAB.pdf)
  • John C. Draper, President, First Flight Venture Center – Business Incubation, A Research Triangle Park Resource (Media:3. Draper – IBE 13thAnnualConf-03062008c.pdf)
  • Lister Delgado – NC IDEA Grants Program (Media:4. Delgado – NCIDEA Grants Program Overview – IBE Conference.pdf)
  • Rob Lindberg, PhD, RAC – The North Carolina Biotechnology Center (Media:5. Lindberg – IBE 2008 BTD presentation 030708.pdf)


2007’s Thinking on Biological Engineering Business

Posted by – November 7, 2008

The presentations below were given at the Institute of Biological Engineering annual meeting, March 30, 2007, in St. Louis, Missouri, under the topic of BioBusiness.

The Mellitz presentation is very good reading.

BioBusiness Nexus Presentations 2007

Mellitz presentation: Commercialization of University IP: Translational Research in BME Leading to Company Formation

Nidus Center presentation

BioGenerator presentation: Bridging the Gap Between Technologies and Viable Companies

Akermin presentation: Biofuel Cells for Portable Electronic Applications

Chlorogen presentation: Production of a Human TGF-beta Family Protein with Potential as an anti-Cancer Therapeutic Protein From Plant Chloroplast

Kereos presentation: Targeted Imaging / Targeted Therapy

Apath presentation: Automated Antiviral Drug Screening Using Engineered Replication Systems

Orion Genomics presentation: DNA Methylation & Cancer

Sequoia Sciences presentation: Bringing Back Nature to Drug Discovery Natural Molecules in an Antibacterial Program

Somark Innovations presentation: BIOCOMPATIBLE RFID INK TATTOO

Towards a Market Model for Synthetic Biology

Posted by – November 4, 2008

If you ask most incumbents in the field of biology, they’ll likely say: “What exactly is synthetic biology?”

Maybe they should watch Drew Endy’s video on YouTube.

However, really, synthetic biology is a simple extension of modern biology.  Not too long ago, it wasn’t possible to “make” biology.  Now it is possible (also known as synthesis), and the cost of synthesis keeps getting lower every year.  Some say the drop in the cost of synthesis looks curiously like the curve of Moore’s Law: a doubling in technological capability every X months (where X is sometimes debated, usually quoted at 18 months, and often misquoted as “every year”).
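
As an illustration of what such a curve implies (assuming, purely for illustration, that the commonly quoted 18-month doubling period applies to synthesis cost):

        # Illustrative only: relative cost under a Moore's-Law-like decline,
        # assuming capability doubles (cost halves) every 18 months.
        def relative_cost(years, doubling_months=18):
            """Cost relative to today after `years` of halving every doubling period."""
            return 0.5 ** (years * 12 / doubling_months)

        for years in (0, 3, 6, 9):
            print(f"after {years:>2} years: {relative_cost(years):.3f}x today's cost")
        # With an 18-month doubling period, costs fall to about 1/64th in nine years.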

Synthetic biology is often compared to the computer industry, to leverage the historical perspective.

In the computer industry, there are three big pieces of the pie (usually seen as two; I want to purposely highlight three).

  • Hardware companies
  • Software companies that sell source code (“source software companies” for the purposes of this article)
  • Software companies that sell binaries (“binary software companies” for the purposes of this article)

In the early days of the personal computer revolution, some bright guys saw that the hardware companies had a great product.. but software could be a much, much more profitable product:  with software, the cost of manufacturing is ZERO.  With hardware, the cost of manufacturing weighs down profits, so the maximum margin might be 20% to 30% for very glamorous products, and maybe 5% to 10% for less glamorous products.  These bright guys immediately bluffed their way into IBM’s business center and negotiated what turned out to be one of the most profitable deals (if not the most profitable deal!) in the history of the world (Microsoft’s model).  In parallel to this, some other bright guys decided that they could instantly boost their overall profits by both building the hardware and including all the fundamental software: hence, the first “personal computer systems company” (hardware plus all necessary software) was created (Apple’s model).

It’s worth keeping in mind at all times that the computer revolution existed before the “personal” computer revolution.  At that time, there were only mainframes (IBM: “big blue”).  During that time, though I’m not totally sure, I believe the market likely segmented like this:

  • Mainframe system companies (hardware + software)
  • Mainframe service companies (people required to run & maintain the machines)

Mainframe system companies charged hefty prices because they could: the only purchasers were governments and incredibly large (deep-pocketed) companies.  Yet the mainframe hardware business was killed by the personal computer market, which offered enough technology to the mass market to undercut most of the need for mainframes.  Of course, a mainframe company would never want to make a personal computer, since it would erode its own profit potential (eventually, IBM caved in and created the IBM PC, but it was originally unsuccessful, and only the reverse-engineered clones from other companies were accepted by the market).

The innovation in computer technology occurred so rapidly that unhealthy monopolies were created as a result. (Microsoft, AT&T, IBM)  In the case of AT&T, they were forced to split into different operations and allow more market competition (both the short and long term benefits of this forced split are still debated).  Microsoft avoided being split through government ignorance, entrenchment, lawyers, and luck.

Biology is different from the story above. Biology does have “soft”ware of a sort: it’s DNA.  The software is sometimes distributed as “source” code of a sort: genes, protocols, primers and vectors.  The software is sometimes distributed as “binary” code of a sort, too: the modified microbes that “just run” when placed in the right environment.  But after this, the analogy kind of breaks down: the cost of manufacturing is never near zero.  Additionally, the fundamental “source” code can’t be protected under copyright, because it’s DNA.  And the government has a heavy hand in determining which “software binaries” you can get ahold of in order to run.

Of course, I’m still a rank amateur at biology; currently, though, this is what others seem to see in biology.  And of course, I’m predicting the future, so maybe no one can definitively claim I’m incorrect.

  • Hardware companies, supplying machines and tools.
  • “Software” companies: supplying digital DNA sequences, cellular models (like BioBricks), and bioinformatics programs which simulate & verify the cellular models for fabrication.  Additionally, much of the intellectual property here will be public domain or Share-Alike licensed.
  • Fabrication companies: supplying physical biological material based on the digital sequences.  Most people will outsource fabrication to these companies and only the “large pharmas” will perform fabrication in-house.

Does this fit reality?  I say no.  The fabrication companies will quickly starve as prices continue to fall, just as the DRAM companies folded with the falling prices of the transistor and transistor memory (Intel bailed out of manufacturing DRAM as Moore’s Law eroded their profits beyond repair).  The idealized “Software” companies can’t actually operate in the prescribed manner, because biology consists of chemicals and such a company is not set up as a physical laboratory; the Share-Alike licensing will remove profit potential; and the company that sells the chemicals isn’t even on the map.

Here’s what seems to mirror the current market more closely.

  • Hardware companies: supply machines and lots of glass hardware.  Presumably lower profit margin except for large equipment sold to big pharma.
  • Wet Lab companies (biological engineers): supplying primers, enzymes, reagents, chemicals.  High profit margins, due to patent protection and high barrier to entry (requires highly specialized education and some number of years of experience).
  • Dry Lab companies (bioinformatics engineers): design and supply digital DNA and cellular models, via computational models, and design bioinformatics programs and wet lab protocols for use.  Funky profit margin, because if a design is made Share-Alike, then profits don’t exist; if a design is kept secret, then standards may not evolve well; and the DNA intellectual property is already mandated as public domain.
  • Fabrication service companies: encompass a limited range of Wet Lab + Dry Lab, but don’t create their own protocols.  Margins vary, depending on the level of service.

The big winners right now seem to be the Wet Lab guys and the Hardware guys.  By leveraging patent protection, the Wet Lab incumbents lock competitors out.  Although no one in the industry has anything nice to say about patents, everyone files them, and all investors demand them.  The Hardware guys currently have big profits, high prices, and little competition, as no one is forcing the prices down.  Sound familiar?  It should; it’s the same phenomenon that occurred in the mainframe days.

The shakeout seems to be that the Dry Lab guys, the Hardware guys, and the Fabrication guys will need to get together in some way.

Yet there’s another interesting aspect of biology: organisms are different.  Each organism has its own unique pathways and incompatibilities.  It is not possible, in general, to run “software” from one genetically engineered machine on another genetically engineered machine.  In fact, that’s why biologists usually argue against synthetic biology, claiming it will never work.

So rather than the universal “PC platform” that exists in the computer world (a derivative of both unhealthy monopolistic practices and the market requiring a common environment), the biological environments will number in the thousands.  Yeast grows differently than E. coli, and both Hardware and Dry Lab are customized to individual species.  That could be the market segmentation: biological compatibility itself, creating multiple competitive hardware and “software” markets, with some market segments Share-Alike, and some not.

If someone has a crystal ball, let me borrow it for a second.