Category: Notes

Happy Holidays, say the recombinant bacteria

Posted by – December 22, 2009


lacZ{2ML5F-T7GFP} plates bring holiday cheer!


“ELISA Redux” 96-Well Plate Cryptography Challenge

Posted by – November 11, 2009

The publication GEN is running a contest, with $1,500 plus fancy biotechnology equipment as the prize, for the first person to decode the cryptographic message hidden in this 96-well plate:

GEN's ELISA Redux contest well plate

"A message has been encrypted into the ELISA plate image, called ELISA Redux, based on the color of each well."


Battery-powered, Pocket-sized PCR Thermocycler

Posted by – November 1, 2009

A few years ago, some bright students at Texas A&M improved upon the most basic tool of molecular biology: the thermocycler.  Thermocyclers are typically large tabletop instruments which require a large sample, a lot of electrical power, and a lot of time to heat and cool.  Alternatively, the process can be done by hand with a pot of boiling water, a bucket of cold water, a stopwatch, and a lot of free time and patience.  To "amplify" the desired material in the sample, the sample runs through many iterations of heat/cool cycling, such as the following:

  • Initial denature: 95°C, 15 mins

Then, for 39 cycles:

  • Denature: 94°C, 30 secs
  • Anneal: 62°C, 30 secs
  • Elongate: 68°C, 3.5 mins

And finally:

  • Final elongate: 68°C, 20 mins
  • Hold: 4°C, until removed from machine
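The wall-clock cost of a profile like this adds up quickly. As a rough illustration (a back-of-the-envelope sketch using the step times listed above):

```python
# Estimate total run time for the PCR profile listed above.
# All times in seconds; the three cycled steps repeat 39 times.
initial_denature = 15 * 60            # 95°C, 15 min
cycled_steps = [30, 30, 3.5 * 60]     # denature, anneal, elongate (per cycle)
cycles = 39
final_elongate = 20 * 60              # 68°C, 20 min

total_s = initial_denature + cycles * sum(cycled_steps) + final_elongate
print(f"{total_s / 60:.1f} minutes")  # about three and a half hours before the 4°C hold
```

That is why faster heat/cool times matter so much: most of the run is spent waiting on temperature transitions and long elongation steps.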

The Texas team created a pocket-sized version, which could run on batteries as well and, most notably, generated a new patent.  By creating a pocket-sized, battery-powered device, the team achieved several important advantages:

  • The device can be used easily at the point-of-care, as a field unit;
  • The device has a much lower engineering cost and a much lower patent royalty cost: from thousands of dollars down to hundreds of dollars;
  • The device uses a much smaller sample, and has faster heat/cool times, thus reducing the experimental cost and experimental time.

This new thermocycler created excitement in the bio community for some time; even so, it took over a year to resolve the patent issues.  The university owned the patent; it requested royalties and up-front option fees, and meanwhile the device itself remained in limbo.  The good news is that the manufacturing prototype has been announced (see below).  The bad news is that such patent hassles are typical, and this was a simple case, in which a single university owned the patent in whole rather than multiple holders owning it in part.

From the business angle, the thermocycler market is a billion-dollar market, since it is a fundamental tool for all microbiology or genetic engineering labs.

Some of the original papers and articles for the “$5 thermocycler” are:

From Rob Carlson’s blog:

This week Biodesic shipped an engineering prototype of the LavaAmp PCR thermocycler to Gahaga Biosciences.  Joseph Jackson and Guido Nunez-Mujica will be showing it off on a road trip through California this week, starting this weekend at BilPil.  The intended initial customers are hobbyists and schools.  The price point for new LavaAmps should be well underneath the several thousand dollars charged for educational thermocyclers that use heater blocks powered by Peltier chips.

The LavaAmp is based on the convective PCR thermocycler demonstrated by Agrawal et al., which has been licensed from Texas A&M University to Gahaga.  Under contract from Gahaga, Biodesic reduced the material costs and power consumption of the device.  We started by switching from the aluminum block heaters in the original device (expensive) to thin film heaters printed on plastic.  A photo of the engineering prototype is below (inset shows a cell phone for scale).  PCR reagents, as in the original demonstration, are contained in a PTFE loop slid over the heater core.  Only one loop is shown for demonstration purposes, though clearly the capacity is much larger.


The existing prototype has three independently controllable heating zones that can reach 100°C.  The device can be powered either by a USB connection or an AC adapter (or batteries, if desired).  The USB connection is primarily used for power, but is also used to program the temperature setpoints for each zone.  The design is intended to accommodate additional measurement capability such as real-time fluorescence monitoring.

We searched hard for the right materials to form the heaters and thin film conductive inks are a definite win.  They heat very quickly and have almost zero thermal mass.  The prototype, for example, uses approximately 2W whereas the battery-operated device in the original publication used around 6W.

What we have produced is an engineering prototype to demonstrate materials and controls — the form factor will certainly be different in production.  It may look something like a soda can, though I think we could probably fit the whole thing inside a 100ml centrifuge tube.

If I get my hands on one myself, I’ll post a review.

When a needle is not a needle: inside & outside needle diameter variations

Posted by – August 27, 2009

In various discussions in biology circles there’s often the lament that “biology is hard” (with which I agree), and from the biologists there are continued remarks that repeating a protocol in a slightly different way will give poor results. As an engineer I am fascinated by this, because reproducibility is the key to making biology “easier to engineer”. Once a method is reproducible across different environments, it can be made into a black box for reuse, without worrying about whether it will work under slightly varying conditions.

In one research paper, the chemical engineers dug into part of the reason why their experiments had differing results. They found that needle sizes varied considerably, even within the same gauge and from the same vendor.


Don’t Always Trust Open Source Software. Why Trust Open Source Biology?

Posted by – August 7, 2009

The software you are happily using may be.. unnecessarily brittle. Recently I’ve been developing a little bit of high-level software using open source libraries.  Sometimes it amazes me that open source software works at all.  Here’s an excerpt I found in the internals of one open source library when I looked into why it might not be working properly:

        if (Pipe){
                        vpData = Pipe->Read(&dLen);
                        iFlag = 0;
                                //      If we have more data to read then for God's sake, do it!

                                //      I don't know if this will work ... it would return an
                                //      array. This may not be good. Hmmmm.
                        if(!vpData && GetLastError() == ERROR_MORE_DATA){
                                iFlag = 1;

                                XPUSHs(sv_2mortal(newSVpv((char *)vpData, dLen)));
                                sv_setsv(ST(0), (SV*) &PL_sv_undef);

The standard responses from the “open source rah-rah crowd” are something like the following:

  • “Yeah, that’s a crazy comment, but at least you can see it!  In proprietary software, there’s the same problems, it’s just hidden!”
  • “At least you’re given the source code so you can fix it!  In proprietary software, you’re never given access to the source code so you couldn’t fix it if you wanted to!”

These responses miss the big point that commercial software is often much more thoroughly tested for its specific environment, and undergoes a much more rigorous design process.  (Beyond the designed-for environment, things might break.  However, that environment is usually described.)

Having something that works, even if it isn’t “great” software, is better than having nothing at all; so on the whole, we can’t complain too much.  Open source is expected to evolve over the long term (meaning decades) into a better system: the assumption is that eventually most of the oddities will be ironed out.  The Linux kernel itself contains similar comments (I’ve seen them while debugging the UDP/IP stack), which is astounding considering that non-professionals consider Linux “stable”.  Kernel hackers know the truth: it “mostly” works (with “mostly” being better than “nothing”).  Next time someone offers you “free software”, take a moment to think: how much do I have to trust that software to work in a situation that may differ from the author’s original working environment?  How much of the code’s architecture might carry comments like “Hmm.. this isn’t supposed to work, or might not work..”?  How much will it cost ($$$) to find the oddities and dig into the internals to fix them?

The connection to biology here is that in synthetic life, crazy design comments like “Hmm.. it really isn’t proper design to build it this way.. but it seems to work” will be too small to ever read.  (They’d live in the RNA or DNA.)   At least with open source software, there’s a big warranty disclaimer: don’t use the software if liability is involved.  As I posted last year, the “Open Biology License” hasn’t touched on liability issues at all — only patent issues.  How much can Open Biology be trusted?  How much might it cost ($$$) to dig in, find the strange biological behavior, and attempt to fix it?   Debugging biology is much, much harder than debugging software.

Perl Bio-Robotics module, and Robotics::Tecan

Posted by – July 30, 2009

FYI for Bioperl developers:

I am developing a module for communication with biology robotics, as discussed recently on #bioperl, and I invite your comments. Currently this module talks to a Tecan Genesis workstation robot. Other vendors are Beckman (Biomek), Agilent, etc. No such modules exist anywhere on the ‘net, with the exception of some Visual Basic and LabVIEW scripts which I have found. There are some computational biologists who program robots via high-level software, but those scripts are not distributed as open source.

With Tecan, there is a data-pipe interface for hardware communication, available as an added-cost option from the vendor. I haven’t checked whether other vendors likewise offer an open communication path for third-party software. Once third-party communication is possible, the natural next step is a socket client-server; especially since the robot vendor only supports MS Windows, and using the local machine has typical Microsoft issues (like losing real-time communication with the hardware due to GUI animation, poor operating-system stability, no Unix except Cygwin, etc.).

On Namespace:

I have chosen Robotics and Robotics::Tecan (after discussion regarding the potential name Bio::Robotics).  There are many software modules already called ‘robots’ (web spider robots, chat bots, WWW automation, etc.), so I chose the longer name “Robotics” to mark this module as manipulating real hardware. Robotics is the abstraction for generic robotics, and Robotics::(vendor) is the manufacturer-specific implementation. Robot control is made more complex by the very configurable nature of the work table (placement of equipment, type of equipment, type of attached arm, etc.), so the abstraction has to be careful not to generalize or assume too much. In some cases, the Robotics modules may expand to arbitrary equipment such as thermocyclers, tray holders, imagers, etc. – that could be a future roadmap plan.
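The generic-front-end / vendor-back-end split described above can be sketched in a few lines. This is an illustration in Python (the actual module is Perl), and every name in it is mine, not from the real Robotics distribution:

```python
class Robotics:
    """Generic robotics abstraction; vendor backends register themselves here."""
    _vendors = {}

    @classmethod
    def register(cls, name, backend):
        cls._vendors[name] = backend

    @classmethod
    def new(cls, name):
        # Instantiate the manufacturer-specific implementation by name,
        # analogous to Robotics->new("Tecan") in the Perl examples below.
        return cls._vendors[name]()


class Tecan:
    """Stand-in for a vendor-specific backend like Robotics::Tecan."""
    def home(self, arm):
        return f"homing {arm}"


Robotics.register("Tecan", Tecan)
robot = Robotics.new("Tecan")
print(robot.home("roma0"))  # homing roma0
```

The point of the registry is that caller code only names a vendor string; the work-table specifics stay inside the vendor class, which is why the abstraction must avoid assuming too much about any one machine.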

Here is some theoretical example usage, subject to change. I am still deciding how much state to keep within the Perl module; keeping state could simplify some robot programming (avoiding deadlock, or tracking tips). In general I am aiming for a more “protocol friendly” method implementation.

To use this software with locally-connected robotics hardware:

    use Robotics;
    use Robotics::Tecan;

    my %hardware = Robotics::query();
    if ($hardware{"Tecan-Genesis"} eq "ok") {
        print "Found locally-connected Tecan Genesis robotics!\n";
    }
    elsif ($hardware{"Tecan-Genesis"} eq "busy") {
        print "Found locally-connected Tecan Genesis robotics but it is busy moving!\n";
        exit -2;
    }
    else {
        print "No robotics hardware connected\n";
        exit -3;
    }
    my $tecan = Robotics->new("Tecan") || die;
    $tecan->attach() || die;    # initiate communications
    $tecan->home("roma0");      # move robotics arm
    $tecan->move("roma0", "platestack", "e");    # move robotics arm to vector's end
    # TBD $tecan->fetch_tips($tip, $tip_rack);   # move liquid handling arm to get tips
    # TBD $tecan->liquid_move($aspiratevol, $dispensevol, $from, $to);

To use this software with remote robotics hardware over the network:

  # On the local machine, run:
    use Robotics;
    use Robotics::Tecan;

    my @connected_hardware = Robotics->query();
    my $tecan = Robotics->new("Tecan") || die "no tecan found in @connected_hardware\n";
    $tecan->attach() || die;
    # TBD $tecan->configure("my work table configuration file") || die;
    # Run the server and process commands
    while (1) {
        $error = $tecan->server(passwordplaintext => "0xd290"); # start the server
        # Internally runs communications between client->server->robotics
        last if $tecan->lastClientCommand() =~ /^shutdown/;
    }
    $tecan->detach();   # stop server, end robotics communications

  # On the remote machine (the client), run:
    use Robotics;
    use Robotics::Tecan;

    my $server = "";
    my $password = "0xd290";
    my $tecan = Robotics->new("Tecan");
    $tecan->connect($server, $password) || die;

    ... same as first example with communication automatically routing over network ...
    $tecan->detach();   # end communications

Some notes for those who may also want to create Perl modules for general or BioPerl use:

  • Install Module::Starter (from CPAN)
  • Run Module-Starter to create new module from module template
  • Read Module::Build::Authoring
  • Read Bioperl guide for authoring new modules
  • Copy/write perl code into the new module
  • Add POD, perl documentation
  • Add unit tests into the new module
  • Register for CPAN account (see CPAN wiki), register namespace
  • Verify all files are in standard CPAN directory structure
  • Commit & Release

Software for Biohackers

Posted by – July 30, 2009

Some open source software collections of biology interest are noted here. I’ll update this list as time goes on. If you would like to have your project listed too, leave a comment with all the fields of the table and I’ll add your project. If any of these links do not work, let me know too.

Name | Status | Field | Language | Description
Eclipse | Stable | Programming, editing, building, debugging | Java, C, C++, Perl, .. | Eclipse is the most widely adopted software development environment in terms of language support, corporate support, and user plugin support. It is open source. It’s the “Office” suite for programming.
BioPerl | Stable | Bioinformatics | Perl, C | BioPerl has many modules for genomic sequence analysis/matching, genomic searches of databases, file format conversion, etc.
BioPython | Stable | Bioinformatics | Python, C | BioPython has many modules for computational biology.
BioJava | Stable | Bioinformatics | Java | BioJava has many modules for computational biology.
BioLib | Stable | Bioinformatics | C, C++ | BioLib has many modules for file format conversion, integration with other Bio* language projects, genomic sequence matching, etc.
Bio-Linux | Stable | Operating system with bundled bioinformatics applications | Many | “A dedicated bioinformatics workstation – install it or run it live”
DNALinux | Stable | Operating system with bundled bioinformatics applications | Many | “DNALinux is a Virtual Machine with bioinformatic software preinstalled.”
Several synthetic biology editors, simulators, or suites listed at OpenWetWare Computational Tools, such as Synthetic Biology Software Suite (SynBioSS), BioJADE, GenoCAD, BioStudio, BioCad, TinkerCell, Clotho | Work in progress | Synthetic biology | Mostly Java, some web based, some Microsoft .NET | Pathway modeling & simulation for synthetic biology genetic engineering, editing, parts databases, etc.
APE (A Plasmid Editor) | Stable | Genetic engineering | Java | DNA sequence and translation editor

“Centrifuge the column(s) at ≥10,000×g (13,000 rpm) for 1 minute, then discard the flow-through.”

Posted by – July 30, 2009

A basic equation of physics, for those out there building their own centrifuges:

What are RPM, RCF, and g force and how do I convert between them?

The magnitude of the radial force generated in a centrifuge is expressed relative to the earth’s gravitational force (g force) and is known as the RCF (relative centrifugal field).  RCF values are denoted by a number in units of “g” (e.g., 1,000 × g).  RCF depends on the speed of the rotor in revolutions per minute (RPM) and on the radius of rotation.  Most centrifuges are set to display RPM but have the option to change the readout to RCF.

To convert between the two by hand, use the following equation:

RCF = 11.18 × r × (RPM / 1000)²

where r is the radius of the rotor in centimeters.
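In code, the conversion and its inverse look like this; a minimal sketch using the formula above (the 8.6 cm rotor radius in the example is an assumed value, not a quote from any vendor):

```python
import math

def rpm_to_rcf(rpm, radius_cm):
    """Relative centrifugal force (x g) from rotor speed and rotor radius in cm."""
    return 11.18 * radius_cm * (rpm / 1000.0) ** 2

def rcf_to_rpm(rcf, radius_cm):
    """Rotor speed (RPM) needed to reach a given RCF, inverting the formula above."""
    return 1000.0 * math.sqrt(rcf / (11.18 * radius_cm))

# E.g., a rotor with an assumed 8.6 cm radius spinning at 13,000 rpm:
print(round(rpm_to_rcf(13000, 8.6)))  # roughly 16,000 x g
```

Note that the same RPM gives a very different g force on a different rotor, which is why protocols that quote only RPM (like the 13,000 rpm in this post's title) are only approximations of the ≥10,000×g they actually intend.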

3G Cellphone as Biotech Tool: “Cellular Phone Enabled Non-Invasive Tissue Classifier”

Posted by – July 5, 2009

A recent paper in PLoS ONE describes a diagnostic system that uses a common 3G cellphone with Bluetooth to assist in point-of-care measurement of tissues, from tissue samples previously taken, with remote data analysis [1].  The hope, of course, is that this could be used for distinguishing cancerous from non-cancerous tissue.  In general this technological approach is important for the following reasons: it allows data analysis across large populations, with server-side storage of the data for later refinement; not all towns or cities have expert medical staff to classify tissues at a hospital; and sending the sample to another city for classification takes time and creates measurement risk (mishandling, contamination, data entry error, biological degradation, etc.).  Since the tissues are measured by a digital networked device, the results can be quickly sent to a central database for further analysis or, as I hint below, for geographically mapping medical data for bioinformatics.

From my interpretation, the complete system looks like this:

The probe electronics are described in [2]; unfortunately that article is not open access, so I can’t read it.  The probes located around the sample are switched to conduct in various patterns and a learning algorithm is used to isolate the probe pair with the optimal signal.  The sample is placed at the center of the petri dish and covered in saline.

Sending the raw data to a central server for analysis allows for complex pattern recognition across all samples collected; thus, the data analysis and the result can improve over time (better fitting algorithms or better weighting in the same algorithm).  The impedance analysis fits according to the magnitude, phase, frequency, and the probe pair.

The article does not explain which technologies are used within the cell phone for communicating between the measurement side and the cellular side (USB / Bluetooth communication link, Java, an e-mail application link, etc.).  Though these technologies are cellphone specific, they are part of the method, and they are not described.  The iPhone would be a good candidate for this project as well.  A cellphone with integrated GPS would also allow location data to be sent to the server, which could enable better number-crunching in the data-processing algorithms, for recognition of high-risk geographic regions.
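Conceptually, each measurement the phone relays to the server is a small record: impedance magnitude and phase per frequency and probe pair, plus (ideally) a GPS fix. The paper does not specify a data format, so the sketch below is purely hypothetical; every field name is my invention:

```python
import json

# Hypothetical measurement record; all field names are illustrative only.
record = {
    "device_id": "phone-001",
    "location": {"lat": 37.77, "lon": -122.42},   # from the phone's GPS
    "sweep": [
        # one entry per (probe pair, excitation frequency) combination
        {"probe_pair": [0, 3], "freq_hz": 1000,
         "magnitude_ohm": 512.4, "phase_deg": -12.1},
        {"probe_pair": [0, 3], "freq_hz": 10000,
         "magnitude_ohm": 301.7, "phase_deg": -25.6},
    ],
}

payload = json.dumps(record)          # what would travel over the cellular link
assert json.loads(payload) == record  # round-trips losslessly
```

Keeping the record self-describing like this is what lets the server-side algorithms be refit later across all accumulated samples, as the post describes.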

[1] Laufer S, Rubinsky B, 2009 Cellular Phone Enabled Non-Invasive Tissue Classifier. PLoS ONE 4(4): e5178. doi:10.1371/journal.pone.0005178

[2] Ivorra A, Rubinsky B (2007) In vivo electrical impedance measurements during and after electroporation of rat liver. Bioelectrochemistry 70: 287–295.

Comments Re: Woodrow Wilson International Center’s Talk on Synthetic Biology: Feasibility of the Open Source Movement

Posted by – June 26, 2009

The Woodrow Wilson International Center for Scholars hosted a recent talk on Synthetic Biology, Patents, and Open Source.  This talk is now available via the web; link below.  I’ve written some comments on viewing the talk, also below.

WASHINGTON – Wednesday, June 17, 2009 – Synthetic biology is developing into one of the most exciting fields in science and technology and is receiving increased attention from venture capitalists, government and university laboratories, major corporations, and startup companies. This emerging technology promises not only to enable cheap, lifesaving new drugs, but also to yield innovative biofuels that can help address the world’s energy problems.

Today, advances in synthetic biology are still largely confined to the laboratory, but it is evident from early successes that the industrial potential is high. For instance, estimates by the independent research and advisory firm Lux Research indicate that one-fifth of the chemical industry (now estimated at $1.8 trillion) could be dependent on synthetic biology by 2015.

In an attempt to enable the technology’s potential, some synthetic biologists are building their own brand of open source science. But as these researchers develop the necessary technological tools to realize synthetic biology’s promises, there is as yet no legal framework to regulate the use and ownership of the information being created.

Will this open source movement succeed? Are life sciences companies ready for open source? What level of intellectual property (IP) protection is necessary to secure industry and venture capital involvement and promote innovation? And does open source raise broader social issues? On June 17, a panel of representatives from various sectors will discuss the major challenges to future IP developments related to synthetic biology, identify key steps to addressing these challenges, and examine a number of current tensions surrounding issues of use and ownership.

Synthetic Biology: Feasibility of the Open Source Movement

  • Arti K. Rai, Elvin R. Latty Professor of Law, Duke Law School
  • Mark Bünger, Director of Research, Lux Research
  • Pat Mooney, Executive Director, ETC Group
  • David Rejeski, Moderator, Director, Synthetic Biology Project

Synthetic Biology: Feasibility of the Open Source Movement

While viewing the webcast (which we are all lucky to have viewable online), I wrote some comments.  Since others were interested in the comments, I’ll post ‘em here.

Playing with the $100K Robots for Biology Automation

Posted by – June 26, 2009

The Tecan Genesis Workstation 200: It’s an industrial benchtop robot for liquid handling with multiple arms for tray handling and pipetting.

The robot’s operations are complex, so an integrated development environment is used to program it (though biologists wouldn’t call it an integrated development environment; maybe they’d call it a scripting application?), with a custom graphical scripting language and script verification/compilation. Luckily, the application allows third-party software access and can control the robotics hardware using a minimal command set. So what to do? Hack it, of course; in this case, with Perl. This is only a headache due to Microsoft Windows incompatibilities and limitations (rarely is anything on Windows as straightforward as it is on Unix), so as usual with Microsoft Windows software, it took about three times longer than normal to work around Microsoft’s quirks. Give me OS X (a real Unix) any day. Now, on to the source code!


HVPS for Systems Biology: A Low Cost, High Voltage Power Supply with Schematics + Board Layout

Posted by – June 22, 2009

I have designed this high-voltage, low-current power supply for various experiments in systems & synthetic biology. I have cleaned up the design, and I am placing the schematic and board layout online below!  This circuit outputs up to +1,866 VDC at under 1 mA, or can be tapped at intermediate points for +622 VDC or +933 VDC. This is useful for either DIY biology or institutional research experiments such as:

  • capillary electrophoresis
  • digital microfluidics using electrowetting-on-dielectric
  • possibly electroporation
  • various electrokinetic experiments, such as dielectrophoresis
  • (other uses?? Let me know)
  • and, lastly of course:  making huge sparks that go PAHHHHH-POP

Below is the schematic; read the full post below for the board layout information.  Click on the schematic for the full-sized version.  The supply is built in stages, so omitting the final stage yields only +933VDC, and omitting the stage before that yields only +622VDC.

Schematic for the HVPS "Tripler1"

Schematic for the HVPS


DIY Digital Microfluidics for Automating Biology Protocols (sub-microliter droplets)

Posted by – June 3, 2009

Systems biologists and synthetic biologists spend a large amount of time moving small volumes of liquid from one vial to another. I would say it makes up the majority of their work day, even in a technologically cutting-edge lab that has robotics.  Strange, isn’t it, that the most advanced biological science labs in the world depend on a human physically moving small drops of liquid samples and reagents around a lab?

Microfluidics aims to move liquids without humans, under computer control.  A small flow of DNA in water, for example, might trace a path between two glass plates, within a tiny, etched microchannel.  The movement of the flow is controlled by numerous micro-mechanical valves connected to electronics.

Digital microfluidics aims to move liquids without humans, under computer control, using only single droplets under electrical control: no micro-mechanical valves. It works by using electric fields (the electrowetting-on-dielectric effect, abbreviated “EWOD”), which polarize water molecules enough to move a very small water droplet across the surface of a circuit board.  Droplets on the board can split into two, or join together into one.
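The control side of EWOD is surprisingly simple to state: a droplet sitting on a grid of pads moves by energizing the neighboring pad and de-energizing the one under it. This toy model (my own illustration, not taken from any real EWOD controller) walks a droplet along a 1-D row of electrodes:

```python
def route_droplet(start, target, n_electrodes):
    """Return the sequence of electrode indices to energize, in order,
    to walk a droplet from `start` to `target` one pad at a time."""
    step = 1 if target > start else -1
    path = list(range(start + step, target + step, step))
    for pad in path:
        # In hardware: raise this pad's electrode voltage and ground the rest;
        # the resulting field gradient pulls the droplet onto the energized pad.
        assert 0 <= pad < n_electrodes, "droplet routed off the board"
    return path

print(route_droplet(0, 4, 8))  # [1, 2, 3, 4]
```

A real controller adds timing (the droplet needs a dwell time per pad) and 2-D routing so that two droplets never occupy adjacent pads unless a merge is intended, but the energize-one-neighbor-at-a-time idea is the core of it.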

Standard PCB etching techniques can be used to make low-tech digital microfluidics devices



Make some simple biology lab tools

Posted by – February 3, 2009

Sometimes biology lab tools are really simple, and ridiculously obvious (e.g., a petri dish).   Yet much of the general public has the idea that biology, chemistry, or even nanotechnology is impossible without high-tech tools. Maybe it’s because the research papers always use big words, half of which are some form of Latin, or the tools are named after a dead guy with a crazy name (Erlenmeyer flask, anyone?).

Look at some of the tools used in most biology labs for proof.  Cheap office supplies are re-used as laboratory supplies.  Many biology tasks only need the most basic tool for pouring, scraping, mixing, holding, or electrocuting something.  These are easy jobs which only require simple tools.

A few more examples are provided in The Scientist article, “Let’s Get Physical: How to modify your tools to prevent pain at the bench”.   (Free registration might be required to view the article.)

Maybe some of the Makers out there will read this article and Make me something.  (Meredith wants to make something; I need simple lab tools.)

We Make the News Headlines: “Amateurs are trying genetic engineering at home”

Posted by – December 25, 2008

As a nice holiday surprise for me this week, my project (the Melaminometer) and a team member (Meredith L. Patterson) made it into Associated Press science news: “Amateurs are trying genetic engineering at home”.  The article is accurate, and is quoted below.  For the melaminometer project, we are also collaborating with National Yang Ming University in Taipei.

Amateurs are trying genetic engineering at home

Meredith L. Patterson, a computer programmer by day, conducts an experiment in the dining room of her San Francisco apartment on Thursday, Dec. 18, 2008. Patterson is among a new breed of techno rebels who want to put genetic engineering tools in the hands of anyone with a smart idea. Using homemade lab equipment and the wealth of scientific knowledge available online, these hobbyists are trying to create new life forms through genetic engineering – a field long dominated by Ph.D.s toiling in university and corporate laboratories.
(AP Photo/Noah Berger)

SAN FRANCISCO – The Apple computer was invented in a garage. Same with the Google search engine. Now, tinkerers are working at home with the basic building blocks of life itself.

Using homemade lab equipment and the wealth of scientific knowledge available online, these hobbyists are trying to create new life forms through genetic engineering — a field long dominated by Ph.D.s toiling in university and corporate laboratories.

In her San Francisco dining room lab, for example, 31-year-old computer programmer Meredith L. Patterson is trying to develop genetically altered yogurt bacteria that will glow green to signal the presence of melamine, the chemical that turned Chinese-made baby formula and pet food deadly.


Average Americans are Scared of “Synthetic Biology”

Posted by – November 20, 2008

Yes, believe it: non-synthetic-biologists have poor, even fearful, associations when synthetic biology is described to them:

Q: How do the descriptions of these technologies [synthetic biology] make you feel?

Female Respondent: I really thought of sci-fi movies, where, um, something is created in a laboratory, and it always seems great in the beginning, um, but, down the line, something goes wrong because they didn’t think about this particular situation or things turning this way.

Male Respondent: The “Jurassic Park” movie came to mind.

Female Respondent: It’s scary, why do we need to have new organisms? Why do we need to have, you know, you know, genetic engineering? Does it really help with anything? It’s really, it’s not going to help a common person like us. I don’t think, it’s not going to be for helping any of us.

Watch the video for yourself — promise, though, that you won’t throw your mouse at your screen:
Nanotech and Synbio: Americans Don’t Know What’s Coming: “This survey was informed by two focus groups (video – focus groups) conducted in August [2008] in suburban Baltimore [by The Project on Emerging Nanotechnologies Synbio Poll]. This is the first time—to the pollsters’ knowledge—that synthetic biology has been the subject of a representative national telephone survey.”

One of the men states he’s a biologist, and later says, “Who’s playing god here? Who are we as humans to think we can design or redesign life? It’s nice to be able to do it but is it right?”

While watching the video, keep in mind the benefits and limitations of focus groups (wikipedia: Focus groups).

Skunkworks Bioengineering — Prerequisites to Success?

Posted by – November 13, 2008

“Despite all the support and money evident in the projects, there is absolutely no reason this work could not be done in a garage. And all of the parts for these projects are now available from the Registry.” Rob Carlson, iGEM 2008: Surprise — The Future is Here Already, Nov 2008.

The question which should be posed is:

  • What does it really take to actually do this in a garage?

Of course I’m interested in the answer.  I actually want to do this in my garage.

(Let’s ignore, for a moment, the fact that many of the iGEM competition projects don’t generate experimental results due to lack of time in the schedule, so actual project results don’t mirror the project prospectus.)

Here is my short list of what is required:

  • Education (all at university level)
  • Experience
    • 1 year of industry or grad-level engineering lab research & design
    • 1 year of wet lab in synthesis
    • 2 more years of wet lab in synthesis if it’s desired to have a high probability of success on the project (see my SB4.0 notes for where this came from)
  • Equipment
    • Most lab equipment is generally unnecessary, since significant work can be outsourced.
    • Thermocycler
    • Incubator
    • Centrifuge
    • Glassware
    • Example setup: See Making a Biological Counter, Katherine Aull, 2008 (Home bio-lab created for under $500.)
    • Laptop or desktop computer
    • Internet connection
  • Capital
    • About $10k to $20k cash (?) to throw at a problem for outsourced labor, materials, and equipment (this cost decreases on a yearly basis).
  • Time (Work effort)
    • Depends on experience, on the scope of the problem, on project feasibility — of course.
    • 4 to 7 man-months to either obtain a working prototype or scrap the project.
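The checklist above can be captured as structured data; here is a minimal sketch, where every figure is the post’s own 2008 ballpark number rather than an independent estimate:

```python
# The garage-lab checklist above as structured data. All figures are
# the post's own ballpark numbers, not independent estimates.
requirements = {
    "education": "university-level",
    "experience_years_minimum": 2,      # 1 yr engineering lab + 1 yr wet lab
    "home_lab_equipment_usd": 500,      # Aull-style setup (see example above)
    "capital_usd": (10_000, 20_000),    # outsourced labor, materials, equipment
    "effort_man_months": (4, 7),        # prototype-or-scrap decision point
}

cap_lo, cap_hi = requirements["capital_usd"]
eff_lo, eff_hi = requirements["effort_man_months"]
summary = f"capital ${cap_lo:,}-${cap_hi:,}, effort {eff_lo}-{eff_hi} man-months"
print(summary)
```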

Although some iGEM team members come from unrelated majors such as economics or music, I’m not sure they support the “anyone can do this” mantra.  Of the iGEM competition teams that placed well for their work, all of the members were 3rd-year or 4th-year undergrads or higher.  The issue isn’t the equipment or the ability to outsource — it’s the human capital, the mind-matter, that counts: education and experience.  (Which, in the “I want to DIY my Bio!” crowd, is a rare find.)

With all that covered, it seems anyone can have their very own glowing bacteria.

“Biology is hard, and expensive, and most people trained enough to make a go of it have a lab already — one that pays them to work.”   — Katherine Aull (see above ref.)

Witch Hazel? Triclosan? Horsetail Extract? Camellia Sinensis Leaf?

Posted by – October 29, 2008

Breeze by the cosmetics section of the department store next time. There is a 1+ billion-dollar market being bought and sold right under, and on top of, everyone’s noses, using mostly experimental organic chemistry, wrapped up in advertisements of sexualizing, anti-aging, softening, cleansing, wrinkle-reducing, organic, non-animal tested, oil-free, oil-reducing, oil-enhancing, whitening, darkening, clarifying, and most of all, “all natural”.

The cosmetics industry seems ripe for synthetic-biology chemical factories. A short list of example ingredients in skin care products is below (disclaimer: quoted from Wikipedia).  Many of these ingredients do have supposed medicinal properties; some are questionable.

Witch Hazel

Witch hazel is an astringent produced from the leaves and bark of the North American Witch Hazel shrub (Hamamelis virginiana), which ranges from Nova Scotia west to Ontario, and south to Florida and Texas. This plant, native to Canada and the United States, was widely used for medicinal purposes by Native Americans. The witch hazel extract was obtained by steaming the twigs of the shrub.

The essential oil of witch hazel is not sold separately as a consumer product; the plant does not produce enough essential oil to make production viable. However, there are various distillates of witch hazel (called hydrosols or hydrolats) that are gentler than the “drug store” witch hazel and contain no alcohol.

Now for a PubMed article:

Highly galloylated tannin fractions from witch hazel (Hamamelis virginiana) bark: electron transfer capacity, in vitro antioxidant activity, and effects on skin-related cells, Touriño S, Lizárraga D, Carreras A, Lorenzo S, Ugartondo V, Mitjans M, Vinardell MP, Juliá L, Cascante M, Torres JL. Chem Res Toxicol. 2008 Mar; 21(3):696-704. Epub 2008 Mar 1.

Institute for Chemical and Environmental Research (IIQAB-CSIC), Jordi Girona 18-26, 08034 Barcelona, Spain.    PMID: 18311930

Witch hazel (Hamamelis virginiana) bark is a rich source of both condensed and hydrolyzable oligomeric tannins. From a polyphenolic extract soluble in both ethyl acetate and water, we have generated fractions rich in pyrogallol-containing polyphenols (proanthocyanidins, gallotannins, and gallates). The mixtures were highly active as free radical scavengers against ABTS, DPPH (hydrogen donation and electron transfer), and HNTTM (electron transfer). They were also able to reduce the newly introduced TNPTM radical, meaning that they included some highly reactive components. Witch hazel phenolics protected red blood cells from free radical-induced hemolysis and were mildly cytotoxic to 3T3 fibroblasts and HaCaT keratinocytes. They also inhibited the proliferation of tumoral SK-Mel 28 melanoma cells at lower concentrations than grape and pine procyanidins. The high content in pyrogallol moieties may be behind the effect of witch hazel phenolics on skin cells. Because the most cytotoxic and antiproliferative mixtures were also the most efficient as electron transfer agents, we hypothesize that the final putative antioxidant effect of polyphenols may be in part attributed to the stimulation of defense systems by mild prooxidant challenges provided by reactive oxygen species generated through redox cycling.

That doesn’t mean everyone should go rubbing witch hazel all over themselves..  though it does show that witch hazel does “something.”


Triclosan

Triclosan (IUPAC name: 5-chloro-2-(2,4-dichlorophenoxy)phenol) is a potent wide-spectrum antibacterial and antifungal agent. Triclosan is found in soaps (0.15-0.30%), deodorants, toothpastes, shaving creams, mouth washes, and cleaning supplies, and is infused in an increasing number of consumer products, such as kitchen utensils, toys, bedding, socks, trash bags, and some Microban treatments. Triclosan has been shown to be effective in reducing and controlling bacterial contamination on the hands and on treated products. More recently, showering or bathing with 2% triclosan has become a recommended regimen for the decolonization of patients whose skin carries methicillin-resistant Staphylococcus aureus (MRSA), following the successful control of MRSA outbreaks in several clinical settings.

Horsetail Extract

What! Yes, it says “Horsetail Extract” on the container. Though I didn’t find this as a real ingredient in wikipedia, I found it in the following patent.

United States Patent 5415861

Abstract: A method for reducing the visible size of facial skin pores by applying a novel composition which comprises an oil absorbing powder, a botanical astringent and a biological compound that alters the structure of the skin and/or the function of the sebaceous glands. [...]

Horsetail extract (Equisetum arvense) is a preferred compound because it contains significant amounts (>8%) of organic silicones. These silicones are known to regulate collagen cross linking and improve the structural framework of connective tissues in the skin. Like the alternative compositions, Horsetail extract functions on and below the skin surface to reduce pore size with regular application.

Camellia Sinensis Leaf

Camellia sinensis is the tea plant, the plant species whose leaves and leaf buds are used to produce tea. It is of the genus Camellia (Chinese: 茶花; pinyin: Cháhuā), a genus of flowering plants in the family Theaceae. White tea, green tea, oolong and black tea are all harvested from this species, but are processed differently to attain different levels of oxidation. Kukicha (twig tea) is also harvested from Camellia sinensis, but uses twigs and stems rather than leaves.

Tea extracts have become a field of interest due to their purported antibacterial activity; in particular, the preservation of processed organic food and the treatment of persistent bacterial infections are being investigated.

  • Green tea leaves and extracts have been shown to be effective against bacteria responsible for bad breath.
  • The tea component epicatechin gallate is being researched because in vitro experiments showed that it can reverse methicillin resistance in bacteria such as Staphylococcus aureus. If confirmed, this would mean that the combined intake of a tea extract containing this component could enhance the effectiveness of methicillin treatment against some resistant bacteria.

An amazing aspect of cosmetics is the historical basis for many of the ingredients, many of them in use for hundreds or thousands of years.

  • Category: Notes
  • Comments Closed

“SynBioSS: The Synthetic Biology Modeling Suite”

Posted by – October 20, 2008

SynBioSS (Synthetic Biology Software Suite) is a suite of software for the modeling and simulation of synthetic genetic constructs. SynBioSS utilizes the registry of standard biological parts, a database of kinetic parameters, and both graphical and command-line interfaces to multiscale simulation algorithms. SynBioSS is available under the GNU General Public License. Anthony D. Hill, Jonathan R. Tomshine, Emma M. B. Weeding, Vassilios Sotiropoulos, and Yiannis N. Kaznessis, Bioinformatics 2008 24(21):2551-2553; doi:10.1093/bioinformatics/btn468

Sounds neat; let’s try it. Interestingly, in discussions of modeling, the iGEM participants and biologists have thrown their hands in the air and declared that it is difficult or impossible to model biology. Maybe SynBioSS can do the impossible?  Except: there is no installer available for OS/X (as of this writing), and it seems many assorted packages are required.

Here are my install summary/notes/fixes for getting SynBioSS (version 1.0.1) running on OS/X (Leopard 10.5.5):

Word on the Street @ Synthetic Biology 4.0 – Day 3

Posted by – October 12, 2008

Word on the street from Synthetic Biology 4.0..  take a word or leave a word.

  • Venture capitalists and private investors are very interested in synthetic biology.  Significant buzz in the Bay Area regarding the term.  Large capital is required when making the transition from proven concept to, for example, a pilot plant for biofuel production.  Venture capitalists love to get excited because it’s their job to both get themselves excited and get others excited (excited enough to give them loads of money).
  • As on day 2, a laboratory-proven “built from the ground up” organism (a thing, separate from other things, that eats, replicates, grows, and divides) may be only years away.
  • A very innovative solution was proposed to slow the spread of HIV, using synthetic biology to create a “get-the-HIV-away” preventative medicine.  It seemed very well received as a different track from current antiviral treatments.
  • No one really knows how to model cells or modified cells.  (Again)
  • An interesting group at Caltech has proposed a method of compiling combinatorial logic into DNA sequences by converting a hardware netlist, giving each gate a unique signal output and thus eliminating the cross-talk problem; the approach has been theoretically modeled up to 1,000 gates and experimentally tested up to 4 gates.
  • A cynical undergrad says that engineering biology will never work and will never be able to be modeled, even though he has an iGEM project of his own.
  • A biology professor at a local Hong Kong university also says that engineering biology will never work; it is too unreasonable to expect biology to behave under known rules: “biology is not like that.”
    • The word “never” in both cases might be very surprising to some, especially coming from two people in the field, one of whom has built Biobrick device(s).
  • “DIY Bio is real!  Bio can be done as a hobby!  We want to let amateurs hack biology just like scientists do!  We need to apply rules to make it non-biohazardous, and then just do it.”
    • Strongly contrasting opinion to the “it will never work” biologists.
  • Cells can be made to change shape dramatically with specific (laser) light input.  Very freakily amazing video.
  • Which opinion is more correct: that engineering biology will work, that it won’t, or something in between, as of right now?
    • Drew Endy:  “The truth is somewhere in the middle.  Years ago, we had a lot of iGEM teams, and nothing worked.  Last year, we had let’s say 100 iGEM teams, and 10 teams had working devices.”  I conclude that the engineering process is improving through lab experience and raw data feedback.  Engineers eventually make nearly anything work (just ask Scotty).
    • Reshma Shetty (now at Ginkgo BioWorks): “It takes about 3 years to ‘get it’ [collect enough experience to be successful at creating biological devices].  Everyone seems to struggle until then.”
  • More back & forth related to the licensing issues of an “open source” biological library.
  • The Bay Area may have an accessible “Bio Fab Lab” in the years ahead, funded by public sources and aimed at improving the “open source” biological library.
  • Even the venture capitalists and synthetic biology company owners get history wrong, misstating facts: “This is like the IBM PC architecture, completely open, and enabling things like the open source movement later.”  Wrong!  In fact, the IBM PC firmware was proprietary and backed by lawyers with IBM’s huge deep pockets; it was Compaq who, through a legal clean-room reverse-engineering process that worked around IBM’s intellectual property, cloned the IBM PC firmware to a compatible version, thus inventing the clone-PC market (while IBM vehemently objected and litigated).  Please read the history books (I would suggest Hackers, by Steven Levy, as a starting point). Most of the “this is like open source with computers” analogies are.. well.. off by a factor of two. At least a factor of two.
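The netlist-to-DNA idea in the Caltech bullet above can be illustrated with a toy compiler. This is a hedged sketch of the general scheme only, not the group’s actual method; the gate names, signal names, and `compile_netlist` function are all invented for illustration:

```python
# Toy illustration: assign each gate in a combinatorial-logic netlist
# its own unique output signal, so no two gates share a signal and
# cross-talk is eliminated by construction.

def compile_netlist(netlist, signal_pool):
    """Map each gate to a unique signal and wire inputs accordingly.

    netlist: dict of gate name -> list of input names (primary inputs
             such as "inA" appear only on the input side).
    signal_pool: sequence of orthogonal signal identifiers, at least
                 as long as the netlist.
    """
    if len(signal_pool) < len(netlist):
        raise ValueError("not enough orthogonal signals for all gates")
    signals = iter(signal_pool)
    # One unique signal per gate output.
    assignment = {gate: next(signals) for gate in netlist}
    # Each gate listens for the unique signals of its upstream gates.
    wiring = {
        gate: [assignment[src] for src in inputs if src in assignment]
        for gate, inputs in netlist.items()
    }
    return assignment, wiring

# A 4-gate example, matching the scale reportedly tested so far.
netlist = {
    "NOT1": ["inA"],
    "NOT2": ["inB"],
    "NOR1": ["NOT1", "NOT2"],
    "NOR2": ["NOR1", "inA"],
}
pool = [f"signal_{i}" for i in range(len(netlist))]
assignment, wiring = compile_netlist(netlist, pool)
assert len(set(assignment.values())) == len(netlist)  # all outputs unique
```

Because every gate output is a distinct signal, the design sidesteps cross-talk entirely; the practical limit becomes how many orthogonal signal molecules are available in the pool.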
All quotes above are not to be taken literally.  Any resemblance to actual persons is entirely coincidental.  The contents of this article and this web site (web log) are Copyright with All Rights Reserved.  No content may be used without explicit written permission.  (This is to prevent quoting out of context.)

For those who aren’t familiar with synthetic biology, I will quote the Synthetic Biology 4.0 web site:

What are the applications of Synthetic Biology?

BioEnergy. Cells are being engineered to consume agricultural products and produce liquid fuels. British Petroleum and the US DOE granted $650 million for research in the San Francisco Bay Area.

Drug Production. Bacteria and yeast can be re-engineered for the low cost production of drugs. Examples include the anti-malarial drug Artemisinin and the cholesterol-lowering drug Lipitor.

Materials. Recombinant cells have been constructed that can build chemical precursors for the production of plastics and textiles, such as Bio-PDO and spider silk.

Medicine. Cells are being programmed for therapeutic purposes. Bacteria and T-cells can be rewired to circulate in the body and identify and treat diseased cells and tissues. One such research program is the NIH-funded Cell Propulsion Laboratory at UCSF.

Synthetic Biology is a new approach to engineering biology, with an emphasis on technologies to write DNA. Recent advances make the de novo chemical synthesis of long DNA polymers routine and precise. Foundational work, including the standardization of DNA-encoded parts and devices, enables them to be combined to create programs to control cells. With the development of this technology, there is a concurrent effort to address legal, social and ethical issues.

How is this different from genetic engineering?

Synthetic Biology builds on tools that have been developed over the last 30 years. Genetic engineering has focused on the use of molecular biology to build DNA (for example, cloning and PCR) and automated sequencing to read DNA. Synthetic Biology adds the automated synthesis of DNA, the setting of standards and the use of abstraction to simplify the design process.