Synthetic biology aims to create biological parts that can be connected together to form larger functional devices, and many hope the most popular library of parts will be “Open Source”. Openly publishing large collections of biological parts is appealing, as it would accelerate engineering progress and rapidly disseminate the technology.
There’s one big drawback to open source though: where do you go when it doesn’t work? This is called the support issue. Presumably, there’s a “community of experts” who monitor problems and provide fixes for others. More often, though, the users themselves have to become experts, or they abandon the project. (A secondary question, which I posed in my licensing discussion, is: who do you sue when it does something wrong?)
I recently ran across the following blog article from a popular web hosting company (bluehost.com) describing their use of Linux (properly called GNU/Linux, since Linux is only a small part of the operating system, and a tapestry of GNU software makes up more than 90% of a “Linux system”). This web hosting company is very popular with individuals and small companies, and its profitable existence owes much to open source software (although it’s reported that their servers experience unhealthy downtime). Without open source software, the company couldn’t exist; the cost of licensing equivalent software would make their service very unprofitable.
The following quote is telling [1]:
“Whenever we see ANY bottleneck in the system whether it be CPU, I/O Block Device, Network Block Device, Memory, and so on we find out EXACTLY what is causing the problem. When I say we find the problem, I mean we go down to the actual code in the kernel and see exactly where the issue is. Sometimes that gives us the answer we need to the solve the problem and other times it is a bug in the kernel itself that we need to create a patch for.” (The full article is quoted below)
The translation is this: the users of the software must continually monitor its performance, and if it exhibits any problems, they are sometimes required to dig deep into the code of the Linux kernel! If “Open Source Biotech” follows the “Open Source Software” model, then users will end up with biological devices exhibiting a variety of behaviors, and when the behavior isn’t desired… the user may have to dig far down into the DNA sequence of the biological device to see exactly what is going “wrong”.
And by the way, the Linux kernel doesn’t have very much design documentation (compared to commercial software). In fact, documentation for the software is pretty much considered an afterthought. Looking at the Linux kernel source itself even turns up code comments such as: “/* Do something with X here. Not sure why this works. */”
Many companies in the open source software field actually make more money from fixing the problems other people have with open source than from anything else; these are service companies that simultaneously sell “mostly working” open source software and, later on, offer to fix it, configure it, or customize it. IBM is perhaps the largest example, and Red Hat is another. This is an interesting natural occurrence, as the initial authors of the open source software are usually not interested in the “service and customization” aspect; they’re only interested in the early design and implementation of the software.
The full article [1] is below (I’ve added some color for emphasis); after the quote, I’ll throw out some problem-solving ideas for the synthetic biology community.
The Linux Kernel = The Solution = The Problem
November 3rd, 2008
Linux is an amazing operating system. I have written about it several times over the years. All of our company’s servers run on CentOS Linux (about a thousand) and a good chunk of our desktops and work stations run Ubuntu linux. The speed and reliability of the main linux kernel (Think engine of a car) is unmatched in my estimation. What is even more impressive about the Linux kernel is its unbelievably rapid pace of development. Herein lies our problem.
The linux kernel is developed simultaneously for so many different work loads that it is impossible to ship a Linux Distribution that is tuned for your specific workload. If I could use the car analogy one more time – Linux could just as easily be an all electric 50 HP car, or a 1,000 HP Dragster, or a Semi Truck. It literally can and is used to power your phone to the fastest super computer in the world. With that type of flexibility how do you wrangle the most power out of the linux kernel for your specific needs?
Unfortunately, right now the answer is – With a GREAT DEAL of effort. I mentioned earlier that we use CentOS. This is a free rebranded version of Redhat Enterprise Linux which is a server class linux distribution. Meaning it is geared toward customers with a heavy server workload. The problem is the linux kernel they use in CentOS is SLOW, outdated, and certainly not tuned to our workload.
In a way this is a big benefit for us. I know of no other hosting company on the planet that spends more time and effort squeezing the most out of the linux kernel than Bluehost and Hostmonster. Whenever we see ANY bottleneck in the system whether it be CPU, I/O Block Device, Network Block Device, Memory, and so on we find out EXACTLY what is causing the problem. When I say we find the problem, I mean we go down to the actual code in the kernel and see exactly where the issue is. Sometimes that gives us the answer we need to the solve the problem and other times it is a bug in the kernel itself that we need to create a patch for.
Using these techniques we have been able to solve disk I/O issues and many other bottlenecks that have plagued EVERY linux distribution that I have ever tried. Linux is extremely capable, but it needs brilliant people that understand all aspects of the kernel to achieve that kind of speed and performance from a stock kernel.
I feel that given the same exact hardware specifications that we could generally squeeze at least twice the performance from MySQL, Apache, PHP, and underlying CPU and IO intensive tasks compared with any server loaded up with a standard install of CentOS or Redhat Enterprise Linux.
While I think that is great, and it gives us a huge advantage over our competitors I don’t really think it ought to be that way. Many of these tunable options simply need to be applied to a system to make it faster, while other methods require a historical approach before tuning becomes effectively.
Regardless, my opinion is that the linux kernel DESPERATELY needs some software that will real time evaluate you servers CPU, Memory. I/O bottlenecks and makes these adjustments for you on the fly based on a constantly changing workload. This should have been written a long time ago. Having a thousand different tunable options is great if you know what every one does and how it affects every other setting. My experience is that 1% of the users know about 10% of the possible settings to change and the rest never gets touched. Thats a shame, because as fast as Linux is, as long as these “default” kernel settings are in place you will NEVER get the performance that you could out of existing hardware without your own kernel guys on staff (Like we do).
Realtime kernel adjustment by the kernel itself for the workload being operated on is what the kernel needs now. I can tell you that we have some aspects of this done already. If nobody else steps up the plate, we may release our own version of this to the community at large to build up and use for the betterment of all.
Thanks,
Matt Heaton / Bluehost.com
In summary, the owner of the company is stating that the GNU/Linux operating system is both the solution and the problem: fixing the problem requires enormous expertise, using the operating system efficiently also requires enormous expertise, the need for this expertise in essence gives him an edge over competing companies, and the GNU/Linux operating system has needed, for years, a feature that would dramatically lessen the need for everyone to have expertise in managing a particular software function.
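To make the quote’s “thousand different tunable options” concrete, here is a minimal sketch (in Python, purely for illustration; the paths are standard /proc/sys sysctl entries, though which ones exist varies by kernel version and configuration) that prints a handful of those settings on a running Linux system:

```python
#!/usr/bin/env python3
"""Illustration only: print a few common Linux sysctl tunables.

The paths are standard /proc/sys entries on most Linux systems,
but availability varies by kernel version and configuration.
"""
from pathlib import Path

TUNABLES = [
    "vm/swappiness",        # how aggressively the kernel swaps
    "vm/dirty_ratio",       # % of memory allowed to hold dirty pages
    "net/core/somaxconn",   # maximum pending TCP connection backlog
    "fs/file-max",          # system-wide open file handle limit
]

for name in TUNABLES:
    path = Path("/proc/sys") / name
    try:
        value = path.read_text().strip()
    except OSError:
        value = "<not available on this kernel>"
    # Report in the familiar dotted sysctl notation, e.g. vm.swappiness
    print(f"{name.replace('/', '.'):<22} = {value}")
```

Each of these is one knob among hundreds, and knowing which ones matter for a given workload is exactly the expertise problem described below.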
To highlight the specific problems with open source that I am posing here:
- Open source has a documentation problem. The designers usually don’t document the parts in either user documentation or design documentation.
- Open source has an expertise problem. The users are required to dig deep into the part internals, in order to find bugs or to figure out how to properly connect or optimize the parts.
- Open source has a debugging problem. The users are required to design & build the part internals necessary for debugging, internals “which should have been written a long time ago”. (The example mentioned is real-time monitoring; considering how long the Linux kernel has been usable, it is amazing that real-time monitoring, to this day, does not exist as a built-in function. I have had to implement it myself in the Linux kernel multiple times, and subsequently throw it away multiple times, because the official Linux kernel gets upgraded and becomes incompatible with my unofficial monitoring software. A minimal userspace sketch of this kind of monitoring follows.)
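As an illustration of the missing “real time monitoring”, here is a minimal userspace sketch (not the kernel-level version described above, and not Bluehost’s tooling) that samples the standard Linux /proc/stat and /proc/meminfo interfaces once per second; the MemAvailable field assumes a reasonably recent kernel:

```python
#!/usr/bin/env python3
"""Userspace sketch of real-time CPU and memory monitoring on Linux.

Samples /proc/stat and /proc/meminfo once per second and prints rough
utilisation figures. Observation only; it does not adjust any kernel
settings. Stop it with Ctrl-C.
"""
import time

def cpu_times():
    # First line of /proc/stat: "cpu  user nice system idle iowait irq ..."
    with open("/proc/stat") as f:
        values = [int(v) for v in f.readline().split()[1:]]
    idle = values[3] + values[4]   # idle + iowait
    return idle, sum(values)

def mem_used_percent():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])   # values are in kB
    used = info["MemTotal"] - info["MemAvailable"]
    return 100.0 * used / info["MemTotal"]

prev_idle, prev_total = cpu_times()
while True:
    time.sleep(1.0)
    idle, total = cpu_times()
    busy = 100.0 * (1.0 - (idle - prev_idle) / max(total - prev_total, 1))
    print(f"cpu: {busy:5.1f}%   mem: {mem_used_percent():5.1f}%")
    prev_idle, prev_total = idle, total
```

The observing half is straightforward from user space; the hard part the quote asks for, adjusting kernel settings on the fly in response to what is observed, is what still requires “kernel guys on staff”.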
There are several solutions that could help smooth out these problems. These solutions are simple, yet require some effort, and on open source projects the designers often need extra encouragement in these areas.
- Work on the documentation problem:
- Pair a good documenter with the designers. The designers are usually focused on the internals, and spending time on “write-ups” is a big distraction.
- Make it ridiculously easy to write documentation. In some cases, good documentation is lacking because the designer could document a part of the design quickly, but “starting up the Word Processing Application takes too long.” This is where lightweight web tools, such as wikis, can help.
- Begin the documentation before the project starts. Document the design before building anything. This almost always exposes problems in the design, so it’s worth doing anyway. (Yet, few designers do this.)
- Create template documentation that allows for “fill in the blanks” writing. This could be a template for a wiki page, or it could mean focusing all project documentation effort on a single design until one very good example is completely polished and available for cut-and-paste. (A small sketch of such a template appears after this list.)
- Follow the “agile programming” or “extreme programming” design methodology, which iteratively collects project requirements, writes thin documentation, and allows designers to build, mostly as parallel efforts. This way, the documentation evolves with the design (rather than being an “end of project corner-cutting effort”).
- Create standard ways of including documentation inside the design. I’m not sure whether this is possible with the DNA itself, though it is definitely possible in the biology markup languages. This is one issue specific to biology.
- Archive and publish intermediate results electronically, for later retrieval. In some cases, the original design tests provide chunks of data for later documentation.
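As a sketch of the “fill in the blanks” idea above, the snippet below generates a wiki-style part page from one template. The field names and the example content are hypothetical, chosen only to show the shape of such a template:

```python
"""Sketch of a "fill in the blanks" part documentation template.

The field names and the example entry are hypothetical; a real project
would settle on its own template (wiki page, README, or datasheet).
"""
from string import Template

PART_PAGE = Template("""\
== $name ==
* Intended function:    $function
* Inputs:               $inputs
* Outputs:              $outputs
* Tested in:            $context
* Known failure modes:  $failures
* How to test it:       $test
""")

print(PART_PAGE.substitute(
    name="Arsenic-inducible promoter (illustrative entry)",
    function="Drive reporter expression in the presence of arsenic",
    inputs="Arsenate / arsenite concentration",
    outputs="Transcription of the downstream reporter",
    context="E. coli lab strain, plate and liquid culture",
    failures="Leaky expression at high copy number (hypothetical)",
    test="Induce at known concentrations and compare against an uninduced control",
))
```

The point is that the designer only fills in values; the structure, and the pressure to state failure modes and a test, comes for free.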
- Work on the expertise problem:
- Create open source communities where the original designers stick around. This could mean they stay on project mailing lists, register accounts on project web sites, or transition into “mentor” roles as soon as the design is done.
- Archive the original design discussions (emails, meeting notes, etc.), which should include the informal discussion of design choices.
- Create a larger community; one primary principle of the “open source bazaar” is that larger communities offer more help and provide a greater breadth of experience.
- Create a tighter community; reward members for participation and bring community members physically together. This provides more dedication to projects.
- Allow users to easily contribute their issues back to the designers for quick diagnosis. This is done with bug reporting and issue tracking systems in open source software. In the biology realm, I haven’t seen this yet; the many BioBrick users aren’t connected to each other at all, for example, and usually don’t share their experimental failures outside their own labs. (A sketch of what such an issue report might contain follows.)
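To make the “contribute issues back to the designers” idea concrete, here is a hedged sketch of what a structured issue report for a biological part might carry, modeled on software bug trackers. The fields and example values are hypothetical, not an existing BioBrick service:

```python
"""Sketch of a structured "bug report" for a biological part.

Modeled on software issue trackers; every field and value below is
hypothetical and exists only to show what such a record might carry.
"""
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PartIssue:
    part_id: str        # e.g. a BioBrick-style identifier
    reporter: str       # lab or user reporting the problem
    host_context: str   # strain, plasmid, growth conditions
    expected: str       # behavior the documentation promised
    observed: str       # behavior actually measured
    severity: str = "unconfirmed"
    opened: date = field(default_factory=date.today)

issue = PartIssue(
    part_id="BBa_XXXXXXX (placeholder)",
    reporter="example field team",
    host_context="E. coli, low-copy plasmid, 30 °C",
    expected="Visible reporter output in contaminated water samples",
    observed="No detectable output at any tested concentration",
)
print(issue)
```

Even this much structure turns a private lab failure into a searchable record that the original designers can act on.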
- Work on the debugging problem:
- Encourage designers to build testing methods in parallel with the design. This philosophy has many different names; “Design for Test” was a popular name for it a couple of years ago.
- Publish methods of testing the system along with the system itself.
- Pair the design engineers with a test engineer. The test engineer documents and archives the results of the tests. This is a different task from the designer’s focus, and it can be distracting for the designer.
- Have designers use the “unit test” design method where possible: each part is designed together with a specific test. When the design is published, the unit test is also published; if the design is enhanced, the unit test is enhanced too. (An example of what such a test might look like follows.)
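As a sketch of the unit-test idea applied to a part, the example below pairs a placeholder assay function with two published tests; the assay, the thresholds, and the part behavior are illustrative assumptions, not a real protocol:

```python
"""Sketch of a "unit test" published alongside a part.

The assay function is a placeholder standing in for whatever wet-lab or
simulated measurement the designers actually use; thresholds are invented.
"""
import unittest

def measure_reporter_output(arsenic_ppb: float) -> float:
    """Placeholder assay: returns reporter signal in arbitrary units."""
    # A real implementation would run (or model) the actual measurement.
    return 0.0 if arsenic_ppb < 10 else 42.0

class ArsenicDetectorTest(unittest.TestCase):
    def test_silent_without_arsenic(self):
        self.assertLess(measure_reporter_output(0.0), 1.0)

    def test_responds_above_threshold(self):
        self.assertGreater(measure_reporter_output(50.0), 10.0)

if __name__ == "__main__":
    unittest.main()
```

If the part’s design changes, these tests change with it and travel with the published part.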
Hopefully, OpenWetWare and BioBricks can improve on the process of “Open Source” for biology. Does anyone really want to be stuck in the desert debugging down to the DNA level of their BioBrick arsenic detector when the water doesn’t turn red? I don’t think so.
One final point: from a profitability standpoint, this kind of servicing is the last thing a business wants in its quest for financial earnings. “Keeping experts on staff” for eventual problems costs money and time; the quality of the expertise is never certain (i.e., it is usually unknown whether the open source problems can be fixed); and fighting open source issues takes time, which delays revenue or lets the competition beat a product to market. From a business point of view, open source creates a fundamental risk factor that is very difficult to quantify: questions such as “Will these parts work together? Will they work in this configuration and this environment? Can they be tuned to work better?” can quickly become bottomless pits for engineering resources to fill. Business prefers defined-risk environments, such as buying a part or device that has been fully qualified and quantified to work, and if it doesn’t work, a well-defined warranty and support agreement provides a backup. Solving these open source issues is very important if open source technology is to be attractive to businesses.