Dr. Gideon Lapidoth
Why I started Enzymit
I know it sounds cliché, but from an early age I had a fascination with science. I remember as a kid sitting in front of the Israeli Science channel and watching Caltech's video lectures by Prof. David Goodstein, 'The Mechanical Universe' and 'Beyond the Mechanical Universe'. Prof. Goodstein's lectures had a way of intertwining science with great storytelling. But for me it was more than that: they presented equations that had an almost magical power. These mysterious symbols drawn on a chalkboard could concisely describe complex physical phenomena like an arc forming in a Van de Graaff generator or iron filings following the invisible lines between magnetic poles. I knew I wanted to be part of that cadre. I wanted to be a scientist.
When I was 15, my dad bought me a book by Ed Regis called 'Nano'. This was my first in-depth exposure to the field of nanotechnology. I remember two specific images from that book. The first was the IBM logo, made by IBM research scientists manipulating individual atoms with a scanning tunneling microscope (STM). The second was a circle of atoms, made by the same team at IBM.


The figure on the left is the first example of scientists manipulating individual atoms, created by IBM scientists in '89. The figure on the right, also created at IBM, in '93, is known as a 'quantum corral'; aside from being a cool example of creating nano-objects, it is also a quantum well confining surface-state electrons. The wave patterns on the bottom right are actual standing waves of electron density.
These images had such sway over me, especially the second one, which reminded me of a cog. There was something fascinating about the possibility of building miniature cogs and transmissions out of individual atoms. Just imagine building a tiny car put together atom by atom. The appeal for me was this concept of simplicity that could generate science-fiction-level technology. Could I really build molecular machines using the same concepts I used to build LEGO models?
Of course, this is impossible. Unfortunately, objects on the atomic scale do not behave as we would expect from our day-to-day experiences, so inferring from our macroscopic world to the atomic world is misleading. For example, at the atomic scale van der Waals forces dominate, making molecules stick together, and water is no longer a continuous medium but a granular substance constantly bombarding our molecular machine.
In the same book I was exposed to another piece of history, a transcribed lecture by Richard Feynman, 'There's Plenty of Room at the Bottom'. There, Prof. Feynman lays out what many consider the foundations of nanotechnology. Many of his predictions about computation and miniaturization proved incredibly accurate. More relevant to our discussion, Feynman lays out his vision for how we might achieve nano-manufacturing. In the first approach he describes a set of remotely operated 'slave' hands of ever-decreasing size: starting with full-sized hands, each subsequent pair is built at one-quarter the scale of its predecessor, until we reach the atomic scale.


A sculpture created by Mathew Biederman, inspired by Feynman's lecture. Using a technique called two-photon polymerization, and with the assistance of Christian Maibohm at the Iberian Nanosystems Laboratory, a set of hands (right image) was fabricated at a scale of less than a millimeter. The figure on the left shows the sculpture at 1,000-fold magnification.
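As a quick aside on the scales involved, here is a back-of-the-envelope sketch of how many quarter-scale generations Feynman's scheme would need to shrink from human-sized hands down to atoms. This is entirely my own illustration; the starting hand size is an assumed figure, not from the lecture.

```python
import math

# All figures below are my own illustrative assumptions.
hand_size = 0.1               # meters: assume the first pair of hands spans ~10 cm
atom_size = 1e-10             # meters: rough diameter of a single atom
scale_per_generation = 0.25   # each pair of hands is 1/4 the size of its predecessor

# Count how many 4x shrinks are needed to get from hand scale to atomic scale.
generations = math.log(hand_size / atom_size) / math.log(1 / scale_per_generation)
print(math.ceil(generations))  # -> 15
```

Only about fifteen generations, at least on paper; as noted above, physics at the bottom rungs of that ladder refuses to cooperate.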
This approach couldn't have succeeded, for some of the reasons I mentioned above. However, continuing his thought experiment, Prof. Feynman points to another approach for achieving mastery over the nano-realm: biology. In his lecture he poses the challenge of compressing all of the world's books into something smaller than the head of a pin. He calculated that if we could encode one bit in a cube of 5×5×5 atoms (125 atoms in total), we could store all of the world's books on a grain of dust. While this seemed fantastical at the time (well, actually not just at the time; even today we need about 1,000,000 atoms to store one bit), Feynman goes on to point out that DNA, which has been around for billions of years, can store one bit of information using only 50 (!) atoms.
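To get a feel for the numbers, here is a rough sanity check in Python. The book count and book size are hypothetical figures I chose for illustration, not Feynman's own estimates.

```python
# Rough sanity check of Feynman's storage estimate.
# The library figures below are my own assumptions, purely illustrative.

ATOMS_PER_BIT = 5 ** 3            # a 5 x 5 x 5 cube of atoms per bit = 125
BOOKS = 100_000_000               # assume ~100 million distinct books
BITS_PER_BOOK = 1_000_000 * 8     # assume ~1 MB of text per book

total_atoms = BOOKS * BITS_PER_BOOK * ATOMS_PER_BIT
print(f"{total_atoms:.1e} atoms")  # -> ~1e17 atoms

# A ~100 micron grain of dust contains on the order of 1e16-1e17 atoms,
# so at Feynman's density the entire library really does fit on a speck.
```

And DNA, at roughly 50 atoms per bit, beats even that density by a factor of two and a half.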
This, in a nutshell, is the whole premise of synthetic biology: reverse-engineering how nature does things so we can use the same manufacturing tools to build new things.
I remember reading this and immediately grasping the truth of this idea. Of course this should work; it's been working for billions of years.
I read 'Nano' back in '98; at the time, computational protein design was at its nascent stage, with the first papers published by Dr. Stephen Mayo, now a principal investigator at Caltech.
Fast forward 10 years and I am a biology undergrad at Tel-Aviv University. As an elective I took a course called 'Computational Structural Biology'; I didn't know much about the subject, but the premise sounded interesting. I think it was in the first lecture that a solved 3-dimensional structure of ATP synthase was shown next to an image of an electric motor.



The ATP synthase motor uses an ion gradient to rotate the central stalk, which in turn induces conformational changes in the αβ subunits that bind ADP and phosphate and release ATP. The mechanism can be reversed, so the same molecular engine can use ATP hydrolysis to pump ions and create a chemical potential if needed. The image on the right is of an electric motor. Although the same nomenclature is used (rotor and stator) and they are visually similar, the mechanisms are of course different: an electric motor uses the magnetic field created by the current running through the stator to rotate the rotor, whereas ATP synthase uses a chemical gradient.
I remember thinking how incredibly intuitive it looked: one can look at a protein's structure and infer its function. This was the same wonder I had felt seeing the image of single atoms arranged in a circle. We were back in business; one could simply manipulate protein building blocks to form a desired structure!
This idea is not as far-fetched as you might think. In fact, the first completely de novo designed protein (dubbed Top7) was created in 2003 by Dr. Brian Kuhlman, then a postdoc in the lab of Prof. David Baker. The importance of this success cannot be overstated. What Dr. Kuhlman achieved was the ability to define a predetermined 3-dimensional protein structure and then, using very clever algorithms, find a sequence of amino acids that would produce that fold, which, by the way, was not known to exist in nature back then. This moment marked the beginning of the age of Rosetta, a sophisticated software suite that enables scientists all around the world to design artificial proteins. Rosetta, almost 20 years after its conception, still dominates the computational design software landscape. But more about this in a later post.
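To make 'very clever algorithms' slightly more concrete, here is a minimal sketch of the propose-score-accept loop at the conceptual heart of fixed-backbone sequence design. This is my own toy illustration, not Rosetta's actual implementation; the scoring function is a placeholder standing in for a real, physically motivated energy function evaluated on the structure.

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def design_sequence(score_fn, length, n_steps=10_000, temperature=1.0):
    """Toy fixed-backbone sequence design by Metropolis Monte Carlo.

    score_fn(seq) -> energy (lower is better). In real design software this
    would be a physical energy computed on the 3D structure; here it is
    whatever placeholder the caller supplies.
    """
    seq = [random.choice(AMINO_ACIDS) for _ in range(length)]
    energy = score_fn(seq)
    for _ in range(n_steps):
        pos = random.randrange(length)
        old_residue = seq[pos]
        seq[pos] = random.choice(AMINO_ACIDS)  # propose a point mutation
        new_energy = score_fn(seq)
        delta = new_energy - energy
        # Metropolis criterion: always keep improvements; occasionally keep
        # worse moves so the search can escape local minima.
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            energy = new_energy
        else:
            seq[pos] = old_residue  # reject: revert the mutation
    return "".join(seq), energy

# Hypothetical demo score: reward hydrophobic residues at even positions.
toy_score = lambda seq: -sum(
    1 for i, aa in enumerate(seq) if i % 2 == 0 and aa in "AVILMFWY"
)
best_seq, best_energy = design_sequence(toy_score, length=30)
```

Real design software layers far more on top of this (rotamer libraries, simulated annealing schedules, a carefully parameterized energy function), but the core idea of searching sequence space for low-energy solutions is the same.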
My final thoughts on enzyme design, or: why now?
In this first post I tried to convey what excites me about this field of computational protein design, which in turn is a sub-domain of a larger technological field called 'Synthetic Biology'. The term means different things to different people; in my book it is the field of engineering biological components, anywhere from DNA molecules all the way up to whole organisms.
A number of enabling technologies have seen dramatic improvements in the past decade that have made our jobs significantly easier, chief among them DNA synthesis and sequencing. Once we have created designs on the computer, we need to get the instructions for making them into a production host, usually bacteria (repurposing nature). Ten years ago, the cost of synthesizing a single gene was a few thousand dollars. Now, thanks to falling synthesis costs as well as techniques such as gene assembly, we can build whole libraries of genes encoding novel proteins for the same cost or even less, and the price keeps dropping. This allows us to increase our throughput of testing computationally designed models and increase our rate of learning, which matters because of the inherent uncertainty that is still part of computational protein design. This tight feedback loop between design and testing can help us reduce, and hopefully eliminate, the gap between the computer's output and the actual results. We are getting there.

I know people have hailed the coming of biology as the new industrial revolution for nearly 30 years, and to the outside observer the change may seem slow to arrive, but as with any exponential process the initiation is slow, and you might miss it until you are already in its midst. I truly believe that we are in a period equivalent to that of personal computers in the late '70s. The costs of building components have dropped dramatically over the past 10 years, and easy access to information and data has led to the creation of a subculture of 'bio-hackers': individuals playing with pretty sophisticated molecular biology in their garages, mirroring the subculture that brought us Apple, Microsoft, Dell, and many others. Just kids excited about playing with new technology, with applications that grew organically out of their fascination.