May 14, 2020
In a previous interview with Culture’s CEO and cofounder, Will Patrick, we explored the origins of Culture Biosciences and why we decided to build our own bioreactors. Now, speaking with Culture’s CTO and cofounder, Matt Ball, we dive into the technical challenges he and Will faced while building our cloud bioreactor lab infrastructure.
Max: When I speak with people about Culture and the reasons why we decided to build our own bioreactors, I often get a response along the lines of “that makes sense, and it is a great idea, but what made Matt and Will think they could just build a new bioreactor?” In other words, how did two engineers without a background in biology manage to build the hardware and software for a very complex biological system so quickly?
Matt: Will and I are very optimistic people, and we’re deep believers in both the potential of new technology and in our ability to build it. Before we started talking about the idea for Culture, I was already thinking about ways to improve tools for life scientists. My girlfriend’s lab at Stanford works with mice; they have to understand and track different mouse behaviors all the time, which is incredibly manual and time-consuming. I was wondering why there weren’t tools that could automate and digitize this data collection when Will reached out with some very similar ideas. Will and I have a long history of collaborating - we were college roommates and even worked together at one point - so we were confident we could build something together.
Once we started shadowing local biotech companies, building a better benchtop bioreactor solution immediately stood out as an opportunity to solve a major problem by building something that fit really well with our respective skill sets. We both had past experience developing prototypes for new hardware and electronic systems, so we believed that if we followed a playbook of iteration and modular design, we could work toward a better product sequentially. Of course we knew that there would be significant challenges along the way, and there certainly were, but by using tried-and-true design and engineering principles to develop feedback loops we overcame them.
Max: By past experience, you mean your and Will’s work building autonomous drones for Project Wing at Google X?
Matt: Our work on Project Wing definitely influenced our approach in building the bioreactors, but I also had experience working for several other companies developing and deploying infrastructure-related technology. My first role after college was with a company that built hardware for indoor sensing and leveraged that data for a variety of applications, including energy savings and monitoring capacity and space utilization. After that, I worked at a company which developed hardware and software to analyze water infrastructure with the goal of improving quality standards. I was then at Google X for about two years before joining a startup that built equipment to enable cellular and internet connectivity in remote areas.
Max: How did these diverse hardware and software engineering experiences influence the early work at Culture? Is building telephony equipment anything like building a bioreactor?
Matt: A common thread through my career has been an interest in infrastructure; I’ve always been attracted to the challenges of building critical systems and the operational challenges of maintaining them. Even Project Wing was part of this - the vision there was for the drones to be part of a network for delivering medical equipment, and we even at one point discussed using them to enable a sharing economy.
This focus on infrastructure heavily influenced our approach to building our bioreactor system. Instead of thinking about building a bioreactor, we thought about it from the beginning as building an individual data collection unit that would be part of a larger interconnected system. I remember us thinking that we wanted the bioreactors to resemble a server rack; from the outset we were envisioning that the bioreactor lab would look like a data center with tens of thousands of reactors in a huge facility.
Max: What specifically about the design of the reactors was influenced by that vision?
Matt: For starters, the design of the bioreactors themselves. The shape of the control towers for each reactor is so narrow and tall because it mimics the stacking of a server in a rack. By thinking about each bioreactor as a singular unit of computation, we built a system to efficiently space these independent units together in our facility.
The notion that each reactor should be a completely independent unit also comes from experience working with critical infrastructure. Analogous bioreactor systems typically operate in blocks of 4, 8, or 12, but we wanted something that would have more fault tolerance. We never wanted to be in a situation where one piece of hardware could doom others or bring down a larger experiment. The software is designed in the same way - we took a similar approach of building a system made up of individual units of computation.
Max: How does that work?
Matt: Our data architecture is built such that each reactor runs only very low-level software (the embedded layer) for tasks like driving pumps or heating and cooling, while the bulk of the control systems live in the cloud. This kind of interprocess communication is a fairly standard approach in robotics: our reactors have about 15 different active processes and we communicate with them over about 150 different channels. This design was definitely influenced by what I saw work well with Project Wing - using a variety of independent processes where any one can fail on its own without causing catastrophic problems.
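The independent-processes-over-channels pattern Matt describes can be sketched with a toy publish/subscribe bus. The channel name and message below are invented for illustration, not Culture’s actual ones:

```python
import queue

class ChannelBus:
    """Toy in-memory publish/subscribe bus keyed by channel name."""
    def __init__(self):
        self.subscribers = {}  # channel name -> list of subscriber queues

    def subscribe(self, channel):
        q = queue.Queue()
        self.subscribers.setdefault(channel, []).append(q)
        return q

    def publish(self, channel, message):
        # Deliver to every subscriber; a slow or dead subscriber only
        # affects its own queue, not the publisher or its peers.
        for q in self.subscribers.get(channel, []):
            q.put(message)

bus = ChannelBus()
temps = bus.subscribe("reactor/temperature")
bus.publish("reactor/temperature", {"celsius": 30.1})
msg = temps.get()
```

In a real system each process would run in isolation (separate OS processes or services), so a crash in one subscriber leaves the bus and the other processes untouched.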
Another major lesson learned from our work on Project Wing was the value of simulations. The fact that our software architecture is built in the cloud enables us to do testing and development work on a laptop without needing to run code on actual hardware. In addition to building confidence in our software before we ship it, which is so valuable for a small and fast moving team, this enables us to run complex simulations of bioprocesses. I saw how powerful a tool this was at Wing, where we could do “test flights” without flying an actual drone, and knew that simulating bioprocesses should be core to Culture. The ability to test a fermentation without any microbes before an experiment helps us to ensure data quality in actual runs.
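As a sketch of what a hardware-free “test fermentation” can look like, here is a toy growth simulation (logistic growth with invented parameters) that control code could be run against instead of a live reactor:

```python
# Toy bioprocess simulation: logistic growth stands in for real hardware
# so software can be exercised on a laptop. All parameters are illustrative.
def simulate_growth(x0, mu, x_max, dt, steps):
    """Integrate dx/dt = mu * x * (1 - x / x_max) with Euler steps."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x += dt * mu * x * (1 - x / x_max)
        trajectory.append(x)
    return trajectory

# Biomass starts small and should approach the carrying capacity x_max.
traj = simulate_growth(x0=0.1, mu=0.5, x_max=10.0, dt=0.1, steps=200)
```

Swapping a model like this in behind the same channel interface the hardware uses is what lets the same control code run in simulation and in production.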
Max: While the general design principles from your past work were relevant to our infrastructure, how readily did your past experience translate to working with biological systems? In other words, is biology unique from a software development perspective?
Matt: Many elements of the bioreactor’s software utilize classical control systems like PID loops. It is certainly helpful to understand bioprocessing when designing these, but for the most part the types of engineering challenges are well-known and there are generalizable techniques for solving them. However, as we got into more complex operations of the bioreactor, bioprocessing knowledge did become more important. For example, when building the temperature control system we had to understand not only how the electronics worked but also what kind of heat load to expect from the rapidly dividing and metabolizing cells. The basic premise of building a model to understand how applying x watts in cooling will cause y response still applied, but to characterize this relationship we needed to use a model of cellular metabolism.
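A minimal sketch of the kind of loop Matt describes - a textbook PID controller driving cooling against a toy thermal model with a metabolic heat term - might look like this. All gains and constants are illustrative, not Culture’s actual tuning:

```python
class PID:
    """Textbook PID controller; gains are invented for illustration."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured, dt):
        error = measured - self.setpoint  # positive when too hot -> more cooling
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def step_plant(temp, cooling_watts, metabolic_watts, dt, thermal_mass=500.0):
    """Toy thermal model: dividing cells add heat, the controller removes it."""
    return temp + dt * (metabolic_watts - cooling_watts) / thermal_mass

pid = PID(kp=50.0, ki=1.0, kd=0.0, setpoint=30.0)
temp = 32.0  # start above setpoint
for _ in range(2000):
    # Clamp the commanded cooling power to the actuator's physical range.
    cooling = min(max(pid.update(temp, dt=1.0), 0.0), 200.0)
    temp = step_plant(temp, cooling, metabolic_watts=20.0, dt=1.0)
```

Characterizing the `metabolic_watts` term is where the cellular-metabolism model Matt mentions comes in; here it is just a constant.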
As we’ve developed more complex and unique feeding strategies for the reactors, like driving adaptive feed based on off-gas or pH changes, we have had to incorporate more and more biological knowledge into the software. Our team in the lab is really involved with the software development and our software team is very interested in learning the biology. Watching that dynamic evolve and seeing how it has led to better and better control systems has been amazing. That was actually yet another lesson I learned from my time at Google X: seeing how multidisciplinary teams of very talented people with diverse experiences can come together to build something new. Will and I have made team construction a top priority, being confident that by building the right teams our people will create new value we couldn’t have predicted from their individual experiences or skill sets.
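One common form of the pH-driven adaptive feeding mentioned above is a pH-stat rule, where a pH rise can signal carbon-source depletion and triggers a feed pulse. A sketch, with the setpoint, deadband, and pulse size invented for illustration:

```python
def feed_command(ph, ph_setpoint=7.0, deadband=0.05, pulse_ml=2.0):
    """Return a feed pulse volume when pH drifts above the deadband."""
    if ph > ph_setpoint + deadband:
        return pulse_ml  # cells have likely exhausted the carbon source
    return 0.0           # within the deadband: hold feed

pulse = feed_command(7.2)   # above the deadband -> trigger a pulse
hold = feed_command(6.98)   # near setpoint -> no feed
```

A production rule would be considerably richer (rate limits, off-gas cross-checks, strain-specific thresholds), which is exactly where the lab team’s biology knowledge feeds into the software.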
Max: On that note, what is the team working on developing now?
Matt: As we scale out the number of reactors this year we’ll also work on continuously improving our core infrastructure, adding better control capabilities with more ways to fine-tune them. We’re also working hard on having customers interface with us - and our bioreactors - entirely through our software application. Soon customers will be able to build their own bioprocess control schemes and feeding strategies directly through our software, as well as make real-time process changes from their laptops.
We’re also starting to work on new software tools to help our customers make better use of the high-throughput data we provide them. This will include applications for designing large-scale research campaigns and leveraging their data to identify key insights that drive faster, more confident decision making.
Max: What is our ultimate goal from a software engineering perspective? What are we working towards?
Matt: We want to continue to leverage the data we are collecting to build tools that enable bioprocess scientists to design, test, and learn more quickly. We are in a great position to do this because we have both the bioprocess data from experiments as well as all of the hardware performance data, and all of it is in a structured format that lends itself to modeling and data science. If our customers’ goal is to understand the most important levers of a cell’s metabolism, that is basically a giant sensitivity analysis to see which combinations of factors will lead to the greatest improvements in a process. Through building and training models with customer-specific datasets, we’ll be able to uncover new insights for them about where they’ll get the most leverage, faster.
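As a toy illustration of that sensitivity-analysis idea, here is a one-at-a-time perturbation over a hypothetical titer model. The factor names and response surface are invented; a real version would be trained on customer-specific data:

```python
def titer_model(temp, ph, feed_rate):
    """Hypothetical response surface for product titer (g/L)."""
    return 10 - 0.5 * (temp - 30) ** 2 - 4 * (ph - 7) ** 2 + 2 * feed_rate

baseline = {"temp": 31.0, "ph": 6.8, "feed_rate": 1.0}
base_titer = titer_model(**baseline)

# Bump each factor by 10% while holding the others fixed, and record
# how much the predicted titer moves.
sensitivities = {}
for factor, value in baseline.items():
    bumped = dict(baseline, **{factor: value * 1.1})
    sensitivities[factor] = titer_model(**bumped) - base_titer

# Rank factors by the magnitude of their effect on titer.
ranked = sorted(sensitivities, key=lambda f: abs(sensitivities[f]), reverse=True)
```

The ranking tells you where the process has the most leverage; with a model fit to real experimental data instead of this toy surface, the same loop becomes the “giant sensitivity analysis” Matt describes.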
Similarly, we believe that there are many actions that happen in a lab that could also be important levers for a bioprocess. For example, finer elements of how the seed train is done, or information about how the strain or media are handled - these could all provide more fodder for modeling efforts. If there is something critical about how you handle your strain or how you store your media, we want to be tracking that data and using software to flag those insights for customers so that they know these things when they transfer a process for scale-up or manufacturing.
Max: Do you think that these new tools will have an effect on how bioprocess R&D is done?
Matt: When you look at all the tools a software engineer has today in terms of collaboration, testing, or scaling at the click of a button (literally), it is painful to see that none of this really exists in the biomanufacturing space. Companies are building some really impressive “DIY” systems, but these are all bespoke and fragmented. I really believe that with better tooling we’ll enable better outcomes across the biomanufacturing industry. Companies like Twist Bioscience have done this for other aspects of biotech research, providing so much power and extra leverage to teams. We’re just really excited to be able to do something similar for bioprocess R&D.
Max: Which potential developments in this space excite you the most?
Matt: I have a lot of friends who work in CAR-T labs and I just think that the potential there is incredible. Cell therapy isn’t quite what we do now, but it could be something we do in the future. The ability to extract cells, modify them, and then give your immune system superpowers is just so wild! I’m also really rooting for all of the alternative dairy companies - I’m a huge fan of cheese and would love to be part of making animal-free cheeses a reality.