NeuBase CEO Dr. Dietrich Stephan discusses the important role computational drug discovery plays as we emerge from what he characterizes as “a decades-long, multi-billion-dollar chemical engineering escapade with a very low success rate to find one drug.”
Dr. Dietrich Stephan is recognized as a pioneer in the field of precision medicine. He trained with the leadership of the Human Genome Project at the NIH. He went on to lead discovery research at the Translational Genomics Research Institute and served as professor and chairman of the department of human genetics at the University of Pittsburgh. He has identified the molecular basis of dozens of genetic diseases and has been published extensively in journals including Science, the New England Journal of Medicine, Nature Genetics, Proceedings of the National Academy of Sciences, and Cell. He's also an entrepreneur, having founded or co-founded some 14 biotech companies. He's CEO at Pittsburgh-based neurological and neuromuscular disease therapy company NeuBase Therapeutics, and he recently joined me on an episode of the Business of Biotech podcast to discuss the advances in computational science that are aiding drug discovery.
From Trial-And-Error To Fail Fast, Fail Cheap
Dr. Stephan was an early advocate of precision, personalized medicine—a concept that he says was discounted by the investment community—even laughed at—in its formative years. He founded Navigenics, widely considered the first personal genomics company, back in 2006. The company analyzed DNA taken from saliva samples to determine an individual's hardwired genetic risk factors, pioneering a science that's proved foundational both to entire economies of big biotechs leveraging genetic fingerprinting to fight myriad diseases and to consumer-grade successes (e.g., AncestryDNA, Pendulum Therapeutics, 23andMe).
One of the biggest changes Dr. Stephan has seen since those early days is the computational technology available to expedite DNA discovery.
“When I was in grad school, very few people had cell phones. Laptops were huge, clunky machines. There was no Google. It would be impossible to do the things that we're doing today had the compute and storage infrastructure not come alongside the human genetics push,” he says. “Those technologies had to mature in parallel. If they hadn’t, we would be completely hamstrung in terms of making use of the draft sequence of the genome and all of these multiomics data sets.”
Dr. Stephan says we're still in the midst of a fundamental transformation in the pharmaceutical industry—one that's moving us away from dumping hundreds of thousands of random chemicals onto cells and tissues and "hoping that a couple of them stick and do what we want them to do in terms of making the disease a little bit better." While that paints a very simplified picture of manual drug discovery, it's not hyperbole. "We really had no idea what they were sticking to, or how they were sticking to it. It amounted to a decades-long, multi-billion-dollar chemical engineering escapade with a very low success rate to find one drug. It's why we still have so many diseases out there that are untreatable, and why drug prices are so high, because we in the pharma industry have to recoup all of those investments made in a very time-consuming and tedious process."
Putting Computational Power To Work
NeuBase is one of a growing number of companies moving drug development research upstream of cells and proteins to the DNA and RNA levels with the aid of now-ubiquitous, off-the-shelf computing power. "We know every disease has a genetic component. You inherit a variant from one of your parents, or you acquire something, somewhere in your lifetime, that causes a dysfunctional protein and manifests in disease," explains Dr. Stephan. "Now that we understand the genetic drivers of the vast majority of human diseases, rather than throwing random chemicals at cells or proteins, we can tweak and tune genes at the DNA and RNA level, so they never form dysfunctional proteins and dysfunctional cells. But to do that, we need a computer that houses all 6 billion letters of the diploid genome. We need a database of all of the mutations that have ever been found in the human population heretofore, and we need to be able to look at causative genes."
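The "database of mutations" Dr. Stephan describes can be pictured, at its simplest, as a lookup table keyed by genomic coordinate. The sketch below is a toy illustration only—the variant, position, and annotations are hypothetical, not drawn from any real database or from NeuBase's actual system:

```python
# Toy "mutation database": known variants keyed by (chromosome, position).
# All entries here are hypothetical placeholders for illustration.
KNOWN_VARIANTS = {
    ("chr1", 1000): {"ref": "A", "alt": "G", "gene": "GENE_X", "pathogenic": True},
    ("chr2", 2500): {"ref": "C", "alt": "T", "gene": "GENE_Y", "pathogenic": False},
}

def lookup_variant(chrom, pos):
    """Return the annotation for a variant at this coordinate, or None if unknown."""
    return KNOWN_VARIANTS.get((chrom, pos))

# A patient's sequenced variant can then be checked against the catalog:
hit = lookup_variant("chr1", 1000)
if hit and hit["pathogenic"]:
    print(f"Causative candidate in {hit['gene']}")
```

Real catalogs (population-scale variant databases) hold tens of millions of entries and use indexed storage rather than an in-memory dict, but the core operation—mapping a coordinate to an annotated variant—is the same.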
At NeuBase, scientists are working to cure myotonic dystrophy, brain disorders, and cancers. To do so, the company finds those causative genes, uses computational technologies to tile across those genes in search of regions it can target, and then maps those regions against the rest of the genome and transcriptome in their entirety to make sure they're unique. Then, computational analysis allows the company to screen thousands of short pieces of DNA or RNA (oligonucleotides) across different cells and to create a quantitative data readout. "The trillions of dollars that have been invested over the last decades in the computational and storage infrastructure is being used to fuel this transformation in the pharmaceutical industry," says Dr. Stephan. "That's keyed off of digital data and quantitative data, and there's no way we could make sense of that data but for the compute infrastructure."
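The tile-and-check-uniqueness step described above can be sketched as a sliding window over the gene sequence, with each candidate tile compared against every subsequence of the same length found elsewhere. This is a minimal illustration of the idea, not NeuBase's pipeline—production tools index whole genomes and tolerate near-matches, which a simple exact-match set cannot:

```python
def tile(seq, k):
    """Slide a window of length k across a sequence, yielding every tile."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def unique_targets(gene_seq, background_seqs, k=18):
    """Return tiles of the causative gene that appear nowhere in the background.

    background_seqs stands in for 'the rest of the genome and transcriptome';
    a tile found there is discarded as non-unique.
    """
    background = set()
    for s in background_seqs:
        background.update(tile(s, k))
    return [t for t in tile(gene_seq, k) if t not in background]
```

With a short toy gene and k=4, `unique_targets("ACGTACGTAA", ["ACGTAC"], k=4)` keeps only the tiles absent from the background sequence. At genome scale, this exact-match set is replaced by indexed alignment (suffix arrays, BWT-based aligners) so mismatched near-duplicates are caught as well.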
On The Adoption Of Computational Science In Biopharma
Dr. Stephan says the biopharma industry is coalescing around our newfound ability to target nature's digital information and coding schema. “Our As, Cs, Gs, and Ts stretch billions of letters long, so it’s fundamental. We're pulling in bioinformatics experts and data scientists to analyze those tens of thousands of known mutations, and every 50 hours, we can churn out hundreds of new drug screen iterations in a machine. It’s wonderful to see these two fields fully collide at this inflection point, and it’s been really interesting to see the realization on the part of big pharma that there are these tiny, disruptive precision genetic medicine plays that really promise scalable outputs going forward, using these new strategies.”
Another massive driver of computational science adoption, says Dr. Stephan, is the promise it holds in eliminating off-target effects of new therapies, which often hinder approval efforts and, even among commercially approved drugs, can have severe negative consequences for patients. "You can't always anticipate off-target effects a priori. You only know it once you've gone all the way through a phase three trial, or even sometimes on market in phase four monitoring."
To combat those off-target effects, NeuBase leverages public databases to find the sequences of disease-causing genes; within hours it can then design a set of drugs that match and silence a gene's "misbehavior" through complementary base pairing. Then, again using publicly accessible tools, it loads those drug sequences into the computer to analyze whether there are any other sequences the drug will engage off-target. "We can then effectively retire those drugs, because we know in advance that they're probably going to cause some side effects as we move them through the pipeline. So right there within the course of a morning of work, we have cut out a massive amount of downstream work in eradicating off-target engagement, which would have otherwise been a waste of money in safety and efficacy studies and clinical trials. I could argue that savings could be measured in years and billions of dollars, and there's real evidence for that. And there's evidence that companies in our little cohort are increasing the cadence of output of drugs that are being produced faster and more efficiently because of exercises like the one I just described."
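The complementary-base-pairing screen described above reduces, in its simplest form, to computing a drug sequence's reverse complement and searching for that binding site across a catalog of transcripts. This is a toy exact-match sketch with made-up transcript IDs—real off-target screening allows mismatches and wobble pairing, typically via alignment tools rather than substring search:

```python
# Watson–Crick complement table for DNA bases
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Return the reverse complement, i.e. the sequence this strand pairs with."""
    return seq.translate(COMPLEMENT)[::-1]

def off_target_hits(drug_seq, transcripts):
    """Return IDs of transcripts containing the drug's binding site.

    transcripts: dict mapping a transcript ID (hypothetical here) to its sequence.
    Any hit outside the intended target gene flags the candidate for retirement.
    """
    site = reverse_complement(drug_seq)
    return [tid for tid, seq in transcripts.items() if site in seq]
```

A candidate whose binding site turns up in transcripts beyond the intended gene can be "retired" before any wet-lab spend, which is the morning-of-work savings Dr. Stephan describes.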
Learn more about Dr. Stephan, NeuBase, and how the company is leveraging computational science to expedite its pipeline on episode 33 of the Business of Biotech podcast.