
The Future of Precision Oncology is Near:
A Q&A with Erica Barnell of Geneoscopy

The world of molecular diagnostics is growing and evolving quickly. At Cofactor, we’re eager to learn from others in the scientific community and to share new innovations in molecular diagnostics, including the tools and techniques being used for both research and clinical purposes. As the field burgeons, we’re seeing a rise in data analytics that take the massive amounts of genomic data collected over the last couple of decades and apply new layers of analysis.

To dive deeper into what’s going on today, we’re having David Shifrin of Filament Communications speak with a number of people across the field to compile a new audio series. Our first recording is with Erica Barnell, CSO and Co-Founder of Geneoscopy, a company developing a screening methodology to noninvasively diagnose colorectal cancer using biomarkers in stool samples. Listen to the entire recording on SoundCloud here.

David: I’m glad to be joined by Erica Barnell. Erica, I’ll let you introduce yourself and explain what Geneoscopy does and your role there.

Erica: Thank you for the warm introduction. I’m the Co-Founder and CSO at Geneoscopy, and my primary role is directing the development and execution of scientific research. Specifically, I’m trying to advance our lead product, which is a non-invasive diagnostic for the detection of colorectal cancer and advanced adenoma. We look at stool samples, which places us in the liquid biopsy space, but what I think is unique is that we’re evaluating gastrointestinal disease by looking at RNA biomarkers in that biofluid.

David: I’m excited about the work that you’re doing. My work involved isolating microvessels from the mouse small intestine, so I’ve done my share of stool preps over the years. Can you give a brief overview of how a liquid biopsy differs from a solid one?

Erica: When you’re talking about liquid biopsy, you’re really evaluating a fluid that is distant from the primary lesion. Specifically with cancer, there will be a solid tumor or a liquid malignancy developing in some part of the body. A liquid biopsy is a noninvasive way to assess that tumor: to potentially diagnose the disease, monitor recurrence, or identify biomarkers that might predict response to a specific treatment.

David: You’re looking specifically at gastrointestinal disease and intestinal cancers. This could be considered a precision medicine play.

Erica: We like to think of ourselves as a precision oncology biomarker company. We’re looking specifically at gastrointestinal health, but we can expand beyond just cancer. We have traditionally looked at cancer, and that’s our lead product, but we have other goals to look at inflammatory or infectious diseases in GI health. That’s one of the reasons we think RNA biomarkers are the optimal platform for liquid biopsy, specifically because RNA does not require that you look at mutations. The expression of RNA can be indicative of disease. One of the focuses at Geneoscopy is specifically looking at these biomarkers for precision medicine to build diagnostics. 

David: Can you expand on that by talking about the relationship between RNA and predictive medicine, and what it will take to reach this ideal world of precision medicine that we’ve been working towards since the early 2000s with the completion of the Human Genome Project?

Erica: Many thought that the Human Genome Project would be the end of that analysis, but I think it actually opened Pandora’s box and uncovered more questions. RNA specifically has become kind of a double-edged sword in liquid biopsy studies. We’ve shown that RNA biomarkers can be used to effectively diagnose disease, and we can create specific signatures that are reflective of the pathology that we’re interested in. But when you transition to the liquid biopsy space in blood, stool, urine, and even saliva samples, we’ve found that isolation and preservation of RNA biomarkers is incredibly difficult. This is especially the case when we’re looking at stool samples. The sample is overrun with bacterial noise, which is not what I’m interested in. I want the cells from the lining of the colon, and these cells are in the midst of apoptosis or degradation, so the signals are lost and degraded. Pulling those degraded and sparse signals from these samples is very difficult. Another area we’ve focused on is enhancing our ability to non-invasively isolate RNA signals from these biofluids, which will dramatically expand our ability to successfully identify the diseases that we’re looking at.

David: That’s really interesting. Let’s take a step back and talk about some of the larger scale challenges in oncology when you’re talking about finding, isolating, and analyzing biomarkers.

Erica: One of the challenges is in clinical trial design. A number of liquid biopsy companies perform biomarker selection using case-control studies: they take a cohort of patients that are normal and a cohort of patients that have the disease subtype, then compare the expression or mutation profiles between the two groups. In that kind of study, you can be incredibly successful at identifying biomarkers, whether RNA or DNA. But when you expand to a prospective study, where you’re evaluating the population that you ultimately want to build the diagnostic for, the accuracy profile is not recapitulated. This is why it’s important to design a clinical trial where biomarker selection is performed within the cohort that you ultimately hope to employ the diagnostic on.
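The pitfall Erica describes can be demonstrated with a small simulation (a hypothetical sketch, not Geneoscopy’s actual pipeline): even on pure noise, biomarkers selected using the full cohort look highly accurate on that same cohort, while selecting them on a training split and scoring held-out patients falls back toward chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure-noise "expression" data: no real signal separates the two classes.
n, p = 100, 5000
X = rng.standard_normal((n, p))
y = np.array([0] * 50 + [1] * 50)

def top_features(X, y, k=10):
    # Rank features by absolute difference in class means, keep the top k.
    diff = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
    return np.argsort(diff)[-k:]

def centroid_accuracy(X_tr, y_tr, X_te, y_te, feats):
    # Nearest-centroid classifier restricted to the selected features.
    c0 = X_tr[y_tr == 0][:, feats].mean(axis=0)
    c1 = X_tr[y_tr == 1][:, feats].mean(axis=0)
    d0 = ((X_te[:, feats] - c0) ** 2).sum(axis=1)
    d1 = ((X_te[:, feats] - c1) ** 2).sum(axis=1)
    pred = (d1 < d0).astype(int)
    return float((pred == y_te).mean())

# Biased protocol: select biomarkers on the FULL cohort, score the same cohort.
feats_biased = top_features(X, y)
acc_biased = centroid_accuracy(X, y, X, y, feats_biased)

# Proper protocol: select biomarkers on a training split only,
# then score patients the selection step never saw.
idx = rng.permutation(n)
tr, te = idx[:70], idx[70:]
feats_holdout = top_features(X[tr], y[tr])
acc_holdout = centroid_accuracy(X[tr], y[tr], X[te], y[te], feats_holdout)

print(f"apparent accuracy (selection on full cohort): {acc_biased:.2f}")
print(f"held-out accuracy (selection on train only):  {acc_holdout:.2f}")
```

The data contain no signal at all, yet the biased protocol reports high accuracy, which is the same inflation that disappears when a case-control-derived panel meets a prospective screening cohort.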

David: I’d like to talk about that dichotomy in the context of big data and data analysis and how we can try to extract actionable information from the data that we do have, while also acknowledging the challenges of extrapolating to different populations. 

Erica: When you have something in biology that appears to be incredibly complex on the surface and then turns out to be elegantly simple, it’s fantastic. David, you’re right that this is typically not the case. After the Human Genome Project, we looked initially at identifying biomarkers that were universal to specific subtypes, and we came up with Gleevec and Herceptin. Although they are incredibly powerful drugs and very helpful for patients with CML and breast cancer, for the hundreds of other cancer subtypes we have yet to find universally prevalent mutations that are targetable. To solve this issue, we have to improve the annotation of the variants that we’re interested in. I think people don’t recognize the annotation bottleneck that exists within precision oncology. Even though we can sequence an individual’s genome for less than $1,000 per patient, the manpower and computational power required to identify the variants of interest and annotate them for clinical relevance is excruciatingly expensive. Building bioinformatics tools that process the data, and knowledge bases that store these annotations, is required for developing these custom reports for clinical relevance.

David: We’re hearing a lot in both life sciences and healthcare about machine learning and artificial intelligence, and putting aside the frequent conflation of those two terms, let’s focus on machine learning. In Cofactor’s case, dynamic expression data provides real-time analysis of what’s going on in the body, and we’re creating machine learning algorithms that parse the information coming back from sample processing. How do you at Geneoscopy approach multidimensional analysis and developing the algorithms that are going to be useful?

Erica: One of the biggest issues in the field of AI and machine learning is the misuse of AI and machine learning. It seems like a black box from the outside, but when you actually delve into the models, algorithms, and bioinformatics backend, it’s really simple how they’re generated and what they can do. What we’ve tried to do at Geneoscopy is generate features that we know are important in training and developing the model and give it the best data that is reflective of our ultimate cohort. Examples include using non-machine learning approaches to filter down features that we find interesting in predicting disease, and then using a cohort of patients from a prospective screening population. When you do those types of development exercises, you can create powerful models that perform well.

David: It seems like you really focused on creating the systems up front because annotation is vitally important. You spoke about feeding in quality data, which sounds obvious, but if you have to make that statement it means that not everybody is doing it.

Erica: An excellent example of this is an article I recently read where a pathology group developed an unbelievable machine learning algorithm to detect breast cancer cells in pathology slides. They did it with hundreds of pathologists, cases, and patients at their institution and built this beautiful model that had about 90% sensitivity. When they tested it on a new cohort of patients at a different institution, the accuracy dropped to little better than guessing. In retrospect, what they realized is that their institution would place red arrows where cancer cells were on the pathology slides, and the algorithm had learned to recognize these arrows and call them cancer. I think that really demonstrates that the data you feed into the algorithm has to be applicable to every single cohort that you hope to evaluate.

David: That’s almost comical.

Erica: That was just their process for labeling, so it’s very difficult to prospectively determine what is going to create a good model. It’s certainly important to try to mitigate the risk of creating something that isn’t broadly applicable across institutions or across all cancer subtypes. It also highlights the importance of industry-wide standards like independent test sets and multiple cohorts.

David: Let’s move on to comparing multidimensional biomarkers to single-analyte approaches. In the immuno-oncology space, PD-L1 has probably been the most prominent, along with single DNA mutations. There are cases where a mutation is linked with a disease but then turns out not to be, so there are certainly some drawbacks there. We’re collecting all of this data to create these multidimensional approaches. Are there any places where looking at a single point might be advantageous?

Erica: The advantage of looking at a single analyte is that, in terms of discovery, you’re limiting your false discovery rate when it comes to ultimately commercializing a diagnostic that harnesses a single biomarker. That has become less common, especially in oncology, and I think that’s just a cost issue. Now that we can sequence multiple analytes in parallel, a multi-analyte approach for a single patient at a single time point is becoming much more common. Ultimately, the issue will come down to the ability to annotate those variants. Here at Washington University in St. Louis, we have a pan-cancer diagnostic that’s being employed on all hematologic tumors. When I speak with the physicians who use this test, what they’re looking for is actually residual mutations or residual disease. They don’t necessarily care about which mutations are present, but rather whether, after the patient was treated with chemotherapy or a targeted therapy, the variant disappeared from the tumor. This is a reflection of the inability to quickly annotate these variants for clinical relevance. I think as we build bioinformatics tools that can better annotate variants and provide that data to physicians, treatment plans will change. I don’t think we’re going to see single-analyte diagnostics being employed in the future, especially in the oncology space.

David: You talked about taking this information and moving it into clinical therapeutic settings. What are you most excited about with some of the discoveries that you are making and the advances that you’re seeing moving towards getting this in the hands of clinicians to help patients? 

Erica: You touched on PD-L1 and single-analyte DNA instability, and I am so excited about the future of the immunotherapy space. I think our current understanding of immunotherapy in cancer and in other disease types is very limited. In my lab, we’re starting to build bioinformatics tools that improve our understanding of this space. Specifically, I’m really excited about the tools being generated to improve predictions. With these, we can better understand how the immune system responds to, targets, and kills the tumor in the microenvironment. Given the limited toxicities of immunotherapies, advances in this field could be a huge benefit to patients.

David: That is certainly exciting, and it’s another area that we’ve been talking about for a long time in terms of getting the human body to do its own dirty work. Over the last couple of years we’re finally starting to see some real progress, so I’m pretty excited about that.

Erica: It seems that the true breakthroughs come as we learn more about the mechanisms of resistance. It’s kind of an arms race. The immune system is trying to find the tumor and the tumor is trying to protect itself, but I think humans have a small advantage. I ultimately see us as winning that arms race, but I’m excited to see where it ends up in the future.

David: It’s similar to antibiotic resistance in bacteria; there’s just constant adaptation. The arms race is a perfect analogy: you’ve got a pathogen, whether that’s a cancer cell or a foreign microbe. What about some of the challenges of moving the information you’re collecting into translational and clinical settings, specifically with regard to reproducibility, standardization across cohorts, and isolating good, clean samples? Is there anything else you’re looking at, and can you identify any major roadblocks to realizing the potential of this approach?

Erica: There’s a question that applies to almost everyone in the biomarker discovery and diagnostic development industry: just because it can be measured, should it be? A lot of people are learning how to measure things better and more accurately, and they’re building diagnostics and tools that can do it very effectively. But a lot of companies are developing tests that don’t actually impact patient care. If the physician gets this information, what is the decision tree? How does it inform what they’re going to do with the patient? How does it change the therapeutic approach? That’s a big challenge.

David: Got it. Let’s wrap up: where do you see us heading in the next 10-15 years? Where do you want us to be in this space?

Erica: I would love to see precision oncology employed in every clinical workflow. Traditionally, we’ve subtyped tumors based on geography: breast cancers and colon cancers are both called cancer, but I think the future holds a way to classify tumors by molecular subtype rather than by location or tissue of origin. That is how we can create these precision medicine approaches and use biomarkers to treat patients in ways that improve outcomes.

David: Thank you again for your time. This has been a lot of fun. Let people know where they can find out more about Geneoscopy.

Erica: Absolutely. You can learn more at www.geneoscopy.com, and look out for our first published manuscript in Gastroenterology.
