
Investing in Tools to Deliver True Precision Medicine


In our last interview, we spoke with Daryl Pritchard, Senior Vice President of Science Policy at the Personalized Medicine Coalition. We’re continuing to speak with industry experts to get their perspectives on what’s going on in immuno-oncology, precision and personalized medicine, and where we may be headed over the next few years. In this week’s episode, David Shifrin speaks with John Kuelper, Managing Director at Ascension Ventures. You can also listen to the entire recording with John Kuelper on SoundCloud.

David: John has experience in a number of areas across the industry. He has spent time in synthetic biology and therapeutics and has worked on both sides of the table, as a founder/entrepreneur as well as an investor. John, great to have you. Can you give us a quick overview of yourself and what you do at Ascension Ventures?

John: Sure. I’m Managing Director at Ascension Ventures, where I’ve been for the last five years or so. I focus on healthcare information technology, particularly data-driven innovations and precision medicine. Our investment model is a little unique because we are funded by a network of about 500 hospitals that together represent about 10% of the US healthcare market. We leverage these folks for various strategic initiatives, to drive engagement with our portfolio companies, and to understand market dynamics from the provider’s point of view. About a third of my time is spent digging in with portfolio companies and these health system partners. The other two-thirds is focused on investment activities, including managing the existing portfolio and sourcing new opportunities. As you mentioned, I come from a technical background: I originally spent some time looking at structural imaging biomarkers and then wandered my way into venture capital, like many folks do.

David: As somebody who’s both in the weeds with companies as you advise them, and also taking a higher-level view as you evaluate investment theses and places to deploy capital, what high-level trends are you seeing in terms of where the precision medicine segment of the industry is heading?

John: Personalized medicine, precision medicine, and genomics are fields the firm has looked at for the better part of 15 years. For the most part, we were on the sidelines until about five years ago. The rationale was that our health system network focuses on markets and technologies at an inflection point, transitioning from early adopters to the mass market. Around five years ago, we saw that happen with genomics, and really rapidly. It coincided with a lot of activity on the therapeutic side that was pulling next-generation diagnostics to the forefront, with immunotherapies hitting the market. A big confluence of technological innovation was arriving at the same time, which encouraged us to take a serious look at this market and consider making some investments. Around the theme of precision medicine, which folks define in different ways, we’ve made three investments in the category so far. Cofactor is definitely an exciting opportunity; there’s an advantage to be had in clinical care, and in community medicine in particular.

David: Where the field has come from and where it’s headed revolves around the data and analytics. Can you talk a bit more about the role of big data, and how people can use it effectively to succeed in this space?

John: Yes, it’s interesting. In healthcare, this data infrastructure matured only relatively recently, with the shift to electronic medical records. Harnessing big data at the point of care, from the clinician’s perspective, is a relatively new phenomenon. Often, clinicians at the point of care are throwing off dramatically more data than they are consuming, or are able to consume. It’s a unique data asset that’s bubbling up in clinical medicine. I think the challenge is on a couple of fronts. One, humans can only absorb so much information at a given time, so there are limits to what oncologists and other clinicians can digest and act on in their clinical practice. There’s an opportunity for tools, partners, and enablers to harness all of this data that they and external sources are generating, and translate it into a form that’s clinically actionable.

The other challenge is that information systems and data-generating tools are fragmenting at an incredibly rapid rate, and new therapies are coming to market, so it’s a lot of information for folks to digest. That raises a second question: even with the enormous amount of diagnostic and prognostic data available, the therapeutic options are exploding, and the diagnostic data needs to be contextualized with respect to the therapies that are available. It’s a great opportunity, and one that didn’t exist ten years ago because the data infrastructure simply wasn’t there. When you talk to clinicians, they’re drinking from a fire hose. Often the standard reports they get from reference labs, or from folks looking at commodity biomarkers, are just too much for any one person to digest, and the state of the science is moving too quickly, so they’re looking for help.

David: Well, it makes sense. It would be an exciting time from an investor perspective; there’s a lot of opportunity for these new analysis and decision support tools to come online and potentially succeed.

John: Absolutely. It takes some time, but I think there’s a will and an effort to understand the value of this data. There’s a great ecosystem of folks partnering to make it happen. Aside from critical care, upstream in the research continuum, big data has fundamentally changed clinical research. It has opened new paradigms for real-world evidence to accelerate drugs coming to market as well as mapping existing drugs to the right patients with optimal efficiency. We’re in the early innings, but it’s a great opportunity to drive net new value from these data assets.

David: I want to take this perspective and dial it in more specifically to oncology. You talked about the explosion of therapeutics and diagnostics and the opportunities around new tools. Can you put this in the basket of immuno-oncology and tell us how it has changed how we think about cancer over the last 10-15 years?

John: I think we’ve seen a fundamental shift from classifying tumors by site of origin to more precise molecular and genomic classifications. That’s been truly transformative, and the proof is in the clinical impact of these targeted therapeutics and the predictive accuracy of these diagnostics. The challenge, one that Cofactor is now starting to address, is that there are limits to the types of clinical questions you can answer with point biomarkers. Of course, a number of diseases can be reduced to very specific biological processes, where targeting a single point is effective and has a cascade effect. But for the vast majority of diseases, it’s really important to look at the system as a whole. Even with the great strides in adding this new data about molecular and genomic classifications of tumors, and of individuals for that matter, the first wave of implementing this clinically looked at expression factors and genes in isolation. Folks are starting to appreciate that that is insufficient for the vast majority of diseases, especially ones as complex as cancer.

The next wave of opportunity, which is really exciting, is looking at this in a multidimensional way. That’s both within a data domain, like RNA data and expression profiles, and in compound multidimensional biomarkers: looking at RNA in the context of germline mutations, somatic mutations, and other clinical, qualitative data. It opens up much larger horizons. Going back to the big data conversation, we never really had the infrastructure to make those bridges within a domain, much less between domains, and that’s a brave new world we’re entering.

David: Can you talk more about that? Cofactor is working on this multidimensional approach, both in terms of the number of analytes they’re looking at and in terms of multiple points in time, or even real time. What about the next couple of years, as we start to figure out how to put these pieces together to make actionable patient treatment decisions?

John: Modalities like immunotherapy, by their nature, have big systemic effects on the body. We see this in the safety profile, but also in the positive cascade, a curative effect in many cases. You need to analyze the state of the patient’s physiology as a system prior to introducing a therapy, and then the response as a system. I think that’s a fundamental shift. These new therapies are incredibly important. Even for treatments like radiation therapy, the expansive datasets now available allow us to look at the performance of the system as a whole, for the first time in medicine’s history.

The gap has always been a couple of things. First, how do you get this quality of data? There are practical, biochemical reasons why it’s extremely difficult to get the richness of data needed to understand the system in a clinical setting. Second, once you gather sufficiently rich data, how do you make sense of it? These data points are beyond what an individual can comprehend. Digesting them and transforming them into models that represent compound subsystems or circuits within the whole biological system – that’s really what’s needed before this can be put into practice. It’s the double-edged sword of having so much data available but needing to home in on exactly what’s clinically relevant, which may change a therapy selection decision. I think there’s a lot of great proof that these multidimensional biomarkers can and should change therapeutic decisions.

David: What about the role of machine learning and/or artificial intelligence? 

John: This data is so complex that it’s very difficult to build hypotheses in the absence of real-world data. It’s difficult to distill patterns when the data is so voluminous and, frankly, so noisy. Machine learning is a great tool for modeling these biological systems, and it’s an iterative process: build an initial model for a cell type, then look at multiple cell-type models in conjunction and ask how they differ in healthy and diseased tissue. It’s not an easy process; it takes a very specific skill set to build these models, much less fine-tune them. There’s a huge interdependency between this computational component and what happens in the wet lab to process the samples and manage the noise in the input to get this raw data.

Then I think that’s just step one: a diagnostic that’s clinically relevant and useful. Beyond that, there’s great potential for machine learning to continually optimize these models. I made that point a little earlier about the benefit of real-world data. There’s a volume of data being generated with every patient interaction and every therapeutic decision point. That should feed back into the systems that are modeling these tissues, cells, and responses to therapy, which requires some clever architecture. It requires a distribution and delivery model that can accommodate those machine learning models. Luckily the technology and infrastructure are there, and we have companies like Cofactor who’ve done the hard work to make it clinically relevant.

David: Let’s get even more specific. John, can you talk a bit about the immune profiling part of this? Where are you seeing some of the most exciting or important biological features that the field is starting to home in on, as far as biomarkers that you think hold a lot of promise?

John: It’s not necessarily an individual feature or raw data element. It’s the compound features that you can uncover only by looking at dozens, hundreds, or thousands of features together. I think that’s an exciting opportunity. Cofactor recently validated an assay that looks at the distribution of immune cell types in a sample as a biomarker in and of itself. That means looking at a population of immune cells, as distinct from the individual transcripts being expressed in a cell. It’s a great example of how these compound multidimensional biomarkers manifest in something that can be easily explained – a population of immune cells that, in many cases, is predictive of therapy response or disease progression. There are other interesting data assets too, like structural markers, and measurements like tumor mutational burden are proving to be very clinically relevant. I don’t think there’s any one of these that we can boil everything down to, but looking at everything available in combination is truly a game-changer in oncology care.

David: I’ll try not to get too philosophical about this here, but human nature is to try to boil everything down to its simplest point. It would be nice if we could boil every disease state down to a biomarker or a couple of them, but we know that’s not even close to the reality, which has been frustrating. On the other side of it, new tools are becoming available to simplify the complexity in a way. 

John: One way to think about it is levels of abstraction. Cofactor is looking at immune cell populations, with raw RNA-seq data as the substrate. There are RNA experts who can reduce everything to RNA-seq, but the volume and complexity of that data are almost unintelligible. Abstracting it to the level of immune cell populations makes sense to clinicians and makes sense biologically. And we’re not losing any of that richness: when building these machine learning models and continually optimizing them, you still have the raw substrate, so as new data folds in, you can improve the models over time.

There’s another level of abstraction, which is maybe what the patient cares about most: am I going to respond to therapy or not? You’re boiling all of this down to a binary piece of information, which is very important when selecting a therapy, and to patients, payers, and physicians. It’s important to retain the richness, and there’s huge merit in abstracting data for different audiences so they can manage and utilize it in different ways.

David: I do want to briefly touch on clinical trials, because it relates to all of this and how we manage the data. What are the advantages, disadvantages, and the current state of trial design, and how does that play into this whole process of getting the data out there and making it actionable?

John: The trial landscape is going through a fundamental shift, in my opinion. The crux is a willingness to embrace some of these new data assets that are coming online, especially as clinical data infrastructure matures. We’re seeing a willingness to adopt real-world data to support regulatory submissions, both on the industry side and at the FDA. It’s fantastic because it speeds up the process of moving drugs to market, as well as patient access once they are approved. Even post-approval, the move now is to look at label expansion almost immediately. Moving from niche, high-need populations to a broader mass market could have a very positive clinical impact. I’m extremely bullish on this move towards real-world data.

It’s particularly relevant to Ascension Ventures’ health system network. We work with large community providers who are generating this mountain of data but haven’t been as engaged with the clinical trial community as their academic peers. This is one of those opportunities where large community health systems have more value to add to the industry, to payers, and to regulatory authorities, because they sit on such enormous volumes of data. Another benefit, in the context of precision medicine, is that academic centers just don’t have the catchment area to identify a lot of these rare disease populations, or cohorts of cancer patients with very obscure or rare mutations or molecular profiles. A lot of this research wouldn’t happen if it weren’t for community providers embracing the new diagnostic modalities coming online to identify, at scale, patients who could benefit from a therapy or be candidates for clinical trials. That’s fundamentally what I’m most excited about, and it manifests in clinical research, post-market work, and expansion opportunities in a variety of ways.

David: What is the role of the investor or the fund in this whole process? What are investors doing to drive this field forward and make precision medicine a reality for patients?

John: This development is fundamentally risky, so I think there’s a role for risk capital to take a long-term view of these opportunities in ways that large incumbents, publicly traded companies, and even clinicians or researchers can’t, just because we can take a much longer horizon on these opportunities. The opportunity in precision medicine is a consequence of so many different factors: technology, delivery with providers, and therapeutic development. There are so many market factors converging that make this such a unique opportunity at this point in time.

It’s difficult for any one company or individual to see that when you’re building a product. One way we can be helpful is by providing that market perspective, especially in venture capital, where confidentiality is of the utmost importance. A lot of these companies can’t really talk about the market dynamics they’re seeing as they build products, but investors who are looking at five, ten, or more of these companies at the same time, over very long time horizons, can pick out patterns that may be difficult to see on the ground. We can also pick out market forces that are coalescing in ways that could create net new opportunities. Ascension Ventures has a bit of a unique model, given that the folks who fund us are also potential customers of and collaborators with our companies. We use that market intelligence to serve up pointed suggestions about where the market’s going, so that as these innovators build products, they can build towards what the market is going to look like three or five years from now. And we can make pointed introductions if there is a health system, a provider, or a physician who could be a good collaborator.

David: That’s very useful insight and a great explanation of how you operate. Thanks again for your time.
