How low can you go?

Advantages of low-input RNA-seq over single-cell RNA-seq

Do you believe RNA sequencing has become an essential tool in addressing biological questions? That it is a powerful means for understanding phenotypic differences? We do! And that’s probably not surprising coming from an RNA-seq company. Traditionally, RNA sequencing refers to poly(A) RNA-seq or ribo-depleted RNA-seq (whole transcriptome). These approaches measure the average expression level for each gene across a large population of cells, often from a bulk biological sample in the form of a tissue section or pellet of cultured cells. These samples can be heterogeneous, likely composed of multiple cell types at different stages of development, apoptosis, necrosis, or spatial differentiation.

To address the heterogeneity of these materials, and truly understand the cell-to-cell variability in a sample, many researchers have turned to single-cell RNA sequencing. This technology is still fairly new, first published in 2009 (Tang et al., Nat. Methods 6(5): 377–82), and several reagent and instrument manufacturers have built commercially available kits and platforms, including Fluidigm, 10x Genomics, BioRad/Illumina, and BD.

Image from: arXiv:1704.01379v2 [q-bio.GN]

Single-cell RNA-seq (or scRNA-seq), as the name implies, enables the measurement of the expression level of (a subset of) genes in an individual cell. Most often, a population of hundreds to thousands of cells is measured, and the distribution of expression levels for each gene is evaluated across the whole. While this technology has found applications in certain areas (identification of cell types, measuring heterogeneity/variability, etc.), there remain many challenges associated with quantifying single-cell expression levels. Alternatively, low-input RNA sequencing, using subsets of cell populations or micro-dissected tissue, can be an ideal approach to obtaining high-quality, interpretable comparative expression information.

 

Sources of variability

–  Amplification (technical noise) – the amount of RNA present in a single cell is limited (1–50 pg, depending on cell type), and the resulting cDNA must be amplified to generate a sequencing library. This can introduce bias and noise in the data. With low-input RNA-seq, some amplification may still be required (depending on the amount of starting material); however, it is much less than with scRNA-seq. Cofactor has optimized this input-to-amplification ratio in our picoRNA workflow to ensure that as little amplification as possible is used.

–  Gene ‘dropouts’ – due to the low amount of starting material in scRNA-seq, some genes (especially those expressed at moderate to low levels) can be randomly “missed” during library preparation, even though they are present in the cell. This is less likely to happen with low-input RNA-seq experiments, as the pool of molecules is larger and much more diverse.

–  Biological noise – even genetically identical cells, under identical conditions, display high variability in their gene and protein expression levels. In single-cell experiments, this biological noise is quite large and must be averaged across many cells to truly understand the cell population. This is in contrast to low-input RNA-seq, where the biological variability between cells is represented as the ensemble expression level of the sample.
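To see why a larger starting pool makes dropouts less likely, consider a simple back-of-the-envelope model (an assumption for illustration, not a description of any particular chemistry): if each transcript copy is captured independently with some efficiency, a gene drops out only when every copy is missed.

```python
# Hypothetical illustration of gene dropouts. Assumes each transcript copy
# is captured independently with probability p; the 10% efficiency and
# 5-copies-per-cell numbers are made up for illustration only.
def dropout_probability(copies, capture_efficiency):
    """P(all copies missed) = (1 - p)^n under independent capture."""
    return (1 - capture_efficiency) ** copies

p = 0.10  # assumed capture efficiency

# A gene present at 5 copies per cell:
single_cell = dropout_probability(5, p)        # one cell's worth of RNA
low_input = dropout_probability(5 * 100, p)    # pooled RNA from 100 cells

print(f"single cell: {single_cell:.3f}")   # often missed (~0.59)
print(f"100 cells:   {low_input:.2e}")     # effectively never missed
```

Under this toy model, a gene at 5 copies is missed more than half the time from a single cell, but essentially never from a 100-cell pool, which matches the intuition in the bullet above.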

 

Statistical Significance

–  Number of samples – due to the high biological and technical noise in a scRNA-seq experiment, hundreds to thousands of individual cells must be used, and the data analyzed and integrated, to draw statistically meaningful conclusions. While the commercially available protocols make use of both cellular and sample multiplexing, the cost of these sequencing experiments still far exceeds that of a traditional RNA-seq experiment. Further, the sheer amount of data generated can be overwhelming, and depending on the provider/technology being used, you may find there is limited informatics support available. Cofactor’s low-input RNA-seq approach includes our comparative-expression analysis and ActiveSite interface, where you can easily sort and refine your data – including p-values and CVs for replicate groups with >3 samples.

–  Cost of sequencing – while the cost of sequencing continues to decrease, the reality is that even at relatively shallow depths (10k reads/cell), when studying hundreds to thousands of cells, the sequencing alone can become cost-prohibitive quite quickly. Using low-input RNA-seq, one can craft an experimental design that enables sequencing deeply, and with multiple biological replicates, for a reasonable cost. And, with Cofactor’s expertise in this area, we can ensure high success rates and quick turnaround times, further adding value to your research.
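For readers curious what per-gene replicate statistics like the CVs and p-values mentioned above look like in practice, here is a minimal sketch. This is not Cofactor’s actual pipeline; the expression values are invented, and the choice of a Welch’s t-test is an assumption for illustration.

```python
# Sketch of per-gene replicate statistics: coefficient of variation (CV)
# within a replicate group, and a two-group p-value. All numbers are
# made up for illustration.
import statistics
from scipy import stats

def cv(values):
    """Coefficient of variation: sample standard deviation / mean."""
    return statistics.stdev(values) / statistics.mean(values)

# Normalized expression for one gene across 3 replicates per condition
control = [105.0, 98.0, 110.0]
treated = [162.0, 155.0, 171.0]

print(f"control CV: {cv(control):.2%}")
print(f"treated CV: {cv(treated):.2%}")

# Welch's t-test between the two replicate groups
t_stat, p_value = stats.ttest_ind(control, treated, equal_var=False)
print(f"p-value: {p_value:.4f}")
```

With tight within-group CVs and a clear between-group shift like this, even three replicates per condition can yield a small p-value, which is the kind of comparison a low-input design with replicates makes affordable.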

Sample Logistics

–  Shipping and dissociation – for scRNA-seq, cells must be dissociated into viable single-cell suspensions, which are then loaded onto the platform. This presents challenges from an outsourcing and shipping perspective, as ideally the cells are freshly dissociated prior to loading. With low-input RNA-seq, however, the sample logistics are easier: cells may be sorted directly into lysis buffer and shipped; cells may be suspended in TRIzol solution and shipped; extracted RNA may be provided (in low volumes); or FFPE core needle biopsies/sections/slides may be submitted.

–  Sample types – scRNA-seq may only be used on samples which yield viable single-cell suspensions. This limits its utility to cell cultures and fresh or frozen tissue. In contrast, Cofactor’s options for low-input RNA-seq can accommodate high-quality (RIN > 7 for picoRNA) or low-quality (DV200 > 30% for RNAmplify or RNAccess) samples. This makes low-input RNA-seq broadly applicable across multiple sample types, sources, and storage times.

–  Spatial context – with standard scRNA-seq, all spatial information is lost in the dissociation process. However, by utilizing laser capture microdissection (LCM) on snap-frozen tissues to sequester specific cell populations, low-input RNA-seq may be used to elucidate spatial profiles.

 


 

So, if you happen to have access to a cell-isolation and barcoding platform nearby, and your scientific question truly relies on collecting information at the cellular level, AND you have the statistical prowess to make sense of all the data you collect – you’re on the right track with scRNA-seq!

But if your experiment does not check ALL those boxes, and you’re instead interested in:

  • sorting cell populations via FACS/flow cytometry
  • capturing small regions of tissue via LCM
  • processing very small numbers of cultured cells (or cells with low RNA content)
  • saving money and maximizing the insight from your budget

then low-input RNA-seq may be a great approach to gain powerful transcriptomic data with excellent technical reproducibility and reduced biological noise.

Are you interested in better understanding the technical details or logistics of these protocols? Reach out to schedule a time to speak with one of our Project Scientists today.

 

Other Helpful links:

https://arxiv.org/abs/1704.01379

https://hemberg-lab.github.io/scRNA.seq.course/introduction-to-single-cell-rna-seq.html

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4132710/

https://www.ncbi.nlm.nih.gov/pubmed/24832513

RNA Is About to Make Personalized Medicine a Reality

This is an amazing time to be in biology.

In fact, I like to say that right now we’re in a place that’s a lot like where personal computers were in 1987. In those days, PCs were pretty limited in what they could do. They still ran off floppy disks, they weren’t really connected to one another, and they were only capable of running a few basic software programs. But, even with those limitations, they were still useful tools that made all of our lives easier. And look how far we’ve come; today we all carry the equivalent of a 1987 supercomputer, aka a smartphone, in our pocket, capable of doing things we couldn’t have even imagined 30 years ago.

But that pre-Internet PC is right about where we are with genomic approaches to medicine right now. We have these amazing tools at our disposal, and the exciting part is that we’re just beginning to get to the point where we can see how it’s going to change all our lives and what it’s going to be in 10, 20, or even 30 years. Thomas Friedman refers to this acceleration of technologies post-2007 (including genomic technologies) as “the supernova,” where we are just now getting a grasp on how we might harness the power of these technologies [1].

Before the Human Genome Project completed its work in 2003, before we had that map, the way we analyzed DNA samples was pretty rough. We essentially had signposts mapped across all the different chromosomes and, in order to find shared markers, researchers had to sit down and simply look for them. That’s all we could do at the time and it was extremely time consuming.

It wasn’t even very precise. The whole human genome is three billion-plus letters long, so even if you’re looking at a tiny sliver of one chromosome, that can still be a million letters: a huge region and a huge margin for error. There might be 50 different genes in a segment that size.

But the Human Genome Project changed all that. It showed the value of biological big data. It created a demand for high-throughput sequencing. Today, a process that used to take years and years of tedious, hands-on work can be completed in a matter of hours, and a job that used to cost over $1 billion now comes in at around $1,000 [2].

We are now working at a completely different scale.

 

Beyond DNA

And this has changed everything about genetic medicine. Today we know exactly where each gene is located in the code and we can go in and read each one directly. That means we are able to identify and isolate exact mutations, allowing us to much more closely target specific diseases at the genetic level.

This has also allowed us to look at RNA at much higher resolution. RNA is different from DNA in that RNA changes when disease is present. We can detect that change and see the indicators of disease, even when someone isn’t showing symptoms yet. At a high level, all DNA can really tell you is the hereditary risk factors a given individual might face, or their chances of developing a certain disease. But that’s just a risk; it can’t say whether you will or won’t become sick. Real symptoms and today’s conditions are what really matter, so if all you’re looking at is DNA, you can end up with a lot of false positives and misdiagnoses.

In the case of serious diseases like cancer, this can mean spending thousands of dollars on the wrong therapy — that’s what many of these drugs can cost. More than just the lost money, patients can be on the incorrect treatment for a month or two. If they’re really sick, they may not get a second chance at finding the right therapy. They may run out of time.

When we look at somebody’s RNA, by contrast, we’re able to see essentially a snapshot of what’s happening in their body right now. This allows us to see beyond the DNA and get much more information about what’s going on with a patient, or how they are responding to a certain treatment, than we could if we just looked at their DNA.

 

Multiple Applications

In addition to front-line clinical treatment like the example above, this could be very valuable in immuno-oncology and the development of cancer vaccines. Every major pharmaceutical company today has an immunotherapy program trying to develop drugs in this area, but one of the challenges they all face is understanding how the immune system responds to the treatment: what’s going on in what they call the microenvironment of the tumor. It is very difficult to get a good sense of what immune cells are up to inside a tumor, particularly with solid tumor cancers such as lung cancer, the deadliest.

But by isolating the pure immune cell subtypes and analyzing their RNA individually we’ve been able to develop a set of fingerprints, a signature for what each immune cell’s RNA looks like. With this information we can now take a patient sample, match their RNA up with our database of fingerprints and accurately pick out and report exactly what types of immune cells are present in their tumors.
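As a toy illustration of the fingerprint-matching idea (not Cofactor’s actual method; the genes, cell types, and numbers here are all invented), one common approach is to treat a mixed sample’s expression as a weighted sum of per-cell-type signature profiles and solve for the weights with non-negative least squares:

```python
# Toy signature-based deconvolution: estimate cell-type proportions in a
# mixed sample from per-cell-type expression "fingerprints". All values
# are invented for illustration.
import numpy as np
from scipy.optimize import nnls

# Signature matrix: rows = genes, columns = cell types
signatures = np.array([
    [10.0, 1.0],   # gene A: high in T cells, low in macrophages
    [ 2.0, 8.0],   # gene B: the reverse
    [ 5.0, 5.0],   # gene C: expressed similarly in both
])

# Simulate a mixed sample that is 70% T cells and 30% macrophages
true_props = np.array([0.7, 0.3])
mixture = signatures @ true_props

# Solve for non-negative weights and normalize to proportions
est_props, _ = nnls(signatures, mixture)
est_props /= est_props.sum()
print(est_props)  # recovers approximately [0.7, 0.3]
```

Real deconvolution works with many more genes and cell types, plus noise, but the core idea is the same: the signature matrix turns a bulk expression vector into an estimate of which immune cells are present and in what proportions.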

We can use this information to help predict a patient’s response to immunotherapy.  Drug developers who are working on new drugs want to learn as much about their patient populations as possible. Now we can see what the immune system is doing before a drug has been administered, watch as it’s administered over time, and then see how it’s responding.

Everything in medicine today is based on the statistical average: how most people respond most of the time. With tools like RNA fingerprints, we won’t have to rely on an average that doesn’t describe us as individuals or what is going on within our bodies right now. Physicians and drug developers will be able to make decisions on a person-by-person basis, using that information to precisely tailor their work to what is best for each individual at any given point in time.

And that is the true power of personalized medicine.

 

  1. Thomas Friedman. Thank You For Being Late: An Optimist’s Guide To Thriving in the Age of Accelerations. First Edition, Farrar, Straus and Giroux, 2016
  2. Kevin Davies eloquently covers this technological jump and price plummet in The $1,000 Genome: The Revolution in DNA Sequencing and the New Era of Personalized Medicine. First Edition. Free Press, 2010

We’re headed to AACR!

Cofactor Genomics is headed to AACR in just a few days! But it won’t be our first rodeo this conference season. At the Molecular Medicine Tri Conference in February, our recently launched product, Pinnacle, was just one of many highlights for Cofactor:

Presenting Pinnacle…
Jon Armstrong, Cofactor’s CSO, overcame the sound of jackhammers in the construction zone outside the Moscone Center to drop a few hammers of his own during his talk in the Molecular Diagnostics track. He presented Pinnacle, our new clinical cancer diagnostic, which couples RNA expression with software to determine the molecular fingerprint of any formalin-fixed, paraffin-embedded (FFPE) solid tumor sample. After the talk, several researchers approached him with questions – sample quality and quantity requirements chief among them!

Earlier in the week, our medical director, Dr. Eric Duncavage, led a short course where he got down to the nuts and bolts of designing a clinical-grade NGS assay. Have you seen his recent publication, Standards and Guidelines for the Interpretation and Reporting of Sequence Variants in Cancer?

Orange you glad it’s me?
Also in attendance was our team of Project Scientists, including yours truly, bearing our signature orange jackets (whose “bright” idea was that… it was “brilliant”). I met with a new contact at the conference, someone who I had previously only known through email and telephone. As we approached each other, he greeted me with “oh, it’s you”, acknowledging that my jacket had caught his attention among the many conference goers.

The Paragon is in sight!
Immuno-oncology, or “cancer immunology” as one of our well-researched contacts referred to it, was a hot topic among the clinicians and researchers in attendance. Profiling the tumor microenvironment and determining the level of mutational burden both provide insights into how to enable the immune system to better target the tumor. With Paragon, we’ve moved the needle on profiling, creating RNA-seq-based signatures for 24 immune cell types, helping you determine who has shown up to the fight!

Interested to learn more? Dr. Ryan Bloom, Cofactor’s Director of Biomarker Development, will be presenting a poster on this topic at AACR: Cofactor Paragon: a novel tool to analyze the tumor microenvironment using RNAseq.

And, the Cofactor Team will be at Booth #1962; stop by and say hello!

Get ready for #TriCon with Discount Registration!

The Cofactor team can most often be found in our St. Louis laboratory, or our San Francisco offices.  But, sometimes we hit the road to share our latest work with our customers, collaborators, and investors.

This February, the crew will be in sunny San Francisco for CHI’s 24th International Molecular Med TRI-CON 2017 (aka Tri-Con).

There’s some really exceptional science on the agenda.  You can hear from our CSO, Jon Armstrong, on “Reaching the Pinnacle: A Unique Cancer Diagnostic Tool that Harnesses the Power of RNA” on Tuesday, February 21 at 11:45 am – 12:15 pm in the Molecular Diagnostics track. And, we’re certain the networking opportunities will foster crucial collaborations in the pursuit of personalized medicine.

Are you on the fence about attending?  What if we sweetened the deal?  

Friends of Cofactor Genomics can receive an exclusive discount code – 1737RAE – for $200 off your registration cost.*

How to Register:
Online: www.triconference.com/registration
Tel: +1 781-972-5400

If you’ve never attended the Molecular Medicine Tri-Conference, you can expect to meet 3,500+ international delegates for valuable networking, and to attend programs on molecular medicine: specifically Discovery, Genomics, Diagnostics, and Information Technology.

We hope to see you there!

View full details at TriConference.com.

 

* Our clients and colleagues will receive $200 off for commercial attendees or $100 off for academic, government and hospital-affiliated attendees. You must mention priority keycode 1737RAE to receive the registration discount. Alumni, Twitter, LinkedIn, Facebook or any other promotional discounts cannot be combined. Discounts not applicable on event short courses. This discount does not apply to previously registered attendees.

 

The trouble with RNA

Here’s the trouble with RNA: In order to sequence and analyze it, you first have to get it out of your biological samples intact. And, as we all know, this requires some nimble fingers – avoiding RNases, moving quickly, and working to ensure the highest quality RNA possible.

We know that the quality of an RNA sequencing experiment starts with the method of its purification from your sample (see 6 CHANGES THAT’LL MAKE A BIG DIFFERENCE WITH YOUR RNA-SEQ, PART 1). But the primary focus of our Discovery Services laboratory experts is to develop and deliver the BEST in RNA library preparation, sequencing, and analysis.

So, how can we help you to provide us with the best quality RNA possible?

Cofactor is pleased to announce that we have partnered with ARQ Genetics and their staff of Ph.D.-level molecular biologists, applying their years of experience to consistently deliver high quality material from your samples.

For routine RNA extractions (such as cell pellets, tissue, non-CLIA FFPE), we’ve made the process as easy as possible:

  • The cost of the extraction can be added directly to your RNA-seq quote, no additional billing required.
  • Transit time is minimized by shipping your samples directly to ARQ. Once extracted, the RNA will be transported to Cofactor for all downstream molecular work and analysis.
  • Quality Control data are collected following extraction at ARQ and then once again at Cofactor, ensuring high-quality RNA is feeding into the RNA-seq workflow.

“We’ve seen a high quality of work performed by ARQ — with both client projects and internal projects alike.  I’m pleased that we’ll be offering this service to our customers who are unfamiliar with RNA extractions, or unable to devote internal resources. It’s important for us to make the process of obtaining high-quality RNA-seq data as seamless as possible for our clients.”   -Jon Armstrong, CSO 

If you are interested in including RNA extraction as part of your project, just let us know during our experimental design discussions. Contact a Project Scientist at Cofactor Genomics to get started today!

 

Learn more about our offerings.
