Friday, 29 January 2016

ARTICLE: Go green -- recycle mitochondria

A novel quantitative assay of mitophagy: Combining high content fluorescence microscopy and mitochondrial DNA load to quantify mitophagy and identify novel pharmacological tools against pathogenic heteroplasmic mtDNA

  • Mitophagy degrades mitochondria, and likely plays important roles in the cell's responses to mitochondrial disease, but is hard to measure and thus poorly understood: we propose new ways of measuring mitophagy and use them to explore drugs that may help change damaged mitochondrial populations
Mitochondria, as we've written about before, are important entities in our cells that produce energy and take part in many other vital processes. Mitochondrial DNA (mtDNA), inherited from our mothers, contains instructions on how to build important mitochondrial machinery. MtDNA is sometimes mutated, leading to problems with our mitochondria. How do our cells cope?

Mitophagy (from mito-(chondria) and -phagy (eating)) is a process by which cells degrade and recycle mitochondria, allowing dysfunctional mitochondria to be removed and replaced. Mitophagy is one of a number of cellular mechanisms that maintain a healthy population of mitochondria, and appears to play a central role in determining the inheritance and evolution of mtDNA over our lifetimes. However, our understanding of mitophagy is limited because it is hard to observe.

In a recent and epically-titled paper in Pharmacological Research here, we explore two different approaches for measuring mitophagy in cells. The first is physical. We used chemicals to make mitochondria glow red, and autophagosomes (the cellular machines responsible for the degradation of mitochondria) glow green. We then used a microscope to examine large numbers of cells and recorded how often red (mitochondria) and green (autophagosomes) were seen together, which we took to imply that mitophagy may be occurring. We confirmed that various drugs and chemicals known to affect mitophagy had the expected effects on this estimate of mitophagy, and that perturbing ATG7 (an essential part of the autophagic machinery) substantially reduced our observed mitophagy levels.
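
Our actual high-content imaging pipeline is more sophisticated than this, but as a rough sketch of the underlying idea, a pixel-overlap estimate of red/green colocalisation on a two-channel image might look like the following (the thresholds and toy image are purely illustrative; numpy assumed):

```python
import numpy as np

def colocalisation_fraction(red, green, red_thresh=0.5, green_thresh=0.5):
    """Fraction of mitochondrial (red) pixels that also carry
    autophagosome (green) signal above threshold."""
    red_mask = red > red_thresh
    green_mask = green > green_thresh
    overlap = np.logical_and(red_mask, green_mask)
    return overlap.sum() / max(red_mask.sum(), 1)

# Toy two-channel "image": a patch of mitochondrial signal,
# half of which is also covered by autophagosome signal.
red = np.zeros((10, 10))
green = np.zeros((10, 10))
red[2:6, 2:6] = 1.0    # 16 mitochondrial pixels
green[2:6, 2:4] = 1.0  # green overlaps 8 of them

print(colocalisation_fraction(red, green))  # 0.5
```

Averaging this fraction over many imaged cells, and comparing across drug treatments, gives an estimate of how mitophagy responds to each treatment.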

We also subjected cells to stress by growing them with a less plentiful supply of energy. We found that this energy stress increased the amount of mitophagy (perhaps as cells struggle to make the very best of their mitochondrial populations). We also found that mitophagy broadly decreased in cells from older people, and was increased in cells from people carrying an mtDNA disease (negatively affecting mitochondrial functionality).

The second approach is genetic. In cells from patients with mtDNA disease, some mtDNA is normal and some is mutated -- we used genetic tools to measure the proportion of mutant mtDNA in cells. We observed that when we stressed patients' cells, levels of mutant mtDNA decreased while our physically observed measure of mitophagy increased, supporting a picture in which mitophagy removes dysfunctional mitochondria when energy output is of central importance. We also found evidence for undirected mitophagy, where mtDNA copy number is depleted with no preference for mutant or wildtype.

Observing the colocalisation of autophagosomes (green) and mitochondria (red), as well as the proportion of mutant mtDNA (white stars), allows a bilateral characterisation of mitophagy. The patterns of changes in these observations tell us about how drug treatments and different environments change mitochondrial populations.

The physical and genetic approaches give us two largely independent means to estimate mitophagy, placing our understanding of this vital process on a solid analytical foundation. We used these tools to assess the effects of various drugs on mitophagy, allowing us to characterise the effects of drugs like metformin (inhibiting mitophagy) and phenanthroline (inducing undirected mitophagy) in unprecedented detail and facilitating more precise statements about their utility in clinical contexts. Iain


Wednesday, 27 January 2016

ARTICLE: Warburg Ensemble

Monitoring Intracellular Oxygen Concentration: Implications for Hypoxia Studies and Real-time Oxygen Monitoring

  • Cancer cells vary in how they produce their energy: we make progress understanding this variability, which may eventually help scientists design better therapies.
Cells can produce energy through several processes. We'll consider two – process "O" (for "oxidative phosphorylation"), and process "G" (for "glycolysis"). "O" uses oxygen, and harnesses the cell's mitochondria to produce energy. "G" does not use oxygen and produces energy without directly using mitochondria.

Healthy cells use both “O” and “G”, but cancer cells are often observed to rely on "G" much more. The shift away from "O+G" towards just "G" in cancer is often called the "Warburg effect", after Otto Warburg, who wrote about the shift in the 1950s. It remains unclear, however, whether the Warburg effect applies to all cancer cells under all conditions, or if different cells and different environments experience different shifts. This is important because understanding how cancer cells get their energy -- and, more generally, what changes occur in cancer cells compared to healthy cells -- may allow us to design therapies that challenge cancer cells while leaving healthy cells undamaged.

We used some fancy modern technology (focussed around the MitoXpress-Intra probe) to measure the difference between oxygen levels within a cell and oxygen levels in the cell's environment. We developed a mathematical way of producing "calibration curves", directly linking the observed MitoXpress behaviour to oxygen concentrations. If cells are using "G" alone, these levels are similar, as no oxygen is being consumed by the cells. If cells are also using "O", oxygen levels within cells should be rather lower than in their environment.

We found that two different cancer cell lines (with the rather jargon-y names "RD" and "U87MG") behaved surprisingly differently. When grown on glucose, U87MG looks quite "G", with oxygen levels within cells similar to those in the environment (e.g. 17.1% in cells, 18% outside). RD looks much more "O+G", with substantial differences between in-cell and outside-cell oxygen levels (e.g. 13.2% in cells, 18% outside). Importantly, these findings were reproduced across a range of environmental oxygen levels (18% to 5%), modelling the range of conditions that cancer cells experience in tumours in the body. The two cancer cell lines thus seem to produce their energy in rather different ways, underlining that the Warburg effect is not an invariant across all cancers, and that treatments may be improved by taking this into account. We also showed that treating a different cancer cell line ("786-0") with phenformin, a drug inhibiting mitochondria, shifts cells away from "O+G" to "G", and that this shift can be monitored in real time with MitoXpress.

Different cancer cell lines (U87MG and RD) produce energy through different pathways, engaging more “G” (glycolysis) or “O” (oxidative phosphorylation). “O” uses oxygen (O2), lowering oxygen levels in cells compared to their environment. The different balance of “G” and “O” in different cases is important for understanding the heterogeneity of cancer.

Our paper appears in a book with the catchy title "Oxygen Transport to Tissue XXXVII", associated with the journal Advances in Experimental Medicine and Biology. You can get a sneak peek here and we'll update with a link when possible. Iain

ARTICLE: Generations of generating functions in dividing cells

Closed-form stochastic solutions for non-equilibrium dynamics and inheritance of cellular components over many cell divisions

  • Populations of important machines in our cells behave quite randomly: we build a mathematical framework to better understand these populations (which has already helped us understand the inheritance of mtDNA disease)
Cell biology is an unpredictable world, as we've written about before. The important machines in our cells replicate and degrade in processes that can be described as random; and when cells divide, the partitioning of these machines between the resulting cells also looks random. The number of machines we have in our cells is important, but how can we work with numbers in this unpredictable environment?

In our cells, machines are produced (red), replicate (orange), and degrade (purple) randomly with time, as well as being randomly partitioned when cells split and divide (blue). Our mathematical approach describes how the total number of machines is likely to behave and change with time and as cells divide.

Tools called "generating functions" are useful in this situation. A generating function is a mathematical function (like G(z) = z², but generally more complicated) that encodes all the information about a random system. To find the generating function for a particular system, one needs to consider all the random things that can happen to change the state of that system, write them down in an equation (the "master equation") describing them all together, then use a mathematical trick to push that equation into a different mathematical space, where it is easier to solve. If that "transformed" equation can be solved, the result is the generating function, from which we can then get all the information we could want about a random system: the behaviour of its mean and variance, the probability of making any observation at any time, and so on.
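
As a flavour of how a generating function encodes everything about a random system, here is the textbook example of a simple production-and-degradation ("immigration-death") process, whose steady-state generating function has a well-known closed form. Differentiating it at z = 1 yields the mean and variance (a minimal sketch, not the richer closed forms derived in our paper; sympy assumed):

```python
import sympy as sp

z, lam, nu = sp.symbols('z lambda nu', positive=True)

# Steady-state generating function for a production-degradation process:
# machines produced at rate lambda, each degraded at rate nu.
# The classic result is G(z) = exp((lambda/nu)(z - 1)).
G = sp.exp((lam / nu) * (z - 1))

mean = sp.diff(G, z).subs(z, 1)
second_factorial_moment = sp.diff(G, z, 2).subs(z, 1)
variance = sp.simplify(second_factorial_moment + mean - mean**2)

print(mean)      # lambda/nu
print(variance)  # lambda/nu -- Poisson: variance equals mean
```

The same recipe (differentiate, substitute z = 1) extracts moments from the far more complicated generating functions describing replication, degradation, and repeated cell divisions.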

We've gone through this mathematical process for a set of systems where individual cellular machines can be produced, replicated, and degraded randomly, and split at cell divisions in a variety of different ways. The generating functions we obtain allow us to follow this random cellular behaviour in new detail. We can make probabilistic statements about any aspect of the system at any time and after any number of cell divisions, instead of relying on assumptions that the system has somehow reached an equilibrium, or restricting ourselves to a single or small number of divisions. We've applied this tool to questions about the random dynamics of mitochondrial DNA (which we're very interested in! And this work connects explicitly with our recent eLife paper) in cells that divide (like our cells) or "bud" (like yeast cells), but the approach is very general and we hope it will allow progress in many more biological situations. You can read about this, free, here in the Proceedings of the Royal Society A. Iain and Nick [blog article also here]

ARTICLE: How evolution deals with mitochondrial mutants (and how we can take advantage)

Stochastic modelling, Bayesian inference, and new in vivo measurements elucidate the debated mtDNA bottleneck mechanism

  • Disease-causing mutant mtDNA is inherited through a complicated process: we use maths and statistics to shed light on this process and suggest possible therapeutic strategies to address disease inheritance and onset
Our mitochondrial DNA (mtDNA) provides instructions for building vital machinery in our cells. MtDNA is inherited from our mothers, but the process of inheritance -- which is important in predicting and dealing with genetic disease -- is poorly understood. This is because mitochondrial behaviour during development (the process through which a fertilised egg becomes an independent organism) is rather complex. If a mother's egg cell begins with a mixed population of mtDNA -- say with some type A and some type B -- we usually observe hard-to-predict mtDNA differences between cells in the daughter. So if the mother's egg cell starts off with 20% type A, egg cells in the daughter could range, for example, from 10% to 30% type A, with each different cell having a different proportion of A. This increase in variability, referred to as the mtDNA bottleneck, is important for the inheritance of disease. It allows cells with higher proportions of mutant mtDNA to be removed; but also means that some cells in the next generation may contain a dangerous amount of mutant mtDNA. Crucially, how this increase in variability comes about during development is debated. Does variability increase because of random partitioning of mtDNAs at cell divisions? Is it due to the decreased number of mtDNAs per cell, increasing the magnitude of genetic drift? Or does something occur during later development to induce the variability? Without knowing this in detail, it is hard to propose therapies or make predictions addressing the inheritance of disease.
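
As a toy sketch of one candidate mechanism (random partitioning and turnover -- deliberately much simpler than the models in our paper), repeated binomial resampling of the mutant fraction shows how cell-to-cell variability grows with successive divisions (numpy assumed; all parameter values invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def heteroplasmy_variances(h0=0.2, n_copies=100, divisions=15, cells=5000):
    """Follow the mutant mtDNA fraction (heteroplasmy) in many lineages.
    Each division, mtDNAs are randomly partitioned and the population
    regrown by random duplication -- modelled here as binomial
    resampling of the mutant fraction."""
    h = np.full(cells, h0)
    variances = []
    for _ in range(divisions):
        h = rng.binomial(n_copies, h) / n_copies
        variances.append(h.var())
    return variances

v = heteroplasmy_variances()
print(v[0], v[-1])  # cell-to-cell variability grows over divisions
```

Smaller copy numbers per cell (a tighter bottleneck) make the variance grow faster per division, which is why the size and timing of the bottleneck matter so much.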

We set out to answer this question with maths! Several studies have provided data on this process by measuring the statistics of mixed mtDNA populations during development in mice. The different studies provided different interpretations of these results, proposing several different mechanisms for the bottleneck. We built a mathematical framework that was capable of modelling all the different mechanisms that had been proposed. We then used a statistical approach called approximate Bayesian computation to see which mechanism was most supported by the existing data. We identified a model where a combination of copy number reduction and random mtDNA duplications and deletions is responsible for the bottleneck. Exactly how much variability is due to each of these effects is flexible -- going some way towards explaining the existing debate in the literature.  We were also able to solve the equations describing the most likely model analytically. These solutions allow us to explore the behaviour of the bottleneck in detail, and we use this ability to propose several therapeutic approaches to increase the "power" of the bottleneck, and to increase the accuracy of sampling in IVF approaches.
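
Approximate Bayesian computation itself is easy to sketch. Here is a minimal rejection-ABC example on a deliberately simplified one-step bottleneck model (not the models, data, or statistics in the paper): draw parameters from a prior, simulate, and keep the draws whose summary statistic lands close to the "observation":

```python
import numpy as np

rng = np.random.default_rng(2)

def simulated_variance(n_bottleneck, h0=0.3, cells=2000):
    """Summary statistic: heteroplasmy variance across cells after a
    single binomial bottleneck of size n_bottleneck."""
    h = rng.binomial(n_bottleneck, h0, size=cells) / n_bottleneck
    return h.var()

# "Observed" data generated with a hidden true bottleneck size of 40.
observed = simulated_variance(40)

# ABC rejection: draw bottleneck sizes from a broad prior, keep those
# whose simulated summary statistic lies close to the observation.
prior_draws = rng.integers(5, 200, size=5000)
accepted = [n for n in prior_draws
            if abs(simulated_variance(n) - observed) < 0.0005]

print(np.median(accepted))  # posterior estimate, near the true value 40
```

The accepted draws approximate the posterior distribution over the bottleneck size; in the real problem the models have several parameters and the data come from measured mouse heteroplasmy statistics, but the logic is the same.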

A "bottleneck" acts to increase mtDNA variability between generations. But how is this bottleneck manifest? Our approach suggests that a combination of copy number reduction (pictured as a "true" copy number bottleneck), and later random turnover of mtDNA (pictured as replication and degradation), is responsible.

Our excellent experimental collaborators, led by Joerg Burgstaller, then tested our theory by taking mtDNA measurements from a model mouse that differed from those used previously and which could, in principle, have shown different behaviour. The behaviour they observed agreed very well with the predictions of our theory, providing encouraging validation that we have identified a likely mechanism for the bottleneck. New measurements also showed, interestingly, that the behaviour of the bottleneck looks similar in genetically diverse systems, providing evidence for its generality. You can read about this in the free (open-access) journal eLife here. Iain and Nick [blog article also here]

ARTICLE: Great technological power, great statistical responsibility

Multiple hypothesis correction is vital and undermines reported mtDNA links to diseases including AIDS, cancer, and Huntington’s

  • Several papers perform incorrect and misleading statistical analyses in seeking links between mtDNA and cancer: these statistical issues must be corrected before scientific and policy progress can be made from these investigations
Biologists often report a result as a "significant" sign of exciting new science if there is less than a 1-in-20 chance that the result they observe could have emerged by chance from boring old science. This is silly (although we do it too!) -- by contrast, for example, physicists require less than a 1-in-3,500,000 chance. But this post won't discuss too many problems with this state of affairs -- that is done admirably elsewhere.

The problem can be compounded when scientists take lots of measurements. Say we take 50 measurements of a boring old system, and every time we see something that has less than a 1-in-20 chance of appearing in a boring old system, we call it "significant". We're playing the odds 50 times, so we expect to see 1-in-20 results appear around 2 or 3 times; just as if we roll a die 50 times, we'd expect to roll a good few sixes. If we call every 1-in-20 result "significant" without accounting for the fact that we've looked at lots of measurements (and are thus more likely to see 1-in-20s by chance), we are in danger of reporting exciting new science when in fact the boring old science has been true all along.

There are lots of ways of doing this accounting, but a series of recently published papers linking mtDNA to diseases has made no attempt to do it. Generally, these papers look at the mtDNA of people without the disease and the mtDNA of people with the disease. If any mtDNA features appear more in the people with the disease, the paper calculates the chance of that difference occurring in the boring old picture (in which there is no link between the mtDNA feature and the disease). If that chance drops below the 1-in-20 mark, they report an exciting new link between that feature and the disease. But they test dozens of features and never account for this multiple testing -- so, as above, we'd expect them to see "significant" results emerging just by chance. In a paper in Mitochondrial DNA here (free here) I show, by creating artificial data, that this problem is rife, that most of these reported links are spurious, and that scientists really need to be more responsible, before their flawed analysis starts to misguide health policy and medicine.

The top graph shows how the probability of seeing a 1-in-20 occurrence (p < 0.05 in the jargon), when in fact there is nothing new and exciting to report, increases as a scientist investigates more things. If an experiment consists of one test, then a 1-in-20 occurrence indeed has a 1-in-20 probability (0.05). But as soon as we do more tests, the chance of seeing at least one 1-in-20 occurrence starts to increase, as we are "playing the game" more times. If we do 6 tests there is a 0.27 probability -- between a 1-in-4 and 1-in-3 chance -- that we will see at least one 1-in-20 event. This is illustrated below, where we have six dice and think some of them may be unfair. We roll each one five times and count the number of 6s. One of them comes up 6 three times -- the chance of this happening for one fair die is less than 1-in-20. But because we've looked at six dice, we should be less surprised to see this rare event, because we've looked at more events in total. We need more evidence to claim that this die is unfair.
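
The arithmetic behind the top graph is a one-liner, and the simplest correction (Bonferroni) is another. A quick sketch (this is the generic textbook calculation, not the analysis in the paper):

```python
# Probability of at least one "1-in-20" event (p < 0.05) among m
# independent tests of boring old (null) systems.
def family_wise_error(m, alpha=0.05):
    return 1 - (1 - alpha) ** m

for m in [1, 6, 50]:
    print(m, round(family_wise_error(m), 3))
# 1 -> 0.05, 6 -> 0.265, 50 -> 0.923

# The simplest correction (Bonferroni): only call a result significant
# if its p-value beats alpha divided by the number of tests.
def bonferroni_threshold(m, alpha=0.05):
    return alpha / m

print(bonferroni_threshold(50))  # a far stricter bar than 0.05
```

So a study testing 50 mtDNA features against a disease should expect a "1-in-20" hit over 90% of the time even when nothing is there, unless it tightens its threshold accordingly.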

This quick note only represents the tip of the iceberg. MtDNA studies are often statistically unsound; statistical misdemeanours in biomedical studies are so common that most published research is wrong; scientists increasingly focus on the 1-in-20 chance as opposed to the size and importance of the effect they're measuring; the majority of hallmark papers in vital fields like cancer science are unreproducible (though this last point may have other causes than statistical problems). The 1-in-20 idea was only ever meant to be a step in identifying interesting scientific avenues, not the final measure of scientific truth. This is a big, and growing, problem! Iain

(For accessibility I have used "exciting", "boring", and "1-in-20" instead of their usual, more technical labels; they of course are usually called the "alternative hypothesis", "null hypothesis", and "p < 0.05" respectively).

ARTICLE: The function of mitochondrial networks

What is the function of mitochondrial networks? A theoretical assessment of hypotheses and proposal for future research

  • Mitochondria in our cells sometimes form large networks and sometimes remain independent, with changes between these structures often linked to disease: we use physics and maths to explore why these networks may form and be valuable to the cell, and to suggest ways to find out more
Mitochondria are dynamic energy-producing organelles, and there can be hundreds or even thousands of them in one cell. Mitochondria (as we've blogged about before) do not exist independently of each other: sometimes they form giant fused networks across the cell, sometimes they are fragmented, and sometimes they take on intermediate shapes. Which state is preferred (fragmented, fused or in between) seems to depend on, for example, cell-division stage, age, nutrient availability and stress levels. But what exactly is the reason for the cell preferring one morphology over another?

Nonlinear phenomena -- like some percolation effects -- could help account for the functional advantage of mitochondrial networks

We recently wrote an open-access paper (free here in the journal BioEssays) in which we try to answer the question: what is it about fused mitochondrial networks that could make them preferable to fragmented mitochondria? Our paper differs from previous work in that we attempt to use a range of mathematical tools to gain insight into this complex biological system and we try to hit on the root physiological and physical roles. We use physical models, simulations, and numerical estimations to compare ideas, to reason about existing hypotheses, and to propose some new ones. Among the possibilities we consider are the effects of fusion on mitochondrial quality control, on the spread of important protein machinery throughout the cell, on the chemistry of important ions, and on the production and distribution of energy through the cell. The models we use are quite simple, but we propose ideas for improving them, and experiments that will lead to further progress.

Taking a mathematical perspective leads to a central idea: for fused mitochondria to be 'preferred' by the cell, there must be some nonlinear advantage to fusion. That's what the fuzzy line is representing in the figure above. A big mitochondrion formed by fusing two smaller ones must in some sense be 'better' than the sum of the two smaller ones, or there would be no reason why a fused state is preferred.

Mitochondria can fuse to form large continuous networks across the cell. From a mathematical and physical viewpoint, we evaluate existing and novel possible functions of mitochondrial fusion, and we suggest both experiments and modelling approaches to test hypotheses

What is the source of this nonlinearity? We find several physical and chemical possibilities. Large pieces of fused mitochondria are better at sharing their contents (e.g. proteins, enzymes, and possibly even DNA) than smaller pieces. If the 'fusedness' of the mitochondrial population increases by a factor of two, the efficiency with which they share their contents increases by more than a factor of two! Also, fusion can reduce damage. If a mitochondrion gets physically or chemically damaged, having some fused non-damaged neighbours can help to reduce the overall harm to the cell. Finally, fusion may increase energy production because of a nonlinear chemical dependence of energy production on mitochondrial membrane potential. Fusing more mitochondria may, under certain circumstances, have the effect of increasing energy production. Hanne, Iain and Nick [blog article also here]
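
The content-sharing nonlinearity is closely related to percolation, mentioned in the figure above: as the chance of any two mitochondria fusing rises, the largest connected piece grows much faster than linearly. A toy random-fusion sketch (not a model from the paper; sizes and probabilities are invented):

```python
import random

def largest_cluster(n, p, seed=0):
    """Fuse each pair of n mitochondria with probability p; return the
    size of the largest fused cluster (via union-find)."""
    random.seed(seed)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values())

# As the per-pair fusion probability rises, the largest connected
# piece grows much faster than linearly: a percolation-style effect.
for p in [0.0025, 0.005, 0.01]:
    print(p, largest_cluster(200, p))
```

Below a critical fusion rate the population stays in small fragments; just above it, a giant connected component suddenly spans most of the population, so a modest change in fusion rate can transform how well contents are shared.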

ARTICLE: Turbocharging the back of the envelope

Explicit tracking of uncertainty increases the power of quantitative rule-of-thumb reasoning in cell biology

  • Estimated numbers in biology (and life) are often uncertain: we've made a calculator to work with this uncertainty and help make calculations more interpretable (with a particular focus on understanding how the cell works)
The numbers that we use to describe the world are rarely exact. How long will it take you to drive to work? Perhaps "between 20 and 30 minutes". It would be unwise (and unnecessary) to say "exactly 23.4 minutes".

This uncertainty means that "back-of-the-envelope" calculations are very valuable in estimating and reasoning about numerical problems, particularly in the sciences. The idea here is to perform a calculation using rough guesses of the quantities involved, to get an "order of magnitude" estimate of the answer you're after. Made famous in physics as "Fermi problems", after Enrico Fermi (who used rough reasoning to deduce quantities from the power of an atomic bomb to the number of piano tuners in Chicago), this approach is integral to many current applications of maths and science. Cool books like "Street-Fighting Mathematics", "Guesstimation", and "Back-of-the-Envelope Physics", the excellent "What If?" section of xkcd, and the lateral interview questions facing some job candidates ("how much of the world's water is contained in a cow?") are all examples.
 
Calculations in biology, such as the time it takes for a protein (foreground) to diffuse through an E. coli cell (background), are often subject to large uncertainties. Our approach and web tool allows us to track this uncertainty and obtain a probability distribution over possible answers (plotted).

We've built a free online calculator (Caladis -- calculate a distribution) that complements this approach by allowing one to take the uncertainty in one's estimates into account throughout a calculation. For example, what volume of CO2 is produced by our yearly driving? We could say that we cover 8000 miles per year "give or take" 1000 miles, and find that our car's CO2 emissions are between 100 and 150 grams per kilometre. Our calculator allows us to do the necessary conversions and sums while taking this possible variability into account -- doing maths with "probability distributions" describing our uncertainty. We no longer obtain a single (possibly inaccurate) answer, but a distribution telling us how likely any particular answer is -- in this case a rather concerning bell-shaped distribution between 1 and 2 tonnes which can be viewed here.
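
Under the hood, the idea is just Monte Carlo propagation of uncertainty: draw many samples from the input distributions, push each sample through the calculation, and look at the distribution of results. A minimal sketch of the driving example above (the distributions are assumptions, and this is an illustration of the principle rather than Caladis itself; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# The driving example, with assumed distributions: yearly driving of
# 8000 miles give or take 1000 (normal), and emissions uniform
# between 100 and 150 g CO2 per km.
miles = rng.normal(8000, 1000, n)
g_per_km = rng.uniform(100, 150, n)

tonnes = miles * 1.609 * g_per_km / 1e6  # miles -> km, grams -> tonnes

print(round(float(np.median(tonnes)), 2))          # around 1.6 tonnes
print(round(float(np.percentile(tonnes, 5)), 2),
      round(float(np.percentile(tonnes, 95)), 2))  # bulk between ~1 and ~2
```

Instead of one number, we get the whole bell-shaped distribution of plausible answers, and can read off how likely any particular value is.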

In the sciences, particularly in biology, measurements often have substantial uncertainties -- due to experimental error, natural variability in the system of interest, or both -- and so using distributions rather than single numbers in calculations allows us to understand and process more about the question of interest. "Back-of-the-envelope" calculations are certainly useful in biology but, owing to the uncertainties involved, one can trust one's estimates better if one has a smart envelope that takes that uncertainty into account.  We've written an accompanying paper (free here in Biophysical Journal) showing how to use our calculator -- in conjunction with the excellent Bionumbers online database, a collection of (often uncertain) experimental measurements in biology -- to make real biological calculations more powerful. Do have a go at using our calculator at www.caladis.org: it's user-friendly and there are lots of examples showing how it works! Iain and Nick [blog article also here]

ARTICLE: Therapies for mtDNA disease: models and implications

Mitochondrial DNA disease and developmental implications for reproductive strategies

  • The inheritance of mutant mtDNA can cause devastating diseases: we review how this inheritance occurs and the ways modern medicine can help (and how some therapies may be improved)
Mitochondrial DNA (mtDNA) is a molecule in our cells that contains information about how to build important cellular machines that provide us with the energy required for life. Mutations in mtDNA can prevent our cells from producing these machines correctly, causing serious diseases. Mutant mtDNA can be passed from a carrier mother to her children, and as the amount of mutated mtDNA inherited can vary, children's symptoms can be much more severe (often deadly) than those in the mother.

Several therapies exist to prevent or minimise the inheritance of mutant mtDNA from mother to daughter. These range from simply using a donor mother's eggs (in which case the child inherits no genes from the "mother") to amazing new techniques where a mother's nucleus is transferred into a donor's egg cell which has had its nucleus removed (so that the child inherits nuclear DNA from the mother and father, and healthy mtDNA from the donor). The UK is currently debating whether to allow these new therapies: several potential scientific issues have been identified in their application.

 
If a mother carries an mtDNA mutation, then (A) without clinical intervention her child may inherit that mutation and develop an mtDNA disease. Several "classical" (B-C) and modern (D-E) strategies exist to attempt to prevent the inheritance of mutant mtDNA, which we review (see paper link below)

As experiments with human embryos are heavily restricted, experiments in animals provide the bulk of our knowledge about how these therapies may work. We have previously written about our research in mice, highlighting a possible issue arising from mtDNA "segregation", where one type of mtDNA (possibly carrying a harmful mutation) may proliferate over another: this phenomenon could, in some circumstances, nullify the beneficial effects of mtDNA therapies. Another possible issue involves the effects of "mismatching" between the mother and father's nuclear DNA and the donor's mtDNA: current experimental evidence is conflicted regarding the strength of this effect. Finally, mismatch between donor mtDNA and any leftover mother mtDNA may also lead to biological complications.

We have recently written a paper (free here) explaining and reviewing the current state of knowledge of these effects, summarising the evidence from existing animal experiments. We are positive about implementing these therapies, which have the potential to prevent the inheritance of devastating diseases. However, we note cautions about this implementation, noting that several scientific questions remain debated or unanswered. We particularly highlight that "haplotype matching", a strategy to ensure that donor and mother mtDNA are as similar as possible, will largely remove these concerns. Iain [blog article also here]

ARTICLE: Mitochondrial motion in plants

FRIENDLY regulates mitochondrial distribution, fusion, and quality control in Arabidopsis

  • Plant mitochondria play central roles in carbon metabolism, photosynthesis, and plant growth, but the genes controlling their structure are poorly understood: we make progress understanding how one important gene influences mitochondria and plant structure
Mitochondria are often likened to the power stations of the cell, producing energy that fuels life's processes. However, compared to traditional power stations, they're very dynamic: mitochondria move through the cell, and fuse together and break apart (among other things). Interestingly, their ability to move and undergo fusion and fission affects their functionality, and so has powerful implications for understanding disease and cellular energy supplies.

Because of this central role, it is important to understand the fundamental biological mechanisms that govern mitochondrial dynamics. Several important genes controlling mitochondrial dynamics are known in humans (and other organisms), but plant mitochondria (despite the fundamental importance of plant bioenergetics for our society) are less well understood.

Our collaborators, David Logan and his team, working with a plant called Arabidopsis, observed that a particular gene, entertainingly called "FRIENDLY", affected mitochondrial dynamics when it was artificially perturbed. (This approach, artificially interfering with a gene to explore the effects that it has on the cell and the overall organism, is a common one in cell biology.) We've just written a paper with them (free here) exploring these effects. Plants with disrupted FRIENDLY had unusual clusters of mitochondria in their cells, their mitochondria were stressed, and cell death and poor plant growth resulted.
A simulation of mitochondrial dynamics in plant cells under our simple mathematical model, which we compared to observations in real plants.

We used a 3D computational and mathematical model of randomly-moving mitochondria within the cell to show that an increased "association time" (the friendly mitochondria stick around each other for longer) was sufficient to explain the experimental observations of clustered mitochondria. Our paper thus identifies an important genetic player in determining mitochondrial dynamics in plants, and explores in substantial detail the intra-cellular, bioenergetic, and physiological implications of perturbation to this important gene. Iain and Nick [blog article also here]
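The flavour of this kind of model can be sketched in a few lines. Here is a hypothetical 2D toy (far simpler than our actual 3D model, with made-up parameter values): mitochondria diffuse randomly, and any walker that comes close to another pauses for a fixed "association time".

```python
import numpy as np

def simulate_mitochondria(n=40, steps=500, assoc_time=0, stick_radius=0.5,
                          box=10.0, step_size=0.1, seed=0):
    """2D random walkers in a 'cell'; walkers that come within
    stick_radius of another pause for assoc_time steps -- a crude
    stand-in for FRIENDLY-mediated association."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, box, size=(n, 2))
    frozen = np.zeros(n, dtype=int)            # steps left to stay stuck
    for _ in range(steps):
        moving = frozen == 0
        pos[moving] += rng.normal(0, step_size, size=(moving.sum(), 2))
        pos = np.clip(pos, 0, box)             # keep walkers inside the cell
        frozen = np.maximum(frozen - 1, 0)
        # any pair closer than stick_radius becomes associated
        dists = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
        close = (dists < stick_radius) & ~np.eye(n, dtype=bool)
        frozen[close.any(axis=1) & (frozen == 0)] = assoc_time
    return pos

def mean_nearest_neighbour(pos):
    """Clustering proxy: smaller values mean more clustered mitochondria."""
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()
```

Comparing `mean_nearest_neighbour(simulate_mitochondria(assoc_time=0))` against a run with a large `assoc_time` illustrates the paper's point: a longer association time alone, with no other change to the dynamics, can drive clustering.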

ARTICLE: 'Mitoflashes' indicate acidity changes rather than free radical bursts

The ‘mitoflash’ probe cpYFP does not respond to superoxide

  • It is sometimes assumed that a chemical probe called cpYFP measures superoxide, a chemical species capable of causing substantial damage and implicated in ageing: we show that the probe actually measures pH, and that some studies require reinterpretation accordingly
As we've written about before, mitochondria generate the energy required by our cells through respiration that involves using an "electrochemical gradient" as an energy store (a bit like pumping water up into a reservoir for energy storage to then harness it flowing down the gradient of a hill to turn a turbine), and produces superoxide (free oxygen radicals) as a by-product (a bit like sparks when the pumps are running hot). The fundamental importance of this machinery, which not only delivers energy but is also involved in disease and ageing, has led to its investigation in great molecular detail (comparable to taking the turbines and generators apart to learn about their function). Much less is known about how mitochondria actually behave when they are fully functional in their natural environment inside our cells (comparable to looking at the fully intact and running turbine), and progress has been difficult since suitable "tools" are scarce.

A debate exists in the scientific literature about one of the key "tools" used in the investigation of living cells. A particular fluorescent sensor protein called cpYFP (circularly permuted yellow fluorescent protein) is used in biological experiments, ostensibly as a way of measuring the levels of superoxide/free oxygen radicals in a mitochondrion. Our colleagues, however, have cast doubt on the ability of cpYFP to measure superoxide, providing evidence that it instead responds to pH, part of the above electrochemical gradient. This debate was complicated by the fact that in biology, pH and superoxide can vary together, as the amount of "driving" and amount of "sparks" might be expected to.

As another analogy: if we found an unknown measuring device and did not know how it works, but saw that it responds during sunny weather, we might conclude that it measures temperature. However, it may in fact measure atmospheric pressure, which, like warm temperatures, is often correlated with good weather.

The protein cpYFP changes its fluorescence in response to pH changes, but is unaffected by superoxide changes.

A recent and fascinating paper in Nature observed that "flashes" of the cpYFP sensor during early development of worms (as a model for other animals and humans) were correlated with their eventual lifespan. However, despite the debate about what exactly the cpYFP sensor measures, the paper interpreted it as responding to superoxide, viewing the correlation in the light of the so-called "free radical theory of ageing". This long-standing and much-debated theory hypothesizes that we age and eventually die because the constant production of free oxygen radicals in our mitochondria steadily damages our cells, weakening their energetic machinery and making them prone to illness.

In response to this, our colleagues decided to settle the question of what the sensor actually measures chemically, removing biological complications from the system. In the analogy of the unknown measurement device, the device was now tested under controlled temperature and controlled pressure to clearly distinguish between the two. They produced an experimental setup where a mix of chemicals was used to generate superoxide in the absence of any pH change. cpYFP in this mix did not show any signal, demonstrating that it is unresponsive to superoxide. In complementary experiments, they showed that even small changes in pH produced a dramatic response in cpYFP signal. Finally, they investigated the physical structure of cpYFP, showing that a large opening in the barrel-like structure of the protein exposes a pH-sensitive chemical group to its environment (comparable to showing how exactly the inner mechanics of the unknown measurement device can pick up pressure changes). We thus concluded, in a recent communication in the journal Nature here, that the cpYFP sensor reports pH rather than superoxide, and that results using cpYFP (including the above Nature paper, which remains fascinating) should be interpreted as such. Iain, Markus and Nick [blog article also here].

ARTICLE: Evolutionary competition within our cells: the maths of mitochondrial DNA

mtDNA Segregation in Heteroplasmic Tissues Is Common In Vivo and Modulated by Haplotype Differences and Developmental Stage


  • MtDNA mixtures in cells arise through mutation and gene therapies: we show that different types of mtDNA usually proliferate at different rates, which suggests ways that therapies could be made more efficient.
Women may carry mutated copies of mitochondrial DNA (mtDNA) -- a molecule that describes how to build important cellular machinery relating to cellular energy supply. If this mutant mtDNA is passed on to that woman's child, the child may develop a mitochondrial disease; such diseases are often degenerative, fatal, and incurable.

Joerg created mice that contained two types of mtDNA -- here illustrated as blue (lab mouse mtDNA) and yellow (mtDNA from a mouse from a wild population). We used several different wild mice from across Europe to represent the mtDNA diversity one may find in a human population. We found that throughout a mouse's lifetime, one mtDNA type often outcompetes another (here, yellow beats blue), with different patterns across different tissues.

Amazing new therapies potentially allow a carrier mother A and a father B to use another woman C's egg cells to conceive a baby without much of mother A's mtDNA being present. The approach involves taking nuclear DNA content from A and B (so that most of the child's features are inherited from the true mother and father), and placing it into C's egg cells, which contain a background of healthy mtDNA. You can read about what are misleadingly called three-parent babies here.

Something that is less discussed is that, in this process, a small amount of A's mutant mtDNA can be "carried over" into C's cell. If this small amount remains small through the child's life, there is no danger of disease, as the larger amount of healthy C mtDNA will allow the child's cells to function normally. We can think of the resulting situation as a competition between A and C -- if A and C are evenly matched, the small amount of A will remain small; if C beats A, the small amount of A will disappear with time; and if A beats C, the small amount of A will increase and may eventually come to dominate over C.
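A minimal way to picture this competition is a toy logit-linear segregation model (a standard simplification in mtDNA segregation studies, much simpler than our full statistical analysis; the numbers below are illustrative): if A proliferates with a small per-unit-time advantage s over C, the log-odds of the A fraction drifts linearly in time.

```python
import math

def fraction_A(h0, s, t):
    """Fraction of A-type mtDNA after time t, starting from fraction h0,
    when A has selective advantage s per unit time over C
    (logit-linear segregation: the log-odds of A drifts at rate s)."""
    logit = math.log(h0 / (1 - h0)) + s * t
    return 1 / (1 + math.exp(-logit))

# Evenly matched (s = 0): a small carried-over amount of A stays small.
# A beats C (s > 0): the same small amount grows towards dominance.
# C beats A (s < 0): the small amount of A disappears with time.
```

For example, `fraction_A(0.05, 0.0, 100)` stays at 0.05, while `fraction_A(0.05, 0.05, 100)` grows to a large majority -- the scenario in which carried-over mutant mtDNA cannot be discounted.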

Until recently it has been fair to assume that A and C are always about evenly matched (unless something is drastically different between A or C). However, evidence for this idea was based on model organisms in laboratories, which do not have the same amount of genetic diversity as found in human populations. Our collaborator Joerg addressed this by capturing wild mice from across central Europe, selecting a set that showed a comparable degree of genetic diversity to that expected in a human population. He used these, with our modelling and mathematical analysis, to show that pronounced differences between A and C often exist, and are more likely in more diverse populations. The possibility that A beats C, and mutant mtDNA comes to dominate the child's cells, therefore cannot be immediately discounted in a diverse population. We propose "haplotype matching" -- ensuring that A and C are as similar as possible -- to ameliorate this potential risk. It remains open whether one can generalize from observations in mice to people, and whether our conclusions, which used lab mice (not entirely typical creatures) as parent A, necessarily generalize to other, non-lab mouse types.

Our mathematical approach also allowed us to explore, in detail, the dynamics by which this competition within cells occurs. We were able to use our data rather effectively by having a statistical model that allowed us to reason jointly about a range of data sets. We found that the degree to which one population of mtDNA beat the other depended on how genetically different they were, and that different tissues were like different environments: some favouring C over A and some vice versa. This may surprise some readers: evolution in the proportions of different genetic species is not something we usually imagine occurring inside us during our lives, let alone differing between our organs. We also found several different regimes, where the strength of competition changes with time and as the organism develops: when our cells are multiplying faster they show a more marked preference for one of the species. We've shown our results to the UK HFEA in its ongoing assessment of these therapies, and you can read, for free, about our work in the journal Cell Reports here. Iain, Joerg, Nick [blog article also here].

ARTICLE: Polyominoes: mapping genotypes to phenotypes

A tractable genotype–phenotype map modelling the self-assembly of protein quaternary structure


  • Proteins in our cells have intricate structures built by genetic instructions, and these structures are vital for life: we produce a computational model to explore the relationship between genetic instructions and structure, and how evolution and mutation may change proteins.
Biological evolution sculpts the natural world and relies on the conversion of genetic information (stored as sequences, usually of DNA, called genotypes) into functional physical forms (called phenotypes). The complicated nature of this conversion, which is called a genotype-phenotype (or GP) map, makes the theoretical study of evolution very difficult. It is hard to say how a population of individuals may evolve without understanding the underlying GP map.

This is due to the two fundamental forces of evolution -- mutations and natural selection -- acting on different aspects of an organism. Mutations occur to genotypes (G), while natural selection, the ultimate adjudicator of the fate of mutations in the population, acts on the phenotype (P). Without understanding the link between these two -- the GP map -- we can't easily say, for example, how many mutations we expect important proteins within a virus strain to undergo with time, and thus how quickly the virus will evolve to be unrecognised by our immune systems.

Simple models for the mapping of genotype to phenotype have helped answer important questions for some model biological systems, such as RNA molecules and a coarse-grained model of protein folding. One important class of biological structure which has not yet been modelled in this way is protein complexes: structures formed through proteins binding together, fulfilling vital biological functions in living organisms. In this work, we introduce the "polyomino" model, based on the self-assembly of interacting square tiles to form polyomino structures. The square tiles that make up a polyomino are assigned different "sticky patches", modelling the interactions between different proteins that form a complex. A huge range of structures can be formed by varying the details of these patches, mimicking the range of protein complexes that exist in biology (though there are some obvious differences in the shapes of structures that can be formed).
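To give a flavour of how simple patch rules generate structure, here is a 1D caricature (two patches per tile rather than the polyomino model's four, and entirely illustrative): complementary patches bind, and a tile whose own left and right patches are complementary polymerises without bound -- the analogue of the runaway sickle-cell fibre pictured below.

```python
def assemble_chain(tiles, max_len=16):
    """Each tile is (left_patch, right_patch); integer patches bind when
    they sum to zero (+1 sticks to -1) and 0 is inert.  Starting from the
    first tile, greedily extend rightward with any tile whose left patch
    binds the chain's exposed right patch; stop when nothing binds or a
    cap (standing in for 'unbounded' assembly) is reached."""
    chain = [tiles[0]]
    while len(chain) < max_len:
        exposed = chain[-1][1]
        match = next((t for t in tiles
                      if exposed != 0 and t[0] + exposed == 0), None)
        if match is None:
            break
        chain.append(match)
    return len(chain)

# A 'healthy' pair assembles into a bounded dimer...
print(assemble_chain([(0, 1), (-1, 0)]))   # → 2
# ...while a self-complementary, 'sickle-like' tile polymerises without bound.
print(assemble_chain([(-1, 1)]))           # → 16 (hits the cap)
```

A single change to one patch thus flips the phenotype from a small complex to an extended polymer -- the kind of dramatic genotype-to-phenotype effect the full 2D model lets us study systematically.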

Our simple model explores the interactions between protein subunits, and how these interactions shape a surface that evolution explores. (top) Sickle-cell anemia involves a mutation that changes the way proteins interact, making normally independent units form a dangerous extended structure. (bottom) Our polyomino model models this effect. The resultant dramatic effects on structure, fitness, and evolution can then be explored.

Despite its abstraction, we show that the polyomino model displays several important features which make it a potentially useful model for the GP map underlying protein complex evolution. On top of this, we demonstrate that our model possesses similar properties to RNA and protein folding models, interestingly suggesting that universal features may be present in biological GP maps and that the "landscapes" upon which evolution searches may thus have general properties in common. You can find the paper free here and you can play with polyominoes here! Iain [blog article also here]

ARTICLE: Fast inference about noisy biology

Efficient parametric inference for stochastic biological systems with measured variability


  • Understanding biology requires us to characterise the processes that go on inside our cells: we introduce a computational way to efficiently link models to observations, especially when these observations and processes are noisy.
Biology is a random and noisy world -- as we've written about several times before! This often means that when we try to measure something in biology -- for example, the number of a particular type of proteins in a cell, or the size of a cell -- we'll get rather different results in each cell we look at, because random differences between cells mean that the exact numbers are different in each case. How can we find a "true" picture? This is rather like working out if a coin is biased by looking at lots of coin-flip results.

Measuring these random differences between cells can actually tell us more about the underlying mechanisms for things like (to use the examples above) the cellular population of proteins, or cellular growth. However, it's not always straightforward to see how to use these measurements to fill out the details in models of these mechanisms. A model of a biological process (or any other process in the world) may have several "parameters" -- important numbers which determine how the model behaves (the bias of a coin is one example, telling us what proportion of flips we expect to come up heads). These parameters may include, for example, rates with which proteins are produced and degraded. The task of using measurements to determine the values of these parameters in a model is generally called "parametric inference". In a new paper, I describe an efficient way of performing this parametric inference given measurements of the mean and variance of biological quantities. This allows us to find a suitable model for a system describing both the average behaviour and typical departures from this average: the amount of randomness in the system. The algorithm I propose is an example of approximate Bayesian computation (ABC), which allows us to deal with rather "messy" data: I also describe a fast (analytic) approach that can be used when the data is less messy (Normally distributed).
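The flavour of the approach can be sketched with a deliberately simple toy system (an immigration-death process whose stationary copy-number distribution is Poisson, so its mean is known analytically; all names and numbers here are illustrative, not the paper's): rejection ABC in which a cheap analytic mean check filters out implausible parameters before any expensive stochastic simulation is run.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "experimental" data: protein copy numbers from an immigration-death
# process at stationarity, which is Poisson-distributed with mean birth/death.
true_birth, true_death = 10.0, 1.0
data = rng.poisson(true_birth / true_death, size=500)
obs_mean, obs_var = data.mean(), data.var()

def accept(theta, eps=1.0, n_sim=500):
    birth, death = theta
    # Cheap preliminary check: the analytic stationary mean (birth/death)
    # must already be close to the observed mean before we spend any
    # simulation effort on this parameter set.
    if abs(birth / death - obs_mean) > eps:
        return False
    # Expensive check: stochastic simulation (here, sampling the known
    # stationary distribution stands in for a full Gillespie run),
    # now matching the variance as well as the mean.
    sim = rng.poisson(birth / death, size=n_sim)
    return (abs(sim.mean() - obs_mean) <= eps
            and abs(sim.var() - obs_var) <= 3 * eps)

# Rejection ABC: sample parameters from a prior, keep only the "good" sets.
prior_draws = ((rng.uniform(1, 20), rng.uniform(0.5, 2)) for _ in range(2000))
posterior = [theta for theta in prior_draws if accept(theta)]
```

Every retained parameter set passed the cheap mean check first, so the costly simulation stage only runs for the small fraction of prior draws that are already plausible on the mean -- the source of the speed-up described below.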

The proposed efficient parametric inference process allows us to use data to make probabilistic statements about the details of models describing biology.

Parametric inference often consists of picking a trial set of parameters for a model and seeing if the model with those parameters does a good job of matching experimental data. If so, those parameters are recorded as a "good" set, otherwise, they're discarded as a "bad" set. The increase in efficiency in my proposed approach is due to the fact that we can perform a quick, preliminary check to see if a particular parameterisation is "bad", before spending more computer time on rigorously showing that it is "good". I show a couple of examples in which this preliminary checking (based on fast computation of mean results before using stochastic simulation to compute variances) speeds up the process by 20-50% on model biological problems -- hopefully allowing some scientists to grab a little more coffee time! This work is in the journal Statistical Applications in Genetics and Molecular Biology here, and you'll find the article (free) here. Iain [blog article also here]

ARTICLE: Inferring the evolutionary history of photosynthesis: C4 yourself


Phenotypic landscape inference reveals multiple evolutionary paths to C4 photosynthesis

  • Some plants have evolved efficient photosynthesis, but important crops including rice have not: we use maths and statistics in conjunction with biological data to understand this evolution, with a view to repeating it artificially in crops to increase food production

Biological evolution is a complex, stochastic process which dictates fundamental properties of life. Our understanding of evolutionary history is severely limited by the sparsity of the fossil record: we only have a handful of fossilised snapshots to infer how evolution may have progressed throughout the history of life. Many physicists and mathematicians have attempted theoretical treatments of the process of evolution, using varying degrees of abstraction, in order to provide a more solid quantitative foundation with which to study this complex and important phenomenon, but the predictive power of these theoretical models, and their ability to answer specific biological questions, is often questioned.

We recently focussed on one remarkable product of evolution in plants: so-called "C4 photosynthesis". C4 comprises a complex set of genetic and physiological changes which have evolved in some plants and act to increase the efficiency of photosynthesis. This complex set of changes has evolved over 60 times convergently: that is, plants from many different lineages independently "discover" C4 photosynthesis through evolution. We were interested in the evolutionary history of how these discoveries occurred -- both motivated by fundamental biology and the possibility of "learning from evolution" and using information about the evolution of C4 to design more efficient crop plants.


C3 and C4 plants differ in several physical and genetic ways (leaves and cells either side). We picture evolution as progressing along paths over a hypercube connecting these states (grey lines) -- some paths will give rise to intermediate species matching those we really observe (red and blue points). We can calculate how likely each path is and thus reconstruct evolutionary history.
 
To this end, we modelled the evolution of C4 as a pathway through a space containing many different possible plant features. The pathway starts at C3 -- the precursor to C4 -- and progressively takes steps in different directions, acquiring one-by-one the features that sum up to C4 photosynthesis. Using a survey of plant properties from across the wide scientific literature, we identified which intermediate states these pathways were likely to pass through, given observed properties of plants that currently possess some, but not all, C4 features. We were then able to use a new inference technique to predict the ordering in which these likely pathways traverse the evolutionary space. We showed that this approach worked by both successfully inferring the known evolutionary steps in synthetic datasets and correctly predicting previously unknown properties of several plants, which we verified experimentally. Our (open access) paper is here and there's a less technical summary and commentary here.

Our approach showed that C4 photosynthesis can evolve through a range of distinct evolutionary pathways, providing a potential explanation for its striking convergence. Several of these different pathways were made explicitly visible when we examined the inferred evolutionary histories of different plant lineages -- different families are likely to have converged on C4 through different evolutionary routes. Furthermore, the most likely initial steps towards C4 photosynthesis are surprisingly not directly related to photosynthesis, being solutions to different biological challenges, but also providing evolutionary "foundations" upon which the machinery of C4 can evolve further. We hope that the recipes for C4 photosynthesis that we have inferred find use in efficient crop design, and anticipate our inference procedure being of use in the study of other specific biological questions regarding evolutionary histories. Iain [blog article also here]
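The core combinatorial idea can be illustrated with a toy example (three hypothetical traits and made-up intermediates, nothing like the scale of the real survey or the full Bayesian inference): evolutionary paths are orderings in which traits are acquired, and an observed intermediate species rules out any ordering whose visited states never match it.

```python
from itertools import permutations

traits = ("A", "B", "C")            # hypothetical C4-related features
# Feature sets seen in intermediate species (toy observations):
# plants possessing some, but not all, of the traits.
observed = [frozenset({"A"}), frozenset({"A", "B"})]

def states_visited(order):
    """States on the hypercube path that acquires traits in this order."""
    acquired = set()
    states = []
    for trait in order:
        acquired.add(trait)
        states.append(frozenset(acquired))
    return states

# Keep only orderings whose path passes through every observed intermediate.
consistent = [order for order in permutations(traits)
              if all(obs in states_visited(order) for obs in observed)]
print(consistent)   # → [('A', 'B', 'C')]
```

Here the two observed intermediates pin down a single acquisition order; the real method goes further, weighting the many surviving paths probabilistically rather than simply filtering them.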

ARTICLE: Exploring noise in cellular biology

The chaos within: Exploring noise in cellular biology


  • Biology requires exquisitely precise chemistry but exists in a very noisy world: we review how science is learning about how this randomness affects cell biology and how life has evolved to deal with it
We're used to thinking about machines as robust, hard-wearing objects made from solid materials like metal and plastics. If they crack, split or overheat they are liable to malfunction, and if we subject them to too much jostling and shaking we're asking for trouble. However, the biochemical machines responsible for keeping us alive work in a rather different world -- they're made from soft, organic materials, and contained in a disorganised bag (the cell) that is constantly shaken, bumping our machines against each other and other cellular inhabitants. How can the delicate processes required by living organisms take place in this chaotic environment? And how can scientific progress be made in such a tumultuous, unpredictable world?


Extrinsic factors can modulate the stability of essential, but noisy, cellular circuits: here we see that changes in transcription rate, provoked by differences in mitochondrial populations, affect the "landscape" underlying stem cell behaviour.

Iain recently wrote an article, targeted at a broad audience, looking at some of these questions. One of the most important cellular processes that has to take place in this chaotic world is that of 'gene expression': the interpretation of genetic blueprints which describe how to build cellular machinery, and the subsequent construction process. Gene expression can be likened to using a bad photocopier to copy books from a library that opens and closes randomly, then using these photocopies (which are prone to decay) to construct machines. This problematic environment gives rise to many medically important random effects, including bacterial resistance to antibiotics and differing responses to anti-cancer drugs. We are particularly interested in how fluctuating power supplies (see our other blog articles!) influence the cell's ability to produce these machines, and what effects this unreliable power has on medically important processes. The article -- available here and appearing in the expository magazine Significance here -- takes a look at how cellular noise arises, current techniques for its detection and analysis, and its influence on important biological phenomena. Iain [blog article also here]

ARTICLE: Taking the pulse of cellular power stations

Pulsing of Membrane Potential in Individual Mitochondria: A Stress-Induced Mechanism to Regulate Respiratory Bioenergetics in Arabidopsis


  • Mitochondria in plants sometimes switch off their membrane potential, which contributes to their ability to make energy for the cell: we characterise this "pulsing" and explore how it can be beneficial to plants under stress

We've just written an article in the journal Plant Cell about pulsing cellular power-stations and will motivate it by an analogy. Imagine we have a reservoir of water, and this water flows downhill through an outlet pipe, turning a turbine and producing energy. In this thought experiment, we're faced with a problem: the only way we can get water into our reservoir is by pumping it into the bottom of the reservoir. The higher up a reservoir is, the harder it is to pump water up there and the higher the risk of pumps overheating and getting damaged.

The problem can be solved by allowing the height of our reservoir to vary. If we lower our reservoir, it will become easier to fill, and the higher water pressure that arises from an increasingly filled reservoir will partly compensate for the fact that turbine-turning water will flow downhill from a lowered height, while allowing the pumps to relax and cool.

This model is a crude representation of mitochondria, the power stations of the cell, which use energy from respiration to create an energetic gradient across their membranes -- like a natural version of an AA battery. In our picture, this corresponds to the pumps feeding into our reservoir -- and in the cell, these pumps produce dangerous chemicals when they are overworked. The gradient they produce imbues protons with energy that is part electrical -- which we picture as the height of our reservoir -- and part chemical -- which we picture as the amount of water in our reservoir. These protons then flow through a protein complex -- the turbine -- to produce ATP, the universal cellular fuel.

An abstract representation (acrylic on canvas) of a single mitochondrion undergoing a 'pulse'. Its change in energy status is shown by the change in colour that we have also observed by microscopy using fluorescent sensors. Artist: Markus S

When mitochondria pump many protons, their "reservoirs" rise, with the increase in height forcing the pumps to work harder to pump water into the reservoir. This work produces dangerous chemicals which can damage the cell and the mitochondria themselves (called reactive oxygen species - they're what antioxidants try to combat). We have found a new mechanism by which this risk is decreased: if mitochondria are having to work hard, they "pulse", spontaneously lowering the height of their reservoir. This decreases the amount of work that the mitochondrial pumps have to do to fill the reservoir. The amount of turbine-turning energy per unit of water decreases, but as it becomes easier to fill the reservoir, more water gets pumped into it, partly compensating for the loss of height by an increase in volume. The pulsing process thus lowers the reservoir but fills it with more water, allowing the mitochondrial pumps to relax and reducing the production of dangerous chemicals.

We observed these pulses, spontaneous decreases of mitochondrial membrane potential, in Arabidopsis thaliana, a model plant species used in many biological contexts. Treating plant mitochondria with a variety of chemicals and observing the effects on pulsing, we deduced a biochemical mechanism by which pulsing occurs: a controlled influx of cations such as calcium ions into the mitochondrial matrix decreases membrane potential. We also found that pulsing is increased when plants face stressful environments: if they are suddenly heated, for example, or exposed to toxic chemicals. This novel mechanism may help explain some of the variability that our cellular engines exhibit and may be an important discovery in considering how mitochondria react to dangerous cellular conditions. You'll find the article here. Iain, Markus & Nick [blog article also here].

ARTICLE: How cellular power stations might fluctuate

Mitochondrial Variability as a Source of Extrinsic Cellular Noise


  • Mitochondrial populations vary substantially between even genetically identical cells: we show that this variability can have profound effects on important cellular processes and build a mathematical framework describing it

Mitochondria are the tiny engines of eukaryotic cells (the cells of animals, plants and fungi) -- responsible for producing the energy vital for fundamental processes of life. A recent explosion of experimental results has shown that their behaviour is far more dynamic and rich than any man-made engine. They move through cells, fuse into large networks, break apart, replicate and get degraded if they don't perform adequately.

Mitochondrial populations are also observed to differ significantly between otherwise similar cells: one cell may possess many efficient engines, while another must make do with a small number of inefficient ones (see the blog entry here). As well as explaining why the error-bars in biology can be so big, this cell-to-cell variability in mitochondria can lead to profound medical consequences: many diseases are known to result from low quality mitochondria, unable to produce enough energy for a cell to remain healthy. Mitochondria (and mechanisms to keep their quality high) have been implicated in ageing and diseases like Parkinson's, Alzheimer's, diabetes and cancer.

A montage of some of the mathematical analysis in our article. We're seeing the contributions of mitochondrial variability to (top) stem cell behaviour; (right) transcription rate; (bottom) cell cycle progression and physiology.

We have recently produced a mathematical model (coupled to some new experimental data) to give an explanation as to how experimentally observed variability in mitochondrial populations arises, and to explore its potential consequences: ranging from differences in the rate at which fundamental biochemical elements like mRNAs and proteins are produced, to differences in the stability and ultimate fate of stem cells. Our model is physically simple (we suggest several future experimental directions which would help in further development) and can mathematically be combined with other descriptions of cellular processes, providing a general framework to investigate the biologically and medically important effects of mitochondrial variability. It appeared in PLoS Computational Biology (an open-access journal) here but you can also find it here. Iain and Nick [blog post also here].