About Josh Mitteldorf

Josh Mitteldorf studies the evolutionary theory of aging using computer simulations. The surprising fact that our bodies are genetically programmed to age and to die offers an enormous opportunity for medical intervention. It may be that therapies to slow the progress of aging need not repair or regenerate anything, but only need to interfere with an existing program of self-destruction.

After earning a PhD in astrophysics, Mitteldorf moved to evolutionary biology as a primary field in 1996. He has taught at Harvard, Berkeley, Bryn Mawr, LaSalle and Temple University, and is presently affiliated with MIT as a visiting scholar. He has taught a weekly yoga class for thirty years, and is an advocate for vigorous self care, including exercise, meditation and caloric restriction. In private life, he is an advocate for election integrity as well as public health. He is an avid amateur musician, playing piano in chamber groups and French horn in community orchestras. His two daughters are among the first children adopted from China in the mid-1980s.

Much to the surprise of evolutionary biologists, genetic experiments indicate that aging has been selected as an adaptation for its own sake. This poses a conundrum: the impact of aging on individual fitness is wholly negative, so aging must be regarded as a kind of evolutionary altruism. Unlike other forms of evolutionary altruism, aging offers benefits to the community that are weak, and not well focused on near kin of the altruist. This makes the mechanism challenging to understand and to model. More at http://mathforum.org/~josh

Universal Clock implies Universal Clockwork

A new methylation clock works in 128 different mammal species, using the same methylation signals. This is the latest evidence that at least some of the mechanisms of aging have been conserved by evolution—strong evidence that aging has a useful function in ecology, so that natural selection actually prefers a finite, defined lifespan.


Einstein taught us that time is relative. Indeed, there are rodents that live less than a year, and Bowhead whales that live more than 200 years. Some of this is just about size and has a basis in physics; but it is well-known that size is only part of the story. Bats and mice are the same size, but bats live ten times longer. Humans are much smaller than horses, but live three times as long.

The first time I met Cynthia Kenyon was circa 1998. She offered me a one-line proof that aging is programmed: the enormous range in lifespans found in nature defies any theory about damage accumulation, because no conceivable process of chemical damage could vary so widely in its fundamental rate. (Think mayflies and sequoia trees.) My own one-line proof is that yeast and mammals share in common some genetic mechanisms that regulate aging, though the last common ancestor of yeast and mammals lived more than half a billion years ago. These mechanisms include sirtuins and insulin metabolism.

These intuitions about aging rate and evolutionary conservation have recently come to the world of big data. In this new BioRxiv manuscript, Steve Horvath collaborates with an all-star cast of biologists the world over to compile evidence that there is a universal mechanism underlying development and aging in all mammals, and it is a pan-tissue epigenetic program, not a process of chemical damage.

Brief background on methylation: It is increasingly clear that aging has a basis in gene expression. The whole body has the same DNA, and it doesn’t change over time. However, different genes are turned on and off at different times and places. Turning genes on and off is called “epigenetics”, and evolution has devoted enormous resources to this process. One of many epigenetic mechanisms is the presence or absence of a methyl group on cytosine, which is one of the 4 building blocks of DNA (A, C, T, G). There are over 20 million regulatory sites in human DNA where methyls can appear or not. Of these, several thousand have been found to consistently correlate with age. The correlation is so strong that the most accurate measures of biological age are now based on methylation. There is (IMO) a developing consensus in the community that methylation changes are an upstream cause of aging, though there remains strong resistance to this idea on theoretical grounds. More background here

The team assembled tissue samples from 59 organs across 128 species of mammals, and looked for commonalities in the progression of methylation that were independent of species and independent of tissue type. They found thousands of methylation sites that fit the bill, attesting to an evolutionarily-conserved mechanism “connected to” aging. It is a short leap to imagine that “connected to” implies a root cause.

How did the authors map age for a mouse onto age of a whale? Just as I might say, “I’m only 10 years old, in dog years,” a year for a whale might be a hundred “mouse years”. The authors took three different approaches. (1) Just ignore it, mapping chronological time directly. (2) Adjust time for the different species based on the maximum lifetime for that species. (3) Adjust time for the different species based on the time to maturity for that species.

Predictably, (1) produced paradoxes; (2) and (3) were similar, but (3) produced the best results. What they didn’t do — but might in follow-on work — was to optimize the age-scaling factor individually for each species to target the best fit with all the other species. Even better would be to choose two independent scaling factors to optimize the fit of each species. Ever since the original 2013 clock, Horvath has divided the lifespan into two regimes, development and aging: in development, time is logarithmic, moving very fast at the beginning and slowing down at the end of development; in the aging regime, time is linear. So it would be natural (optimal, in my opinion) to choose two separate scaling factors that best map each species’s life history course onto all the others. Mathematically, this is (roughly) as simple as matching the slopes of two lines. Horvath has told me he is interested in pursuing this strategy, but for some species the existing data does not cover the lifespan sufficiently to support it.
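To make the two-regime idea concrete, here is a minimal sketch (my own illustration, not from the manuscript) of the kind of two-parameter transform that could map each species’s life history onto a common timeline. The function name, the guard constant, and the exact form of the transform are all assumptions.

    import numpy as np

    # Hypothetical two-regime age transform: logarithmic during development,
    # linear after maturity, continuous where the two regimes meet.
    # t_m = age at sexual maturity; s_dev, s_adult = per-species scale factors.
    def universal_age(t, t_m, s_dev, s_adult):
        t = np.asarray(t, dtype=float)
        dev = s_dev * np.log(t / t_m + 1e-9)     # fast early, slowing toward t_m
        adult = s_adult * (t - t_m) / t_m        # linear after maturity
        return np.where(t < t_m, dev, adult)

Fitting s_dev and s_adult for each species so that its trajectory overlays a reference species is then roughly the slope-matching of two lines described above.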

“Cytosines that become increasingly methylated with age (i.e., positively correlated) were found to be more highly conserved (Fig. 1a)  …Interestingly, although there were 3,617 enrichments of hypermethylated age-related CpGs [i.e., increased methylation with age] across all tissues, only 12 were found for hypomethylated [the opposite] ones.”

Interpretation: with age, we (and other mammals) tend to lose methylation, i.e., to turn on genes that shouldn’t be turned on. There are more sites that demethylate with age than that methylate with age. But the sites that gain methylation tend to be more highly conserved between species. I presume a lot of demethylation is stochastic. It’s easy for a methyl group to “fall off”, but attaching one in the right place requires a specialized enzyme (methyl transferase). What we are seeing here is stronger genetic determinism for the process that requires active intervention.

Question: Would it be useful to develop a methylation clock based solely on sites that gain methylation? What we would thereby avoid is the situation where the age algorithm combines a great many large positive numbers with a great many large negative numbers to make a small difference. This characteristic makes the algorithm overly sensitive to bad data from one or a few particular sites. We can see from the figure above that (red) sites from the top half of the plot have stronger evidence behind them than the (blue) sites from the bottom. What we would lose would be diversity in the basis of the measurement. If retaining that diversity is desirable, it would be possible to design a clock algorithm with both red and blue sites in such a way that all coefficients are relatively small, and no one site contributes inordinately to the age calculation, even if data for that site is completely missing.
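One way to realize this design is plain ridge (L2-regularized) regression, which penalizes large weights so that no single CpG can dominate the age estimate. A minimal sketch, assuming methylation fractions in X and ages in y; the variable names and penalty strength are my own:

    import numpy as np
    from sklearn.linear_model import Ridge

    # Fit a clock over red and blue sites together, with a strong L2 penalty
    # so every coefficient stays small and no one site contributes inordinately.
    def small_coefficient_clock(X, y, alpha=100.0):
        return Ridge(alpha=alpha).fit(X, y)

    # If one site's reading is bad or missing, its column can be imputed with
    # the cohort mean; with many small coefficients, the age estimate barely moves.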

Speculation for statistics geeks: I think the methodology that has become standard for developing methylation clocks is not optimal. The standard method is to identify N sites (typically a few hundred) where methylation is well-correlated with age, then derive N coefficients such that you can multiply each coefficient by the corresponding methylation, add up the products, and you get an age estimate*. The way I would do it is with a more complicated calculation, from a methodology called “maximum likelihood”. The idea is to choose the age that minimizes the difference between the expected methylation and the measured methylation across the collection of the N sites. To be more specific: minimize the sum of the squares of the z scores for each site, where z is the number of standard deviations by which the measured methylation differs from the expected methylation. It may sound like a complicated calculation to find the age at which this number is a minimum, but it is not. Yes, it’s a guessing game; but the algorithm called “Newton’s method” allows you to make smart guesses, so you home in on the best (min Σz²) age within four or five guesses. The calculation is more complicated to program, but it would still execute in a tiny fraction of a second. My proposed method requires maybe 10 or 20 times as many fixed parameters within the algorithm; but the data submitted from each sample is the same.
Caveat – This is all theoretical on my part. I don’t know how much performance would be improved in practice.
————————
*Two footnotes: (1) A constant is also added. (2) In case the subject is young, below the age of sexual maturity, what you get is a logarithm of age, not age itself.
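Here is a minimal sketch of the maximum-likelihood scheme described above, assuming per-site calibration curves mu_i(a) (expected methylation as a function of age) and per-site standard deviations sigma_i have already been fitted; all names are mine, and the Hessian uses the usual Gauss-Newton approximation.

    import numpy as np

    def ml_age(m, mu, dmu, sigma, a0=50.0, iters=5):
        """Estimate age by minimizing S(a) = sum_i z_i(a)^2 with Newton's method,
        where z_i(a) = (m_i - mu_i(a)) / sigma_i.

        m     : measured methylation at each of the N sites, shape (N,)
        mu    : callable returning expected methylation mu(a), shape (N,)
        dmu   : callable returning d(mu)/da at age a, shape (N,)
        sigma : per-site standard deviation, shape (N,)
        """
        a = a0
        for _ in range(iters):                        # 4-5 smart guesses suffice
            z = (m - mu(a)) / sigma
            grad = -2.0 * np.sum(z * dmu(a) / sigma)      # dS/da
            hess = 2.0 * np.sum((dmu(a) / sigma) ** 2)    # Gauss-Newton approx.
            a -= grad / hess                          # Newton step
        return a

If each mu_i(a) is linear in a, the sum of squared z scores is exactly quadratic and a single Newton step lands on the minimum; the 10- or 20-fold extra fixed parameters are the per-site curves mu_i and sigma_i.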

“Importantly, age-related methylation changes in young animals concur strongly with those observed in middle-aged or old animals, excluding the likelihood that the changes are those involved purely in the process of organismal development.”

These plots are adduced as evidence that aging and development are one continuous process under epigenetic control. They come from EWAS (epigenome-wide association studies). Start by asking which sites on the methylome are most closely correlated with age, across many different animals and different tissues in those animals. Start with just the young animals (different ages, but all before or close to sexual maturity). Arrange all the different sites according to how they change methylation with age (increasing or decreasing), just in this age range. Then repeat the process, re-ordering the sites according to how they change with age during middle age.

The left plot above includes a dot for each methylation site, ordered along the X axis according to how they change during youth, and along the Y axis according to how they change during middle age. The point of the exercise is that it is largely the same sites that increase (or decrease) methylation in youth and in middle age.

The middle plot shows the corresponding correlation between middle age (X axis) and old age (Y axis). The right-hand plot shows the correlation between young (X axis) and old age (Y axis). (I believe the labeling of the figure on the right is a misprint.)

This evidence points to a conceptual framework that views development and aging as one continuous process. Development is a lot more complicated than aging. Consequently, most of the sites in the clock are developmental.  Maybe a clock could be optimized for aging only, and it would be more useful for those of us who are using the clocks to assess anti-aging interventions.

“The cytosines that were negatively associated with age in brain and cortex, but not skin, blood, and liver, are enriched in the circadian rhythm pathway”

Here we see again the intriguing connection between the brain’s daily timekeeping apparatus and the epigenetic changes that drive development and aging.

“The implication of multiple genes related to mitochondrial function supports the long-argued importance of this organelle in the aging process. It is also important to note that many of the identified genes are implicated in a host of age-related pathologies and conditions, bolstering the likelihood of their active participation in, as opposed to passive association with, the aging process.”

Another theme in the set of age-correlated genes that the team discovered is mitochondrial function. Mitochondria have an ancient association with cell death, and a long, conserved history with respect to aging. The simple damage themes associated with the free radical theory have yielded to a more complex picture, in which free radicals can be signals for apoptosis or inflammation or enhanced protective adaptations.

The big picture

“Therefore, methylation regulation of the genes involved in development (during and after the developmental period) may constitute a key mechanism linking growth and aging. The universal epigenetic clocks demonstrate that aging and development are coupled and share important mechanistic processes that operate over the entire lifespan of an organism.”

This is cautiously worded, presumably to represent a consensus among several dozen authors, or perhaps to appease the evolutionary biologists looking over our shoulders. The statement is akin to what Blagosklonny has for years called “quasi-programmed aging”, to wit, there are processes that are essential to development that fail to turn off on time, and cause damage as the organism gets older. In the version put forward in this present ms, it is not the gene expression itself but the direction of change of gene expression that carries momentum and cannot be turned off.

Evolutionary theory

Modern evolutionary theory of aging began with Peter Medawar, a Nobel laureate and giant of mid-century biological understanding. (He was 6 foot 5.) Medawar’s 1952 monograph contains the insight that launched all modern theories for the evolution of aging. His fundamental idea was that it’s a dog-eat-dog world in which very few animals live long enough for aging to be a factor in their death. The three main branches of evolutionary theory in response to Medawar are called Mutation Accumulation, Disposable Soma, and Antagonistic Pleiotropy. According to Medawar’s thought (and all three theories that followed), old age exists in a “selection shadow”, so random processes are at work in old age. It follows that we would expect the aging of a bat and a bowhead whale to be subject to very different random processes. If aging is a burden of recently acquired mutations that natural selection has not yet had time to weed out, these should be different for different species. Or if it is about tradeoffs (pleiotropy) between the needs of the young animal and the old animal, we would not expect the bat and the whale to be subject to the same tradeoffs.

The Medawar paradigm and its three popular sub-theories all predict that there should be little overlap between the genetic factors involved in aging of species that are adapted so differently. Therefore, the present work documenting a common epigenetic basis of aging is a challenge to the established evolutionary theories of aging.

As I see it, the expression of genes is exquisitely timed for many purposes, so we must view gene expression as subject to tight bodily control. “Accidents” or “mistakes” or “evolutionary neglect” are implausible. For some genes, methylation changes from minute to minute in a way that is adaptive and responsive. Blagosklonny’s idea that there are genes turned on for development and then the body forgets to turn them off doesn’t feel right. Equally implausible is the idea that certain genes are turned on (or off) progressively through development and then, after development has ended, the process carries a momentum of its own, so the body can’t stop further turning on (or off) of these same genes. I assume the body is adapted to do exactly what it wants with gene expression, and if the body expresses a combination of genes that causes aging, it’s because that’s what natural selection has designed the body to do. Of course, this looks to be a paradox, as aging is completely maladaptive according to the notion of Darwinian fitness that became accepted in the first half of the 20th century; but evolutionary biologists have broadened the notion of fitness since then, and I’ve written volumes concerning this paradox.

The bottom line

For personal application to individuals who want to know how well they are doing and their future life expectancy, I recommend Horvath’s GrimAge clock as the best available. (Elysium has done a lot of work on their Index product, and it may be as good or better, but it’s impossible to evaluate unless they release their proprietary methodology.) For application to studies of anti-aging interventions (including my own project, DataBETA), the choice of clocks is not clear, because it depends not just on statistics but on theory. We want a clock that is not only accurate, but that is based on epigenetic causes of aging, not epigenetic responses to aging. The multi-species clock is a welcome contribution, precisely because epigenetic processes that are conserved across species are more likely to be linked to the root cause of aging. For the future, I’ve made suggestions above for ways the multi-species clock might be made even better.

A Science of Wholeness Awaits Us

Just as the melody is not made up of notes nor the verse of words nor the statue of lines, but they must be tugged and dragged till their unity has been scattered into these many pieces, so with the World to whom I say Thou.

— Martin Buber

We creatures of the 21st Century, grandchildren of the Enlightenment, like to think that our particular brand of rationality has finally established a basis for understanding the world in which we live. Of course, we don’t have all the details worked out, but the foundation is solid. 

We might be chastened by the precedent of Lao Tzu and Socrates and Hypatia of Alexandria and Thomas Aquinas and Lord Kelvin, who thought the same thing. I wonder if the foundation of our world-view is really made of more durable stuff than theirs. In fact, founding our paradigm in the scientific method offers us something that earlier sages did not have: we can actually compare in detail the world we observe with the consequences of our physicalist postulates. The results are not reassuring. In recent decades, the science establishment has willfully ignored observations of phenomena that call into question our foundational knowledge.

Reductionism is the process of understanding the whole as emergent from the parts. The opposite of reductionism is holism: understanding the parts in terms of their contribution to a given whole. It’s fair to say that all of science in the last 200 years has been reductionist. Physical law is the only fundamental description of nature. Chemistry could, in principle, be derived from physics (if only we could solve the Schrödinger equation for hundreds of electrons); living physiology could be understood in terms of chemistry; and ecology could be modeled in terms of individual behaviors. 

Curiously, there are holistic formulations of physics that are mathematically equivalent to the reductionist equations, but in practice, physicists use the differential equations, which are the reductionist version. 

Biological function is explained by the process of evolution through natural selection that made organisms what they are. Holism in evolution is called “teleology”, and is disparaged as unscientific. But when features of physics appear purposeful, there is no agreement among scientists on how to explain them. Most physicists would avoid invoking a creator or embedded intelligence, even at the cost of telling stories about vast numbers of unobservable universes outside our own. This is the most common explanation for the fact that the rules of physics and the very constants of nature—things like the charge on the electron and the strength of the gravitational force—seem eerily to have been fine-tuned to offer us an interesting universe; most other choices for the basic rules of physics might have produced dull uniformity, without stars or galaxies, without chemistry, without life.

But I am racing ahead of the story. The question I want to ask is whether we are missing something in reasoning exclusively from the bottom up, explaining all large-scale patterns as emergent results of small-scale laws. I want to suggest that this deeply-ingrained pattern of thought may be holding science back. Are there large-scale patterns waiting to be discovered? Are there destined outcomes that help us understand the events leading to a predetermined denouement? Even formulating such questions is controversial; and yet, we see hints pointing in just this direction, both from micro-science of quantum mechanics and from studies of the Universe on its largest scale.


Science is all about observing nature and noticing patterns which might be articulated as theories or laws. When these patterns connect nearby events that can be observed at one time by one person, they are easy to spot. When the patterns involve distant events and stretch over time and space, they may go undetected for a long while. This can lead to an obvious bias. Scientists are more inclined to formulate laws of nature that connect contiguous events than laws that connect events that are separated spatially and temporally, just because these global patterns are harder to see.

The physical laws that were formulated and tested in the 19th and 20th century were all mediated by local action. The idea that all physical action is local was formalized by Einstein, and has been baked into our theories ever since. But there is a loophole, defined by quantum randomness. Roughly speaking, Heisenberg’s Uncertainty Principle says that we can only ever know half the information we need to predict the future from the past at the microscopic level. Is the other half replaced by pure randomness, devoid of any patterns that science might discern? Or might it only appear random, because the patterns are spread over time and space, and difficult to correlate? In fact, the existence of such patterns is an implication of standard quantum theory. (This is one formulation of the theorem about quantum entanglement, proved by J.S. Bell in 1964.) Speculative scientists and philosophers relate this phenomenon to telepathic communication, to the “hard problem” of consciousness, and to the quantum basis of life.

I hope to explore this topic in a new ScienceBlog forum beginning in 2021. Here are four examples of the kinds of phenomena pointing to a new holistic science.

1. Michael Levin and the electric blueprint for your body

We think of the body as a biochemical machine, proteins and hormones turned on in the right places at the right times to give the body its shape. Levin is clear and articulate in making the case that the body develops and takes shape under a global plan, a blueprint, and not just a set of instructions. This is true for humans and other mammals, but it is easier to prove for animals that regenerate. Humans can grow back part of a liver. An octopus can grow a new leg; a salamander can grow a new leg or tail; a zebrafish can grow back a seriously damaged heart; starfish and flatworms can grow back a whole body from a small piece.

Consider the difference between a blueprint and an instruction set. An instruction set says

1. Screw the left side of widget A onto the right side of gadget B.
2. Take the assembly of widget+gadget and mount it in front of doodad C, making sure the three tabs of C fit into the corresponding holes in B.

A blueprint is a picture of the fully assembled object, showing the relationship of the parts.

Ikea always gives you both. With the instructions only, it is possible to complete the assembly, but only if you don’t make any mistakes. And if the finished object breaks, the instruction set will not be sufficient to repair it. The fact that living things can heal is a strong indication that they (we) contain blueprints as well as instruction sets. The instruction set is in the genome, together with the epigenetic information that turns genes on and off as appropriate; but where is the blueprint?

Prof Michael Levin of Tufts University has been working on this problem for almost 30 years. The answer he finds is in electrical patterns that span across bodies. One of the tools he pioneered is voltage reporter dyes that glow in different colors depending on the electric potential. Here is a map of the voltage in a frog embryo, together with a photomicrograph.

from Levin’s 2012 paper

Levin’s lab has been able to demonstrate that the voltage map determines the shape that the tadpole grows into as it develops. Working with planaria flatworms, rather than frogs, their coup de grace was to modify these voltage patterns “by hand”, creating morphologies that are not found in nature, such as the worm with two heads and no tail.

This is stunning work, documenting a language in biology that is every bit as important as the genetic code. Of course, I am not the first to discover Dr Levin’s work; but it is underappreciated because the vast majority of smart biologists are focusing on biochemistry and it is a stretch for them to step out of the reductionist paradigm.

(I wrote more about Levin’s work two years ago. Here is a video which presents a summary in his own words.)

2. Cold Fusion

Two atomic nuclei of heavy hydrogen can merge to create a single nucleus of helium, and tremendous energy is released. This process is not part of our everyday experience because the hydrogen nuclei are both positively charged, and the energy required to push them close enough together that they will fuse is enormous. So fusion can happen in the middle of the sun, where temperatures are in the millions of degrees, and fusion can happen inside a thermonuclear bomb. But it’s hard as hell to get hydrogen to fuse into helium, and, in fact, physicists have been working on this problem for more than 60 years without a viable solution.

Except that in 1989, the world’s most eminent electrochemist, Martin Fleischmann (not exactly a household name), announced that he had made fusion happen on his laboratory bench, using the metal palladium in an apparatus about as complicated as a car battery.

Six months later, at an MIT press conference, scientists from prestigious labs around the world lined up to announce they had tried to duplicate what Fleischmann had reported with no success. The results were un-reproducible. Cold Fusion was dead, and the very word was to become a joke about junk science. Along with the vast majority of scientists, I gave up on Cold Fusion and moved on. 22 years passed. Imagine my surprise when I read in 2011 that an Italian entrepreneur had demonstrated a Cold Fusion boiler, and was taking orders!

The politics of Cold Fusion is a story of its own. I wrote about it in 2012 (not for ScienceBlog). The Italian turned out to be a huckster, but the physics is real.

I began reading, and I became hooked when I watched this video. I visited Cold Fusion labs at MIT, Stanford Research Institute, Portland State University, University of Missouri, and a private company in Berkeley, CA. I went to two Cold Fusion conferences. I concluded that some of the claims were dubious, but others were convincing. There is no doubt in my mind that Cold Fusion is real.

Physicists were right to be skeptical. The energy for activation is plentiful enough, even at room temperature, but the problem is to concentrate it all in one pair of atoms. Left to its own devices, energy will spontaneously spread itself out— that’s what the science of thermodynamics is all about. To concentrate an eye-blink worth of energy in just two atoms is unexpected and unusual. But things like this have been known to happen, and a few times before they’ve taken physicists by surprise. Quantum mechanics plays tricks on our expectations. A laser can concentrate energy, as billions of light particles all march together in lock step. Superconductivity is another example of what’s called a “bulk quantum effect”. Under extraordinary circumstances, quantum mechanics can leap from the tiny world of the atom and hit us in the face with deeply unexpected, human-scale effects that we can see and touch.

There are now many dozens of labs around the world that have replicated Cold Fusion, but there is still no theory that physicists can agree on. What we do agree on is that it is a bulk quantum effect, like superconductivity and lasers. When the entire crystal (palladium deuteride) acts as one quantum entity, strange and unexpected things are possible.

For me, the larger lesson is about the way the science of quantum mechanics developed in the 20th Century. The equations and formalisms of QM are screaming of connectedness. Nothing can be analyzed on its own. Everything is entangled. The quantum formalism defies the reductionist paradigm on which 300 years of previous science had been built.

And yet, physicists were not prepared to think holistically. We literally don’t know how. If you write down the quantum mechanical equations for more than two particles, they are absurdly complex, and we throw up our hands, with no way to solve the equations or even to reason about the properties of the solutions. The many-body quantum problem is intractable, except that progress has been made in some highly symmetrical situations. A laser consists of a huge number of photons, but they all have a single wave function, which is as simple as a wave function can be. Many-electron atoms are conventionally studied as if the electrons were independent (but constrained by the Pauli Exclusion Principle). Solid state physics is built on bulk quantum mechanics of a great number of electrons, and ingenious approximations are used in combination with detailed measurements to reason about how the electrons coordinate their wave state.
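A back-of-envelope calculation (my own illustration) shows why brute force is hopeless: the dimension of the joint quantum state space grows exponentially with the number of particles.

    # The joint state space of n spin-1/2 particles has dimension 2**n.
    # Long before n approaches the number of electrons in a crystal, even
    # writing down one wave function exceeds any conceivable computer.
    for n in (2, 10, 50, 300):
        print(f"n = {n:3d}: dimension ~ {2.0 ** n:.3e}")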

Cold Fusion presents a huge but accessible challenge to quantum physicists. Beyond Cold Fusion lie a hierarchy of problems of greater and greater complexity involving quantum effects in macroscopic objects.

In the 21st Century, there is a nascent science of quantum biology. It is my belief that life is a quantum state.

3. Life coordinates on a grand scale

There are many examples of coordinated behaviors that are unexplained or partially explained. This touches my own specialty, evolution of aging. The thesis of my book is that aging is part of an evolved adaptation for ecosystem homeostasis, integrating the life history patterns of many, many species in an expanded version of co-evolution. My thesis is less audacious than the Gaia hypothesis.

  • Monarch butterflies hibernate on trees in California or Mexico for the winter. In the spring, they migrate and mate and reproduce, migrate and mate and reproduce, 6 or 7 times, dispersing thousands of miles to the north and east. Then, in the fall, the great-great-grand-offspring of the spring Monarchs undertake the entire migration in reverse, and manage to find the same tree where their ancestor of 6 generations earlier spent the previous winter. [Forest service article]
  • Zombie crabs have been observed in vast swarms, migrating hundreds of miles across the ocean floor. Red crabs of Christmas Island pursue an overland migration.
  • Sea turtles from all over the world arrange for a common rendezvous once a year, congregating on beaches in the Caribbean and elsewhere. Their navigation involves geomagnetism, but a larger mystery is how they coordinate their movements.
  • Murmuration behavior in starlings has been modeled with local rules, where each bird knows only about the birds in its immediate vicinity; but I find the simulations unconvincing, and believe our intuition on witnessing this phenomenon: that large-scale communication is necessary to explain what we see.
  • Monica Gagliano has written about plants’ ability to sense their biological environment and coordinate behaviors on a large scale. This is her more popular book.

4. The Anthropic Coincidences, or the Improbability of Received Physical Laws

For me, this is the mother of all scientific doors, leading to a radically different perspective from the reductionist world-view of post-enlightenment science. Most physicists believe that the laws of physics were imprinted on the universe at the Big Bang, and life took advantage of whatever they happened to be. But since 1973, there has been an awareness, now universally accepted, that the laws of nature are very special, in that they lead to a complex and interesting universe, capable of supporting life. The vast majority of imaginable physical laws give rise to universes that are terminally boring; they quickly go to thermodynamic equilibrium. Without quantum mechanics, of course, there could be no stable atoms, and everything would collapse into black holes in short order. Without a very delicate balance between the strength of electric repulsion and the strong nuclear force, there would be no diversity of elements. If the gravitational force were just a little weaker, there would be no galaxies or stars, nothing in the universe but spread-out gas and dust. If our world had four (or more) dimensions instead of three, there would be no stable orbits, no solar systems, because planets would quickly fly off into space or fall into the star; but a two-dimensional world would not be able to support life because (among other reasons) interconnected networks on a 2D grid are very limited in complexity.

Stanford Philosophy article
1995 book by Frank Tipler and John Barrow
Just Six Numbers by Martin Rees

Most scientists don’t take account of this extraordinary fact; they go on as if life were an inevitability, an accident waiting to happen. But those who have thought about the Anthropic Principle fall into two camps:

The majority opinion:  There are millions and trillions and gazillions of alternative universes. They all exist. They are all equally “real”. But, of course, there’s no one looking at most of them.  It’s no coincidence that our universe is one of the tiny proportion that can support life; the very fact that we are who we are, that we are able to ask this question, implies that we are in one of the extremely lucky universes.

The minority opinion:  Life is fundamental, more fundamental than matter.  Consciousness is perhaps a physical entity, as Schrödinger thought; or perhaps it exists in a world apart from space-time, as Descartes implied 300 years before Schrödinger; or perhaps there is a Platonic world of “forms” or “ideals” [various translations of Plato’s είδος] that is primary, and that our physical world is a shadow or a concretization of that world.  One way or another, it is consciousness that has given rise to physics, and not the other way around.

If you like the multi-universe idea, you will want to listen to the recent Nobel Lecture of Roger Penrose. He races to summarize his life’s work on General Relativity to end the lecture with evidence from maps of the Cosmic Microwave Background of fossils that came from black holes in a previous universe, before our own beloved Big Bang.

I prefer the minority view, not just because it provides greater scope for the imagination [Anne of Green Gables]; there are scientific reasons that go beyond hubristic disregard of Occam’s razor in postulating all these unobservable universes.

  • Quantum mechanics requires an observer.  Nothing is reified until it is observed, and the observer’s probes help determine what it is that is reified.  Physicists debate what the “observer” means, but if we assume that it is a physical entity, paradoxes arise regarding the observer’s quantum state; hence the “observer” must be something outside the laws that determine the evolution of quantum probability waves.  Cartesian dualism provides a natural home for the “observer”.
  • Parapsychology experiments provide a great many indications that awareness (and memory) have an existence apart from the physical brain.  These include near-death experiences, telepathy, precognition, and clairvoyance.
  • Moreover, mental intentions have been observed to affect reality.  This is psychokinesis, from spoon-bending to shading the probabilities dictated by quantum mechanics.

Finally, the idea that consciousness is primary connects to mystical texts that go back thousands of years. 

Dao existed before heaven and earth, before the ten thousand things.  It is the unbounded mother of all living things.

                     — from the Dao De Jing of Lao Tzu


Please look for my new page at ExperimentalFrontiers.ScienceBlog.com, coming soon.

What to Look For in a Biological Clock

In this article, I’m reporting on 

  • a new proteomic clock from Adiv Johnson and the Stanford lab of Benoit Lehalier
  • a new methylation clock developed with “deep learning” algorithms by an international group from Hong Kong
  • the advanced methylation clock developed by Morgan Levine, Len Guarente, and Elysium Health

Prelude

Aging clocks = algorithms that compute biological age from a set of measurable markers. Why are they interesting to us? And what makes one better than another?

The human lifespan is too long for us to do experiments with anti-aging interventions and then evaluate the results based on whether our subjects live longer. The usefulness of an aging clock is that it allows us to quickly evaluate the effects on aging of an intervention, so we can learn from the experiment and move on to try a variant, or something different.

Many researchers are skeptical about using clock algorithms to evaluate anti-aging interventions. I think they are right to be asking deep questions; I also think that in the end the epigenetic clocks in particular will be vindicated for this application.

It may seem obvious that we want the clock to tell us something about biological aging at the root level. We are entranced by the sophisticated statistical techniques that bioinformaticists use to derive a clock based on hundreds of different omic factors. But all that has to start with a judgment about what’s worth looking at.

Ponder this: The biostatisticians who create these clocks are optimizing them to predict chronological age with higher and higher correlation coefficient r. But if they achieve a perfect score of r=1.00, the clock becomes useless. It cannot be used to tell a 60-year-old with the metabolism of a 70-year-old from another 60-year-old with the metabolism of a 50-year-old, because both will register 60 years on this “perfect” clock.

It’s time to back up and ask what we think aging is and where it comes from, then optimize a clock based on the answer. As different people have different answers, we will have different clocks. And we can’t objectively distinguish which is better. It depends on whose theory we believe.

Straw man: AI trained to impute age from facial photos now has an accuracy of about 3½ years, in the same ballpark as methylation clocks. If we used these algorithms to evaluate anti-aging interventions, we would conclude that the best treatments we have are facelifts and hair dye.

Brass tacks: People with different positions about the root cause of aging all agree that (a) aging manifests as damage, and (b) methylation and demethylation of DNA take place under the body’s tight and explicit site-by-site regulation.

But what is the relationship between the methylation and the damage? There are three possible answers.

  1. (from the “programmed” school) Aging is programmed via epigenetics. The body downregulates repair mechanisms as we get older, while upregulating apoptosis and inflammation to such an extent that they are causes of significant damage.
  2. (from the “damage” school) The body accumulates damage as we get older. The body tries to rescue itself from the damage by upregulating repair and renewal pathways in response to the damage.
  3. (also from the “damage” school) Part of the damage the body suffers is dysregulation of methylation. Methylation changes with age are stochastic. Methylation becomes more random with age.

My belief is that (1), (2), and (3) are all occurring, but that (1) predominates over (2). The “damage” school of aging would contend that (1) is excluded, and there are only (2) and (3).

How can these three types of changes contribute to a clock? 

(3) makes a crummy clock, because, by definition, it’s full of noise and varies widely from person to person and from cell to cell. There is no dispute that a substantial portion (~50%) of age-related changes in DNA methylation are stochastic. But these changes are not useful and, in fact, most of the algorithms used to construct methylation clocks tend to exclude type (3) changes. I won’t say anything more about stochastic changes in methylation, but I’ll acknowledge that there is more to be said and refer you to this article if you’re interested in methylation entropy.

If you are from the “damage” school, you don’t believe in (1), so this leaves only type (2). If changes in methylation are the body trying to rescue itself, then any intervention that makes the body’s methylation “younger” is actually dialing down protection and repair. You expect that reducing methylation age will actually hasten aging and shorten life expectancy. You have every reason to distrust a clinical trial or lab experiment that uses methylation age as a criterion for success.

White cell count is used as a reliable indication of cancer. As cancer progresses, white cell count increases. The higher a person’s white cell count, the closer he is to death. So let’s build a “cancer clock” based on white blood count, and let’s use it to evaluate anti-cancer interventions. The best intervention is a chemical agent that kills the most white blood cells. It reliably sets back the “cancer clock” to zero and beyond. But we’re puzzled when we find that people who get this intervention die rapidly, even though the cancer clock predicted that they were completely cured. The problem is that white blood cells are a response to cancer, not its cause.

If you are from the “programmed” school, you think that (1) predominates, and that a clock can be designed to prefer type (1) changes to (2) and (3). Then methylation clocks measure something akin to the source of aging, and we can expect that if an intervention reduces methylation age, it is increasing life expectancy.

The fact that methylation clocks trained on chronological age alone (with no input concerning mortality or disease state) turn out to be better predictors of life expectancy than age alone is a powerful validation of methylation technology. But only if you believe (for other reasons) that methylation is an upstream cause of aging. You could expect this from either type (1) or type (2) methylation changes.

I believe that aging is an epigenetic life program, and that methylation is one of several epigenetic mechanisms by which it is implemented. That’s why I have faith in methylation clock technology.

Conversely, people who believe that the root cause of aging is accumulated damage are right to discount evidence from epigenetic clocks as it pertains to the efficacy of particular treatments. As in the cancer example above, treatments that create a younger methylation age can actually be damaging.

The basis for my belief that aging is an epigenetic program is the subject of my two books, and was summarized several years ago in this blog. I first wrote about methylation as a cause of aging in this space in 2013. For here and now, I’ll just add that we have direct evidence for changes of type (1). Inflammatory cytokines are up-regulated with age. Apoptosis is upregulated with age. Antioxidants are downregulated with age. DNA repair enzymes and autophagy enzymes and protein-folding chaperones are all down-regulated with age. All these are changes in gene expression, presumably under epigenetic control.

Which is more basic, the proteome or the methylome?

For reasons I have elaborated often in the past, I adopt a perspective on aging as an epigenetic program. I think of methylation clocks as close to the source, because methylation is a dispersed epigenetic signal. But the proteome is, by definition, the collection of all signals transmitted in blood plasma, including all age signals and transcription factors that help to program epigenetics cell-by-cell. The proteome is generated by transcription of the DNA body-wide, which transcription is controlled by methylation among other epigenetic mechanisms. So one might argue from this that the methylome is further upstream than the proteome. On the other hand, methylation is just one among many epigenetic mechanisms, and the proteome is the net result of all of them. On this basis, I would lean toward a proteomic clock as being a more reliable surrogate for age in clinical experiments, even better than methylation clocks. It is a historic fact, however, that methylation clocks have a 6-year headstart. Methylation testing is entering the mainstream, with a dozen labs offering individual readings of methylation age, priced to attract end-users.

Let’s see if proteomic clocks can catch up. The new technology is based on SOMAscan assays, and so far is marketed to research labs, not individuals or doctors, and it is priced accordingly. The only company providing lab services is SOMAlogic.com of Boulder, CO. “SOMAscan is an aptamer-based proteomics assay capable of measuring 1,305 human protein analytes in serum, plasma, and other biological matrices with high sensitivity and specificity.” [ref] As I understand it, they have a microscope slide with 1,305 tiny dots, each containing a different aptamer attached to a fluorescent dye. An aptamer is like an engineered antibody, optimized by humans to mate to a particular protein. Thus 1,305 different proteins can be measured by applying a sample (in our case, blood plasma) to the slide, chemically processing the slide to remove aptamers that have not found their targets, then photographing the slide and analyzing the readout from the fluorescent dye.

Aptamers are synthetic molecules that can be raised against any kind of target, including toxic or non immunogenic ones. They bind their target with affinity similar or higher than antibodies. They are 10 fold smaller than antibodies and can be chemically-modified at will in a defined and precise way. [NOVAPTech company website]

Curiously, aptamers are not usually proteins but oligonucleotides, cousins of RNA, simply because the chemical engineers who design and optimize these structures have had good success with the RNA backbone. The SOMA in SOMAlogic stands for “Slow Off-rate Modified Aptamers”, meaning that the aptamers have been modified to make them stick tight to their target and resist dissociating.

An internal proteome-methylome clock?

It’s possible that there is a central clock that tells the body “act your age”. I have cited evidence that there is such a clock in the hypothalamus, and that it signals the whole body via secretions [2015, 2017].

Another possibility is a dispersed clock. The body’s cells manufacture proteins based on their epigenetic state, the proteins are dispersed in the blood, some of these are received by other cells and affect the epigenetic state of those cells. This is a feedback loop with a whole-body reach, and it is a good candidate for a clock mechanism in its own right.

I’m interested in the logic and the mathematics of such a clock in the abstract. Any feedback loop can be a time-keeping mechanism. Such a mechanism is
    Epigenetics ⇒ Protein secretion ⇒ Transcription factors ⇒ Epigenetics
This is difficult to document experimentally, but it is an attractive hypothesis because it would explain how the body’s age can be coordinated system-wide without a single central authority, which would be subject to evolutionary hijacking, and might be too easily affected by individual metabolism, environment, etc. But the body’s aging clock must be both robust and homeostatic. If it is thrown off by small events, it must return to the appropriate age.  So my question—maybe there are readers who would like to explore this with me—is whether it is logically possible to have a timekeeping mechanism that is both homeostatic and progressive, without an external reference by which it can be reset.
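As a toy illustration of the question (entirely my own, with made-up dynamics): let a state variable x relax homeostatically toward a setpoint a, while a itself advances at a steady rate. A perturbation to x heals, yet the “age” keeps marching.

    import numpy as np

    def simulate_clock(T=100.0, dt=0.01, k=1.0, c=0.05, kick_t=50.0, kick=0.5):
        """x relaxes toward the setpoint a (homeostatic);
        a integrates a constant drift c (progressive)."""
        n = int(T / dt)
        x = a = 0.0
        trace = np.empty(n)
        for i in range(n):
            if abs(i * dt - kick_t) < dt / 2:
                x += kick                  # one-time perturbation: decays away
            a += c * dt                    # setpoint advances steadily
            x += -k * (x - a) * dt         # feedback pulls x back toward a
            trace[i] = x
        return trace

The catch, and the substance of the question above, is that a here is an unreferenced integrator: a perturbation to a itself would persist forever. Making the drift emerge from the feedback loop itself, while keeping both homeostasis and progression, is exactly the hard part.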

Last year, Lehalier and a Stanford-based research group jumpstarted the push toward a proteomic aging clock with this publication [my write-up here]. The same group has a follow-up, published a few weeks ago. The new work steps beyond biologically agnostic statistics to incorporate information about known functions of the proteins that they identified last year. The importance of this is twofold: it suggests targets for anti-aging interventions, and it supports the creation of a clock composed of upstream signals that have been verified to have an effect on aging. I argued in the long Prelude above that this is exactly what we want to know in order to have confidence in an algorithmic clock as a surrogate to evaluate anti-aging interventions.

They work with a database I had not known about before: the Human Ageing Genomic Resources Database.  HAGR indexes genes related to aging and summarizes studies that document their functions. Some highlights of the proteins they identified:

  • Inflammatory pathways are right up there in importance. No surprise here. But if you can use inflammatory epigenetic changes to make an aging clock, you have a solid beginning.
  • Sex hormones that change with age turn out to be even more prominent in their list. The first several involve FSH and LH. These are hormones connected with women’s ovarian cycles; but after menopause, when they are not needed, their prominence shoots up, and not just once-a-month, but always on. Men, too, show increases in LH and FSH with age, though they are more subtle. I first became aware of LH and FSH as bad actors from the writings of Jeff Bowles more than 20 years ago.
  • “GDF15 is a protein belonging to the transforming growth factor beta superfamily. Under normal conditions, GDF-15 is expressed in low concentrations in most organs and upregulated because of injury of organs such as liver, kidney, heart and lung.” [Wikipedia]  “GDF15 deserves a story of its own. The authors identify it as the single most useful protein for their clock, increasing monotonically across the age span. It is described sketchily in Wikipedia as having a role in both inflammation and apoptosis, and it has been identified as a powerful indicator of heart disease. My guess is that it is mostly Type 1, but that it also plays a role in repair. GDF15 is too central a player to be purely an agent of self-destruction.” [from my blog last year]
  • Insulin is a known modulator of aging (through caloric restriction and diabetes).
  • Superoxide Dismutase (SOD2) is a ubiquitous antioxidant that decreases with age, leaving the body open to ROS damage.
  • Motilin is a digestive hormone. Go figure. Until we understand more, my recommendation would be to leave this one out of the aging clock algorithm.
  • Sclerostin is a hormone for bone growth. It may be related to osteoporosis, and well worth inclusion. 
  • RET and PTN are called “proto-oncogenes” and are important for development, but associated with cancer later in life.

Which proteins are most relevant?

The Horvath clocks have been created using “supervised” optimization, which involves human intelligence that oversees the application of sophisticated algorithms. But what happens if you automate the “supervised” part? On the one hand, you must expect mistakes and missed opportunities that you wouldn’t have with human supervision. On the other hand, once you have a machine learning algorithm, you can apply it over and over to different subsets of the data, produce hundreds of different clocks, and choose those that perform best. That’s what Johnson and co-authors have done in the current paper. They describe creating 1,565 different clocks based on different subsets of a universe of 529 proteins. In my opinion, their most important work combines biochemical knowledge with statistical algorithms. The work using statistical algorithms alone is much less interesting, for reasons detailed in the Prelude above.
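A rough sketch of the many-clocks recipe, as I understand it (my own reconstruction; the paper’s actual subset sizes, model family, and selection criterion may differ):

    import numpy as np
    from sklearn.linear_model import ElasticNetCV
    from sklearn.model_selection import train_test_split

    def many_clocks(X, age, n_clocks=100, subset_size=50, seed=0):
        """Train one elastic-net clock per random protein subset and
        rank the clocks by correlation with age on held-out samples."""
        rng = np.random.default_rng(seed)
        X_tr, X_te, y_tr, y_te = train_test_split(X, age, random_state=seed)
        clocks = []
        for _ in range(n_clocks):
            cols = rng.choice(X.shape[1], size=subset_size, replace=False)
            model = ElasticNetCV(cv=5).fit(X_tr[:, cols], y_tr)
            r = np.corrcoef(model.predict(X_te[:, cols]), y_te)[0, 1]
            clocks.append((r, cols, model))
        return sorted(clocks, key=lambda c: -c[0])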

Summary

This new offering from Lehalier and Johnson is a great step forward in that

  • proteins in the blood give a broader picture of epigenetics than methylation alone
  • specific proteins are linked to specific interventions that are reliably connected to aging in the right direction. Crucially, the clock is designed to include type (1) epigenetic changes (from the Prelude above) and to exclude type (2)

Next steps

  • to calibrate the clock not with calendar age but with future mortality. This would require historic blood samples, and it is the basis of the Levine/Horvath PhenoAge clock.
  • to optimize the clock separately for different age ranges or, equivalently, to use non-linear fitting techniques in constructing the clock algorithm
  • to commercialize the aptamer technology, so that it is available more widely and more cheaply

Elysium Index

Elysium is a New York company advised by Leonard Guarente of MIT and Morgan Levine (formerly Horvath’s student, now at Yale). They have an advanced methylation clock available to the public, which they claim is more accurate than any so far. Other clocks are based on a few hundred CpG sites that change most reliably with age, but the Index clock uses 150,000 separate sites (!) which, they claim, offers more stability. The Horvath clocks can be overwhelmed by a single CpG site that is measured badly. (I have personal experience with this.) Elysium claims that variations from one day to the next or one lab slide to the next tend to average out over such a large number of contributions. On the other hand, as a statistician, I have to wonder about deriving 150,000 coefficients from a much smaller number of individuals. The problem is called overfitting, and the risk is that the function doesn’t work well outside the limited data set from which it was derived.
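A toy demonstration of that overfitting risk (entirely synthetic numbers, nothing to do with Elysium’s data): with far more coefficients than samples, a linear fit can “predict” pure noise perfectly in-sample and fail completely out of sample.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 200, 5000                 # 200 subjects, 5,000 candidate sites
    X = rng.normal(size=(n, p))      # methylation values: pure noise
    y = rng.normal(size=n)           # "ages": unrelated to X by construction
    w, *_ = np.linalg.lstsq(X, y, rcond=None)   # minimum-norm least squares

    print(np.corrcoef(X @ w, y)[0, 1])          # ~1.0: perfect in-sample fit
    X2, y2 = rng.normal(size=(n, p)), rng.normal(size=n)
    print(np.corrcoef(X2 @ w, y2)[0, 1])        # ~0.0: useless out of sample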

In connection with the DataBETA project, I have been talking to Tina Hu-Seliger, who is part of the Elysium team that developed Index. I am impressed that they have done some homework that other labs have not done. They compare the same subject in different slides. They store samples and freeze them and compare results to fresh samples. They compare different clocks using saliva and blood.

I wish I could say more but Elysium Index is proprietary. There is a lot I have not been told, and there is more that I know that I have been asked not to reveal. I don’t like this. I wish that all aging research could be open sourced so that researchers could learn from one another’s work.

Two other related papers

DeepMAge is a new methylation clock, published just this month, based on more sophisticated AI algorithms instead of the standard 20th-century statistics used by Horvath and others thus far. Galkin and his (mostly Hong Kong, mostly InSilico) team are able to get impressive accuracy in tracking chronological age. This technology has forensic applications, in which evidence of someone’s calendar age is relevant, independent of senescence.  And the technology may someday be the basis for more accurate predictions of individual life expectancy. But, as I have argued above, a good clock for evaluating anti-aging measures must look at more than statistics. Correlation is not the same as causation, and only detailed reference to the biochemistry can give confidence that we have found causation.

Biohorology is a review paper from some of this same InSilico team together with some prominent academics, describing the latest crop of aging clocks. The ms is long and detailed, yet it never addresses the core issue that I raise in the Prelude above, about the need to distinguish upstream causes of aging from downstream responses to damage.

The beginning of the ms contains a gratuitous and outdated dismissal of programmed aging theories.

“Firstly, programmed aging contains an implicit contradiction with observations, since it requires group selection for elderly elimination to be stronger than individual selection for increased lifespan.”

Personally, I bristle at reading statements like this, which ignore an important message of my own work and, more broadly, ignore the broadened understanding of evolution that has emerged over the last four decades.

“Secondly, in order for the mechanism to come into place, natural populations should contain a significant fraction of old individuals, which is not observed either (Williams, 1957).” 

This statement was the basis not just of Williams’s 1957 theory, but more explicitly of the Medawar theory 5 years earlier. Neither of these eminent scientists could have known that their conjecture about the absence of senescence in the wild would be thoroughly disproven by field studies in the 1990s. The definitive recent work on this subject is [Jones, 2014].

Take-home message

For the purpose of evaluating anti-aging treatments, the ideal biological clock should be built according to these two principles:

  • It should be trained on historic samples where mortality data is available, rather than current samples where all we know is chronological age (see the sketch after this list), and
  • Components should be chosen “by hand” to assure all are upstream causes of aging rather than downstream responses to damage. (Type 1 from the analysis above.)
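
Here is the promised minimal sketch of the first principle: fit survival during follow-up rather than calendar age, so that the clock’s coefficients are anchored to the outcome we actually care about. Everything below is invented for illustration (the data, the column names, the two hand-picked sites); it is a sketch of the idea, not any published clock’s recipe.

```python
# Sketch: a "clock" calibrated against mortality via Cox regression.
# Hypothetical data; in practice the sites would be hand-picked Type 1
# (upstream) CpG sites and the samples would be decades-old biobank blood.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "cpg_1": rng.normal(size=n),                   # methylation at site 1
    "cpg_2": rng.normal(size=n),                   # methylation at site 2
    "years_followup": rng.uniform(1, 25, size=n),  # time to death or censoring
    "died": rng.integers(0, 2, size=n),            # 1 = death observed
})

cph = CoxPHFitter(penalizer=0.1)   # mild penalty guards against overfitting
cph.fit(df, duration_col="years_followup", event_col="died")

# Each subject's relative mortality risk becomes the biological-age score.
df["risk_score"] = cph.predict_partial_hazard(df[["cpg_1", "cpg_2"]])
```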

DeepMind Knows How Proteins Fold

This week, DeepMind, a London-based Google company, claims to have solved the most consequential open problem in computational biochemistry: the protein-folding problem. If true, this could be the start of something big.


What does it mean, and why is it important? Let’s start with signal transduction. This is a word for the body’s chemical computer. The nervous system, of course, constitutes a signal-processing and decision-making engine; and in parallel, there is a chemical computer. The body has molecules that talk to other molecules that talk to other molecules, sending a cascade of ifs and thens down a chain of logic. The way molecules with very complex shapes fit snugly together is the language of the chemical computer. These molecules with intricate shapes are proteins, and they are not formed in 3D. Rather, DNA provides instructions that ribosomes (present in every cell) translate into a linear peptide chain of amino acids, chosen from a canonical set of 20. Each peptide chain folds into a protein with a characteristic shape, and it is these shapes that constitute the body’s signaling language. Most age-related diseases can be traced to an excess or a deficiency of these protein signal molecules.

So signal proteins are targets of medical research. Pharmaceutical interventions may modify signal transduction, perhaps by goosing signaling at some juncture, or by siphoning off a particular signal with another chemical designed to fit perfectly into its bumps and hollows. Up until now, there has been a lot of trial and error in the lab, looking for chemicals with complementary shapes. Imagine now that the Deep Mind press release is not exaggerating, and they really can reliably predict the shape that a peptide will take once it is folded. Then many months of laboratory experiments can be replaced with many hours of computation. All the trial-and-error work can be done in cyberspace. An inflection point in drug development, if it’s true.

Why it’s a Hard Problem

Computers solve large problems by breaking them down into a great many small ones. But protein folding can’t be solved by looking separately at each segment of the protein molecule. Everything affects everything else, and the optimal shape is a property of the whole. Proteins are typically huge molecules, with hundreds or thousands of amino acids chained together. The peptide bonds allow for free rotation. So the number of shapes you can form with a given chain is truly humongous. The sheer number of possibilities would overwhelm any computer program that tried to deal with the different shapes one at a time.
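
To put a number on “humongous,” here is a back-of-envelope calculation in the spirit of Levinthal’s paradox, assuming (very conservatively) just three stable orientations per amino acid residue:

```python
# Combinatorial explosion of chain conformations, Levinthal-style.
# The "3 orientations per residue" figure is a deliberate lowball.
conformations_per_residue = 3
for n_residues in (10, 100, 300):
    total = conformations_per_residue ** n_residues
    print(f"{n_residues} residues -> ~{total:.1e} possible shapes")
# 300 residues -> ~1.4e+143 shapes; enumeration is hopeless at any speed.
```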

The thing that stabilizes a given shape is hydrogen bonding. Nominally, each hydrogen atom can form only one bond to a carbon or oxygen, but every hydrogen is a closet bigamist, and it longs to couple with a nearby carbon or (better still) oxygen atom even as it is bound primarily to its LTR partner. Every twist and bend in the molecular chain allows some new opportunities for hydrogen bonding, while removing others. The breakthrough in computing came from 1% inspiration, 99% perspiration (Edison’s recipe). A key input was to map the structure of 170,000 known, natural proteins, and to train the computer to retrodict the known results. Then, when working with a new and unknown shape, the computer makes decisions based on its past success.

How does it make the decisions? No one knows. One of the most successful techniques in artificial intelligence uses generic layers of input and output with programmable maps, and the maps are trained to give the right answer in known cases. But the fundamental logic that drives these decisions remains opaque, even to the programmers. 
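
For the flavor of the technique (a generic illustration, not DeepMind’s actual architecture), here is a tiny network of layered maps trained to reproduce a known function. It learns to give the right answers, but inspecting its internal weights tells you almost nothing about why:

```python
# Generic "layers with programmable maps", trained on known cases.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(500, 4))   # stand-in input features
y = np.sin(X).sum(axis=1)               # a known, nonlinear target function

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(X, y)                           # adjust the maps to fit known cases
print("Fit quality (R^2):", net.score(X, y))

# net.coefs_ holds thousands of trained weights -- the "decisions" live
# there, distributed and opaque, which is the point made above.
```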

 

It gets more complicated

Many proteins don’t have a unique folded state. They are in danger of folding the wrong way. So there are proteins called chaperones that help them to get it right. These chaperones don’t explicitly dictate the protein’s final structure, but rather they place the protein in a protected environment. There are 20,000 different proteins needed in the human body, but only a handful of different chaperones.


Factoid: Most inorganic chemical reactions take place on a time scale of billionths of a second. Organic reactions are somewhat slower. But protein folding happens on a human time scale of seconds, or even minutes.


The AI that finds a protein’s ultimate structure must have knowledge of the environment in which the protein folds. It is not merely computing something intrinsic to the sequence of amino acids that makes up the nascent protein. To underscore this problem, proteins fold incorrectly almost as often as they fold correctly. There is an army of caretaker proteins that inspect and correct already-folded proteins. Misfolded proteins tend to clump together, and there are chemicals specialized in pulling them apart. For the lost causes, there are proteasomes, which break the peptide bonds and recycle a damaged protein into constituent parts. The name ubiquitin, for the small protein that tags damaged proteins for recycling, derives from the fact that it is found in every part of every cell.

The question arises, how do these caretaker proteins know what is the correct shape and what is a misfolded shape? Remember that the number of chaperones and caretakers is vastly smaller than the number of proteins that they attend to, so they cannot contain detailed information about the proper conformation of each protein they service. And this leads to a deep question for AI: It’s hard enough to know how a particular protein chain will fold into a conformation that is thermodynamically optimized. But the conformation optimized for least energy may or may not be the one that is useful to the body.

Prions are mysterious

In the late 1970s, a young neurologist named Stanley Prusiner began to suspect that misfolded proteins could be infectious agents. He coined the term prion for a misfolded protein that can cause other proteins to misfold. This idea defied prevailing notions about how pathogens replicate and evolve, and in particular ran afoul of Francis Crick’s Central Dogma of Molecular Biology, which held that information is always stored in DNA and transferred downstream to proteins.

The evolutionary provenance of prions remains a mystery, but it is now well-established that certain misfolded proteins can cause a chain reaction of misfolding. The process is as mysterious as it is frightening. It was fear of one prion disease, mad cow, that led Neil Ferguson, who has become infamous this year for his apocalyptic COVID contagion models, to frighten the UK in an earlier episode into slaughtering and incinerating more than 6 million cows and sheep, in a classic example of panic leading to overkill.

Prusiner waited less than 20 years for the medical community to accede to his heresy. He was awarded the Nobel Prize in 1997.

Example and Teaser

This example is from a review I am preparing for this space next week. I am reading two recent papers about proteins in the blood that change as we age. Assuming that these signals are drivers of aging, what can be done to enhance the action of those that we lose, or suppress the action of those that increase with age? The connection to the present column is that knowledge of protein folding can be used to engineer proteins that redirect the body’s chemical signal transduction at a given intervention point. For example, FSH (follicle-stimulating hormone) is needed for just a few days of a woman’s menstrual cycle, but FSH levels rise late in life, with disastrous consequences for health. FSH shoots up in female menopause, and in males it rises more gradually.

FSH drives the imbalance in blood lipids associated with heart disease and stroke. In lab rodents, FSH can be blocked with an antibody, or by genetic engineering, with consequent benefits for cardiovascular health [ref] and protection against loss of bone mass [ref]. The therapy also reduces body fat: “Here, we report that this antibody sharply reduces adipose tissue in wild-type mice, phenocopying genetic haploinsufficiency for the Fsh receptor gene Fshr. The antibody also causes profound beiging*, increases cellular mitochondrial density, activates brown adipose tissue and enhances thermogenesis.” [ref] In the near future, we may be able to use computer-assisted protein design to create a protein that blocks the FSH receptor and do safely in humans what was done with genetic engineering in mice.
_______________
*Beiging is turning white adipose tissue to brown. Briefly, white fat cells store lipids more or less permanently and promote diabetes, while brown fat is burned for fuel.

Hyperbaric Hyperbole

An Israeli study came out last week that has been described as rejuvenation via hyperbaric oxygen. I’m not taking it very seriously, and I owe you an explanation why.

  • The main claim is telomere lengthening. I used to think of telomeres as the primary means by which aging is programmed, but since the Danish telomere study [Rode 2015], I think that telomeres play a minor role.
  • I think that methylation age is a far better surrogate than telomere length. The study doesn’t mention methylation age, but reading between the lines…
  • I think the study’s results can be explained by elimination of senescent white blood cells. This might explain the observed increase in average telomere length, even without expression of telomerase. 
  • Are there signs of senolytic benefits in other tissues? That’s the big question going forward.

A study was published in Aging (Albany NY) last week claiming to lengthen telomeres and eliminate senescent cells in a test group of 20 middle-aged adults using intermittent hyperbaric oxygen treatment. It was promoted as age reversal in popular articles [for example], apparently with the encouragement of Tel Aviv University.

Telomeres as a surrogate marker for aging

Several years ago, I was enthusiastic about the use of telomere length as a measure of biological age. Telomeres shorten progressively with age, and I thought this provided a good candidate mechanism for programmed aging. But when the Rode study came out of Copenhagen (2015), I saw that the scatter in telomere length was too large for this idea to be credible.

I came to think that telomere shrinkage plays a minor role in aging. Around the same time, I became enthusiastic about methylation clocks. Methylation changes correlate with age far more strongly, with less scatter.

So I think that methylation is plausible as a primary cause of aging, and telomere shrinkage, less so.

Telomere length vs age, new data

[figure omitted: scatter plot of telomere length vs. age]

The Treatment

The air we breathe is only 21% oxygen. Breathing pure oxygen, five times as concentrated as in air, is a temporary therapy (hours at a time, but not days) for people who have impaired lungs. But prolonged exposure to pure O2 can injure the lungs and other tissues as well. Oxygen is highly reactive, and the body’s antioxidant defenses are calibrated to the environments in which we evolved, so oxygen therapy is not to be taken lightly.

Hyperbaric Oxygen Therapy (HBOT) goes a step further: the patient breathes pure oxygen at twice atmospheric pressure. If you just put a tube in your mouth with that much pressure, you wouldn’t be able to hold it in, or to exhale. But the body can withstand high pressures as long as the pressure is all around, not just inside the lungs. If you SCUBA dive, at 30 feet below the surface the ambient pressure is about two atmospheres, and SCUBA tanks adjust to feed air into your mouth at a pressure matched to the surrounding water.

(Incidentally, pressure varies a lot with altitude: in Denver it’s about 20% lower than in New York. Two years ago, I trekked in the Himalayas at 17,000 feet, where the air pressure is only half the standard (sea level) value, and of course there is only half as much oxygen.)
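
For readers who like the numbers, here is a rough calculator for both rules of thumb. The 33-feet-of-seawater-per-atmosphere figure and the ~8 km atmospheric scale height are standard approximations; the function names are mine.

```python
import math

def ambient_pressure_atm(depth_ft):
    """Underwater: roughly one extra atmosphere per 33 ft of seawater."""
    return 1.0 + depth_ft / 33.0

def pressure_at_altitude_atm(altitude_ft):
    """In air: pressure falls roughly exponentially, scale height ~8 km."""
    return math.exp(-altitude_ft * 0.3048 / 8000.0)

print(ambient_pressure_atm(30))          # ~1.9 atm, the SCUBA example above
print(pressure_at_altitude_atm(5280))    # Denver: ~0.82 atm
print(pressure_at_altitude_atm(17000))   # 17,000 ft trek: ~0.52 atm
```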

HBOT requires higher ambient pressure, not just higher pressure in the oxygen supply. The patient has to be enclosed in a chamber where the ambient pressure is twice atmospheric. Pure oxygen is expensive enough that the ambient gas is just normal air at high pressure, and the patient is given oxygen to breathe from a tank. The patient can be in a pressurized room or lying in a personal chamber.

HBOT has been around for a century, and standard medical uses include detoxification (notably carbon monoxide poisoning), gangrene, and chronic infections. More recently, HBOT has been used with success for traumatic injury, especially nerve damage. There are studies in mice in which HBOT in combination with a ketogenic diet has successfully treated cancer.

In the new Israeli study, subjects received 90 minutes of HBOT therapy 5 days a week for 12 weeks. For 5 minutes of every 20, patients breathed ordinary 21% air. The intermittent treatment was described as inducing some hypoxia adaptations. Apparently, the body adjusts to the high oxygen environment, and then it senses (relative) oxygen deprivation for those 5 minutes.

How does it work?

There is no accepted theory for how HBOT works, so I feel free to speculate. The primary role of a highly oxidative environment is to destroy. That’s probably how HBOT treats infections, since bacteria are generally more vulnerable to oxidative damage than cells of our bodies. Another thing that HBOT does well is to eliminate necrotic tissue, and I wouldn’t be surprised if it turns out to be an effective cancer treatment, since tumor cells thrive in an anaerobic environment. But the body also uses ROS (reactive oxygen species) such as H2O2 as distress signals that dial up chemical protection and repair. This is akin to hormesis, and I’m inclined to think that when HBOT promotes nerve growth, it is via a distress signal.

Results

Authors of the new study make two claims: that telomeres are lengthened in several classes of white blood cells, and that senescent white blood cells are eliminated. Let’s take them in reverse order.

Elimination of senescent cells has been a promising anti-aging therapy since the pioneering work of van Deursen at the Mayo Clinic. A quick refresher: telomeres get shorter each time cells replicate, and in our bodies, some of the cells that replicate most (stem cells and their offspring) develop short telomeres late in life that threaten their viability. Cells with short telomeres go into a state of senescence, in which they send out signals (inflammatory cytokines) that increase levels of inflammation in the body and can also induce senescence in adjacent cells, in a chain reaction. Senescent cells are a tiny proportion of all cells in the body, and van Deursen showed that the body is better off without them. Just by selectively killing senescent cells in a mouse model, he was able to extend their lifespan by about 25%. But to do the experiment, he had to genetically engineer the mice in such a way that the senescent cells would be easy to kill selectively. Ever since this study, the research community has been looking for effective senolytic agents that could kill senescent cells and leave regular cells alone (without having to genetically engineer us ahead of time).

The new Israeli study demonstrates that senescent white blood cells have been reduced. (Red blood cells have no chromosomes, so they can’t have short telomeres and can’t become senescent in the same way. They just wear out after a few months.) The effect continued after the 60 hyperbaric sessions were over, suggesting that HBOT kills the cells slowly, or damages them so that they die later. Apparently, the reduction was measured by separating different cell types and counting them. There was a great deal of scatter from one patient to the next.

The first claim is that average telomere length was increased in some populations of white cell sub-types. Again, there was a great deal of scatter in the data, with some of the subjects decreasing telomere length and others increasing it. For example, when they say that B cell telomeres increased by 22% ± 40%, I interpret that to mean that the mean telomere length increased by 22%, but the combined standard deviation from the before and after measurements was 40% of the original length. Hence, a great deal of scatter.

Aside about statistics (With apologies — this is from my geeky side)

First, what does 22% ± 40% mean? How can that be statistically significant? Answer: The standard deviation of a set of measurements is a measure of the scatter. It tells you how broadly they differ from one another. If you’re looking for the average of that distribution, you can be pretty sure that the average isn’t out at the edges, so the uncertainty in the average is a lot smaller than the standard deviation. How much smaller? The answer is the square-root-of-N rule. The “standard error of the mean”, or SEM, is the standard deviation divided by the square root of the number of points, or √N. So the 40% standard deviation gets divided by the square root of the number of subjects in the study, √26 = 5.1, and “22% ± 40%” should really be reported as 22% ± 8%. The mean is 22%, and the uncertainty in that 22% is 8%.
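
The same arithmetic in executable form:

```python
# The square-root-of-N rule applied to the numbers quoted above.
import math

sd_percent = 40.0                 # scatter of individual measurements
n_subjects = 26
sem = sd_percent / math.sqrt(n_subjects)
print(f"sqrt(N) = {math.sqrt(n_subjects):.1f}")  # 5.1
print(f"SEM = {sem:.0f}%")                       # 8%, hence 22% ± 8%
```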

The way this group did the statistics was based on

  • Finding the average telomere length among 26 subjects after the study
  • Dividing by the average telomere length among 26 subjects before the study

First they average, then they divide.

But it’s well-known (to statisticians) that the most sensitive test is to reverse the operations. First divide, then average. In other words, compare each subject’s telomeres after the study with the same subject before the study. If you do the statistics this way, then the original scatter among the different subjects all cancels out. You can start with subjects of vastly different telomere lengths, and it doesn’t matter to the statistics, so long as each one of them changes in a consistent way.

If you average first (before dividing), the scatter among the initial group imposes a penalty in statistical significance, even though that has nothing to do with effectiveness of the treatment.
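
A synthetic illustration of the difference (invented numbers, not the study’s data): give every one of 26 subjects exactly a 10% gain on top of wide subject-to-subject scatter, and the two tests disagree dramatically.

```python
# Paired vs. unpaired comparison of the same synthetic "treatment effect".
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
before = rng.normal(100, 30, size=26)   # widely scattered starting lengths
after = before * 1.10                   # every subject gains exactly 10%

# Average first, then compare: between-subject scatter buries the effect.
print("unpaired p =", stats.ttest_ind(after, before).pvalue)

# Compare each subject with himself first: the scatter cancels out.
print("paired p   =", stats.ttest_rel(after, before).pvalue)
```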

So this raises the question: Why did the authors do the statistics this less-sensitive way? They hint at an answer: “repeated measures analysis shows a non-significant trend (F=4.663, p=0.06)”. They seem to be saying that the test which normally gives a better p value in this case gives a worse p value.

That can only happen if the people who had the longest telomeres at the end of the study were not the same as the people who had the longest telomeres at the beginning.

Here’s what I think is really going on

Telomerase is the enzyme that increases telomere length. We think of telomerase as anti-aging, and supplements such as astragalus, gotu kola, and silymarin are gobbled up for their telomerase-activation potential. When we read of longer telomeres as a result of a study, we imagine that telomerase has been activated.

But in this case, I think that the average has gone up simply because the cells with short telomeres have been killed off. The authors are telling us that there are fewer senescent cells as a result of the treatment. Senescent cells are the ones with the shortest telomeres. At the beginning, the average telomere length is an average of a wide range of cells with long and short telomeres. At the end, you have the same long telomeres in the average, but the shortest ones are gone, so the average has increased.

I’m suggesting that telomerase has not been activated. There has been no elongation of telomeres, but the average length has increased because cells with the shortest telomeres have been eliminated.

It’s only a hypothesis, but it might help explain why the people who had the longest average telomere length at the beginning were not the same as the people who had the longest average telomere length at the end. The senescent cells that were being eliminated had no relationship to the telomere length in other cells.
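
A few lines of arithmetic make the composition effect vivid (invented numbers):

```python
# Averages rise with no elongation at all, if the shortest are removed.
import numpy as np

telomeres = np.array([2.0, 3.0, 5.0, 7.0, 9.0, 10.0])  # lengths, arbitrary units
print(telomeres.mean())                  # 6.0 before treatment

survivors = telomeres[telomeres > 3.0]   # senolysis culls the shortest cells
print(survivors.mean())                  # 7.75 after: a big apparent gain,
                                         # though no single telomere grew
```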

Next steps

One thing I’d like to know is whether the HBOT treatment affected methylation age by any of the Horvath clocks. I’ve written to the authors with this question, and haven’t received a response. Maybe they did the methylation testing and didn’t report the results because they were negative—just a guess.

But even without reprogramming methylation, the therapy can be valuable if it is eliminating senescent cells generally, and not just in white blood cells. An easy first test would be whether inflammatory cytokines in the blood decreased after the treatment. Confirmation would come from the kind of test van Deursen did, assaying senescent cells in different tissues.

If hyperbaric oxygen can be shown to decrease methylation age, that would be a promising finding. If not, but the treatment has general senolytic effects (not just in white blood cells), it may yet have value as an anti-aging treatment. Maybe the authors already know the answers to these questions; if not, they should be busy finding out.

Ten Elements of the False COVID Narrative (last 5)

I am heartened that the tide seems to be turning. The Great Barrington Declaration is attracting thousands of scientists’ signatures each day. And the World Health Organization’s COVID spokesperson has done an about-face and come out in opposition to lockdowns, recognizing explicitly the suffering, the poverty, and the health implications of the policy most of the world has pursued these past 6 months.

The global response to COVID claims science for its foundation, and my aim in this series is to show that what is being done does not represent a scientific consensus, and departs sharply from past public health practice. I don’t understand who is behind this, but I suspect that it is not mere incompetence or bureaucratic inertia; this suspicion is based on

  • Fraudulence of chloroquine trials
  • Suppression of scientific dissent
  • Evidence that SARS-CoV-2 originated in a lab, and suppression of this evidence in the scientific literature and in the press
  • Secrecy in planning the political response to COVID
  • Neglect of all the ancillary harms from lockdown in deciding on a response. (This warning published last March in the NYTimes by a senior epidemiologist from Yale probably could not be published in October.)
  • Well-established, safe and effective treatments for COVID are being bypassed to hang the world’s future on the mirage of a vaccine, though vaccines are (1) far more expensive and (2) much harder to prove safe and effective [See #10 below]
  • Public announcements and even the way the numbers are calculated are inciting widespread fear in the public. I think this fear is far more than is warranted, and I suspect that this is by design.

The method behind this madness remains elusive to me. But political journalists outside the established media are emphasizing the military connection. One investigative journalist whom I respect for her courage and her diligence is Whitney Webb. Here, she shows us that Operation Warp Speed is a military project much more than a public health project. It is plausible to me that COVID originated in a bioweapons research lab. And from the beginning, the US response was planned not by public health experts but by secret meetings of military leaders.

I hope you will explore these connections and come to your own conclusions. My more modest goal in this series is to establish that “science” cannot be invoked to justify the lockdowns, the masking, the secrecy, the closure of schools and churches and cultural institutions. Least of all can “science” justify censorship, because the process by which science reaches for truth depends on open debate from a diversity of perspectives.


6. “New cases of COVID are expanding now in a dangerous Second Wave”

We’re concerned not for the virus but for the suffering and death that it causes. In March and April, we were frightened by the rising numbers of COVID deaths. But in May, CDC stopped reporting daily deaths and switched to reporting daily cases.

Traditionally, “cases” meant people who became seriously ill, and for a short while that was the definition. Then it became “people who test positive for the virus”. On May 19, CDC started adding people who tested positive for antibodies to the virus as “cases”. We’re told that there is a troubling increase in COVID cases lately. If people really were getting sick, this would be disturbing. But if it is an increase in perfectly healthy people testing positive for antibodies, it is a wholly good thing. It’s called “herd immunity”.

No test is infallible, and invariably there are people who test positive who don’t really have the virus. These are false positives. As the prevalence of COVID has dropped with summer weather and more of the population already exposed (herd immunity), the rates are so low in many urban areas that false positive tests are swamping the true positives, and we really can’t say anything about trends. This recent article concludes that the quality of available data is no longer a reliable basis for policy decisions.

https://www.nytimes.com/2020/08/29/health/coronavirus-testing.html

The low death rates are, of course, a good thing. The problem is that the false positives are being reported without explanation as though they were meaningful data about prevalence of COVID.

COVID is no longer among the top 5 causes of death in America. Why is our government slanting the reports in ways that keep us scared? I don’t have an answer to this question. I know there is a great deal of money riding on vaccines, and that by any sane criterion, COVID vaccines are past their usefulness, even if we had reason to believe they were safe. But I don’t think this fully explains the fear campaign. I suggest that it’s my job and yours to keep asking questions.

7. “Dr Fauci and the CDC are guiding our response to COVID according to the same principles of epidemic management that have protected public health in the past.”

On the contrary, standard public health procedure is to quarantine the sick and protect the most vulnerable. Telling a whole country full of healthy people to stay at home is entirely new, unstudied, a sharp departure from previous practices.

Closing down manufacturers, offices, stores, churches, concert halls, theaters, even closing private homes to social and family guests—all this is a radical new experiment. There are no scientific studies to justify it, because it has never been done in the past.

Containment of the virus is feasible if it is begun very early, when the virus is geographically contained and the number of cases is small enough that every case can be accounted for. It’s then possible for severe isolation to halt the virus in its tracks. (This was the strategy pursued by China.) Once there are thousands of cases, it is feasible to slow the spread, but not to change the fact that eventually, everyone in the population will be exposed.

Dr Fauci was clearly aware of this, because when he made his March announcement, he was asking America to isolate only for a few weeks. His goal was explicitly to “flatten the curve”, meaning to make sure the disease didn’t spread so rapidly that hospital ICUs would be overwhelmed. At the beginning, he (quite reasonably) did not claim that the measures he prescribed to America would contain the virus, but only slow its spread.

It worked. Except in a few isolated regions, there was never a shortage of hospital beds. But six months later, we are still masking and social distancing, long after the original justification for these measures has lapsed and been forgotten.

8. “Asymptomatic carriers are an important vector of disease transmission, which must be isolated if we are to stop the spread of COVID”

The justification for separating healthy people from other healthy people is the idea that we never know who is really healthy. We know from past history that people with colds and flu become contagious a day or two before they have symptoms, though the viral load they transmit is greatly increased once the virus has taken hold and they are coughing and sneezing.

Extending quarantine from the traditional application to people who are obviously sick to the general population is a huge innovation, imposing tens of trillions of dollars in lost productivity worldwide, as well as social and psychological hardship. Isolation kills. It could only have been justified by evidence that the virus could not be contained by the same methods that have been used for all previous epidemics. Where is the evidence that asymptomatic carriers are a critical link in the chain of transmission?

Dr Fauci got it right at first when he said, “In all the history of respiratory-borne viruses of any type, asymptomatic transmission has never been the driver of outbreaks. The driver of outbreaks is always a symptomatic person.” [Jan 28] Subsequently, there were anecdotal articles documenting particular cases in which asymptomatic transmission did occur [one, two, three]. How can we know if asymptomatic carriers are an important part of the dynamic spread of the disease? This paper is the only attempt I have found to study the question with a detailed mathematical model; but, in the end, it just calculates unknowns from unmeasurables, and reaches no conclusion. We are left with common sense, which says that patients with symptoms have much higher viral levels (that’s why they are sick). They are also coughing and aspirating more of the virus (that’s why the virus evolved to make us cough). When Maria van Kerkhove, speaking for the WHO, stated that asymptomatic transmission was not important, she was reined in by those who control the narrative, and she walked back the statement the next day.

9. “The lower death rates now compared to April are due to protective measures such as social distancing, mask-wearing, and limited travel.”

Why would we expect lower death rates? From measures intended to limit social contact and spread of the virus, we should expect lower infection rates. But that’s not happening; instead, we have higher case rates coupled with lower death rates. This can reasonably be explained by (1) changes in definition of what constitutes a “case” (see #6 above), (2) wider testing, (3) the virus evolving, as most viruses tend to do, toward higher infectivity and lower fatality, and (4) fall weather.

10. “With enough resources, pharmaceutical scientists can develop a vaccine in a matter of months, and provide reasonable assurance that it is safe.”

This is the most dangerous of all the fictions and, not incidentally, the one most closely related to $6 billion in NIH investments and tens of billions in projected corporate profits.

The subject of vaccines is highly polarizing. On the one hand, the mainstream press, especially the scientific press, has been hammering with singular purpose the message that vaccines are safe and effective and necessary not just for individual protection but for public health. On the other hand, about one third of the American public distrust what they hear about vaccines, enough so that they will refuse a vaccine (if not coerced). [Updated to half of Americans, according to a recent Pew survey] So much has been written about vaccine safety that I would not presume to try to convince you one way or the other in a few paragraphs. I can tell you that my own attitude changed when I had a bad reaction four years ago to a pneumonia vaccine (PCV13), and learned that there is no corporate liability for vaccine injuries. An act of Congress in 1986 exempted vaccines from the standard testing for safety and efficacy that other medications must pass, and also indemnified vaccine companies from all liability for harm caused by either design or manufacture. In my opinion, this is a dangerous situation, as it removes all motivation for companies to make a safe product. Recent amendments to the 2005 PREP act take the extraordinary extra step, for COVID vaccines only, of absolving the companies in advance of liability even for fraud and intentional infliction of harm. [I thought this was true when I wrote it in October.]

I’ll close this series by defending my claim above that, compared to treatments, vaccines are (1) far more expensive and (2) much harder to prove safe and effective.

  1. One reason that vaccines are more expensive for the public (and correspondingly more profitable for the industry) is that vaccines are for everyone, while treatments are only for the less than 1% of the population that becomes sick enough to need them. There is a race to patent a vaccine, a race for billions of dollars in private profits that derive from spending public research funds, and the profit potential is distorting our public priorities. The best treatment we have is hydroxychloroquine, which is out of patent, has a 65-year safety record, and costs pennies per dose. FDA can legally approve vaccines on a fast-track basis only if it finds that no viable treatments are available. This is ample explanation for the campaign to discredit chloroquine and other effective treatments.
  2. Because a vaccine is given to 100 times as many people, it must be 100 times safer in order to impose the same health burden from side effects. COVID is only life-threatening for people who are old and/or disabled; so to establish the safety of a vaccine, clinical trials must include people who are old and/or disabled. The relevant question is: are people who receive the vaccine dying at a lower rate than people who received a placebo? But none of the trials are being designed to ask this question.

    There is a reason why vaccines are tested over many years, and why “warp-speed” testing cannot tell us what we need to know. Though a vaccine is always designed with one particular pathogen in mind, the effects of vaccination—beneficial and detrimental—extend to the immune system generally. This is the complex subject of cross-immunity [ref, ref, ref, ref]. It is generally true that live virus vaccines tend to confer cross immunity toward non-target viruses, while vaccines made from protein fragments tend to impair immunity to non-target infections. Only one of the candidate vaccines is derived from live, attenuated virus. The new class of RNA vaccines [Moderna] is entirely untested, and we have no idea what the long-term effects would be, but initial results give us pause.

If you are open to an honest and competent criticism of vaccine science and politics, I recommend Robert F. Kennedy, Jr.’s website.


The Bottom Line

The story that we are being told about an ultra-lethal virus that “jumped to humans” and the scientific community converging on a response proportional to the threat—this story is unraveling, as more and more doctors and public health professionals are adding their voices to a global movement to restore sanity and integrity in the pandemic response.

Ten Elements of the False COVID Narrative (first 5)

Last week, I called for scientists to come forward and make a public statement that the world’s response to COVID is not consistent with best public health practices. As if in answer to my prayer, a meeting was held at Great Barrington, MA, from which emerged this statement, signed by doctors and professors from the world’s most prestigious institutions, as well as hundreds of professionals and thousands of others. You can sign, too. In this video, the three main authors present their message.

Their proposed strategy is to protect the old and most vulnerable and quarantine people with COVID symptoms, while allowing the young and strong to go back to school, go back to work, and acquire herd immunity for the benefit of everyone. This is fully aligned with past practice, and is just what Dr David Katz (Yale School of Public Health) proposed in the New York Times and in a video presentation back in March.

What they didn’t say

The authors of the statement were cognizant of politics and avoided judgment and recrimination. I agree, this was wise. They avoided talking about the evidence that the virus was laboratory-made. I agree, this was wise. They avoided mentioning the ineffectiveness of face masks. I agree, this was wise. They avoided mentioning effective treatment strategies, of which chloroquine is the best we have. I think this was a political judgment with which I disagree. Their statement would have been so much stronger if they had been able to say that the limited risk that they proposed for the young and healthy would be that much lower because effective early and preventive treatment is available.


Here are ten messages that are essential pieces of the standard COVID narrative, but which are unfounded in actual science, and the promised rebuttals to each.

  1. “The origin of the SARS-CoV-2 virus was one of many random events in nature in which a virus jumps from one species to another.”
  2. “Chloroquine kills patients and is too dangerous to use against COVID”
  3. “The Ferguson model warned us of impending danger in time to take action and dodge a bullet.”
  4. “American deaths from COVID: 200,000 and counting”
  5. “Masks and social distancing are keeping the virus in check in our communities”
  6. “New cases of COVID are expanding now in a dangerous Second Wave”
  7. “Dr Fauci and the CDC are guiding our response to COVID according to the same principles of epidemic management that have protected public health in the past.”
  8. “Asymptomatic carriers are an important vector of disease transmission, which must be isolated if we are to stop the spread of COVID”
  9. “The lower death rates now compared to April are due to protective measures such as social distancing, mask-wearing, and limited travel.”
  10. “With enough resources, pharmaceutical scientists can develop a vaccine in a matter of months, and provide reasonable assurance that it is safe.”

Detailed rebuttals and references

1. “The origin of the SARS-CoV-2 virus was one of many random events in nature in which a virus jumps from one species to another.”

Strong but not dispositive evidence points to genetic engineering as the most probable origin of the virus. I wrote about this in detail last April in two installments, [Part 1, Part 2].

There is no credible path by which a virus with the characteristics of SARS-CoV-2 could have appeared naturally in Wuhan last December. The “wet market” hypothesis died while no one was looking. The bats that harbor SARS-CoV-2’s closest cousin virus live 1,000 miles west of Wuhan, and the pangolins that harbor viruses matching another part of the genome live 1,000 miles east of Wuhan. The SARS-CoV-2 genome includes a furin cleavage site and a spike protein matched to the human ACE-2 receptor. These very modifications to bat coronaviruses were the subject of published research, sponsored by our own NIAID and conducted at the University of North Carolina and the Wuhan Institute of Virology.

2. “Chloroquine kills patients and is too dangerous to use against COVID”

Evidence for the effectiveness of chloroquine + zinc is overwhelming. It was the drug of choice to treat the first SARS epidemic in 2003. Countries in which chloroquine is used have COVID death rates typically four times lower than countries in which use is restricted.

source: HCQtrial.com

Dozens of credible studies have found major benefits of chloroquine, especially if it is used early and especially if it is accompanied by zinc supplementation. (Apparently, the mechanism of action is to open cell membranes so that infected cells are flooded with zinc, which effectively stops the virus from replicating. Quercetin is an over-the-counter supplement that has the same effect of opening cell membranes to zinc ions, and there are a few studies of quercetin for COVID [for example: one, two, three].)

Suppression of chloroquine treatment has defied historic precedents, and represents the most extreme denial of real science on this list of 10. Chloroquine is a cheap, widely-used drug with a 65-year history of use by millions of patients. It has a well-studied safety profile, since it is routinely prescribed not only for malaria treatment but as prophylaxis for people traveling to areas where they are at risk of malaria exposure. It is also a standard treatment for lupus.

For the first time, doctors have been restricted in the off-label prescription of a drug. (Why aren’t they screaming about this?) With the combined effects of intimidation of doctors, actual restrictions, and policies of pharmacies, chloroquine treatment is effectively unavailable in most US states.

In May, a major study was published prominently in The Lancet, claiming that among 100,000 COVID patients on three continents, the death rate of those taking chloroquine was three times higher than that of those who did not receive chloroquine. Many smaller studies around the world were immediately canceled and never re-started. But when the authors could not produce the data to support their calculations, the study was retracted by its authors without comment. I am not alone in calling the Lancet study a major scientific fraud, but none of the authors of the study or the editors of the Lancet have been held accountable to date.

Smaller frauds are perpetrated with studies that are designed to fail. (Anyone who has epidemiological experience knows how much easier it is to design a study to fail than to design a study that can succeed.) There are three ways this is usually done:

  • Failure to incorporate zinc supplementation.
  • Starting late. Once patients are in the hospital, treatment with HCQ is less effective, and by the time they are dying from a cytokine storm, HCQ is useless.
  • Using toxic dosages, up to 4x the standard chloroquine dose, which triggers heart arrhythmias in some patients.

Some of these “designed to fail” studies actually showed significant benefit, and were reported in such a way as to understate their significance. (Anyone with experience in reading pharmacology studies has seen that almost always, the authors put their best results out front at the risk of overstating their significance.) Here’s an example of doublespeak in a recent review:

“Trials show low strength of evidence for no positive effect on intubation or death and discharge from the hospital, whereas evidence from cohort studies about these outcomes remains insufficient.”

Is this sentence intended deliberately to confuse with double negatives? “Low strength of evidence for no positive effect?” What they really found was “overwhelming evidence for YES positive effect”. In the only large study among the eight reviewed, the death rate of patients receiving chloroquine was half the death rate among controls, despite the fact that all patients were started on chloroquine much later than optimal, and without supplemental zinc.

3. “The Ferguson model warned us of impending danger in time to take action and dodge a bullet.”

Neil Ferguson is a prominent member of UK-SAGE, the Scientific Advisory Group for Emergencies. Ferguson and his team at Imperial College have made draconian predictions that failed to materialize on many occasions in the past.

In 2002, he calculated that mad cow disease would kill about 50,000 British people and another 150,000 once it was transmitted to sheep. There were only 177 deaths. In 2005, he predicted that the bird flu would kill 65,000 Britons. The total was 457 deaths… [Ferguson], true to his alarmist mindset, predicted with his “mathematical model” that 550,000 British people would die from Covid, as well as more than 2 million Americans, if a fierce lockdown did not come into effect. — Benjamin Bourgeois

Subsequently, the population death rate of COVID-19 was discovered to be an order of magnitude smaller than what Ferguson was assuming, the lockdown was shown to be ineffective (see below), and still the death tolls in Britain and the US were not close to Ferguson’s predictions.

Ferguson predicted that without a lockdown, Sweden would suffer 100,000 deaths through June, 2020. In reality, the COVID death count for Sweden is 5,895 (as of 1 October), and the death rate is below one per day.

Was Ferguson the most credible biostatistician that the European governments could find in planning a response to COVID last winter, or was he only the most terrifying? Why were no other experts consulted?

4. “American deaths from COVID: 200,000 and counting”

At every turn, the COVID death count has been overestimated.

  • Hospitals were incentivized to add COVID to diagnosis and death certificates.
  • In an unprecedented departure from past practice, CDC instructed doctors to report COVID as the cause of death whenever patients seemed to have symptoms consistent with COVID, or if they tested positive for COVID and died of something else. Stories of motorcycle accidents reported as COVID deaths are no joke.
  • The tests themselves have a high false positive rate. PCR tests were previously used only for laboratory research, not for diagnosis. They involve making about 35 trillion copies (45 amplification stages; see the arithmetic after this list) of every stretch of RNA in a sample from a patient’s nose or mouth and looking for some that match a stretch from the COVID genome.
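
The doubling arithmetic behind that copy count:

```python
# Each PCR cycle roughly doubles the DNA present.
copies = 2 ** 45
print(f"{copies:,}")   # 35,184,372,088,832 -- about 35 trillion copies
# At this level of amplification, trace contamination or stray fragments
# can be enough to register a "positive".
```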

It is impossible to know what the real death count has been, but three weeks ago CDC released the bombshell that people who died of COVID alone, with no pre-existing chronic diseases, accounted for only 6% of the reported total.

5. “Masks and social distancing are keeping the virus in check in our communities”

Wearing a mask is perceived as an act of caring by a large proportion of Americans. But the actual benefit in slowing spread of the virus is small enough that no benefit has been detected in the overwhelming majority of studies to date. Here is a bibliography of 35 historic studies showing that face masks have no meaningful effect on the spread of viruses, and 7 more studies that document health hazards from masks. Yes, wearing masks for long periods of time imposes its own health risks, especially when the masks are not removed and washed frequently. This is certainly significant for people required to wear them many hours at a stretch.

Here is the conclusion of one meta-analysis from the CDC web page. The authors find that the benefit is too small to rise to statistical significance even in a compilation of ten studies:

In our systematic review, we identified 10 RCTs that reported estimates of the effectiveness of face masks in reducing laboratory-confirmed influenza virus infections in the community from literature published during 1946–July 27, 2018. In pooled analysis, we found no significant reduction in influenza transmission with the use of face masks (RR 0.78, 95% CI 0.51–1.20; I2 = 30%, p = 0.25)

In recent months, several studies have been published that contradict the historic findings, and seem to justify the use of masks. Here is one that is prominently published (PNAS) and highly cited:

Our analysis reveals that the difference with and without mandated face covering represents the determinant in shaping the trends of the pandemic. This protective measure significantly reduces the number of infections.

Here’s how this conclusion is reached: In three locations where face masks were introduced (Wuhan, Italy, NYC), the authors note a linear rise in incidence of COVID, followed by the curve bending over later on. Their estimate of effectiveness is derived by subtracting the number of actual cases from the number of cases which would have occurred if the linear increase had continued through the period of observation.

An obvious objection to this analysis is that the curve always bends over. The initial rise is exponential as the virus expands into an unexposed population, and then it bends over and eventually falls, as the virus runs out of susceptible people to infect. For a short stretch after the exponential phase, the curve may look like a straight line, but inevitably the curve is destined to decline as the population gradually develops herd immunity. Authors of this study make no attempt to separate the effect of herd immunity from the effect of masking. To do the comparison correctly, the analysis should include controls: regions in which no masking requirement was decreed. Did the curve turn over more quickly in locations with masks than in locations without?
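
To make the objection concrete, here is a bare-bones SIR simulation with no intervention whatsoever (arbitrary parameters, not fitted to any real data): daily new cases rise, pass through a roughly linear stretch, and then bend over purely because the susceptible pool is being depleted.

```python
# Minimal SIR model: the epidemic curve turns over on its own.
import numpy as np

N, beta, gamma = 1_000_000, 0.30, 0.10  # population, transmission, recovery
S, I = N - 10.0, 10.0                   # nearly everyone starts susceptible
daily_new = []
for day in range(300):
    new = beta * S * I / N              # infections caused today
    S -= new                            # susceptible pool shrinks...
    I += new - gamma * I                # ...which is what bends the curve
    daily_new.append(new)

print("cases peak on day", int(np.argmax(daily_new)), "with no masks at all")
```

Any analysis that attributes the bend to masks must first subtract this built-in behavior, which the PNAS authors did not attempt.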

This objection and others were voiced by Paul Hunter, Louise Dyson, and Ed Hill in (separate) responses to the study on the UK Science Media Center website. They point out that the kind of shoddy science published in PNAS would never have received such prominent attention in an unpoliticized environment.

Viruses are spread either by aerosols or by droplets. Droplets are exhaled water that contains virus particles, and masks can trap droplets. They are the dominant mode of spread when people are in very close contact, as in a doctor-patient relationship. But droplets fall quickly from the air, especially in humid summer weather, and droplets don’t penetrate deep into the lungs, where viruses are most dangerous. Aerosols are far tinier particles, too small to be stopped by a mask. They are the predominant form of virus spread, and outdoors they are the only way the virus spreads.

In urban environments, there are always tiny quantities of prevailing viruses in the air, and for the great majority of people this is a benefit. It means that just going about their business, they are exposed to tiny quantities of virus that educate their immune systems without accumulating to a load sufficient to cause disease. The best outcome for populations—indeed, the normal outcome for every flu season in the past—is that most people acquire T-cell immunity in this way, and then the virus can no longer spread through the population. By imposing lockdown and social distancing, governments the world over have curtailed this well-known, natural process for acquisition of herd immunity.

What is the rationale for slowing spread of the virus? Originally, the stated goal was to “flatten the curve”, so that hospitals would not be overwhelmed by a sudden burden of severe cases all at once. If there was any danger of this, it passed back in April. So, at this point, slowing the spread of the virus is only important if we hope to stop the spread at some future date. This relies on the promise of a vaccine, which, I will argue in part 3, cannot be adequately tested in a relevant time frame. Hence, even the most optimistic assessment of masks and social distancing will not save lives, but only delay deaths by a few months.

NYU Prof. Mark Crispin Miller’s extended essay on masking cites copious evidence for their ineffectiveness, as well as more stories than you want to read about recent violence that has erupted between masked and unmasked factions, or between law enforcement officials and unmasked civilians.

Tentative conclusions

It was four years after 9/11 that I finally considered the possibility: this was never about brown-skinned men with boxcutters who hijacked airplanes; it was about restrictions on travel and free expression and a new Federal bureaucracy gathering information about our whereabouts and our contacts, all imposed in the name of keeping us safe. This time, I am a little less slow on the uptake, and I am beginning to suspect that COVID 19 is not about a viral pandemic; it is about restrictions on travel and free expression and a new Federal bureaucracy gathering information about our whereabouts and our contacts, all imposed in the name of keeping us safe.

END OF PART 2

Link to Part 3
Link to Part 1

The Men who Speak for Science

The scientific community has something that American corporations and politicians want. It’s not technology or research. It’s not understanding or policy guidance. It’s the people’s confidence.

In recent decades, every institution in America has suffered a decline in public confidence. The press, the Federal government, religious institutions, banking, corporations, even academia: confidence levels are all in the 30-40% range. But public confidence in science is still over 90%.

Sources: Gallup, Gallup, Gallup, Pew

It follows that if you want to market a product or win an election, claiming that “science is on my side” is a powerful selling point. If you want to halt human colonization of the global ecosphere or move people out of their cars into public transportation, the backing of science is natural and maybe even honest. If you have more sinister goals (shutting down democracy, dividing a nation so it is politically dysfunctional, destroying small businesses and handing their markets to multinational giants), then claiming the imprimatur of science is probably the only way to con hundreds of millions of people into a program so profoundly contrary to their interests.

Look around. You see responsible citizens and good neighbors cooperating to curtail the spread of a deadly virus. But if you blink and look again, you may see the widest, fastest, most successful mass deception in the history of the world.

They’ve come so far because they have money and government and the press on their side. But they could not have captured so many minds without the support of a few people who claim to speak for science. Of course, Bill Gates and Anthony Fauci and Neil Ferguson are not representatives of a scientific consensus. But, curiously, they have not been laughed off the stage. The scientific community has not come together, 8-million strong, with a public statement that “These men do not speak for science.” And years of anemic public education have taught the populace to accept a scientific world view, rather than to trust their own evidence-based thinking.

We the People will not pull out of this nightmare on our own. The public will continue sleepwalking into medical martial law without a strong and credible counter-narrative. There is a powerful need for We the Scientists to come together and override the mountebanks who have hijacked the mantle of science.

It’s not news that science is subject to political and financial influence. Examples from the past must start with the pharma industry as the most egregious offender, but they also include FDA diet recommendations, health effects of cell phones, suppression of energy technologies, and past suppression of data about asbestos and tobacco and lead.

But never before 2020 have so few people with so little scientific credential claimed to speak for the scientific community as a whole; and never has the public been asked to modify our daily lives and sacrifice our livelihoods on such a scale.

Anecdotal Evidence

Biological weapons are an abomination. No government or research institute has even tried to convince the public that biowarfare research is a good idea, because it would so obviously stir more opposition than support.

After WW II, Nazi bioweapons programs were transplanted to the US, thanks to Operation Paperclip. The story is told in horrifying detail by Stephen Kinzer.

In the wake of international treaties and acts of Congress to outlaw bioweapons research, the US project was re-branded as pandemic preparation and transferred to civilian laboratories. The ruse was that in order to prepare for the next killer pathogen that may soon emerge from the wild, we must create laboratory-modified viruses so we can develop vaccines and treatments for them. The obvious flaw in this logic has been no obstacle to the bureaucratic momentum behind the project.

In 2005, 700 prominent scientists protested to the NIH, calling attention to the masquerade of biological warfare as public health [NYTimes]. Our largest and most prestigious association of scientists, the AAAS, issued a strong editorial denouncing biowarfare research. Though they did not succeed in halting the program, they created a public relations nightmare for NIH, and after Obama’s election, the NIH program was indeed curtailed, and had to be moved (temporarily) offshore.

The situation is very different in 2020. In April, Newsweek helped alert the public that Dr Fauci’s own NIAID was sponsoring gain-of-function research in Wuhan, China, that modified bat coronaviruses so they could infect humans. President Trump got wind of this, and ordered that gain-of-function research at NIAID be immediately defunded. I’m confident that scientists as well as the public were overwhelmingly supportive of this sensible, belated gesture.

But that was not the response of record. In short order, a prominent group of (geriatric? bamboozled?) scientists was reported to protest the move: “77 Nobel Laureates Denounce Trump Officials For Pulling Coronavirus Research Grant.” And last month, AAAS produced editorials in support of continuing this insanely dangerous program. Even in a year as bizarre as 2020, I never expected to be siding with Donald Trump against the institutions of science. I read and reread the article in Science before I was forced to conclude that Trump was wearing the white hat.

In the same issue, there was a second editorial denouncing Trump for “politicization of science” because he permitted research to go forward with plasma from recovered COVID patients as a treatment for present patients. This approach to treatment is logical, it has historic precedent, and by all means it should be tested. The only reason I can imagine for suppressing convalescent plasma is that, if it works, it obviates the need for a vaccine, and NIH as well as private investors have billions of dollars sunk in vaccines. I would not dare to make such a charge if I had not seen an even more blatant example of the same phenomenon in the suppression of chloroquine [ref ref ref ref].

I shouldn’t have to say this, but please don’t interpret my position here as any kind of general support for Donald Trump. I believe he is as corrupt and ignorant a president as I have known in my lifetime, though G.W. Bush gives him a run for his money. One of the unfathomable turns of politics this year is that so many Democrats have been so enraged by Trump’s ascent to power that even when he does the right thing, they leap to oppose him. Look at the Democratic response when he announced the withdrawal of troops from Afghanistan.

COVID-19 and the Perversion of Science

The political response to COVID, in the US and elsewhere, has been not only contrary to well-supported medical science, but contrary to common sense and contrary to past practice. In every respect, the response has been either ineffective or likely to make the situation worse. We started too late for a quarantine program to be effective; then we failed to protect the most vulnerable and failed to quarantine the sickest patients. In fact, we forced nursing homes to take in COVID patients, triggering a predictable tragedy. Ventilators remained the standard of care long after front-line doctors reported that they were killing COVID patients. Healthy young people are at very low risk for serious complications, and should have been out there earning our herd immunity; instead, they were kept terrified and locked up. The economy and all cultural and religious institutions were closed down, leading to tens of thousands of deaths of despair [video by Glenn Greenwald]. Masks and social distancing, the least effective protections, were endlessly promoted while simple, effective protections including vitamin D and zinc were actively disparaged by health authorities. And all the while, the most effective treatment of all, zinc + chloroquine, was criminally suppressed. Now, as deaths from COVID are down to a fraction of their April peak, government and media continue their campaign to terrorize us with a false narrative, while extending lockdowns, school closures, and masking into the indefinite future.

Call for a response by the scientific community

Most scientists are curious and open-minded, opinionated but cognizant of others’ opinions, the opposite of polemical. It is not a natural community from which to recruit activists. But the misrepresentation of science in this pandemic has been extreme, and it threatens the future of science and its role in guiding public policy. Many scientists have stood up to counter the COVID narrative. Many more have been censored, their videos taken down from social media. This is a time when we, the scientific community, have been called to come together and hold the misleadership of AAAS to account. There is an urgent need for scientists who have been shy about public stands in the past to come forward and speak out.


Over the next week, I will post details of ways in which I have seen science distorted in support of a government and corporate COVID agenda. 


Here are ten messages that are essential pieces of the standard COVID narrative, but which are unfounded in actual science. Stay tuned for a detailed rebuttal of each.

  1. “The origin of the SARS-CoV-2 virus was one of many random events in nature in which a virus jumps from one species to another.”
  2. “Chloroquine kills patients and is too dangerous to use against COVID”
  3. “The Ferguson model warned us of impending danger in time to take action and dodge a bullet.”
  4. “American deaths from COVID: 200,000 and counting”
  5. “New cases of COVID are expanding now in a dangerous Second Wave”
  6. “Masks and social distancing are keeping the virus in check in our communities”
  7. “Dr Fauci and the CDC are guiding our response to COVID according to the same principles of epidemic management that have protected public health in the past.”
  8. “Asymptomatic carriers are an important vector of disease transmission, which must be isolated if we are to stop the spread of COVID”
  9. “The lower death rates now compared to April are due to protective measures such as social distancing, mask-wearing, and limited travel.”
  10. “With enough resources, pharmaceutical scientists can develop a vaccine in a matter of months, and provide reasonable assurance that it is safe.”

END of Part 1
Link to Part 2
Link to Part 3

What I Learned from the Glucose Monitor

My fasting blood glucose has been creeping up over several years. (My fasting blood sugar is around 110, and HbA1c=5.7; fasting insulin=3.1, triglycerides=91.) Recently, I tried a continuous glucose monitor for the first time, to see what I could learn about eating and exercise habits that affect my blood glucose. The experiment led me to some reading and thinking that was worthwhile, but the results themselves were disappointing, limited (first) by flaws in the technology and (second) by wide variability that I could not trace to any of the usual behavioral correlates.


Why concern ourselves with blood sugar?

Insulin is generated in the pancreas after we eat, with a cascade of effects on the body. The primary short-term effect is to prevent glucose levels in the blood from getting too high: insulin signals the liver (and muscle and fat cells) to pull glucose out of the blood and store the energy, first as glycogen and then as fat.

Loss of insulin sensitivity is a primary hallmark of human aging. Most of the known life extension strategies in lab animals have to do with insulin in one way or another. For example, the worm gene daf-2 is the worm’s only insulin receptor, and mutating (weakening) the daf-2 gene doubles the worm’s lifespan. Life extension benefits of exercise and caloric restriction are thought to work, at least in part, through the insulin metabolism.

But glucose is also dangerous, and as we get older we are poisoned by excess sugar in the blood. High blood sugar leads to [list from Mayo Clinic]

  • Cardiovascular disease
  • Nerve damage (neuropathy)
  • Kidney damage (diabetic nephropathy) or kidney failure
  • Damage to the blood vessels of the retina (diabetic retinopathy), potentially leading to blindness
  • Clouding of the lens of your eye (cataract)
  • Foot problems caused by damaged nerves or poor blood flow, which can lead to serious skin infections, ulcerations, and in some severe cases, amputation
  • Bone and joint problems
  • Teeth and gum infections

So for long-term health, the name of the game is to keep blood sugar down with as little insulin as possible, hence preservation of insulin sensitivity is the target. Metformin is a well-studied drug for keeping blood sugar down without insulin. I have been taking it (irregularly) for the last several years, intermixed with berberine and Gynostemma (Chinese name: jiaogulan = 绞股蓝).

This reasoning plus direct evidence for life extension in rodents and indirect evidence of life extension in humans has led me to take metformin, though it is not without side-effects.

Long-term effects of metformin 

Metformin is a credible longevity drug, statistically associated with lower risk of cancer, heart disease and especially dementia in humans. Six years ago, this study laid the foundation for metformin as a longevity drug with the claim that people taking metformin had lower all-cause mortality, despite the fact that a population of type-2 diabetics was being compared to a healthier population. This finding inspired Nir Barzilai to raise support for the TAME study.

But metformin has its risks. A long-time contributor to this site, Dr Paul Rivas, pointed me to evidence that metformin can interfere with exercise metabolism. Paul notes his personal experience of lost peak performance while taking metformin. My own experience is consistent with this, though I have never done a rigorous A/B comparison. This study, demonstrating a small but consistent decrease in peak performance, appears to me to be well designed and analyzed. A plausible mechanism is the interference of metformin with mitochondrial function [ref ref]. This article claims that metformin suppresses synthesis of ATP, which is the reservoir of energy for immediate use in all cell types. Ben Miller has done the most direct and most relevant recent human experiments in this area, and his findings suggest the intriguing possibility that metformin blocks exercise adaptations almost completely in about half of individuals, but not at all in the other half. (If you want to know which half you’re in, you’ll have to wait for next year’s study.)

For the majority of Westerners who exercise little or not at all, metformin may reduce the long-term risk of age-related disease; but I know of no data comparing metformin to no metformin in the subset of people who exercise vigorously. Does metformin block the health benefits of exercise? Rhonda Patrick cites credible references on this subject as fast as she can get the words out, and her conclusion is that exercise is a better anti-aging program than metformin, and you really can’t have both.

Do glucose-control herbs also blunt the benefits of exercise?

I wrote a few years ago listing botanical alternatives to metformin. Much less research has gone into these herbs, so we must think theoretically about interference with the benefits of exercise.

Berberine works by a mechanism of action that overlaps metformin’s. Both metformin and berberine promote AMPK (which in turn promotes sugar burning). Both metformin and berberine inhibit mitochondrial Complex I (slowing the conversion of sugar to usable energy). There is tentative experimental evidence that (unlike metformin) berberine does not inhibit adaptations to exercise [ref ref ref].

Gynostemma is a Chinese herb popularized by Life Extension Foundation in their proprietary compound called AMPK Activator. In animal models and in humans, Gynostemma suppresses blood sugar and blood cholesterol. Like metformin and berberine, it works through AMPK, which appears to be a good thing. It is anti-inflammatory, and has a history in China as a cancer therapy, supported by mouse and in vitro studies. In rodent studies, Gynostemma has a beneficial effect on strength and endurance [ref ref ref]. The one study I’ve found on human diabetes shows modest benefits after 12 weeks. The only contraindication I have seen is that it increases insulin release (in vitro), which I believe to be pro-aging.

Is it more important to suppress postprandial spikes or to depress fasting glucose levels? 

HbA1c is a standard blood test for diabetes. It reflects average blood glucose levels over the previous 90 days or so (roughly the lifespan of a red blood cell). But the glycation of hemoglobin (as measured by A1c) happens disproportionately during brief glucose spikes, rather than during the much longer periods of near-average glucose. So it might be fairer to say that A1c summarizes peak glucose events over a 90-day period. And we might guess that the long-term health risks of high blood sugar are similarly more sensitive to the peaks than to the average.
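To see why peaks can dominate, here is a toy calculation. It assumes, purely for illustration, that glycation responds quadratically to glucose concentration; the real kinetics are more complicated, but any faster-than-linear response gives the same qualitative result.

    # Toy model: two 24-hour glucose profiles with the same average.
    # Glycation is ASSUMED quadratic in glucose, as an illustration only.
    from statistics import mean

    steady = [100] * 24                  # mg/dL, flat all day
    spiky  = [85] * 18 + [145] * 6       # same 24-hour mean: 100 mg/dL

    def glycation(profile):              # arbitrary units
        return mean(g ** 2 for g in profile)

    print(mean(steady), mean(spiky))     # 100 100: identical daily averages
    print(glycation(steady))             # 10000
    print(glycation(spiky))              # 10675: ~7% more glycation from spikes

With a steeper nonlinearity, the gap widens further; that is the intuition behind reading A1c as a record of peak events rather than a simple average.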

I believe that apoptosis is on a hair trigger as we age, and part of the reason for this is too much p53. This study links p53 activation to postprandial glucose spikes, rather than to high average glucose levels. This study links deterioration in endothelial function (related to arterial disease) with glucose spikes. The same paper lists ROS and oxidative stress as additional risks.

It has long been established that high fasting blood sugar is associated with cardiovascular risk. Of course, there is also an association with obesity and type 2 diabetes, but for these, it is natural to think of fasting blood sugar as the result, rather than the cause.

Chris Kresser says the best indicator of metabolic health is blood glucose 2 hours after a meal. If you can bring your blood glucose down to normal within 2 hours after eating, your insulin sensitivity is good. For me, unmedicated, it took 3 hours after dinner, but less than 2 hours after breakfast. Either berberine or metformin tamed the after-dinner spikes within 2 hours.

Marker                                    Normal    Pre-diabetes    Diabetes
Fasting blood glucose (mg/dL)             <100      100-125         ≥126
OGTT / post-meal (mg/dL after 2 hours)    <140      140-199         ≥200
Hemoglobin A1c (%)                        <6.0      6.0-6.4         >6.4

Kresser claims that these guidelines from the American Diabetes Association are not strict enough, and that statistics show increased future risk of diabetes even for people in the ADA “normal” range. But he cites Petro Dobromylskyj, who makes an exception for anyone on a low-carb diet (how low isn’t specified). Paradoxically, low-carb diets are claimed to be healthy, even though they decrease insulin sensitivity. I have been unable to make sense of this.

Kresser emphasizes that all numbers should be interpreted in the context of a person’s other lifestyle and health indicators. In people who are active and not overweight, he is not inclined to worry about statistics in the “prediabetic” range. (I take comfort in this personally, and who can say if I’m fooling myself?) But I can learn something from the way my glucose stats respond to medications, eating and exercising, whether or not I believe the absolute levels are concerning.

Writing in Science Magazine last year, Charles Piller reviewed the ADA guidelines and found a consensus in the opposite direction: that they were probably too strict, and unnecessarily worrisome to a great many people. By ADA’s definition, 80 million Americans are “pre-diabetic”, about a third of the adult population. The conflict really is not over the statistics but the interpretation. You can say either “People with A1c levels above 6 are at increased risk of progressing to diabetes” or, equally well, “Most people with A1c levels less than 6.4 will never develop diabetes.” Both statements are true.

As promised: my experience

The Freestyle Libre was very easy to use and set up. I followed the instructions and used a spring-loaded device to insert the monitor behind my biceps. It was painless. There’s a tiny wire that goes a few millimeters into the skin and an adhesive covering with a button containing the electronics.

The wearable button stores data for up to 8 hours. The other part of the kit is a reader that downloads data every time you bring the reader within an inch or two of the button. As long as you take a reading every 8 hours or less, you won’t lose any data. And you can do it as often as you like, to get real time feedback on your glucose state.

The wearable button ($45) is meant to last two weeks, and then it must be discarded. My insurance (Blue Cross Medicare Advantage) wouldn’t pay for it because I didn’t have a diagnosis of diabetes. I found this out only after several trips to the drug store, interspersed with phone calls to Blue Cross, where I got repeated assurances that it would be covered. The reader ($85) can be reused. Apparently, it doesn’t do anything that a cell phone app couldn’t do, but Abbott (parent company of Freestyle) has arranged it so that you can only use the cell phone app if you purchase the reader.

To analyze historic data, you can use capabilities built into the phone app, or plug the reader or cell phone into a computer, using a USB cable. The data is uploaded to a web site containing analysis tools and an option for creating a CSV file for more detailed manipulation in a spreadsheet. (The download button is not so easy to find, but I called Abbott’s tech support number, and connected without excessive wait time to a friendly and knowledgeable technician.)
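For anyone who wants to go beyond the built-in tools, the CSV opens easily in Python. Here is a minimal sketch with pandas; the column names below are illustrative, so check the header row of your own export.

    # Sketch: summarize a Freestyle Libre CSV export with pandas.
    # Column names are illustrative; check your own export's header row.
    # Exports typically begin with a metadata line, hence skiprows=1.
    import pandas as pd

    df = pd.read_csv("glucose.csv", skiprows=1,
                     parse_dates=["Device Timestamp"])
    glucose = (df.set_index("Device Timestamp")["Historic Glucose mg/dL"]
                 .dropna()
                 .sort_index())

    print(glucose.describe())                                 # overall stats
    print(glucose.resample("D").agg(["mean", "std", "max"]))  # daily summary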

My intention was to vary the glycemic content of my meals, my exercise schedule, eating and fasting schedule, and the medications I was taking (metformin and berberine) to learn what I could about glucose management. The first day I fasted, and I was concerned to see that all day my fasting glucose ranged between 110 and 120. (For reference: the standard healthy range for fasting glucose is 70-100. Below 70 is “hypoglycemia”. Above 100 is “pre-diabetes” and above 125 is “type 2 diabetes”.)

I ate a meal, and glucose shot up to 179 before bedtime, only gradually coming back down during the night. As it turned out, 179 was my high for the week.

The data cut off after 10 days, though the monitor is supposed to have a lifetime of 14, because it fell off my arm. I looked for patterns in my data, and was able to learn only five things:

  • Glucose rose after a meal. (I didn’t get as far as being able to distinguish a meal with more carbs from a meal with more fiber or protein.)
  • Glucose also rose, to a lesser extent, when I exercised.
  • Taking metformin with a meal substantially reduced the glucose spike after the meal and raised the glucose trough a few hours later. The range (stdev) was affected, but not the average glucose.
  • Taking berberine did not have this immediate effect.
  • There was a strong downward trend over the 10 days. I interpreted this to mean that the monitor was gradually loosening from my skin, probably because I am a long-distance swimmer. (A way to quantify this drift is sketched below.)
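A least-squares line through the raw readings puts a number on the drift. A sketch, reusing the glucose series from the pandas example above:

    # Sketch: estimate sensor drift as a linear trend in mg/dL per day.
    # 'glucose' is the pandas series from the previous sketch.
    import numpy as np

    days = (glucose.index - glucose.index[0]).total_seconds() / 86400
    slope, intercept = np.polyfit(days, glucose.values, 1)
    print(f"trend: {slope:+.1f} mg/dL per day")   # negative = drifting down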

(In a long phone support session, an Abbott representative acknowledged the reality of my experience: the monitor can loosen over time, resulting in readings that are anomalously low. They were happy to replace the monitor, and advised me against long periods of swimming.)

Unresolved

I was left wondering about all the things I had wanted to discover at the beginning of my experiment:

  • What kinds of meals minimize the glucose spike? High fat? High fiber? High protein?
  • Could exercise before or after the meal help tame the spike?
  • Could I detect short- and long-term effects of metformin, berberine, and jiaogulan?

The thing that impressed me most was the natural variability of blood sugar, changing from hour to hour, uncorrelated with either food or exercise. I trust the body knows what it’s doing. “Le corps a ses raisons que la raison ne connaît pas.” (“The body has its reasons, which reason does not know,” after Pascal.)

I hope to try the monitor for 2 weeks again when swimming season is over.

In the meantime, I am taking a modest, common-sense approach. I am going to leave out metformin but continue daily exercise and low doses of berberine and Gynostemma, lightening my evening meal and ending the day’s food 3 hours before bedtime.

Walking burns calories (pulls sugar from the blood) 3 to 5 times as fast as sitting, and walking after a meal feels like a natural and pleasant thing to do. My doctor recommends it. I’m going to try walking half an hour after breakfast and dinner, pending my next experiment with the CGM.

Politics Influences the Science of COVID-19

Many of us are still shell-shocked by the changes in our lives that have been imposed this spring. We’re reacting to each unexpected event as it comes. But to anyone who has stepped back to make sense of this web of contradictory messages that pour out of our newsfeeds, it is clear that the government agencies and corporate news media are slanting their message toward fear. I am particularly concerned when they do this at the expense of honesty. This is a moment for the scientific community to be engaging in spirited dialog among diverse voices. Only with open debate can we hope to shed light on the momentous public policy decisions that are being made, decisions directing our culture and global economy into unexplored territory. But instead of robust debate, what I see is a monolithic message, and censorship of the few brave scientists who dissent from that message. I’m ashamed to say that the scientific community has been part of the problem.


I’m writing here about two issues: 

(1) Numbers reported by CDC have been gamed to make it appear that America is in the second wave of a pandemic. Instead of reporting COVID deaths, they began reporting COVID cases. Then they conflated recovered individuals (who test positive for antibodies) with current cases (who test positive for the active virus). No wonder numbers are rising!

(2) A new report featured prominently in Nature purports to show that lockdowns have stemmed the spread of the virus and have saved lives. The article is by the same team whose flawed models produced apocalyptic predictions last March that justified lockdowns in Europe and the US. The new computer model assumes from the start that the number of COVID deaths would have expanded exponentially from their March levels, and that social distancing is the only factor responsible for lower death rates. That is, it assumes exactly what it purports to prove. Where is accountability? Why is this perspective promoted in the world’s most prestigious journal, while reasonable doubts are swept aside?


Part One—CDC reporting

The global death rate from COVID-19 is down to about 4,000 per day. It is not even among the top ten causes of death: COVID kills fewer people than traffic accidents, fewer than diarrhea. Even among deaths from respiratory infections, COVID is now a minority.

In the US, daily COVID deaths peaked in April, and are now down to 1/10 the peak rate, at about 400/day. COVID is now the sixth leading cause of death in America, but it no longer registers as a bump in total mortality.

But the headlines claim we are in the midst of a “second wave”, based on reported numbers of cases.

Deaths from COVID are being over-reported. Hospitals are incentivized to diagnose COVID, with Medicare reimbursement rates that are higher than for other diseases, and guaranteed coverage from every major insurer. Doctors are being instructed to report COVID as a cause of death when no testing has been done, and when chronic illnesses contributed to the outcome. And with all this, the number of deaths continues to fall, even as the reported number of cases rises. Why is this?

In part, the lower fatality rate is real. Doctors are learning from experience how to treat the disease. More chloroquine and zinc, less intubation. Like all viruses, this one is evolving toward greater contagion and lower lethality. But the most important explanation is an artifact in the way COVID cases are being reported. Before May 18, the “case count” was based on tests for the live virus, and counted only sick people. Then the definition was changed to count people who tested positive either for the virus or for antibodies to the virus. The latter group is mostly people who have recovered from COVID, or who developed antibodies on exposure. As the number of recovered patients increases, of course the rate of positive tests will increase.
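A toy illustration of the arithmetic (the numbers are invented for illustration, not surveillance data): pooling antibody positives with live-virus positives turns a shrinking epidemic into a growing “case” count.

    # Toy numbers only: how pooling antibody tests inflates "cases".
    active    = [0.05, 0.04, 0.03, 0.02]   # true active-infection rate by month
    recovered = [0.05, 0.10, 0.15, 0.20]   # cumulative antibody-positive pool

    for month, (a, r) in enumerate(zip(active, recovered), start=1):
        print(f"month {month}: virus-only {a:.0%}   pooled {a + r:.0%}")
    # virus-only falls 5% -> 2% while the pooled figure rises 10% -> 22%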

Part Two—Models that “prove” lockdown has saved lives

In the past, Neil Ferguson’s group at Imperial College London has produced scary computer models that overestimated the epidemics of Mad Cow Disease, Avian Flu, Swine Flu, and the 2003 SARS outbreak. In March, his group’s computer model was the justification for England, Europe, and America to shut down economies, prevent people from talking and meeting, prohibit concerts and theater and church and every kind of public gathering, throw tens of millions of people out of work, and deny the rights to freedom of assembly that are fundamental to democratic governance. His manuscript was not even peer reviewed, but only posted on a university server. Even before its details and assumptions were made known, the integrity of the model was assailed by other experts, including Stephen Eubank (UVA Biocomplexity Institute) and Yaneer Bar-Yam (New England Complex Systems Institute). After details of the assumptions were revealed at the end of April, the model was widely scorned by real experts (e.g. Andrew Gelman) and self-appointed pundits (Elon Musk) alike.

I have enough experience with computer models to know that results are often highly leveraged with respect to details of the input. Sensitivity analysis is essential for interpreting results, but is almost never done. Too often, the output is reported without the qualification that small changes to the input produce very different results.
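The leverage is easy to demonstrate with the simplest textbook model, nothing as elaborate as Ferguson’s. In this minimal SIR sketch, a 10-20% change in the assumed R0 changes the number infected by a fixed date severalfold, because errors compound through the exponential phase.

    # Sketch: leverage of a textbook SIR model on its input R0.
    # Not Ferguson's model, just the simplest possible illustration.
    def infected_by_day(r0, days=45, gamma=1/7, n=1_000_000, i0=100):
        s, i = (n - i0) / n, i0 / n        # susceptible, infected fractions
        beta = r0 * gamma                  # transmission rate
        for _ in range(days * 24):         # hourly Euler steps
            new = beta * s * i / 24        # new infections this hour
            s, i = s - new, i + new - gamma * i / 24
        return round(n * (1 - s))          # cumulative ever-infected

    for r0 in (2.0, 2.2, 2.4):
        print(r0, infected_by_day(r0))     # projections span a huge range

Serious models are more sophisticated than this sketch, but the leverage problem is inherent to the exponential regime, which is exactly why sensitivity analysis matters.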

Against this background, the high-profile publication in Nature of Ferguson’s recent work is suspicious. I would have thought he had no credibility left among serious modelers of epidemiology, but I have ceased to be surprised when politics trumps competence for access to the most prestigious publication venues.

The Ferguson Article Vindicating Lockdown

They analyze the spread of COVID in 11 European countries this spring, averaging over the different countries but not contrasting their different local strategies. They take death counts as a surrogate for case counts, because reported case counts are even more unreliable than death counts. But (one of several crucial failures) they don’t apply a time lag between case counts and death counts.

They take as input for each country the dates on which each of three different isolation strategies was implemented. They assume that the virus would have spread exponentially but for these measures, and credit the isolation measures with the entire difference between reported death rates and the theoretical exponential curve.

They conclude that Europe has dodged a bullet, that fewer than 4% of people had been infected, and, by implication, that the lockdown has saved the other 96%. They imply, but don’t state explicitly, that there would have been about 4 million deaths in Europe instead of the ~150,000 reported when the paper was written.

It is obvious that lockdown and social isolation slow the spread of the disease, but not obvious that they affect the eventual reach of the disease. Thus it is an open question whether the public policy prevented or only delayed deaths from COVID. This question can be addressed most directly by comparing regions that were locked down with regions that remained open. Instead of doing this, the Ferguson group lumped all regions together and compared their results with an unrealistic scenario in which the exponential curve would have expanded to infect every susceptible person in Europe.

Two schools of thought

There are fundamentally two hypotheses about the epidemiological events of this spring: either the number of people exposed has been high and the fatality rate low, or else the number of people exposed has been low and the fatality rate high. People in the first camp argue that the exposed population is over 50% in Europe and America, approaching or exceeding herd immunity, with an infection fatality rate around 0.0005. In the second camp, people estimate the population exposure about ten times lower (5%) and the fatality rate correspondingly higher (0.005).
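A back-of-envelope check shows why mortality data alone cannot distinguish the camps: per-capita deaths constrain only the product of the two numbers, not their factorization.

    deaths / population = (fraction exposed) × (infection fatality rate)
    Camp 1:  0.50 × 0.0005 = 2.5 per 10,000
    Camp 2:  0.05 × 0.005  = 2.5 per 10,000

Both factorizations reproduce the same death toll; only independent evidence, such as random-sample antibody surveys, can separate them.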

The story told by people in the first camp is that social distancing slowed but did not prevent transmission of the disease through the population. By now, the presence of the virus is waning because people in many places have already been exposed.

The story of Ferguson and others in the second camp is that social distancing actually stopped the spread of the virus, so that most people in Europe and America have never been exposed. It follows that if we ease restrictions, there is another wave of infections ahead, potentially 20 times larger than the first wave.

The deep flaw of the recent Ferguson paper is that his team does not consider the first scenario at all. Built into their model, they assume that population level immunity is negligible, and the only thing that has slowed spread of the virus has been social distancing. This is where they put the rabbit in the hat.

If they had considered the alternative hypothesis, how would it have compared?

To choose between the two hypotheses, we might compare a region before and after lockdown, or we might compare regions that locked down with regions that didn’t.

In a preprint response to Ferguson, Homburg and Kuhbandner do a good job with the first approach. They take Ferguson to task for not considering the immunity that spreads through the population along with the disease. They show that exponential expansion had already slowed in England before the effect of the lockdown on mortality data could have been felt.

Lockdown went into effect in Britain on March 23. If lockdown had a benefit, it would be in preventing new cases, and its effect on the death rate would show up about 23 days later (April 14), because 23 days is the median time to fatality for those patients who die of COVID. In the graph, we see that the death rate had already leveled off by April 14.

On this log graph, an exponential increase would appear as a straight line sloping upward. It’s clear that the exponential expansion phase ended long before the lockdown could have had any effect. Not only weren’t the numbers expanding exponentially, but the death rate had already started to decline before April 14, when the effect of lockdown was expected to kick in. The authors state they performed the same analysis for 10 other countries in the Ferguson study with similar results, though they show the graph for Great Britain alone.

“We demonstrate that the United Kingdom’s lockdown was both superfluous and ineffective.”
[Homburg and Kuhbandner]
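Their before/after check is easy to reproduce for any country from published daily death counts. On a log scale, exponential growth is a constant slope, so the day-to-day slope of log(deaths) shows when the exponential phase actually ended, to be compared with the lockdown date plus 23 days. A sketch, where “deaths.csv” is a hypothetical two-column file of date and daily deaths:

    # Sketch: locate the end of exponential growth in daily death counts.
    # "deaths.csv" (columns: date, deaths) is a hypothetical input file.
    import numpy as np
    import pandas as pd

    d = pd.read_csv("deaths.csv", parse_dates=["date"]).set_index("date")
    smoothed = d["deaths"].rolling(7, center=True).mean()   # 7-day smoothing
    growth = np.log(smoothed).diff()                        # slope of log(deaths)
    print(growth.loc["2020-03-01":"2020-05-01"])
    # Constant positive slope = exponential phase; note the date the slope
    # starts falling, and compare it with lockdown + 23 days.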

Here in the US, there was a natural experiment when people emerged into the streets at the end of May to protest racism and police brutality. Social distancing in this environment was impossible. Allowing for a 23-day lag, we should have seen a surge in US mortality starting in mid-June. In the plot below, there appears to be a leveling off of the death rate since mid-June, but no new disaster. This alone is strong evidence that the US has substantial herd immunity, and that most of the population has already been exposed to the virus.

A second way to distinguish between the two hypotheses is to compare regions that locked down with regions that didn’t. One of their 11 European countries was Sweden, where the economy was kept open and quarantine was limited to people who were symptomatic with COVID. It is a glaring defect in the Nature paper that Sweden is lumped in with the other ten countries when it should have been contrasted. In fact, the mortality curve for Sweden was typical for the other ten countries, even as commercial and cultural institutions in Sweden continued normal operations. Sweden has had a higher death rate than Austria, Germany, France, and Denmark, but lower than Belgium, Italy, Spain, or UK. There is no evidence that Sweden’s COVID mortality was higher for having bucked the trend to remain open, but some indication that Germany and Austria had particularly effective containment policies.

We can ask the same question of the different states in the USA. Comparing death rates from COVID in the 42 states that locked down with the 8 states that did not, this article finds that the death rate in locked-down states was 4 times higher. (Caveat: there was no correction for urban vs rural or for demographic differences.) The author concludes, “With the evidence coming in that the lockdowns were neither economically nor medically effective, it is going to be increasingly difficult for lockdown partisans to marshal the evidence to convince the public that isolating people, destroying businesses, and destroying social institutions was worth it.”

I’ve prepared a comparison of all states ranked by COVID mortality which you can view here.
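For readers who want to reproduce such a ranking themselves, here is a sketch of the computation. The file and column names are hypothetical.

    # Sketch: rank states by per-capita COVID mortality and contrast
    # lockdown vs non-lockdown states. "states.csv" is a hypothetical file
    # with columns: state, deaths, population, locked_down (True/False).
    import pandas as pd

    s = pd.read_csv("states.csv")
    s["deaths_per_100k"] = 1e5 * s["deaths"] / s["population"]
    ranked = s.sort_values("deaths_per_100k", ascending=False)
    print(ranked[["state", "deaths_per_100k", "locked_down"]].to_string())
    print(s.groupby("locked_down")["deaths_per_100k"].mean())  # crude contrast

As noted above, a serious comparison would also adjust for urban density and demographics.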

The Politics of COVID

In 1933, Roosevelt told America we had nothing to fear but fear itself. It is common for government leaders to dispel panic because they know that a nation can better thrive when people feel confident and secure. Even G.W. Bush responded to the terror attacks of 9/11 by telling the American people, “keep shopping.” On the other side, despots sow fear in their subjects when they want to consolidate autocratic power, and when they want to stir up fervor for war.

It is clear from messaging in the corporate media that the COVID pandemic is being hyped to create more fear than is warranted.

  • The fatality rate was vastly overestimated initially, and even now is probably overestimated at 0.002 to 0.005
  • Doctors were told to report deaths from COVID without proof that COVID was the cause
  • Reimbursement incentives for hospitals to diagnose COVID
  • Repeated warnings of a second wave, which has not materialized.
  • Suppression of tests for well-studied, cheap treatments (chloroquine) while jumping into large-scale tests of vaccines that have not yet been tested on animals.
  • No mention of vitamin D, which is a simple, cheap, and effective way people can lower their risk [ref ref ref]. Our own CDC is silent, while the British equivalent agency actively discourages vitamin D for COVID prevention.
  • The biggest scandal of all is that lockdown has been authorized in the US and elsewhere based on hypothetical safety benefits with no consideration of costs. Our health is affected by our communities, our cultural lives, our social lives, and our livelihoods. [Yale epidemiologist David Katz politely makes this point.]

Shamefully, the scientific community has been complicit in the campaign of fear. A handful of courageous doctors and epidemiologists have been outspoken. In addition to Katz, John Ioannidis and Knut Wittkowski are best known to me. But the most trusted journals continue to publish articles that are based on politics rather than sound science.

Who is benefiting from the international panic? Who is behind the media campaign and the distortion of science, and what is their intention?

I invite people who are more politically astute than I to speculate on these questions.