This conclusion banishes the “fountain of youth” to the limbo of scientific impossibilities where other human aspirations, like the perpetual motion machine and Laplace’s “superman,” have already been placed by other theoretical considerations. Such conclusions are always disappointing, but they have the desirable consequence of channeling research in directions that are likely to be fruitful.
No Truth to the Fountain of Youth:
…no purported anti-aging intervention has been proved to modify aging…We find it ironic that a phony anti-aging industry is proliferating today…Some [researchers] assert that aging’s complexity will forever militate against the development of anti-aging therapies.
“Aging is mathematically inevitable. Like, seriously inevitable. There’s logically, theoretically, mathematically no way out.”
This new study is based on statistical analysis of human and primate populations. Among the 42 authors (!) who signed it, I am chagrined to find the name of J. W. Vaupel. Et tu, James? Over several decades, Vaupel has been the optimist of demography, telling us that somewhere in the world, human lifespan is always continuing to increase, as it has done since 1840, at the rate of about 1 year of new lifespan for every 4 calendar years that pass. For the first 130 years of this advance, the improvement in lifespan was predominantly about preventing infant mortality and combatting infectious disease. But since about 1970, lifespan improvements have continued to benefit the elderly. My informal index is the number of 80-year-olds I see on the tennis courts. Vaupel and his former student, Annette Baudisch, also were prime movers in a comprehensive 2013 study of Aging Across the Tree of Life, which catalogued species that don’t age at all for decades at a time, and others that become demographically younger.
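Vaupel’s 1-year-per-4-years rule is simple enough to state as a formula. A minimal sketch; the 1840 baseline of 45 years for record (best-country) life expectancy is my illustrative assumption, not Vaupel’s exact series:

```python
def record_life_expectancy(year, baseline_year=1840, baseline=45.0, rate=0.25):
    """Linear rule of thumb: record life expectancy gains ~1 year per 4 calendar years."""
    return baseline + rate * (year - baseline_year)

# Under these assumed numbers, the rule projects:
# 1840 -> 45.0 years, 1970 -> 77.5 years, 2020 -> 90.0 years
projection_2020 = record_life_expectancy(2020)
```

The striking part of the rule is its linearity: the trend has shown no sign of the plateau that a hard “wall” on lifespan would predict.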
This new computer model—like all computer models—is a translation into mathematical language of a set of assumptions about a natural phenomenon. The crank turns, and out pops a prediction. The sleight-of-hand, the conjuror’s trick, is that we are tempted to look at the mathematical machinery to see where these predictions come from. But equally important is to look at the assumptions on which the mathematics is built.
In this case, the assumption is that natural selection has been trying to maximize lifespan, because the longer an individual lives, the more opportunity it has to reproduce. And reproductive output is the measure of success in neo-Darwinian logic.
But if we look at the biology of aging, it’s clear that evolution has not been trying to maximize lifespan. As we get old, genes are turned on that destroy us with inflammation and autoimmunity, and this epigenetic change shows every sign of being under the body’s control. As we get old, genes are turned off that rebuild and protect the body against chemical damage, most famously from free radicals. Again, it appears that this is deliberate. It is a product of natural selection, not a constraint on natural selection.
How can this be? How can a variety with lower reproductive success prevail in evolutionary competition against other varieties with higher reproductive success? This question has been the primary focus of my own research for 25 years, and my answer is that aging is necessary to preserve the stability of ecosystems.
My answer may be right or wrong—it is still a minority opinion. But what is clear is that the lifespan of almost all living things is under epigenetic control. That is, aging is a programmed phenomenon. Aging is not the accumulation of damage. Aging is not the body wearing out. Rather, aging derives from processes of self-destruction that are under the body’s control.
In this perspective, aging looks a good deal less inevitable than this article claims. And indeed, there is cutting-edge science that appears to be turning back the clock of aging, turning old rats into young rats.
Specifically, what does the new study find? Looking at populations of humans and other primates, they find that longer average lifespans are associated with less variability in lifespan. In other words, the short-lived primates have deaths that are spread out, with some living much longer lives; but in the longer-lived primates, age-at-death is clustered up near the high end. This gives the appearance of some kind of wall at the high end of lifespan.
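This compression pattern is just what standard mortality models produce when background and early-life mortality shrink while the rate of aging stays fixed. A minimal numerical sketch under a Gompertz–Makeham hazard; the parameter values are illustrative assumptions, not fitted to the study’s primate data:

```python
import math

def death_stats(a=3e-5, b=0.1, c=0.0, dt=0.05, tmax=120):
    """Mean and SD of age at death under a Gompertz-Makeham hazard a*e^(b*t) + c."""
    surv, m1, m2, t = 1.0, 0.0, 0.0, 0.0
    while t < tmax:
        h = a * math.exp(b * t) + c              # hazard at age t
        pdeath = surv * (1 - math.exp(-h * dt))  # probability of dying in [t, t+dt)
        age = t + dt / 2
        m1 += pdeath * age
        m2 += pdeath * age * age
        surv -= pdeath
        t += dt
    return m1, math.sqrt(max(m2 - m1 * m1, 0.0))

# Heavy age-independent background mortality: shorter, more spread-out lives
lo_mean, lo_sd = death_stats(c=0.02)
# Background mortality removed: longer mean lifespan, deaths clustered at the high end
hi_mean, hi_sd = death_stats(c=0.0)
```

Removing the age-independent term raises the mean age at death while shrinking its standard deviation; deaths pile up near the “wall” set by the exponential Gompertz term.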
And where, specifically, is the flaw in the new paper?
“Understanding the nature and extent of biological constraints on the rate of ageing and other aspects of age-specific mortality patterns is critical…”
The implicit assumption about “biological constraints” is that the constraint is physical, or that in some way it is beyond the reach of evolution. The assumption is that natural selection has pushed against these constraints, and hit a brick wall. The alternative view (a view that is shared by some of the most prominent researchers who have studied physiology and biochemistry of aging) is that these “constraints” are actually baked in by natural selection itself. Far from being constraints on evolution, these constraints are actually the product of evolution. This is to say that the constraints are not fundamental physical limits, but features built into the epigenetic cycle of growth, development, and aging. The “constraints” become malleable as we tinker with the signaling mechanism by which the body imposes aging on itself.
A crucial caveat
I believe that as we understand more about epigenetics and the signaling mechanisms that control biological age, it will become increasingly feasible to manipulate lifespan. Indeed, we’re already doing this to a huge extent in lab worms and, to a good extent, in rodents.
But evolution isn’t so dumb. Limits on lifespan have been put in place to help protect against population overshoot. And (my opinion) humans are already in a state
of severe population overshoot, in the context of sustainable limits of Earth’s biosphere. I believe that whether or not biological science succeeds in further extending lifespan, it is an urgent matter for survival of our species (and many other species) that we shrink the human footprint on the biosphere and on the soil, water, and atmosphere that support Earth’s ecology. I think that living well with less is a relatively simple technical problem. We need only implement all currently known efficiency improvements in the use of resources, and continue to discover new ones. But it is a huge political problem that we have barely begun to confront, and I don’t have any good ideas about how to make these changes a political reality. I’m going to stick to the science, and count on others who are more adept at politics than myself. The more we succeed in extending human lifespan, the more urgent the move toward sustainable agriculture and energy-efficient technologies becomes.
Long before NAC saved the life of someone dear to me, it was a staple of my supplement stack. I notice that N-Acetyl Cysteine has now become my favorite supplement, the one I reach for 3 or 4 times a day when I pass the kitchen cabinet. It’s been such a gradual process that I don’t remember the reasons that installed NAC in my subconscious as a reliable life-extension aid. I’m taking this opportunity to review the literature.
In the 1980s and 1990s, the oxidative theory of aging reached its pinnacle, and anti-oxidant supplements were all the rage. But trials of anti-oxidant supplements failed time and again, and often they shortened the lifespans of the test animals. Aging of animals turns out to be more complicated than rusting of iron. Part of the complication is hormesis: ROS (reactive oxygen species), particularly H2O2, are part of the signaling cascade that turns on hormetic protections.
One anti-oxidant that survived the massacre was glutathione. I continue to believe that glutathione promotes health, despite its close association with H2O2. Supplementing with N-Acetyl Cysteine (NAC) is the commonly-recommended strategy for raising glutathione levels, and it seems to work. The best promise of NAC (through glutathione) is in preserving our mitochondria, which weaken and decline in number as we age.
Glutathione is a tripeptide, a mini-protein consisting of the 3 amino acids glutamate, cysteine, and glycine.
Our metabolisms (like those of all eukaryotes) use redox reactions to store and deploy energy, because redox chemistry is far more energy-dense than the covalent chemistry of organic molecules. Energy metabolism has waste products that must be neutralized before they latch onto delicate organic molecules and damage them. There are various toxic waste products (ROS), and various pathways for neutralizing them. The last stage is always H2O2, which must be reduced to water. This is the primary job of glutathione (and also of catalase); but unlike catalase, glutathione can perform diverse other detoxifying roles as well.
Glutathione acts like a rechargeable battery. Its reduced form (GSH) is available to detoxify H2O2, after which it exists in an oxidized form (GSSG), which must be “recharged”. GSSG is just two molecules of glutathione linked together by a disulfide bond; a more complex protein called glutathione reductase comes along to separate the two molecules, recharging the battery. Another supplement, alpha lipoic acid (ALA), is also helpful in recycling GSSG back to its useful form, GSH. Cells sense the ratio of GSH to GSSG to determine whether they are in trouble. If the ratio falls too low, the cell turns on NFkB [ref], which, in turn, initiates an inflammation cascade. A healthy cell has GSH:GSSG in a ratio of about 100 to 1, but a severely stressed cell can have more GSSG than GSH. Low GSH:GSSG ratios can send a cell down a senescence pathway, terminating in apoptosis.
Glutathione’s importance is underscored by its high concentration in every cell in the body. A typical human cell burns glucose for fuel, yet its cytoplasm contains roughly as much glutathione as glucose.
Glutathione levels normally decline with age.
In addition to its anti-oxidant activity, glutathione is now known to have many other roles, including DNA repair, protein synthesis, and chemical signaling. These functions may be even more important than detoxifying H2O2. Most important for slowing age-related degeneration, glutathione has anti-inflammatory effects [ref], especially in the lungs [ref], which may be why NAC has been helpful in protecting against COVID [ref]. It is well-established that severe COVID depletes glutathione, especially in late stages involving a cytokine storm [ref].
Table 1 Functions of Glutathione
Direct chemical neutralization of singlet oxygen, hydroxyl radicals, and superoxide radicals
Cofactor for several antioxidant enzymes
Regeneration of vitamins C and E
Neutralization of free radicals produced by Phase I liver metabolism of chemical toxins
One of approximately 7 liver Phase II reactions, which conjugate the activated intermediates produced by Phase I to make them water soluble for excretion by the kidneys
Transportation of mercury out of cells and the brain
Regulation of cellular proliferation and apoptosis
Vital to mitochondrial function and maintenance of mitochondrial DNA (mtDNA)
Fruits and vegetables are a substantial source of dietary glutathione [ref], but bioavailability is low, so most of the body’s glutathione is home-made.
Can you just take glutathione pills? Yes, but they are expensive and poorly absorbed. Does supplementation with NAC really increase availability of glutathione where it is useful? Evidence is good [ref, ref, ref]. Just two years ago, I advised readers of this blog to eat glutathione, but I’m backing off from that suggestion now, because I think NAC supplementation is not just cheaper but more effective.
“The rate-limiting step of glutathione synthesis does not appear to be the activity of either enzyme under normal conditions, but rather the provision of one of the amino acids (L-cysteine) making up the tripeptide.” [ref]
Agricultural and industrial chemicals, ubiquitous in our environment, are not the primary cause of aging, but they cause severe symptoms for some of us, and may be subtly degrading metabolism for all of us. Glyphosate has become impossible to avoid. Glyphosate, mercury, and other chemicals increase the body’s need for glutathione, because glutathione is essential to the body’s detox machinery. IBS, Crohn’s disease, and other inflammation syndromes also increase the need for glutathione, and can potentially benefit from NAC supplementation.
What benefits of NAC have been documented in humans?
Best evidence is for preservation of the eyes with age. This is from an article on eye health and aging by Bill Sardi:
Numerous studies link glutathione with the prevention of cataracts, glaucoma, retinal disease and diabetic blindness. Here is a sampling of the evidence concerning glutathione and eye health.
Glutathione has been shown to detoxify the aqueous fluid of the inner eye [ref] and may help maintain adequate fluid outflow among glaucoma patients. [ref, ref]
Glutathione exists in unusually high concentrations in the lens and is essential to maintain its transparency. [ref] However, glutathione levels decline in the lens with advancing age; the decline is especially rapid prior to cataract formation. [ref]
NAC has been observed to have neuroprotective properties, but whether it lowers the risk of dementia or Parkinson’s disease (PD) is still not established [ref]. Emerging evidence suggests that NAC supplementation protects the brain in the event of ischemic stroke [ref]. Intravenous glutathione has been tried as a therapy for Parkinson’s disease, with unimpressive results. Psychiatric applications are still under development: NAC has shown promise for treating addiction, Alzheimer’s disease, PD, autism, OCD, schizophrenia, depression, and bipolar disorder [ref].
Infusion of NAC increased endurance in trained cyclists [ref].
Intravenous NAC is used in ERs for detoxification of acetaminophen. It is also used for heavy metals [ref, ref], chloroform, carbon monoxide, and other poisons. [ref]
Old mice have half as much glutathione in their muscles, compared to young mice [ref].
Life extension in lab animals, including rodents
There are many studies in worms and flies demonstrating life extension via NAC. There is just one study in mice [ref], but it was so successful that I don’t know why it hasn’t been replicated: a 24% increase in mean lifespan and a 45% increase in maximal lifespan, in the only arm of that broad Jackson Lab screening study that showed promise.
Glutathione and NAC have both been readily available as supplements, sold without prescription for many years; NAC is preferred as a less expensive pathway to augmenting GSH levels within cells. Recently, NAC was reclassified as a prescription drug by the FDA. There is no safety concern, and the only reason the FDA has offered is that NAC has been promoted as a hangover remedy after excess alcohol consumption. Since glutathione can detoxify alcohol breakdown products in the liver, NAC probably has some usefulness in this role. I believe the real motivation for making NAC harder to get is that it is useful in treating COVID, and there appears to be an agenda for suppressing inexpensive and effective treatments (chloroquine, ivermectin, vitamin D, zinc, quercetin) in favor of vaccination.
(Off-topic: If you’re interested in a comprehensive guide to the general principles and the subtleties of treating COVID, I highly recommend this interview by Dr Darrell Demeo of Mumbai.)
The Bottom Line
The evidence for NAC as a life extension supplement is mostly indirect, but there are many good reasons to boost our glutathione levels, especially as we age, and especially in an age of ubiquitous chemical toxins.
This essay is inspired by Dr Mercola’s announcement last week that (reading between the lines) his life and his family’s have been threatened if he doesn’t remove from his web site a peer-reviewed study demonstrating the benefits of vitamin D and zinc in prevention of the worst COVID outcomes. In the present Orwellian era, where propaganda and deception are ubiquitous, one of the signposts of truth that I have learned to respect is that the most important truths are the most heavily censored.
This is not what I enjoy writing about, but as I find dark thoughts creeping into my consciousness, perhaps it is better to put them on paper with supporting logic and invite my readers to help me clarify the reasoning and, perhaps, to point a way out of the darkness.
Already in January, 2020, two ideas about COVID were emerging. The first was that there were people and institutions who seemed to have anticipated the event, and had been planning for it for a long time. Gates, Fauci, the World Economic Forum, and Johns Hopkins School of Medicine were among the prescient. (I credit the (now deleted) videos of Spiro Skouras.) The second was the genetic evidence suggesting that COVID had a laboratory origin. Funders of the scientific establishment have lost their bid to ridicule this idea, and it has now leaked into the mainstream, where it is fused with classical yellow-peril propaganda: “China did it!” I have cited evidence that America is likely equally culpable.
The confluence of these two themes suggests the dark logic that I take for my topic today: Those who knew in advance, not only that there would be a pandemic but that it would be a Coronavirus, were actually responsible for engineering this pandemic.
Immediately, I think: How could people capable of such sociopathic enormities be occupying the most powerful circles of the world’s elite? And what would be their motivation? I don’t have answers to these questions, and I will leave speculation to others. But there’s one superficially attractive answer that I find less compelling: that it’s a money-maker for the large and criminal pharmaceutical industry. The new mRNA vaccines are already the most profitable drugs in history, but I think that shutdown of world economies, assassinations of world leaders, deep corruption of science, and full-spectrum control of the mainstream narrative imply a larger power base than can plausibly be commanded by the pharma industry.
Instead, I’ll try to follow the scientific and medical implications of the hypothesis that COVID is a bioweapon.
The Spike Protein
The spike protein is the part of the virus structure that interfaces with the host cell. The SARS 1 and SARS 2 viruses both have spike proteins that bind to a human cell receptor called ACE-2, common in lung cells but also present in other parts of the body. Binding to the cell’s ACE-2 receptor is like the wolf knocking at the door of Little Red Riding Hood’s grandmother: “Hello, grandmama. I’m your granddaughter. Please let me in.” The virus is a wolf wearing a red cape and hood, pretending to be one of the molecules that the ACE-2 receptor normally serves, seeking entrance to the cell.
In order to enter the cell, the virus must break off from the spike protein and leave it at the doorstep, so to speak. This is an important and difficult step, as it turns out. Unique to the SARS-CoV-2 virus is a trick for making the separation. Just at the edge of the protein is a furin cleavage site. Furin is an enzyme that snips protein molecules, and it is common in our bodies, with legitimate metabolic uses. A furin cleavage site is a string of 4 particular amino acids that calls to furin, “hey — come over here. I’m a protein that needs snipping.”
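For concreteness, a furin site can be spotted in a protein sequence by simple pattern-matching. A toy sketch: the minimal furin recognition consensus is usually given as R-X-X-R (arginine, any two residues, arginine), and the fragment below is the widely reported region around the SARS-CoV-2 S1/S2 junction; treat both the motif and the fragment as illustrative, not as a production bioinformatics tool.

```python
import re

def furin_sites(seq):
    """Return all (possibly overlapping) matches of the minimal furin consensus R-X-X-R."""
    # A zero-width lookahead lets overlapping candidate sites all be reported.
    return [m.group(1) for m in re.finditer(r"(?=(R..R))", seq)]

# Fragment around the SARS-CoV-2 S1/S2 junction, in single-letter amino-acid code
s1_s2_region = "NSPRRARSV"
hits = furin_sites(s1_s2_region)
```

Scanning the fragment finds a single hit, the RRAR just upstream of the cut site; real furin-site prediction uses richer position-weight models, so this is only a cartoon of the idea.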
The most compelling evidence for a laboratory origin of COVID is that the coronaviruses most closely related to SARS-CoV-2 don’t have furin cleavage sites; until last year, this trick had never evolved naturally in that lineage.
How we think about natural disease
The classical understanding of a viral or bacterial disease is this: A parasite is an organism that uses the host’s resources for its own reproduction, and it is evolved to reproduce efficiently. If it has co-evolved with the host, it may be evolved to spare the host’s health, or even to promote it, because this is the optimal long-term strategy for any predator or parasite. But newly-emerged parasites can do well for a while even if they disable or kill their hosts, and this is the kind of disease that is most damaging to us. The damage is done because the (young) virus’s strategy is to reproduce rapidly and disperse itself into the environment where it can find new hosts. The virus has no interest in harming the host and was not evolved to that end; the harm is a side-effect of commandeering the body’s resources for its own reproduction.
How engineered diseases can be different
A bioweapon virus is designed to cause a certain kind of harm.
What kind of harm? It depends on the projected use for the weapon.
Doesn’t the virus have to reproduce? Probably, for most weapon applications; but a bioweapon is not necessarily designed for rapid reproduction. A bioweapon can be designed as a “sleeper” to remain dormant for months or years, or to cause incremental disability over a long period.
If COVID had evolved naturally, we would expect that its spike protein would be adapted to mate well with the human ACE-2 receptor. There’s no reason to suspect it being otherwise biologically active. But if COVID is engineered, it may be that the spike protein itself has been designed to make us sick.
One reason this is significant is that the vaccines have all been designed around the spike protein, on the assumption that the spike protein is metabolically neutral. If the virus had evolved naturally, this would be a reasonable assumption. But if it came from a laboratory (whether it leaked or was deliberately released), the spike protein might actually be the agent of damage. There are several reasons to suspect that this is the case.
The Spike Protein as an Active Pathogen
Back in February, 2020, this article noted that the spike protein was not perfectly optimized to bind to human ACE-2 and put this forward as a proof that “SARS-CoV-2 is not a purposefully manipulated virus.” But if someone were designing the virus to cause harm, the spike protein would be a convenient locus for the damage vector, so the spike might have been designed with twin purposes in mind, binding and toxicity. The spike protein appears in many copies around the “crown” of the coronavirus. Since each copy has a furin cleavage site at its base, many spike proteins will break off into the bloodstream. We now have several reports and hypotheses concerning the spike protein as an active agent of damage. The spike protein is suspected of causing blood clots, of inducing long-lasting neurological damage, and of causing infertility. Many anecdotes describe injuries to un-vaccinated people who have been in close proximity to vaccinated, prompting speculation about “shedding” the spike protein.
“Individuals with COVID-19 experience a vast number of neurological symptoms, such as headaches, ataxia, impaired consciousness, hallucinations, stroke and cerebral hemorrhage. But autopsy studies have yet to find clear evidence of destructive viral invasion into patients’ brains, pushing researchers to consider alternative explanations of how SARS-CoV-2 causes neurological symptoms….
If not viral infection, what else could be causing injury to distant organs associated with COVID-19? The most likely culprit that has been identified is the COVID-19 spike protein released from the outer shell of the virus into circulation. Research cited below* has documented that the viral spike protein is able to initiate a cascade of events that triggers damage to distant organs in COVID-19 patients.
Worryingly, several studies have found that the spike proteins alone have the capacity to cause widespread injury throughout the body, without any evidence of virus.
What makes this finding so disturbing is that the COVID-19 mRNA vaccines manufactured by Moderna and Pfizer and currently being administered throughout the U.S. program our cells to manufacture this same coronavirus spike protein as a way to trigger our bodies to produce antibodies to the virus.” [Global Research article, Feb 2021]
Note: the Astra-Zeneca and J&J vaccines are also based on the spike protein, and cause the spike protein to be created in the vaccinated person.
* “Research cited below” refers to this study in Nature which reports that the spike protein, injected into mice, crosses into the brain, where it causes neurological damage.
Bigger news came just this week from a study in which researchers from California’s Salk Institute collaborated with Chinese virologists. They have found that the bare spike protein without the virus (injected in mice) can cause damaged arteries of the kind that lead to heart disease and strokes in humans. The original paper was published in Circulation Research, and the Salk Institute issued a news report describing the research.
There is a credible mechanism, in that the spike protein is partially homologous to syncytin. Syncytin, in fact, was originally a retroviral protein, inserted into the mammalian genome many aeons ago and evolved over the ages to play an essential role in reproduction, binding the placenta to the fetus. An immune response that attacks syncytin might be expected to pose a danger of spontaneous abortion. In ordinary times, this would be a subject that medical researchers would jump on, with animal tests and field surveys to assess the danger. But these are no ordinary times, and the risk is being dismissed on theoretical grounds without investigation. This is especially suspicious in the context of history: a Gates Foundation vaccination program promoted to young women in 1995 allegedly caused infertility. (Yes, I know there are many fact-checkers eager to “debunk” this story, but I don’t find them convincing, and some of these fact-checkers are compromised by Gates funding.)
The most dangerous possibility, suspected but not verified, is that the spike protein causes a prion cascade. Prions are paradoxical pathogens, in that they are misfolded proteins that cause misfolded proteins. Their evolutionary etiology is utterly mysterious, so much so that it took Stanley Prusiner a decade after describing the biology of prions before the scientific community would take prion biochemistry seriously. But prions make potent bioweapons, which laboratories can design outside of natural evolutionary dynamics. The possibility of prion-like structures in the spike protein was noted very early in the pandemic based on a computational study. This recent review combines theoretical, laboratory, and observational evidence to make a case for caution. Once again, I find it disturbing that this possibility is being dismissed on theoretical grounds rather than investigated in the lab and the field.
Where did the idea come from that all vaccines are automatically safe? Why do so many journalists dismiss the suggestion that vaccines should be placebo-tested individually, like all other drugs? Why has it become routine to ridicule and denigrate scientists who ask questions about vaccine safety as politically-motivated luddites, or “anti-vaxxers”? How did we get to a situation where the “precautionary principle” means pressuring young people who are at almost no risk for serious COVID to accept a vaccine which has not been fully tested or approved? I don’t have answers, but I do know who benefits from this culture.
Putting together all the evidence
Suppression of treatments and cures
Toxicity of the spike protein, which, had the virus evolved naturally, we would expect to be benign
Inclusion of the toxic spike protein in the vaccines that are supposed to protect us
Heavy promotion of these scantily-tested vaccines and
Censorship of scientists and doctors who question the vaccines’ safety
… putting together all this evidence, it is difficult to escape the inference that powerful people and organizations have engineered this pandemic with deadly intent.
The paradox: In animal models there is a consistent relationship between eating less and living longer. But studies in humans find that people who are a little overweight live longest.
Last week, I introduced this paradox and offered evidence, both that lab animals live longer when they are underfed, and that humans live longer when they are overfed. In the article below, I introduce nuances and confounding factors, but in my opinion, the paradox remains unresolved.
BMI is an imperfect measure of how fat or thin someone is for his height. That’s because it is calculated with the square of height, but body volume (for a given shape) is proportional to the cube of height. The result is that tall people will have a higher BMI than shorter people with equivalent proportions of body fat. For example, BMI=20 for a person 5 feet tall means a weight of 102 pounds, an average weight for that height; whereas BMI=20 for a person 6 feet tall means a weight of 147 pounds, which is borderline emaciated.
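The arithmetic in this paragraph is easy to check. A quick sketch (703 is the standard conversion factor when BMI is computed from pounds and inches):

```python
def weight_at_bmi(bmi, height_in):
    """Weight in pounds implied by a given BMI at a given height in inches."""
    return bmi * height_in ** 2 / 703

short = weight_at_bmi(20, 60)   # 5 ft  -> ~102 lb
tall = weight_at_bmi(20, 72)    # 6 ft  -> ~147 lb

# If weight scaled with the CUBE of height (same body proportions),
# a 6-footer built like the 102 lb 5-footer would weigh ~177 lb, i.e. BMI ~24.
isometric = short * (72 / 60) ** 3
```

The cube-scaling line makes the text’s point: a tall person with exactly the same build as a “normal” short person registers a substantially higher BMI.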
Short people tend to live significantly longer than tall people, and the effect is substantial. Males under 5’7” live 7½ years longer than males over 6’ [ref]. This fits with the fact that short people tend to have less growth hormone in their youth. There is a genetic variant, found in parts of Ecuador, that prevents growth hormone from stimulating production of IGF-1 (Laron dwarfism); these people are generally about 4 feet tall and tend to live longer. From domesticated animals, we also know that small dogs live longer than large dogs, small horses longer than large horses. Between species, larger animals live longer, but within a single species, smaller animals live longer.
The height association deepens the weight paradox, because short people will tend to have a lower BMI, which we would expect to skew the association of BMI with longevity downward.
Growth Hormone and IGF1
Growth hormone (which drives production of IGF-1 in the body) is genetically associated with shorter lifespan, but we have more of it when we’re young, and it promotes a body type with more muscle and less fat. According to this Japanese study, IGF-1 increases with weight among people who are thin, but decreases with weight among people who are fat. So the weight associated with maximum longevity is close to the weight at which IGF-1 peaks.
Here are some partial explanations for the paradox.
Most variation in weight is explained by genetics, not food intake. The explanation I have proposed in the past is that the CR effect depends on food intake, not genetics. People who are congenitally stout are more likely to be restricting their calories, so calorie-restricted humans are not necessarily especially thin.
The CR effect is proportionately smaller in long-lived humans than in short-lived rodents or shorter-lived worms and flies. [ref] If life extension via CR evolved to help an animal survive a famine, then it seems reasonable that the benefit should be limited to a few years, because that is as long as most famines in nature are likely to last.
The CR effect may be due to intermittent fasting rather than total calorie intake. Traditional CR experiments conflate intermittent fasting with overall calorie reduction, because food is provided in a single daily feeding, and hungry rodents gobble it up, then go hungry for almost 24 hours. More recent experiments attempt to separate the effect of limited-time eating from the effect of calorie reduction, and the general conclusion is that both benefit longevity. It may be that humans who are skinny tend to graze all day, while people with a comfortable amount of fat more easily go for hours at a time without eating.
Mice carry less fat, have less food craving, and have better gut microbiota if they are fed at night rather than during the day [ref]. Mice are active nocturnally, so translating to humans, this probably means that we should eat in the morning. Conventional wisdom is that eating earlier in the day is better for weight loss and health [ref], but I know of no human data on mortality or lifespan. This classic study in mice found that caloric restriction itself was the only thing affecting lifespan; it made no difference whether the mice were fed at night or during the day, in three feedings or one.
Smokers tend to be thinner than non-smokers, but they die earlier, for reasons that have to do with smoking, not weight. So this is a partial explanation of why higher BMI might be associated with longer lifespan. But note that Zheng’s recent Ohio State study claimed there was no change in the best weight for longevity when a correction was introduced for smoking.
Cachexia is a “wasting” disorder that causes extreme weight loss and muscle atrophy, and can include loss of body fat. This syndrome affects people in the late stages of serious diseases like cancer, HIV/AIDS, COPD, kidney disease, and congestive heart failure (CHF). [healthline.com] If cachexia subjects are not removed from a sample, they can strongly bias results against weight loss, because once cachexia sets in, life expectancy is very short. But the Zheng study was based on Framingham data, collected annually over the latter half of a lifetime, so cachexia is not expected to be a significant factor.
Timing artifact – The Framingham study covers a 74-year period in which BMI is increasing and also lifespan is increasing, probably for different reasons. The younger Framingham cohort is living ~4 years longer than the older cohort and is ½ BMI point heavier. This could create an illusion that higher BMI is causing greater longevity. However, the Ohio State study made some effort to pull this factor out. Greater lifespan is associated with gradually increasing BMI, and this is true separately in both cohorts.
Differential effects on CVD and Cancer – This chart (from Zheng) shows how the mortality burden of cardiovascular disease has decreased over the last century, but not so cancer.
But CV disease risk increases consistently with BMI, while cancer risk, not so much (also from Zheng):
These numbers in parentheses are hazard ratios from a Cox proportional hazard model. What they mean is that a person in the Lower-Normal weight group had a 20% lower chance of getting heart disease than someone of the same age in the Normal-Upward group, but a 60% higher chance of getting cancer. These appear to be large, concerning numbers. But remember that the underlying probabilities are all increasing exponentially with age. Translated into years of lost life, a 60% greater probability of cancer is only 1 year of life expectancy at age 50. (60% greater overall mortality would subtract 4½ years from life expectancy.) In my experience, hazard ratios in the range 0.7 to 1.5 don’t necessarily mean anything, because of the difficulties in interpreting data. The numbers in parentheses after 1.60 in the above table (1.12 – 2.30) mean that statistical uncertainty alone spans a range from 1.12 to 2.30. There are plenty of large effects with hazard ratios of 3 or more. For comparison, the hazard ratio for pack-a-day smokers getting lung cancer is 27.
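The claim that a hazard ratio translates into surprisingly few years of life can be checked with a toy Gompertz mortality model. The parameters a and b below are illustrative round numbers, not fitted to Framingham or any real cohort:

```python
import math

def remaining_life_expectancy(age, hr=1.0, a=3e-5, b=0.1, horizon=130, step=0.01):
    """Remaining life expectancy under a Gompertz hazard h(t) = hr * a * exp(b*t).

    Integrates the conditional survival curve S(t | alive at `age`) numerically.
    a and b are illustrative, not fitted to any real population.
    """
    e_age = math.exp(b * age)
    le, t = 0.0, age
    while t < horizon:
        # Cumulative hazard accrued between `age` and t
        H = hr * (a / b) * (math.exp(b * t) - e_age)
        le += math.exp(-H) * step
        t += step
    return le

base = remaining_life_expectancy(50, hr=1.0)
worse = remaining_life_expectancy(50, hr=1.6)
print(f"Years lost to an across-the-board 1.6 hazard ratio at age 50: {base - worse:.1f}")
```

Because mortality rises exponentially, multiplying the hazard by 1.6 is equivalent to being only ln(1.6)/b ≈ 5 years older, which is why even alarming-looking ratios cost only a handful of years of life expectancy.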
Zheng’s study found a longevity disadvantage to being underweight, and it was exclusively due to a higher cancer risk. In fact, incidence of cardiovascular disease among the lowest BMI class was lowest (0.8); but their cancer risk more than made up for it (1.6).
This means that as time goes on and most Americans are getting heavier, their risk of dying from CVD is blunted by improved technology. The mortality risk from CVD is down by 40% in this century [NEJM], while the cancer risk is unchanged [CDC]. So people are dying of cancer who would have died of CVD in previous generations.
This means that low BMI has less benefit for longevity than it used to have, and the trend over time tends to exaggerate the appearance that higher weight is protective against all-cause mortality.
Is it true that cancer risk does not go up with BMI?
The Framingham result is puzzling and difficult to reconcile with a well-established relationship between higher BMI and higher cancer risk. This review by Wolin finds a modest increase in risk for all common types of cancer associated with each 5-point gain in BMI. (The RR numbers are comparable to the hazard ratios above.)
Lung cancer is the big exception, and Wolin explains the inverse relationship with BMI by the fact that people smoke to avoid gaining weight. This would suggest a resolution to the conflict with Zheng’s study, but for the fact that Zheng explicitly corrects for smoking status and finds it makes no difference at all — a result which is puzzling in itself.
Alzheimer’s Disease is the third leading cause of death, and the corresponding story is more complicated. Lower weight in middle age seems to be mildly protective, while it is certainly not protective in the older years when AD is most prevalent.
“Hazard ratios per 5-kg/m2 increase in BMI for dementia were 0.71 (95% confidence interval = 0.66–0.77), 0.94 (0.89–0.99), and 1.16 (1.05–1.27) when BMI was assessed 10 years, 10-20 years, and >20 years before dementia diagnosis.” [ref]
This, too, is unexpected in light of previous consensus. Alzheimer’s Dementia has been recast as “Type 3 Diabetes” because of its strong association with insulin metabolism, and overweight is supposed to be the greatest life-style risk factor for diabetes. When this study out of U of Washington found that high BMI is protective against dementia, the authors were unwilling to draw the standard causal inference, so they conjectured instead that weight loss is a consequence of AD’s early stage.
There may be a better explanation hidden in their data. AD is the most common cause of dementia, but vascular dementia, a separate etiology, accounts for roughly ⅓ of cases in the Kame data set:
There is a suggestion here that higher BMI protects against vascular dementia, but not against AD.
From you, my readers
Here are some of the suggestions offered in the comment section of last week’s blog:
Fat people are happier. I don’t doubt that happiness has a lot to do with longevity but a lot of overweight is due to compulsive eating by people who are not happy with their lives. Obesity is associated with lower socio-economic status, and lower SES is independently associated with shorter lifespan and lower life satisfaction.
Higher BMI can mean more muscle mass, not necessarily more fat mass. Good point. I don’t know how big a factor this is.
This study [BMJ 2016] found greatest longevity for BMI in the range 20-22. I take your point that the larger studies with longer follow-up tend to report lower optimal BMI. The BMJ study is a meta-analysis of a huge database covering 9 million subjects.
Dean Pomerleau writes at the CR Society web page about brown fat, cold resistance, and greater longevity.
Thin people have greater insulin sensitivity, which can lead to glucose going into cells instead of being stored as fat. This is interesting, and deserves more follow-up. But good insulin sensitivity also means lower blood sugar, so it’s not obvious to me which direction the effect ought to go.
I was grateful for a pointer to Valter Longo’s recent work, which finds that time-restricted eating becomes counterproductive beyond about 13 hours a day of fasting. Longer fasts several times a year are still highly recommended.
Paul Rivas is my go-to authority on weight, and he recommended this 2015 study, which emphasizes the paradox as I describe it.
This study out of Emory U recommends different diets for different BMI groups for minimizing inflammation.
What story does methylation tell?
Aside from mortality statistics, I regard methylation age as the most reliable leading indicator we have. I’ll end by reviewing data on BMI and methylation age.
The Regicor Study looked for methylation sites associated with obesity. They reported 97 associated with high BMI and an additional 49 associated with large waistline. I compared their lists with my list of methylation sites that change most consistently with age. There was no overlap. What I learn from this is that there is no association between genetically-determined weight and longevity. If you were born with genes that make you gain weight, there is a social cost to be paid in our culture, but there is no longevity penalty.
Horvath did not discern a signal for obesity with the original 2013 DNAmAge clock, except in the liver, where the signal was weak, amounting to just 3 years for the difference between morbidly obese and normal weight. But a few years later, with 3 different test groups, a moderate signal was found, as expected, linking higher BMI to greater DNAmAge acceleration. (Age acceleration is just the difference between biological age as measured by the methylation clock and chronological age by the calendar.)
This study from the European Lifespan Consortium found a modest increased mortality from obesity, corresponding to less than a year of lost life by most measures, based on two Horvath clocks and the Hannum clock. This Finnish study found a small association between higher BMI and faster aging in middle-aged adults, but not in old or young adults.
This study from Linda Partridge’s group  found a strong benefit of caloric restriction on epigenetic aging—in mice, not in humans.
The bottom line
I’ve had a good time with this project, seeking explanations for the paradox, and I’ve passed along some interesting associations, but in the end, the essential paradox remains. I don’t know why the robust association of caloric restriction with longevity doesn’t lead to a clear longevity advantage in humans for a lower BMI. My strongest insight is that the largest determinants of BMI are genetic, not behavioral, and the genetic contribution to weight has no effect on longevity. But what do I make of the fact that life expectancy in the US has risen by a decade over my lifetime [ref], even as BMI has increased 5 points?
Caloric restriction is the gold standard life extension strategy, validated over thousands of experiments in many animal species. How can we reconcile this with consistent findings that people who are slightly overweight live longer than normal or underweight folks?
The one fact that everyone in the field of aging agrees on is that animals fed less live longer. This is the result that got me interested in the field 25 years ago, and it is still the most robust finding in the field, verified in dozens of species from yeast cells to Rhesus monkeys.
Are humans different from all other animals?
Last month, a study came out of Ohio State U based on the famous Framingham database, including medical and demographic information on 5,000 people and their offspring, tracked over 74 years. The take-home message was that the people who lived longest were average weight when young and gained weight during their middle years. There were not enough people who had actually lost weight to constitute a subgroup, but the group identified as “low-normal weight” all through their lives showed up with 40% higher all-cause mortality than those that gained weight.
“For any given individual, it’s probably true that the less you eat the longer you live.”
The argument went thus: Weight is mostly fixed by genetics, and the genetic component of weight does not affect longevity. It is relative calorie intake that affects longevity, relative to genetics, body type, and metabolism. For example, a study of genetically obese mice found that they had shortened lifespans if they were fed ad libitum. However, if the obese mice were calorically restricted, they actually lived longer than genetically normal mice, and even longer than CR normal mice, despite the fact that they still appeared plump.
This line of reasoning led me to hypothesize that the reason overweight people tend to live longer is that they are motivated to restrict calories, whereas people (like me) who don’t get fat no matter how much we eat feel no social pressure to restrain our gluttony.
I thought at the time that we ought to see this effect much more in women than in men, because overweight women are ostracized in our culture, whereas men are not. What I found, contrary to my prediction, was that the BMI with lowest mortality (in Japan) is 23-25 for men, compared to 21-23 for women [Matsuo, 2012].
So, is it time to consider the possibility that caloric restriction doesn’t extend human life expectancy?
New Ohio State Study
The new study is based on the 74-year-old Framingham cohort, people whose health and daily habits have been followed over time. Also followed was a Framingham Offspring cohort, the children of the original Framingham cohort. Almost all the original cohort have now died (so we have extensive mortality data), but many of the offspring cohort are still alive. The authors treat the two cohorts separately, and get somewhat different results for the two cohorts. Dr Zheng was kind enough to send me the full preprint with supplemental tables, and since it’s not yet available online, I’ve made it available for you to read here on GDrive.
The study looks not just at BMI but also at the change in BMI over mid- to late-life years. They classify the trajectories in seven groups, and analyze them using a Cox model. They find that the group that has lowest mortality had an average trajectory beginning at BMI=22 at age 30, increasing gradually to BMI=27 at age 80. The group was broadly defined, so that initial BMI could be anywhere from 18.5 at the low end to 25 at the high end.
Cox Proportional Hazard Model – This statistical method is standard for studies like this one evaluating effects on mortality. It is designed to take into account the steep rise in mortality with age, and it weights different deaths according to when they occur. The standard assumption is that the mortality curve with age is changed by a multiplicative factor associated with each variable. The mortality curve retains the same shape across ages, but it slides up or down (on a log scale) according to which factors apply to a given subgroup. For example, having a graduate degree may multiply your risk of dying by 0.9 across the board, and eating red meat may multiply your risk by 1.2, so the model derives these numbers by assuming that meat-eaters with a graduate degree have a relative probability of death 1.08 (= 0.9 * 1.2) times the control group, and this applies at every age. Is this quantitatively realistic? Everyone knows it is not, but it yields a single number which is a good benchmark for different longevity factors, and it allows different studies to report their results in a common format for comparison.
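The multiplicative bookkeeping above is easy to make concrete. This sketch uses the hypothetical factors 0.9 and 1.2 from the example, together with a made-up Gompertz baseline, to show that the combined factor multiplies and slides the log-mortality curve by the same constant at every age:

```python
import math

def gompertz_hazard(age, a=3e-5, b=0.1):
    """An illustrative baseline mortality curve (parameters are invented)."""
    return a * math.exp(b * age)

# Hypothetical Cox factors from the example in the text
hr_degree, hr_red_meat = 0.9, 1.2
combined = hr_degree * hr_red_meat   # 1.08, applied at every age

for age in (40, 60, 80):
    # On a log scale the curve shifts by log(1.08) at every age,
    # keeping its shape -- the proportional-hazards assumption
    log_shift = math.log(combined * gompertz_hazard(age)) - math.log(gompertz_hazard(age))
    print(age, round(log_shift, 4))
```

The same constant shift at ages 40, 60, and 80 is exactly what “retains the same shape across ages” means.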
Division of subjects into seven groups was somewhat arbitrary, done to facilitate statistical analysis. The red railroad tracks represent the midline of the trajectory associated with “longest lifespan”, defined above as the minimum Cox factor. The lowest weight group was associated with a Cox factor of 1.4, meaning 40% more likely to die (at a given age) than the red railroad-track trajectory.
Food shortages during World War II in some European countries were associated with a sharp decrease in coronary heart disease mortality, which increased again after the war ended.[Fontana, 2007]
Fontana performed in-depth metabolic profiles of people identified from the Caloric Restriction Society who were disciplining themselves to eat less. Relative to people at a comparable age, he found “a very low level of inflammation as evidenced by low circulating levels of C-reactive protein and TNFα, serum triiodothyronine levels at the low end of the normal range, and a more elastic ‘younger’ left ventricle, as evaluated by echo-doppler measures of LV stiffness.” 
There is at least preliminary evidence that weight loss tends to set back the aging clock, as measured by several methylation algorithms 
Higher BMI is associated with older methylation age 
C-reactive protein in the blood, the most common measure of inflammation, increases with increasing BMI 
Loss of insulin sensitivity is a hallmark of aging, driving many age-related diseases. There is a strong correlation between BMI and diabetes 
BMI is linked to most common cancers, the #2 source of mortality. Here’s a good review by Wolin.
BMI is also a factor in cardiovascular disease, the #1 killer. This study from Malaysia  found a trend of increasing CVD at every BMI level, but — like other studies — also found that all-cause mortality was lowest for BMI 25-30, which has traditionally been called “overweight”.
So, why doesn’t weight gain show up as a risk factor for faster aging?
I will continue this discussion in Part 2, and try to resolve this paradox in part, but (spoiler alert) I remain puzzled, after a month of reading on the subject.
Source: REB Research https://www.rebresearch.com/blog/fat-people-show-less-dementia/
A new methylation clock works in 128 different mammal species, using the same methylation signals. This is the latest evidence that at least some of the mechanisms of aging have been conserved by evolution—strong evidence that aging has a useful function in ecology, so that natural selection actually prefers a finite, defined lifespan.
Einstein taught us that time is relative. Indeed, there are rodents that live less than a year, and Bowhead whales that live more than 200 years. Some of this is just about size and has a basis in physics; but it is well-known that size is only part of the story. Bats and mice are the same size, but bats live ten times longer. Humans are much smaller than horses, but live three times as long.
The first time I met Cynthia Kenyon was circa 1998. She offered me a one-line proof that aging is programmed: the enormous range in lifespans found in nature defies any theory about damage accumulation, because no conceivable process of chemical damage could vary so widely in its fundamental rate. (Think mayflies and sequoia trees.) My own one-line proof is that yeast and mammals share in common some genetic mechanisms that regulate aging, though the last common ancestor of yeast and mammals is more than half a billion years old. These mechanisms include sirtuins and the insulin metabolism.
These intuitions about aging rate and evolutionary conservation have recently come to the world of big data. In this new BioRxiv manuscript, Steve Horvath collaborates with an all-star cast of biologists the world over to compile evidence that there is a universal mechanism underlying development and aging in all mammals, and it is a pan-tissue epigenetic program, not a process of chemical damage.
Brief background on methylation: It is increasingly clear that aging has a basis in gene expression. The whole body has the same DNA, and it doesn’t change over time. However, different genes are turned on and off at different times and places. Turning genes on and off is called “epigenetics”, and evolution has devoted enormous resources to this process. One of many epigenetic mechanisms is the presence or absence of a methyl group on cytosine, which is one of the 4 building blocks of DNA (A, C, T, G). There are over 20 million regulatory sites in human DNA where methyls can appear or not. Of these, several thousand have been found to correlate consistently with age. The correlation is so strong that the most accurate measures of biological age are now based on methylation. There is (IMO) a developing consensus in the community that methylation changes are an upstream cause of aging, though there remains strong resistance to this idea on theoretical grounds. More background here
The team assembled tissue samples from 59 organs across 128 species of mammals, and looked for commonalities in the progression of methylation that were independent of species and independent of tissue type. They found thousands of methylation sites that fit the bill, attesting to an evolutionarily-conserved mechanism “connected to” aging. It is a short leap to imagine that “connected to” implies a root cause.
How did the authors map age for a mouse onto age of a whale? Just as I might say, “I’m only 10 years old, in dog years,” a year for a whale might be a hundred “mouse years”. The authors took three different approaches. (1) Just ignore it, mapping chronological time directly. (2) Adjust time for the different species based on the maximum lifetime for that species. (3) Adjust time for the different species based on the time to maturity for that species.
Predictably, (1) produced paradoxes; (2) and (3) were similar, but (3) produced the best results. What they didn’t do — but might in follow-on work — was to optimize the age-scaling factor individually for each species to target the best fit with all the other species. Even better would be to choose two independent scaling factors to optimize the fit of each species. Ever since the original 2013 clock, Horvath has divided the lifespan into two regimes, development and aging: In development, time is logarithmic, moving very fast at the beginning and slowing down at the end of development. In the aging regime, time is linear. So it would be natural (optimal, in my opinion) to choose two separate scaling factors that best map each species’s life history course onto all the others. Mathematically, this is (roughly) as simple as matching the slopes of two lines. Horvath has told me he is interested in pursuing this strategy, but for some species the existing data does not cover the lifespan sufficiently to support it.
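The two-regime mapping can be sketched as a function of each species’ age at maturity. This is my own minimal version of the log-then-linear idea, not the paper’s exact transform:

```python
import math

def two_regime_age(age, maturity):
    """Map chronological age onto a species-comparable scale.

    Logarithmic time during development, linear time after maturity --
    the same log/linear split Horvath introduced in the 2013 clock, here
    parameterized by a species' age at maturity. A sketch only.
    """
    if age <= maturity:
        # Negative during development, reaching 0 exactly at maturity
        return math.log((age + 1) / (maturity + 1))
    # Linear aging regime, continuous with the branch above
    return (age - maturity) / (maturity + 1)

# A 2-year-old mouse (mature at ~2 months) and a 40-year-old human
# (mature at ~15 years) land at nearly the same point on this scale.
print(two_regime_age(2.0, 2 / 12))
print(two_regime_age(40.0, 15.0))
```

Fitting two scaling factors per species, as suggested above, would amount to letting the slope of each of the two regimes vary by species instead of fixing both by the single maturity parameter.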
“Cytosines that become increasingly methylated with age (i.e., positively correlated) were found to be more highly conserved (Fig. 1a) …Interestingly, although there were 3,617 enrichments of hypermethylated age-related CpGs [i.e., increased methylation with age] across all tissues, only 12 were found for hypomethylated [the opposite] ones.”
Interpretation: with age, we (and other mammals) tend to lose methylation, i.e., to turn on genes that shouldn’t be turned on. There are more sites that demethylate with age than that methylate with age. But the sites that gain methylation tend to be more highly conserved between species. I presume a lot of demethylation is stochastic. It’s easy for a methyl group to “fall off”, but attaching one in the right place requires a specialized enzyme (methyl transferase). What we are seeing here is stronger genetic determinism for the process that requires active intervention.
Question: Would it be useful to develop a methylation clock based solely on sites that gain methylation? What we would thereby avoid is the situation where the age algorithm combines a great many large positive numbers with a great many large negative numbers to make a small difference. This characteristic makes the algorithm overly sensitive to bad data from one or a few particular sites. We can see from the figure above that (red) sites from the top half of the plot have stronger evidence behind them than the (blue) sites from the bottom. What we would lose would be diversity in the basis of the measurement. If retaining that diversity is desirable, it would be possible to design a clock algorithm with both red and blue sites in such a way that all coefficients are relatively small, and no one site contributes inordinately to the age calculation, even if data for that site is completely missing.
Speculation for statistics geeks: I think the methodology that has become standard for developing methylation clocks is not optimal. The standard method is to identify N sites (typically a few hundred) where methylation is well-correlated with age, then derive N coefficients such that you can multiply each coefficient by the corresponding methylation, add up the products, and you get an age estimate*. The way I would do it is with a more complicated calculation, from a methodology called “maximum likelihood”. The idea is to choose the age that minimizes the difference between the expected methylation and measured methylation for the collection of the N sites. To be more specific, minimize the sum of the squares of the z scores for each site, where z is the number of standard deviations by which the measured methylation differs from the expected methylation. It may sound like a complicated calculation to find the age at which this number is a minimum, but it is not. Yes, it’s a guessing game; but the algorithm called “Newton’s method” allows you to make smart guesses so you home in on the best (min Σz²) age within four or five guesses. The calculation is more complicated to program, but it would still execute in a tiny fraction of a second. My proposed method requires maybe 10 or 20 times as many fixed parameters within the algorithm; but the data submitted from each sample is the same.
Caveat – This is all theoretical on my part. I don’t know how much performance would be improved in practice.
*Two footnotes: (1) A constant is also added. (2) In case the subject is young, below the age of sexual maturity, what you get is a logarithm of age, not age itself.
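Here is a minimal sketch of the maximum-likelihood proposal, with a synthetic “clock” of 200 sites (all numbers invented). When each site’s expected methylation is linear in age, Σz² is exactly quadratic in age, so Newton’s method lands on the minimum essentially immediately:

```python
import random

random.seed(0)

# Synthetic clock: per-site intercept, slope, and measurement scatter.
# Every number here is invented for illustration.
N = 200
sites = [(random.uniform(0.2, 0.8),        # intercept: methylation at age 0
          random.uniform(-0.004, 0.004),   # slope: change per year
          random.uniform(0.01, 0.05))      # std dev of a measurement
         for _ in range(N)]

def measure(true_age):
    """Simulate one sample's methylation profile at the given age."""
    return [c + s * true_age + random.gauss(0, sd) for c, s, sd in sites]

def ml_age(profile, guess=40.0, iterations=5):
    """Age minimizing the sum of squared z-scores, found by Newton's method."""
    age = guess
    for _ in range(iterations):
        # First and second derivatives of sum((m - c - s*age)^2 / sd^2)
        d1 = sum(-2 * s * (m - c - s * age) / sd**2
                 for m, (c, s, sd) in zip(profile, sites))
        d2 = sum(2 * s**2 / sd**2 for c, s, sd in sites)
        age -= d1 / d2
    return age

estimate = ml_age(measure(true_age=65.0))
print(round(estimate, 1))
```

In this formulation each site contributes in proportion to s²/σ², so no single site dominates the estimate, which is the robustness property discussed above.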
“Importantly, age-related methylation changes in young animals concur strongly with those observed in middle-aged or old animals, excluding the likelihood that the changes are those involved purely in the process of organismal development.”
These plots are adduced as evidence that aging and development are one continuous process under epigenetic control. They come from EWAS = epigenome-wide association studies. Start by asking which sites on the methylome are most closely correlated with age, across many different animals and different tissues in those animals. Start with just the young animals (different ages, but all before or close to sexual maturity). Arrange all the different sites according to how they change methylation with age (increasing or decreasing), just in this age range. Then repeat the process, re-ordering the sites according to how they change with age during middle age.
The left plot above includes a dot for each methylation site, ordered along the X axis according to how they change during youth, and along the Y axis according to how they change during middle age. The point of the exercise is that it is largely the same sites that increase (or decrease) methylation in youth and in middle age.
The middle plot shows the corresponding correlation between middle age (X axis) and old age (Y axis). The right-hand plot shows the correlation between young (X axis) and old age (Y axis). (I believe the labeling of the figure on the right is a misprint.)
This evidence points to a conceptual framework that views development and aging as one continuous process. Development is a lot more complicated than aging. Consequently, most of the sites in the clock are developmental. Maybe a clock could be optimized for aging only, and it would be more useful for those of us who are using the clocks to assess anti-aging interventions.
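The construction behind these plots can be mimicked with synthetic data. If each site drifts at its own lifelong rate (the “one continuous process” hypothesis), then per-site age-correlations computed separately in young and in middle-aged samples line up strongly. All numbers below are invented for the demonstration:

```python
import random
import statistics

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic methylome: each site drifts linearly at its own lifelong rate
N_SITES = 300
slopes = [random.uniform(-0.003, 0.003) for _ in range(N_SITES)]

def cohort(ages):
    """(age, methylation profile) pairs, with measurement noise."""
    return [(a, [0.5 + s * a + random.gauss(0, 0.02) for s in slopes])
            for a in ages]

def site_age_correlations(samples):
    """Per-site correlation of methylation with age, within one cohort."""
    ages = [a for a, _ in samples]
    return [pearson(ages, [prof[i] for _, prof in samples])
            for i in range(N_SITES)]

young = site_age_correlations(cohort([random.uniform(0, 20) for _ in range(40)]))
middle = site_age_correlations(cohort([random.uniform(30, 60) for _ in range(40)]))

# Sites that gain (or lose) methylation in youth do the same in middle age
print(round(pearson(young, middle), 2))
```

A strongly positive correlation between the two vectors is the synthetic analog of the diagonal stripe in the left-hand plot; if development and aging engaged disjoint sets of sites, it would be near zero.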
“The cytosines that were negatively associated with age in brain and cortex, but not skin, blood, and liver, are enriched in the circadian rhythm pathway”
Here we see again the intriguing connection between the brain’s daily timekeeping apparatus and the epigenetic changes that drive development and aging.
“The implication of multiple genes related to mitochondrial function supports the long-argued importance of this organelle in the aging process. It is also important to note that many of the identified genes are implicated in a host of age-related pathologies and conditions, bolstering the likelihood of their active participation in, as opposed to passive association with, the aging process.”
Another theme in the set of age-correlated genes that the team discovered is mitochondrial function. Mitochondria have an ancient association with cell death, and a long, conserved history with respect to aging. The simple damage themes associated with the free radical theory have yielded to a more complex picture, in which free radicals can be signals for apoptosis or inflammation or enhanced protective adaptations.
The big picture
“Therefore, methylation regulation of the genes involved in development (during and after the developmental period) may constitute a key mechanism linking growth and aging. The universal epigenetic clocks demonstrate that aging and development are coupled and share important mechanistic processes that operate over the entire lifespan of an organism.”
This is cautiously worded, presumably to represent a consensus among several dozen authors, or perhaps to appease the evolutionary biologists looking over our shoulders. The statement is akin to what Blagosklonny has for years called “quasi-programmed aging”, to wit, there are processes that are essential to development that fail to turn off on time, and cause damage as the organism gets older. In the version put forward in this present ms, it is not the gene expression itself but the direction of change of gene expression that carries momentum and cannot be turned off.
Modern evolutionary theory began with Peter Medawar, a Nobel laureate and giant of mid-century biological understanding. (He was 6 foot 5.) Medawar’s 1952 monograph contains the insight that launched all modern theories for evolution of aging. His fundamental idea was that it’s a dog-eat-dog world in which very few animals live long enough for aging to be a factor in their death. The three main branches of evolutionary theory in response to Medawar are called Mutation Accumulation, Disposable Soma, and Antagonistic Pleiotropy. According to Medawar’s thought (and all three theories that followed), old age exists in a “selection shadow”, so random processes are at work in old age. It follows that we would expect the aging of a bat and a bowhead whale to be subject to very different random processes. If it is a burden of recently acquired mutations that natural selection has not yet had time to weed out, these should be different for different species. Or if it is about tradeoffs (pleiotropy) between needs of the young animal and the old animal, we would not expect the bat and the whale to be subject to the same tradeoffs.
The Medawar paradigm and its three popular sub-theories all predict that there should be little overlap between the genetic factors involved in aging of species that are adapted so differently. Therefore, the present work documenting a common epigenetic basis of aging is a challenge to the established evolutionary theories of aging.
As I see it, the expression of genes is exquisitely timed for many purposes, so we must view gene expression as subject to tight bodily control. “Accidents” or “mistakes” or “evolutionary neglect” are implausible. For some genes, methylation changes from minute to minute in a way that is adaptive and responsive. Blagosklonny’s idea that there are genes turned on for development and then the body forgets to turn them off doesn’t feel right. The idea that certain genes are being turned on (or off) progressively through development and then, after development has ended, the process carries a momentum of its own that the body cannot stop, is equally implausible. I assume the body is adapted to do exactly what it wants with gene expression, and if the body expresses a combination of genes that causes aging, it’s because that’s what natural selection has designed the body to do. Of course, this looks to be a paradox, as aging is completely maladaptive according to the notion of Darwinian fitness that became accepted in the first half of the 20th century; but evolutionary biologists have broadened the notion of fitness since then, and I’ve written volumes concerning this paradox.
The bottom line
For personal application to individuals who want to know how well they are doing and their future life expectancy, I recommend Horvath’s Grim Age clock as the best available. (Elysium has done a lot of work on their Index product, and it may be as good or better, but it’s impossible to evaluate unless they release their proprietary methodology.) For application to studies of anti-aging interventions (including my own project, DataBETA), the choice of clocks is not clear, because it depends not just on statistics but on theory. We want a clock that is not only accurate, but that is based on epigenetic causes of aging, not epigenetic responses to aging. The multi-species clock is a welcome contribution, precisely because epigenetic processes that are conserved across species are more likely to be linked to the root cause of aging. For the future, I’ve made suggestions above for ways the multi-species clock might be made even better.
Just as the melody is not made up of notes nor the verse of words nor the statue of lines, but they must be tugged and dragged till their unity has been scattered into these many pieces, so with the World to whom I say Thou. — Martin Buber
We creatures of the 21st Century, grandchildren of the Enlightenment, like to think that our particular brand of rationality has finally established a basis for understanding the world in which we live. Of course, we don’t have all the details worked out, but the foundation is solid.
We might be chastened by the precedent of Lao Tzu and Socrates and Hypatia of Alexandria and Thomas Aquinas and Lord Kelvin, who thought the same thing. I wonder if the foundation of our world-view is really made of more durable stuff than theirs. In fact, founding our paradigm in the scientific method offers us something that earlier sages did not have: we can actually compare in detail the world we observe and the consequences of our physicalist postulates. The results are not reassuring. In recent decades, the science establishment has willfully ignored observations of phenomena that call into question our foundational knowledge.
Reductionism is the process of understanding the whole as emergent from the parts. The opposite of reductionism is holism: understanding the parts in terms of their contribution to a given whole. It’s fair to say that all of science in the last 200 years has been reductionist. Physical law is the only fundamental description of nature. Chemistry could, in principle, be derived from physics (if only we could solve the Schrödinger equation for hundreds of electrons); living physiology could be understood in terms of chemistry; and ecology could be modeled in terms of individual behaviors.
Curiously, there are holistic formulations of physics that are mathematically equivalent to the reductionist equations, but in practice, physicists use the differential equations, which are the reductionist version.
Biological function is explained by the process of evolution through natural selection that made organisms what they are. Holism in evolution is called “teleology”, and is disparaged as unscientific. But when features of physics appear purposeful, there is no agreement among scientists about how to explain them. Most physicists would avoid invoking a creator or embedded intelligence, even at the cost of telling stories about vast numbers of unobservable universes outside our own. This is the most common explanation for the fact that the rules of physics and the very constants of nature—things like the charge on an electron and the strength of the gravitational force—seem eerily to have been fine-tuned to offer us an interesting universe; most other choices for the basic rules of physics might have produced dull uniformity, without stars or galaxies, without chemistry, without life.
But I am racing ahead of the story. The question I want to ask is whether we are missing something in reasoning exclusively from the bottom up, explaining all large-scale patterns as emergent results of small-scale laws. I want to suggest that this deeply-ingrained pattern of thought may be holding science back. Are there large-scale patterns waiting to be discovered? Are there destined outcomes that help us understand the events leading to a predetermined denouement? Even formulating such questions is controversial; and yet, we see hints pointing in just this direction, both from micro-science of quantum mechanics and from studies of the Universe on its largest scale.
Science is all about observing nature and noticing patterns which might be articulated as theories or laws. When these patterns connect nearby events that can be observed at one time by one person, they are easy to spot. When the patterns involve distant events and stretch over time and space, they may go undetected for a long while. This can lead to an obvious bias. Scientists are more inclined to formulate laws of nature that connect contiguous events than laws that connect events that are separated spatially and temporally, just because these global patterns are harder to see.
The physical laws that were formulated and tested in the 19th and 20th century were all mediated by local action. The idea that all physical action is local was formalized by Einstein, and has been baked into our theories ever since. But there is a loophole, defined by quantum randomness. Roughly speaking, Heisenberg’s Uncertainty Principle says that we can only ever know half the information we need to predict the future from the past at the microscopic level. Is the other half replaced by pure randomness, devoid of any patterns that science might discern? Or might it only appear random, because the patterns are spread over time and space, and difficult to correlate? In fact, the existence of such patterns is an implication of standard quantum theory. (This is one formulation of the theorem about quantum entanglement, proved by J.S. Bell in 1964.) Speculative scientists and philosophers relate this phenomenon to telepathic communication, to the “hard problem” of consciousness, and to the quantum basis of life.
I hope to explore this topic in a new ScienceBlog forum beginning in 2021. Here are four examples of the kinds of phenomena pointing to a new holistic science.
1. Michael Levin and the electric blueprint for your body
We think of the body as a biochemical machine, proteins and hormones turned on in the right places at the right times to give the body its shape. Levin is clear and articulate in making the case that the body develops and takes shape under a global plan, a blueprint, and not just a set of instructions. This is true for humans and other mammals, but it is easier to prove it for animals that regenerate. Humans can grow back part of a liver. An octopus can grow a new arm; a salamander can grow a new leg or tail; a zebrafish can grow back a seriously damaged heart; starfish and flatworms can grow back a whole body from a small piece.
Consider the difference between a blueprint and an instruction set. An instruction set says
1. Screw the left side of widget A onto the right side of gadget B.
2. Take the assembly of widget+gadget and mount it in front of doodad C, making sure the three tabs of C fit into the corresponding holes in B.
A blueprint is a picture of the fully assembled object, showing the relationship of the parts.
Ikea always gives you both. With the instructions only, it is possible to complete the assembly, but only if you don’t make any mistakes. And if the finished object breaks, the instruction set will not be sufficient to repair it. The fact that living things can heal is a strong indication that they (we) contain blueprints as well as instruction sets. The instruction set is in the genome, together with the epigenetic information that turns genes on and off as appropriate; but where is the blueprint?
Prof Michael Levin of Tufts University has been working on this problem for almost 30 years. The answer he finds is in electrical patterns that span across bodies. One of the tools he pioneered is voltage reporter dyes that glow in different colors depending on the electric potential. Here is a map of the voltage in a frog embryo, together with a photomicrograph.
from Levin’s 2012 paper
Levin’s lab has been able to demonstrate that the voltage map determines the shape that the tadpole grows into as it develops. Working with planaria flatworms, rather than frogs, their coup de grace was to modify these voltage patterns “by hand”, creating morphologies that are not found in nature, such as the worm with two heads and no tail.
This is stunning work, documenting a language in biology that is every bit as important as the genetic code. Of course, I am not the first to discover Dr Levin’s work; but it is underappreciated because the vast majority of smart biologists are focusing on biochemistry and it is a stretch for them to step out of the reductionist paradigm.
(I wrote more about Levin’s work two years ago. Here is a video which presents a summary in his own words.)
2. Cold Fusion
Two atomic nuclei of heavy hydrogen can merge to create a single nucleus of helium, and tremendous energy is released. This process is not part of our everyday experience because the hydrogen nuclei are both positively charged and the energy required to push them close enough together that they will fuse is also enormous. So fusion can happen in the middle of the sun, where temperatures are in the millions of degrees, and fusion can happen inside a thermonuclear bomb. But it’s hard as hell to get hydrogen to fuse into helium, and, in fact, physicists have been working on this problem for more than 60 years without a viable solution.
Except that in 1989, Martin Fleischmann, the world’s most eminent electrochemist (not exactly a household name), announced that he had made fusion happen on his laboratory bench, using the metal palladium in an apparatus about as complicated as a car battery.
Six months later, at an MIT press conference, scientists from prestigious labs around the world lined up to announce they had tried to duplicate what Fleischmann had reported with no success. The results were un-reproducible. Cold Fusion was dead, and the very word was to become a joke about junk science. Along with the vast majority of scientists, I gave up on Cold Fusion and moved on. 22 years passed. Imagine my surprise when I read in 2011 that an Italian entrepreneur had demonstrated a Cold Fusion boiler, and was taking orders!
The politics of Cold Fusion is a story of its own. I wrote about it in 2012 (not for ScienceBlog). The Italian turned out to be a huckster, but the physics is real.
I began reading, and I became hooked when I watched this video. I visited Cold Fusion labs at MIT, Stanford Research Institute, Portland State University, University of Missouri, and a private company in Berkeley, CA. I went to two Cold Fusion conferences. I concluded that some of the claims were dubious, but others were convincing. There is no doubt in my mind that Cold Fusion is real.
Physicists were right to be skeptical. The energy for activation is plentiful enough, even at room temperature, but the problem is to concentrate it all in one pair of atoms. Left to its own devices, energy will spontaneously spread itself out— that’s what the science of thermodynamics is all about. To concentrate an eye-blink worth of energy in just two atoms is unexpected and unusual. But things like this have been known to happen, and a few times before they’ve taken physicists by surprise. Quantum mechanics plays tricks on our expectations. A laser can concentrate energy, as billions of light particles all march together in lock step. Superconductivity is another example of what’s called a “bulk quantum effect”. Under extraordinary circumstances, quantum mechanics can leap from the tiny world of the atom and hit us in the face with deeply unexpected, human-scale effects that we can see and touch.
There are now many dozens of labs around the world that have replicated Cold Fusion, but there is still no theory that physicists can agree on. What we do agree on is that it is a bulk quantum effect, like superconductivity and lasers. When the entire crystal (palladium deuteride) acts as one quantum entity, strange and unexpected things are possible.
For me, the larger lesson is about the way the science of quantum mechanics developed in the 20th Century. The equations and formalisms of QM are screaming of connectedness. Nothing can be analyzed on its own. Everything is entangled. The quantum formalism defies the reductionist paradigm on which 300 years of previous science had been built.
And yet, physicists were not prepared to think holistically. We literally don’t know how. If you write down the quantum mechanical equations for more than two particles, they are absurdly complex, and we throw up our hands, with no way to solve the equations or even to reason about the properties of the solutions. The many-body quantum problem is intractable, except that progress has been made in some highly symmetrical situations. A laser consists of a huge number of photons, but they all have a single wave function, which is as simple as a wave function can be. Many-electron atoms are conventionally studied as if the electrons were independent (but constrained by the Pauli Exclusion Principle). Solid state physics is built on bulk quantum mechanics of a great number of electrons, and ingenious approximations are used in combination with detailed measurements to reason about how the electrons coordinate their wave state.
Cold Fusion presents a huge but accessible challenge to quantum physicists. Beyond Cold Fusion lie a hierarchy of problems of greater and greater complexity involving quantum effects in macroscopic objects.
3. Coordinated behaviors of animals and plants
There are many examples of coordinated behaviors that are unexplained or partially explained. This touches my own specialty, evolution of aging. The thesis of my book is that aging is part of an evolved adaptation for ecosystem homeostasis, integrating the life history patterns of many, many species in an expanded version of co-evolution. My thesis is less audacious than the Gaia hypothesis.
Monarch butterflies hibernate on trees in California or Mexico for the winter. In the spring, they migrate and mate and reproduce, migrate and mate and reproduce, 6 or 7 times, dispersing thousands of miles to the north and east. Then, in the fall, the great-great-grand-offspring of the spring Monarchs undertake the entire migration in reverse, and manage to find the same tree where their ancestor of 6 generations back spent the previous winter. [Forest service article]
Zombie crabs have been observed in vast swarms, migrating hundreds of miles across the ocean floor. Red crabs of Christmas Island pursue an overland migration.
Sea turtles from all over the world arrange for a common rendezvous once a year, congregating on beaches in the Caribbean and elsewhere. Their navigation involves geomagnetism, but a larger mystery is how they coordinate their movements.
Monica Gagliano has written about plants’ ability to sense their biological environment and coordinate behaviors on a large scale. This is her more popular book.
4. The Anthropic Coincidences, or the Improbability of Received Physical Laws
For me, this is the mother of all scientific doors, leading to a radically different perspective from the reductionist world-view of post-enlightenment science. Most physicists believe that the laws of physics were imprinted on the universe at the Big Bang, and life took advantage of whatever they happened to be. But since 1973, there has been an awareness, now universally accepted, that the laws of nature are very special, in that they lead to a complex and interesting universe, capable of supporting life. The vast majority of imaginable physical laws give rise to universes that are terminally boring; they quickly go to thermodynamic equilibrium. Without quantum mechanics, of course, there could be no stable atoms, and everything would collapse into black holes in short order. Without a very delicate balance between the strength of electric repulsion and the strong nuclear force, there would be no diversity of elements. If the gravitational force were just a little weaker, there would be no galaxies or stars, nothing in the universe but spread-out gas and dust. If our world had four (or more) dimensions instead of three, there would be no stable orbits, no solar systems, because planets would quickly fly off into space or fall into the star; but a two-dimensional world would not be able to support life because (among other reasons) interconnected networks on a 2D grid are very limited in complexity.
Most scientists don’t take account of this extraordinary fact; they go on as if life were an inevitability, an accident waiting to happen. But those who have thought about the Anthropic Principle fall in two camps:
The majority opinion: There are millions and trillions and gazillions of alternative universes. They all exist. They are all equally “real”. But, of course, there’s no one looking at most of them. It’s no coincidence that our universe is one of the tiny proportion that can support life; the very fact that we are who we are, that we are able to ask this question, implies that we are in one of the extremely lucky universes.
The minority opinion: Life is fundamental, more fundamental than matter. Consciousness is perhaps a physical entity, as Schrödinger thought; or perhaps it exists in a world apart from space-time, as Descartes implied 300 years before Schrödinger; or perhaps there is a Platonic world of “forms” or “ideals” [various translations of Plato’s είδος] that is primary, and that our physical world is a shadow or a concretization of that world. One way or another, it is consciousness that has given rise to physics, and not the other way around.
I prefer the minority view, and not only because it provides greater scope for the imagination [Anne of Green Gables]. Beyond the hubristic disregard of Occam’s razor involved in postulating all those unobservable universes, there are scientific reasons:
Quantum mechanics requires an observer. Nothing is reified until it is observed, and the observer’s probes help determine what it is that is reified. Physicists debate what the “observer” means, but if we assume that it is a physical entity, paradoxes arise regarding the observer’s quantum state; hence the “observer” must be something outside the laws that determine the evolution of quantum probability waves. Cartesian dualism provides a natural home for the “observer”.
Parapsychology experiments provide a great many indications that awareness (and memory) have an existence apart from the physical brain. These include near-death experiences, telepathy, precognition, and clairvoyance.
- a new methylation clock developed with “deep learning” algorithms by an international group from Hong Kong
- the advanced methylation clock developed by Morgan Levine, Len Guarente, and Elysium Health
Aging clocks = algorithms that compute biological age from a set of measurable markers. Why are they interesting to us? And what makes one better than another?
The human lifespan is too long for us to do experiments with anti-aging interventions and then evaluate the results based on whether our subjects live longer. The usefulness of an aging clock is that it allows us to quickly evaluate the effects on aging of an intervention, so we can learn from the experiment and move on to try a variant, or something different.
Many researchers are skeptical about using clock algorithms to evaluate anti-aging interventions. I think they are right to be asking deep questions; I also think that in the end the epigenetic clocks in particular will be vindicated for this application.
It may seem obvious that we want the clock to tell us something about biological aging at the root level. We are entranced by the sophisticated statistical techniques that bioinformaticists use to derive a clock based on hundreds of different omic factors. But all that has to start with a judgment about what’s worth looking at.
Ponder this: The biostatisticians who create these clocks are optimizing them to predict chronological age with higher and higher correlation coefficient r. But if they achieve a perfect score of r=1.00, the clock becomes useless. It cannot be used to tell a 60-year-old with the metabolism of a 70-year-old from another 60-year-old with the metabolism of a 50-year-old, because both will register 60 years on this “perfect” clock.
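The arithmetic of this trap is easy to demonstrate. Below is a toy simulation (all numbers invented) in which a hypothetically “perfect” chronological clock carries no information about biological age, while a noisier, marker-based clock does:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
chron = rng.uniform(40, 80, n)        # chronological ages
offset = rng.normal(0, 5, n)          # individual biological-age offsets
markers = chron + offset + rng.normal(0, 2, n)   # markers track biological age

# Clock A: a "perfect" chronological clock, r = 1.00 against calendar age
clock_A = chron
resid_A = clock_A - chron             # identically zero: no biology left over

# Clock B: an imperfect clock built from the markers
clock_B = markers
resid_B = clock_B - chron             # residual tracks the biological offset

print(np.std(resid_A))                        # 0.0
print(np.corrcoef(resid_B, offset)[0, 1])     # strongly positive (~0.9)
```

The residual of the imperfect clock is exactly the quantity we care about: how much older or younger a person’s metabolism is than their calendar age. The perfect clock has thrown that signal away.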
It’s time to back up and ask what we think aging is and where it comes from, then optimize a clock based on the answer. As different people have different answers, we will have different clocks. And we can’t objectively distinguish which is better. It depends on whose theory we believe.
Straw man: AI trained to impute age from facial photos now has an accuracy of about 3½ years, in the same ballpark with methylation clocks. If we used these algorithms to evaluate anti-aging interventions, we would conclude that the best treatments we have are facelifts and hair dye.
Brass tacks: People with different positions about the root cause of aging all agree that (a) aging manifests as damage, and (b) methylation and demethylation of DNA take place under the body’s tight and explicit site-by-site regulation.
But what is the relationship between the methylation and the damage? There are three possible answers.
1. (from the “programmed” school) Aging is programmed via epigenetics. The body downregulates repair mechanisms as we get older, while upregulating apoptosis and inflammation to such an extent that they are causes of significant damage.
2. (from the “damage” school) The body accumulates damage as we get older. The body tries to rescue itself from the damage by upregulating repair and renewal pathways in response to the damage.
3. (also from the “damage” school) Part of the damage the body suffers is dysregulation of methylation. Methylation changes with age are stochastic. Methylation becomes more random with age.
My belief is that (1), (2), and (3) are all occurring, but that (1) predominates over (2). The “damage” school of aging would contend that (1) is excluded, and there are only (2) and (3).
How can these three types of changes contribute to a clock?
(3) makes a crummy clock, because, by definition, it’s full of noise and varies widely from person to person and from cell to cell. There is no dispute that a substantial portion (~50%) of age-related changes in DNA methylation are stochastic. But these changes are not useful and, in fact, most of the algorithms used to construct methylation clocks tend to exclude type (3) changes. I won’t say anything more about stochastic changes in methylation, but I’ll acknowledge that there is more to be said and refer you to this article if you’re interested in methylation entropy.
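Here is a toy illustration (synthetic data, invented effect sizes) of why regression-based clock construction naturally discards type (3) sites. I use ridge regression as a simple stand-in for the elastic-net methods used to build real methylation clocks; sites that drift randomly with age get near-zero weights, while sites that track age deterministically get large weights:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(20, 80, n)
# Five "programmed" sites: methylation shifts deterministically with age
programmed = 0.01 * age[:, None] + rng.normal(0, 0.01, (n, 5))
# Five "stochastic drift" sites: variance grows with age, mean goes nowhere
drift = rng.normal(0, 1, (n, 5)) * (age[:, None] / 80) * 0.2

X = np.hstack([programmed, drift])
lam = 1.0  # ridge penalty
w = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ age)
print(np.abs(w[:5]).mean(), np.abs(w[5:]).mean())  # programmed sites dominate
```

The drift sites carry age information only in their variance, not in their signed values, so a linear clock cannot use them and the fitting procedure effectively excludes them.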
If you are from the “damage” school, you don’t believe in (1), so this leaves only type (2). If changes in methylation are the body trying to rescue itself, then any intervention that makes the body’s methylation “younger” is actually dialing down protection and repair. You expect that reducing methylation age will actually hasten aging and shorten life expectancy. You have every reason to distrust a clinical trial or lab experiment that uses methylation age as a criterion for success.
White cell count is used as a reliable indication of cancer. As cancer progresses, white cell count increases. The higher a person’s white cell count, the closer he is to death. So let’s build a “cancerclock” based on white blood count, and let’s use it to evaluate anti-cancer interventions. The best intervention is a chemical agent that kills the most white blood cells. It reliably sets back the “cancerclock” to zero and beyond. But we’re puzzled when we find that people who get this intervention die rapidly, even though the cancerclock predicted that they were completely cured. The problem is that white blood cells are a response to cancer, not its cause.
If you are from the “programmed” school, you think that (1) predominates, and that a clock can be designed to prefer type (1) changes to (2) and (3). Then methylation clocks measure something akin to the source of aging, and we can expect that if an intervention reduces methylation age, it is increasing life expectancy.
The fact that methylation clocks trained on chronological age alone (with no input concerning mortality or disease state) turn out to be better predictors of life expectancy than age alone is a powerful validation of methylation technology. But only if you believe (for other reasons) that methylation is an upstream cause of aging. You could expect this from either type (1) or type (2) methylation changes.
I believe that aging is an epigenetic life program, and that methylation is one of several epigenetic mechanisms by which it is implemented. That’s why I have faith in methylation clock technology.
Conversely, people who believe that the root cause of aging is accumulated damage are right to discount evidence from epigenetic clocks as it pertains to the efficacy of particular treatments. As in the cancer example above, treatments that create a younger methylation age can actually be damaging.
The basis for my belief that aging is an epigenetic program is the subject of my two books, and was summarized several years ago in this blog. I first wrote about methylation as a cause of aging in this space in 2013. For here and now, I’ll just add that we have direct evidence for changes of type (1). Inflammatory cytokines are up-regulated with age. Apoptosis is upregulated with age. Antioxidants are downregulated with age. DNA repair enzymes and autophagy enzymes and protein-folding chaperones are all down-regulated with age. All these are changes in gene expression, presumably under epigenetic control.
Which is more basic, the proteome or the methylome?
For reasons I have elaborated often in the past, I adopt a perspective on aging as an epigenetic program. I think of methylation clocks as close to the source, because methylation is a dispersed epigenetic signal. But the proteome is, by definition, the collection of all signals transmitted in blood plasma, including all age signals and transcription factors that help to program epigenetics cell-by-cell. The proteome is generated by transcription of the DNA body-wide, which transcription is controlled by methylation among other epigenetic mechanisms. So one might argue from this that the methylome is further upstream than the proteome. On the other hand, methylation is just one among many epigenetic mechanisms, and the proteome is the net result of all of them. On this basis, I would lean toward a proteomic clock as being a more reliable surrogate for age in clinical experiments, even better than methylation clocks. It is a historic fact, however, that methylation clocks have a 6-year headstart. Methylation testing is entering the mainstream, with a dozen labs offering individual readings of methylation age, priced to attract end-users.
Let’s see if proteomic clocks can catch up. The new technology is based on SOMAscan assays, and so far is marketed to research labs, not individuals or doctors, and it is priced accordingly. The only company providing lab services is SOMAlogic.com of Boulder, CO. “SOMAscan is an aptamer-based proteomics assay capable of measuring 1,305 human protein analytes in serum, plasma, and other biological matrices with high sensitivity and specificity.” [ref] As I understand it, they have a microscope slide with 1,305 tiny dots, each containing a different aptamer attached to a fluorescent dye. An aptamer is like an engineered antibody, optimized by humans to mate to a particular protein. Thus 1,305 different proteins can be measured by applying a sample (in our case, blood plasma) to the slide, chemically processing the slide to remove aptamers that have not found their targets, then photographing the slide and analyzing the readout from the fluorescent dye.
Aptamers are synthetic molecules that can be raised against any kind of target, including toxic or non immunogenic ones. They bind their target with affinity similar or higher than antibodies. They are 10 fold smaller than antibodies and can be chemically-modified at will in a defined and precise way. [NOVAPTech company website]
Curiously, aptamers are not usually proteins but oligonucleotides, cousins of RNA, simply because the chemical engineers who design and optimize these structures have had good success with the RNA backbone. The SOMA in SOMAlogic stands for “Slow Off-rate Modified Aptamers”, meaning that the aptamers have been modified to make them stick tight to their target and resist dissociating.
An internal proteome-methylome clock?
It’s possible that there is a central clock that tells the body “act your age”. I have cited evidence that there is such a clock in the hypothalamus, and that it signals the whole body via secretions [2015, 2017].
Another possibility is a dispersed clock. The body’s cells manufacture proteins based on their epigenetic state, the proteins are dispersed in the blood, some of these are received by other cells and affect the epigenetic state of those cells. This is a feedback loop with a whole-body reach, and it is a good candidate for a clock mechanism in its own right.
I’m interested in the logic and the mathematics of such a clock in the abstract. Any feedback loop can be a time-keeping mechanism. Such a mechanism is: Epigenetics ⇒ Protein secretion ⇒ Transcription factors ⇒ Epigenetics
This is difficult to document experimentally, but it is an attractive hypothesis because it would explain how the body’s age can be coordinated system-wide without a single central authority, which would be subject to evolutionary hijacking, and might be too easily affected by individual metabolism, environment, etc. But the body’s aging clock must be both robust and homeostatic. If it is thrown off by small events, it must return to the appropriate age. So my question—maybe there are readers who would like to explore this with me—is whether it is logically possible to have a timekeeping mechanism that is both homeostatic and progressive, without an external reference by which it can be reset.
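As a first stab at my own question, here is a minimal dynamical sketch (my own toy construction, not drawn from any biology): a fast variable under tight homeostatic control drives a slow accumulator. The loop is progressive, yet a transient insult produces only a small, bounded phase error rather than a runaway:

```python
import numpy as np

def run(perturb=0.0, T=20.0, dt=0.01):
    # x: fast physiological state, held homeostatically near its set point
    # s: slow cumulative variable (the "age" readout), integrating x
    a, k = 10.0, 0.1
    x, s = 1.0, 0.0
    for t in np.arange(0.0, T, dt):
        if perturb and abs(t - 10.0) < dt / 2:
            x += perturb               # one-time insult to physiology
        x += dt * a * (1.0 - x)        # fast homeostatic relaxation
        s += dt * k * x                # slow, steady accumulation
    return s

s0, s1 = run(), run(perturb=5.0)
print(s0)         # ~2.0: the clock advances steadily
print(s1 - s0)    # small (~0.05): a bounded phase shift, not a runaway
```

Note that the phase error, though small, is permanent: the slow variable never re-synchronizes after the insult. A clock that fully resets to the “correct” age would seem to require some external reference, which is exactly the crux of the question.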
Last year, Lehallier and a Stanford-based research group jumpstarted the push toward a proteomic aging clock with this publication [my write-up here]. The same group has a follow-up, published a few weeks ago. The new work steps beyond biologically agnostic statistics to incorporate information about known functions of the proteins that they identified last year. The importance of this is twofold: It suggests targets for anti-aging interventions. And it supports the creation of a clock composed of upstream signals that have been verified to have an effect on aging. I argued in the long Prelude above that this is exactly what we want to know in order to have confidence in an algorithmic clock as a surrogate to evaluate anti-aging interventions.
They work with a database I had not known about before: the Human Ageing Genomic Resources Database. HAGR indexes genes related to aging and summarizes studies that document their functions. Some highlights of the proteins they identified:
Inflammatory pathways are right up there in importance. No surprise here. But if you can use inflammatory epigenetic changes to make an aging clock, you have a solid beginning.
Sex hormones that change with age turn out to be even more prominent in their list. The first several involve FSH and LH. These are hormones connected with women’s ovarian cycles; but after menopause, when they are no longer needed, their levels shoot up, no longer cycling once a month but elevated all the time. Men, too, show increases in LH and FSH with age, though they are more subtle. I first became aware of LH and FSH as bad actors from the writings of Jeff Bowles more than 20 years ago.
“GDF15 is a protein belonging to the transforming growth factor beta superfamily. Under normal conditions, GDF-15 is expressed in low concentrations in most organs and upregulated because of injury of organs such as liver, kidney, heart and lung.” [Wikipedia] “GDF15 deserves a story of its own. The authors identify it as the single most useful protein for their clock, increasing monotonically across the age span. It is described sketchily in Wikipedia as having a role in both inflammation and apoptosis, and it has been identified as a powerful indicator of heart disease. My guess is that it is mostly Type 1, but that it also plays a role in repair. GDF15 is too central a player to be purely an agent of self-destruction.” [from my blog last year]
Insulin is a known modulator of aging (through caloric restriction and diabetes).
Superoxide Dismutase (SOD2) is a ubiquitous antioxidant that decreases with age, leaving the body open to ROS damage.
Motilin is a digestive hormone. Go figure. Until we understand more, my recommendation would be to leave this one out of the aging clock algorithm.
Sclerostin is a hormone that regulates bone growth. It may be related to osteoporosis, and it is well worth including.
RET and PTN are called “proto-oncogenes” and are important for development, but associated with cancer later in life.
Which proteins are most relevant?
The Horvath clocks have been created using “supervised” optimization, which involves human intelligence that oversees the application of sophisticated algorithms. But what happens if you automate the “supervised” part? On the one hand, you must expect mistakes and missed opportunities that you wouldn’t have with human supervision. On the other hand, once you have a machine learning algorithm, you can apply it over and over to different subsets of the data, produce hundreds of different clocks, and choose those that perform best. That’s what Johnson and co-authors have done in the current paper. They describe creating 1565 different clocks based on different subsets of a universe of 529 proteins. In my opinion, their most important work combines biochemical knowledge with statistical algorithms. The work using statistical algorithms alone is much less interesting, for reasons detailed in the Prelude above.
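To make the clock-generating strategy concrete, here is a toy sketch in Python. Everything in it is invented for illustration: synthetic data standing in for the 529 proteins, random subsets of 20, and ordinary least squares in place of whatever machine-learning machinery the authors actually used. The point is only the procedure: generate many candidate clocks, then keep the one that performs best on held-out subjects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 subjects, 529 "proteins", ages 40-90.
n_subjects, n_proteins = 200, 529
X = rng.normal(size=(n_subjects, n_proteins))
age = rng.uniform(40, 90, size=n_subjects)
X[:, :10] += 0.05 * (age[:, None] - 65)   # let 10 proteins weakly track age

train, val = np.arange(150), np.arange(150, 200)

def fit_clock(cols):
    """Least-squares clock on one protein subset; return validation error."""
    A = np.c_[X[train][:, cols], np.ones(len(train))]
    coef, *_ = np.linalg.lstsq(A, age[train], rcond=None)
    preds = np.c_[X[val][:, cols], np.ones(len(val))] @ coef
    return np.mean(np.abs(preds - age[val]))

# Build many clocks from random protein subsets; keep the best one.
subsets = [rng.choice(n_proteins, size=20, replace=False) for _ in range(100)]
errors = [fit_clock(s) for s in subsets]
best = subsets[int(np.argmin(errors))]
print(f"best clock's validation error: {min(errors):.1f} years")
```

The held-out validation set is the essential ingredient: with hundreds of candidate clocks, the winner must be judged on subjects it never saw, or the selection step itself becomes a source of overfitting.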
This new offering from Lehalier and Johnson is a great step forward in that
proteins in the blood give a broader picture of epigenetics than methylation alone
specific proteins are linked to specific interventions that are reliably connected to aging in the right direction. Crucially, the clock is designed to include type (1) epigenetic changes (from the Prelude above) and to exclude type (2)
What remains for the future:
to calibrate the clock not with calendar age but with future mortality. This would require historic blood samples, and it is the basis of the Levine/Horvath PhenoAge clock.
to optimize the clock separately for different age ranges or, equivalently, to use non-linear fitting techniques in constructing the clock algorithm
to commercialize the aptamer technology, so that it is available more widely and more cheaply
Elysium is a New York company advised by Leonard Guarente of MIT and Morgan Levine (formerly Horvath’s student, now at Yale). Their Index clock is an advanced methylation clock available to the public, which they claim is more accurate than any so far. Other clocks are based on a few hundred CpG sites that change most reliably with age, but Index uses 150,000 separate sites (!) which, they claim, offers more stability. The Horvath clocks can be overwhelmed by a single CpG site that is measured badly. (I have personal experience with this.) Elysium claims that variations from one day to the next or one lab slide to the next tend to average out over such a large number of contributions. On the other hand, as a statistician, I have to wonder about deriving 150,000 coefficients from a much smaller number of individuals. The problem is called overfitting, and the risk is that the function doesn’t work well outside the limited data set from which it was derived.
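A toy demonstration of the overfitting worry, with invented numbers (50 subjects, 5,000 sites; nothing to do with Elysium’s actual data): when there are far more coefficients than subjects, a fit can match the training data essentially perfectly even when there is no signal at all, and then predict nothing about new people.

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 subjects, 5,000 methylation "sites", and ages that are pure noise:
# there is deliberately NO real signal in these data.
n, p = 50, 5000
X = rng.normal(size=(n, p))
age = rng.uniform(40, 80, size=n)

# With far more coefficients than subjects, least squares fits perfectly...
coef, *_ = np.linalg.lstsq(X, age, rcond=None)
train_err = np.mean(np.abs(X @ coef - age))

# ...but the resulting "clock" is useless on 50 new subjects.
X_new = rng.normal(size=(n, p))
age_new = rng.uniform(40, 80, size=n)
test_err = np.mean(np.abs(X_new @ coef - age_new))

print(f"training error: {train_err:.4f} yr   test error: {test_err:.1f} yr")
```

This is why the only convincing validation of a 150,000-site clock is its performance on subjects who were not in the training set.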
In connection with the DataBETA project, I have been talking to Tina Hu-Seliger, who is part of the Elysium team that developed Index. I am impressed that they have done some homework that other labs have not done. They compare the same subject in different slides. They store samples and freeze them and compare results to fresh samples. They compare different clocks using saliva and blood.
I wish I could say more but Elysium Index is proprietary. There is a lot I have not been told, and there is more that I know that I have been asked not to reveal. I don’t like this. I wish that all aging research could be open sourced so that researchers could learn from one another’s work.
Two other related papers
DeepMAge is a new methylation clock, published just this month, based on more sophisticated AI algorithms instead of the standard 20th-century statistics used by Horvath and others thus far. Galkin and his (mostly Hong Kong, mostly InSilico) team are able to get impressive accuracy in tracking chronological age. This technology has forensic applications, in which evidence of someone’s calendar age is relevant, independent of senescence. And the technology may someday be the basis for more accurate predictions of individual life expectancy. But, as I have argued above, a good clock for evaluating anti-aging measures must look at more than statistics. Correlation is not the same as causation, and only detailed reference to the biochemistry can give confidence that we have found causation.
Biohorology is a review paper from some of this same InSilico team together with some prominent academics, describing the latest crop of aging clocks. The ms is long and detailed, yet it never addresses the core issue that I raise in the Prelude above, about the need to distinguish upstream causes of aging from downstream responses to damage.
The beginning of the ms contains a gratuitous and outdated dismissal of programmed aging theories.
“Firstly, programmed aging contains an implicit contradiction with observations, since it requires group selection for elderly elimination to be stronger than individual selection for increased lifespan.”
“Secondly, in order for the mechanism to come into place, natural populations should contain a significant fraction of old individuals, which is not observed either (Williams, 1957).”
This statement was the basis not just of Williams’s 1957 theory, but more explicitly of the Medawar theory 5 years earlier. Neither of these eminent scientists could have known that their conjecture about the absence of senescence in the wild would be thoroughly disproven by field studies in the 1990s. The definitive recent work on this subject is [Jones, 2014].
For the purpose of evaluating anti-aging treatments, the ideal biological clock should be created with these two techniques:
It should be trained on historic samples where mortality data is available, rather than current samples where all we know is chronological age, and
Components should be chosen “by hand” to assure all are upstream causes of aging rather than downstream responses to damage. (Type 1 from analysis above.)
What does it mean, and why is it important? Let’s start with signal transduction. This is a word for the body’s chemical computer. The nervous system, of course, constitutes a signal-processing and decision-making engine; and in parallel, there is a chemical computer. The body has molecules that talk to other molecules that talk to other molecules, sending a cascade of ifs and thens down a chain of logic. The way molecules with very complex shapes fit snugly together is the language of the chemical computer. These molecules with intricate shapes are proteins, and they are not formed in 3D. Rather, DNA provides the instructions, which ribosomes (present in every cell) translate into a linear peptide chain of amino acids, chosen from a canonical set of 20. Each peptide chain folds into a protein with a characteristic shape, and it is these shapes that constitute the body’s signaling language. Most age-related diseases can be traced to an excess or a deficiency of these protein signal molecules.
So signal proteins are targets of medical research. Pharmaceutical interventions may modify signal transduction, perhaps by goosing signaling at some juncture, or by siphoning off a particular signal with another chemical designed to fit perfectly into its bumps and hollows. Up until now, there has been a lot of trial and error in the lab, looking for chemicals with complementary shapes. Imagine now that the DeepMind press release is not exaggerating, and they really can reliably predict the shape that a peptide will take once it is folded. Then many months of laboratory experiments can be replaced with many hours of computation. All the trial-and-error work can be done in cyberspace. An inflection point in drug development, if it’s true.
Why it’s a Hard Problem
Computers solve large problems by breaking them down into a great many small ones. But protein folding can’t be solved by looking separately at each segment of the protein molecule. Everything affects everything else, and the optimal shape is a property of the whole. Proteins are typically huge molecules, with hundreds or thousands of amino acids chained together. The peptide bonds allow for free rotation. So the number of shapes you can form with a given chain is truly humongous. The sheer number of possibilities would overwhelm any computer program that tried to deal with the different shapes one at a time.
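The back-of-envelope arithmetic (Levinthal’s classic estimate) can be done in a few lines. The figure of 3 conformations per residue is a conventional rough guess, not a measured number:

```python
# Levinthal-style count: if each residue's backbone can sit in roughly
# 3 distinct local conformations (a conventional rough guess), a chain
# of n residues has about 3^n possible shapes.

def conformations(n_residues, per_residue=3):
    return per_residue ** n_residues

for n in (10, 100, 300):
    digits = len(str(conformations(n)))
    print(f"{n:4d} residues -> roughly 10^{digits - 1} conformations")
```

A 100-residue protein, small by biological standards, already has on the order of 10^47 candidate shapes, which is why one-at-a-time enumeration is hopeless.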
The thing that stabilizes a given shape is hydrogen bonding. Nominally, each hydrogen atom can form only one bond, but every hydrogen is a closet bigamist, and it longs to couple with a nearby nitrogen or (better still) oxygen atom even as it is bound primarily to its LTR partner. Every twist and bend in the molecular chain allows some new opportunities for hydrogen bonding, while removing others. The breakthrough in computing came from 1% inspiration, 99% perspiration (Edison’s recipe). A key input was to map the structure of 170,000 known, natural proteins, and to train the computer to be able to retrodict the known results. Then, when working with a new and unknown shape, the computer makes decisions that are based on its past success.
How does it make the decisions? No one knows. One of the most successful techniques in artificial intelligence uses generic layers of input and output with programmable maps, and the maps are trained to give the right answer in known cases. But the fundamental logic that drives these decisions remains opaque, even to the programmers.
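For readers who want to see what “programmable maps trained on known cases” means mechanically, here is a minimal two-layer network in plain numpy, trained on the textbook XOR example rather than anything to do with proteins. Nothing here resembles AlphaFold’s actual architecture; it only illustrates the training principle, and the layer sizes and learning rate are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(2)

# Known cases: the XOR truth table, a function no single linear map can
# represent, but two stacked "programmable maps" can learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))
b1, b2 = np.zeros(8), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):              # train to reproduce the known answers
    h = sig(X @ W1 + b1)             # map 1: inputs -> hidden features
    out = sig(h @ W2 + b2)           # map 2: hidden features -> answer
    d_out = (out - y) * out * (1 - out)      # backpropagate the error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.3 * h.T @ d_out;  b2 -= 0.3 * d_out.sum(axis=0)
    W1 -= 0.3 * X.T @ d_h;    b1 -= 0.3 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))      # typically converges toward [0, 1, 1, 0]
```

After training, the weights encode the right answers, but nothing in them reads as a human-interpretable rule: that is the opacity referred to above.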
It gets more complicated
Many proteins don’t have a unique folded state. They are in danger of folding the wrong way. So there are proteins called chaperones that help them to get it right. These chaperones don’t explicitly dictate the protein’s final structure, but rather they place the protein in a protected environment. There are 20,000 different proteins needed in the human body, but only a handful of different chaperones.
Factoid: Most inorganic chemical reactions take place on a time scale of billionths of a second. Organic reactions are somewhat slower. But protein folding happens on a human time scale of seconds, or even minutes.
The AI that finds a protein’s ultimate structure must have knowledge of the environment in which the protein folds. It is not merely computing something intrinsic to the sequence of amino acids that makes up the nascent protein. To underscore this problem, proteins fold incorrectly almost as often as they fold correctly. There is an army of caretaker proteins that inspect and correct already-folded proteins. Misfolded proteins tend to clump together and there are chemicals specialized in pulling them apart. For the lost causes, there are proteasomes, which break the peptide bonds and recycle a damaged protein into constituent parts. The name ubiquitin derives from the fact that these protein recyclers are found in every part of every cell.
The question arises, how do these caretaker proteins know what is the correct shape and what is a misfolded shape? Remember that the number of chaperones and caretakers is vastly smaller than the number of proteins that they attend to, so they cannot contain detailed information about the proper conformation of each protein they service. And this leads to a deep question for AI: It’s hard enough to know how a particular protein chain will fold into a conformation that is thermodynamically optimized. But the conformation optimized for least energy may or may not be the one that is useful to the body.
Prions are mysterious
In the late 1970s, a young neurologist named Stanley Prusiner began to suspect that misfolded proteins could be infectious agents. He coined the term prion for a misfolded protein that could cause other proteins to misfold. This idea defied prevailing ideas about how pathogens replicate, and in particular ran afoul of Francis Crick’s Central Dogma of Molecular Biology, which said that information was always stored in DNA and transferred downstream to proteins.
The evolutionary provenance of prions remains a mystery, but it is now well-established that certain misfolded proteins can cause a chain reaction of misfolding. The process is as mysterious as it is frightening. Neil Ferguson, who has become infamous this year for his apocalyptic COVID contagion models, frightened the UK in an earlier episode into slaughtering and incinerating more than 6 million cows and sheep, in a classic example of panic leading to overkill.
Prusiner had to wait less than 20 years before the medical community acceded to his heresy. He was awarded the Nobel Prize in 1997.
Example and Teaser
This example is from a review I am preparing for this space next week. I am reading two recent papers about proteins in the blood that change as we age. Assuming that these signals are drivers of aging, what can be done to enhance the action of those that we lose, or suppress the action of those that increase with age? The connection to the present column is that knowledge of protein folding can be used to engineer proteins that redirect the body’s chemical signal transduction at a given intervention point. For example, FSH (follicle-stimulating hormone) is needed for just a few days of a woman’s menstrual cycle, but FSH levels rise late in life, with disastrous consequences for health. FSH shoots up in female menopause, and in males it rises more gradually.
FSH drives the imbalance in blood lipids associated with heart disease and stroke. In lab rodents, FSH can be blocked with an antibody, or by genetic engineering, with consequent benefits for cardiovascular health [ref] and protection against loss of bone mass [ref]. The therapy also reduces body fat: “Here, we report that this antibody sharply reduces adipose tissue in wild-type mice, phenocopying genetic haploinsufficiency for the Fsh receptor gene Fshr. The antibody also causes profound beiging*, increases cellular mitochondrial density, activates brown adipose tissue and enhances thermogenesis.” [ref] In the near future, we may be able to use computer-assisted protein design to create a protein that blocks the FSH receptor and do safely in humans what was done with genetic engineering in mice. _______________ *Beiging is turning white adipose tissue to brown. Briefly, the white fat cells are permanent and cause diabetes, while the brown are burned for fuel.
An Israeli study came out last week that has been described as rejuvenation via hyperbaric oxygen. I’m not taking it very seriously, and I owe you an explanation why.
The main claim is telomere lengthening. I used to think of telomeres as the primary means by which aging is programmed, but since the Danish telomere study [Rode 2015], I think that telomeres play a minor role.
I think that methylation age is a far better surrogate than telomere length. The study doesn’t mention methylation age, but reading between the lines…
I think the study’s results can be explained by elimination of senescent white blood cells. This might explain the observed increase in average telomere length, even without expression of telomerase.
Are there signs of senolytic benefits in other tissues? That’s the big question going forward.
A study was published in Aging (Albany) last week claiming to lengthen telomeres and eliminate senescent cells in a test group of 20 middle-aged adults using intermittent hyperbaric oxygen treatment. It was promoted as age reversal in popular articles [for example], apparently with the encouragement of Tel Aviv University.
Telomeres as a surrogate marker for aging
Several years ago, I was enthusiastic about the use of telomere length as a measure of biological age. Telomeres shorten progressively with age, and I thought telomere attrition was a good candidate for a mechanism of programmed aging. But when the Rode study came out of Copenhagen (2015), I saw that the scatter in telomere length was too large for this idea to be credible.
I came to think that telomere shrinkage plays a minor role in aging. Around the same time, I became enthusiastic about methylation clocks. Methylation changes correlate with age far more strongly, with less scatter.
The air we breathe is only 21% oxygen. Breathing pure oxygen, five times as concentrated as in air, is a temporary therapy (hours at a time, but not days) for people who have impaired lungs. But prolonged exposure to pure O2 can injure the lungs and other tissues as well. Oxygen is highly reactive, and the body’s antioxidant system is gauged to the environments in which we evolved, so oxygen therapy is not to be taken lightly.
Hyperbaric Oxygen Therapy (HBOT) is oxygen at double full strength. The patient breathes pure oxygen at twice atmospheric pressure. If you just put a tube in your mouth with that much pressure, you wouldn’t be able to hold it, or to exhale. But the body can withstand high pressures as long as it’s all around, not just inside the lungs. If you SCUBA dive, at 30 feet below the surface the ambient pressure is two atmospheres, and SCUBA tanks adjust to feed air into your mouth at a pressure that is matched to the surrounding water.
(Incidentally, pressure varies a lot with altitude, so that in Denver it’s 20% lower than New York. Two years ago, I trekked in the Himalayas at 17,000 feet, where the air pressure is only half the standard (sea level) value, and of course there is only half as much oxygen.)
HBOT requires higher ambient pressure, not just higher pressure in the oxygen tank. The patient has to be enclosed in a chamber where the ambient pressure is twice atmospheric pressure. Pure oxygen is expensive enough that the ambient air is just normal air at high pressure, and the patient is given oxygen to breathe from a tank. The patient can be in a pressurized room or lying in a personalized chamber.
HBOT has been around for a century, and standard medical uses are for detoxification, gangrene, and chronic infections. More recently, HBOT has been used with success for traumatic injury, especially nerve damage. There are studies in mice in which HBOT in combination with a ketogenic diet has successfully treated cancer.
In the new Israeli study, subjects received 90 minutes of HBOT therapy 5 days a week for 12 weeks. For 5 minutes of every 20, patients breathed ordinary 21% air. The intermittent treatment was described as inducing some hypoxia adaptations. Apparently, the body adjusts to the high oxygen environment, and then it senses (relative) oxygen deprivation for those 5 minutes.
How does it work?
There is no accepted theory for how HBOT works, so I feel free to speculate. The primary role of a highly oxidative environment is to destroy. That’s probably how HBOT treats infections, since bacteria are generally more vulnerable to oxidative damage than cells of our bodies. Another thing that HBOT does well is to eliminate necrotic tissue, and I wouldn’t be surprised if it turns out to be an effective cancer treatment, since tumor cells thrive in an anaerobic environment. But the body also uses ROS (reactive oxygen species) such as H2O2 as distress signals that dial up chemical protection and repair. This is akin to hormesis, and I’m inclined to think that when HBOT promotes nerve growth, it is via a distress signal.
Authors of the new study make two claims: that telomeres are lengthened in several classes of white blood cells, and that senescent white blood cells are eliminated. Let’s take them in reverse order.
Elimination of senescent cells has been a promising anti-aging therapy since the pioneering work of van Deursen at the Mayo Clinic. A quick refresher: telomeres get shorter each time cells replicate, and in our bodies, some of the cells that replicate most (stem cells and their offspring) develop short telomeres late in life that threaten their viability. Cells with short telomeres go into a state of senescence, in which they send out signals (inflammatory cytokines) that increase levels of inflammation in the body and can also induce senescence in adjacent cells, in a chain reaction. Senescent cells are a tiny proportion of all cells in the body, and van Deursen showed that the body is better off without them. Just by selectively killing senescent cells in a mouse model, he was able to extend their lifespan by about 25%. But to do the experiment, he had to genetically engineer the mice in such a way that the senescent cells would be easy to kill selectively. Ever since this study, the research community has been looking for effective senolytic agents that could kill senescent cells and leave regular cells alone (without having to genetically engineer us ahead of time).
The new Israeli study demonstrates that senescent white blood cells have been reduced. (Red blood cells have no chromosomes, so they can’t have short telomeres and can’t become senescent in the same way. They just wear out after a few months.) The effect continued after the 60 hyperbaric sessions were over, suggesting that HBOT kills the cells slowly, or damages them so that they die later. Apparently, the reduction was measured by separating different cell types and counting them. There was a great deal of scatter from one patient to the next.
The first claim is that average telomere length was increased in some populations of white cell sub-types. Again, there was a great deal of scatter in the data, with telomere length decreasing in some subjects and increasing in others. For example, when they say that B cell telomeres increased by 22% ± 40%, I interpret that to mean that the mean telomere length increased by 22%, but the combined standard deviation from the before and after measurements was 40% of the original length. Hence, a great deal of scatter.
Aside about statistics (With apologies — this from my geeky side)
First, what does that mean, 22% ± 40%? How can that be statistically significant? Answer: The standard deviation of a set of measurements is a measure of the scatter. It tells you how broadly they differ from one another. If you’re looking for the average of that distribution, you can be pretty sure that the average isn’t out at the edges, so the uncertainty in the average is a lot smaller than the standard deviation. How much smaller? The answer is the square root of N rule. The “standard error of the mean”, or SEM, is the standard deviation divided by the square root of the number of points, or √N. So the 40% standard deviation gets divided by the square root of the number of subjects in the study, √26 ≈ 5.1, and “22% ± 40%” should really be reported as 22% ± 8%. The mean is 22% and the uncertainty in that 22% is 8%.
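The arithmetic in the paragraph above, spelled out:

```python
from math import sqrt

# The square-root-of-N rule: a 40% standard deviation across 26 subjects
# gives an uncertainty on the mean of 40 / sqrt(26), i.e. about 8%.
sd_percent, n_subjects = 40.0, 26
sem = sd_percent / sqrt(n_subjects)
print(f"SEM = {sem:.1f}%")
```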
The way this group did the statistics was based on:
Finding the average telomere length among 26 subjects after the study
Dividing by the average telomere length among 26 subjects before the study
First they average, then they divide.
But it’s well-known (to statisticians) that the most sensitive test is to reverse the operations. First divide, then average. In other words, compare each subject’s telomeres after the study with the same subject before the study. If you do the statistics this way, then the original scatter among the different subjects all cancels out. You can start with subjects of vastly different telomere lengths, and it doesn’t matter to the statistics, so long as each one of them changes in a consistent way.
If you average first (before dividing), the scatter among the initial group imposes a penalty in statistical significance, even though that has nothing to do with effectiveness of the treatment.
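A quick simulation (invented numbers, not the study’s data) shows why the paired version is so much more sensitive. Give 26 imaginary subjects wildly different baseline telomere lengths, let each one increase by a consistent 5%, and compare the two ways of doing the arithmetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# 26 invented subjects with wildly different baseline telomere lengths...
before = rng.uniform(4.0, 12.0, size=26)          # kilobases, say
# ...each increasing by a consistent 5%, plus a little measurement noise.
after = before * 1.05 + rng.normal(0.0, 0.1, size=26)

# Average first, then divide: between-subject scatter stays in the picture.
group_ratio = after.mean() / before.mean()
group_scatter = before.std(ddof=1) / before.mean()

# Divide first, then average: each subject is their own control, and the
# baseline scatter cancels out almost completely.
ratios = after / before
paired_scatter = ratios.std(ddof=1) / ratios.mean()

print(f"group ratio {group_ratio:.3f}, baseline scatter {group_scatter:.0%}")
print(f"paired ratio {ratios.mean():.3f}, residual scatter {paired_scatter:.1%}")
```

The consistent 5% change is recovered either way, but the scatter against which it must be judged is dozens of times smaller in the paired version.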
So this raises the question: Why did the authors do the statistics this less-sensitive way? They hint at an answer: “repeated measures analysis shows a non-significant trend (F=4.663, p=0.06)”. They seem to be saying that the test which normally gives a better p value in this case gives a worse p value.
That can only happen if the people who had the longest telomeres at the end of the study were not the same as the people who had the longest telomeres at the beginning.
Here’s what I think is really going on
Telomerase is the enzyme that increases telomere length. We think of telomerase as anti-aging, and supplements such as astragalus and gotu kola and silymarin are gobbled up for their telomerase activation potential. When we think of longer telomeres as a result of a study, we imagine that telomerase has been activated.
But in this case, I think that the average has gone up simply because the cells with short telomeres have been killed off. The authors are telling us that there are fewer senescent cells as a result of the treatment. Senescent cells are the ones with the shortest telomeres. At the beginning, the average telomere length is an average of a wide range of cells with long and short telomeres. At the end, you have the same long telomeres in the average, but the shortest ones are gone, so the average has increased.
I’m suggesting that telomerase has not been activated. There has been no elongation of telomeres, but the average length has increased because cells with the shortest telomeres have been eliminated.
It’s only a hypothesis, but it might help explain why the people who had the longest average telomere length at the beginning were not the same as the people who had the longest average telomere length at the end. The senescent cells that were being eliminated had no relationship to the telomere length in other cells.
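The hypothesis is easy to simulate with made-up numbers: kill off the cells with the shortest telomeres, and the average rises even though no telomere got longer.

```python
import numpy as np

rng = np.random.default_rng(4)

# A hypothetical white-cell population: widely scattered telomere
# lengths, with no telomerase and no elongation anywhere.
telomeres = rng.normal(7.0, 2.0, size=100_000)    # kilobases

# "Senolysis": remove the 10% of cells with the shortest telomeres.
cutoff = np.quantile(telomeres, 0.10)
survivors = telomeres[telomeres > cutoff]

print(f"mean before: {telomeres.mean():.2f} kb")
print(f"mean after:  {survivors.mean():.2f} kb")  # higher, with zero elongation
```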
One thing I’d like to know is whether the HBOT treatment affected methylation age by any of the Horvath clocks. I’ve written to the authors with this question, and haven’t received a response. Maybe they did the methylation testing and didn’t report the results because they were negative—just a guess.
But even without reprogramming methylation, the therapy can be valuable if it is eliminating senescent cells generally, and not just in white blood cells. An easy first test would be whether inflammatory cytokines in the blood decreased after the treatment. Confirmation would come from the kind of test van Deursen did, assaying senescent cells in different tissues.
If hyperbaric oxygen can be shown to decrease methylation age, that would be a promising finding. If not, but the treatment has general senolytic effects (not just in white blood cells), it may yet have value as an anti-aging treatment. Maybe the authors already know the answers to these questions; if not, they should be busy finding out.