Unthinkable Thoughts

This essay is inspired by Dr Mercola’s announcement last week that (reading between the lines) his life and his family’s have been threatened if he doesn’t remove from his web site a peer-reviewed study demonstrating the benefits of vitamin D and zinc in prevention of the worst COVID outcomes. In the present Orwellian era, where propaganda and deception are ubiquitous, one of the signposts of truth that I have learned to respect is that the most important truths are the most heavily censored.


This is not what I enjoy writing about, but as I find dark thoughts creeping into my consciousness, perhaps it is better to put them on paper with supporting logic and invite my readers to help me clarify the reasoning and, perhaps, to point a way out of the darkness.

Already in January, 2020, two ideas about COVID were emerging. One is that there were people and institutions who seemed to have anticipated the event, and were planning for it for a long time. Gates, Fauci, the World Economic Forum, and Johns Hopkins School of Medicine were among the prescient. (I credit the (now deleted) videos of Spiro Skouras.) Second was the genetic evidence suggesting that COVID had a laboratory origin. Funders of the scientific establishment have lost their bid to ridicule this idea, and it has now leaked into the mainstream, where it is fused with the classical yellow peril propaganda: “China did it!”. I have cited evidence that America is likely equally culpable.

The confluence of these two themes suggests the dark logic that I take for my topic today: Those who knew in advance, not only that there would be a pandemic but that it would be a Coronavirus, were actually responsible for engineering this pandemic.

Immediately, I think: How could people capable of such sociopathic enormities be occupying the most powerful circles of the world’s elite? And what would be their motivation? I don’t have answers to these questions, and I will leave speculation to others. But there’s one attractive answer that I find less compelling: that it’s a money-maker for the large and criminal pharmaceutical industry. The new mRNA vaccines are already the most profitable drugs in history, but I think that shutdown of world economies, assassinations of world leaders, deep corruption of science, and full-spectrum control of the mainstream narrative imply a larger power base than can plausibly be commanded by the pharma industry.

Instead, I’ll try to follow the scientific and medical implications of the hypothesis that COVID is a bioweapon.

The Spike Protein

The spike protein is the part of the virus structure that interfaces with the host cell. SARS 1 and SARS 2 viruses both have spike proteins that bind to a human cell receptor called ACE-2, common in lung cells but also present in other parts of the body. Binding to the cell’s ACE-2 receptor is like the wolf knocking at the door of Little Red Riding Hood’s grandmother. “Hello, grandmama. I’m your granddaughter. Please let me in.” The virus is a wolf wearing a red cape and hood; it pretends to be an ACE-2 enzyme molecule seeking entrance to the cell.

In order to enter the cell, the virus must break off from the spike protein and leave it at the doorstep, so to speak. This is an important and difficult step, as it turns out. Unique to the SARS-CoV-2 virus is a trick for making the separation. Just at the edge of the protein is a furin cleavage site. Furin is an enzyme that snips protein molecules, and it is common in our bodies, with legitimate metabolic uses. A furin cleavage site is a string of 4 particular amino acids that calls to furin, “hey — come over here. I’m a protein that needs snipping.”
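
For readers who like to see the mechanics: a furin cleavage site can be spotted in a protein sequence with a simple pattern search. The minimal furin consensus is usually written R-X-X-R (with a preference for R-X-[K/R]-R). Here is a quick Python sketch; the spike fragment below is meant only to illustrate the S1/S2 junction region, not to serve as a reference sequence.

```python
import re

# Minimal furin consensus: Arg-X-X-Arg (preferred form Arg-X-[Lys/Arg]-Arg)
FURIN_MOTIF = re.compile(r"R..R")

# Illustrative fragment around the SARS-CoV-2 S1/S2 junction;
# treat the exact residues as an example, not a reference sequence.
spike_fragment = "TNSPRRARSVASQSII"

for m in FURIN_MOTIF.finditer(spike_fragment):
    print(f"candidate furin site {m.group()!r} at position {m.start()}")
# -> candidate furin site 'RRAR' at position 4
```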

The most compelling evidence for a laboratory origin of COVID is that coronaviruses don’t have furin cleavage sites, and until last year, this trick had never evolved naturally.

How we think about natural disease

The classical understanding of a viral or bacterial disease is this: A parasite is an organism that uses the host’s resources for its own reproduction. It is evolved to reproduce efficiently. If it has co-evolved with the host, it may be evolved to spare the host’s health, or even to promote it, because this is the optimal long-term strategy for any predator or parasite. But newly-emerged parasites can do well for a while even if they disable or kill their hosts, and this is the kind of disease that is most damaging to us. The damage is done because the (young) virus’s strategy is to reproduce rapidly and disperse itself into the environment where it can find new hosts. The virus has no interest in harming the host, and was not evolved to this end, but this is a side-effect of commandeering the body’s resources for its own reproduction.

How engineered diseases can be different

A bioweapon virus is designed to cause a certain kind of harm.

  • What kind of harm? It depends on the projected use for the weapon.
  • Doesn’t the virus have to reproduce? Probably, for most weapon applications; but a bioweapon is not necessarily designed for rapid reproduction. A bioweapon can be designed as a “sleeper” to remain dormant for months or years, or to cause incremental disability over a long period.

If COVID had evolved naturally, we would expect that its spike protein would be adapted to mate well with the human ACE-2 receptor. There’s no reason to suspect that it would be otherwise biologically active. But if COVID is engineered, it may be that the spike protein itself has been designed to make us sick.

One reason this is significant is that the vaccines have all been designed around the spike protein, assuming that the spike protein is metabolically neutral. If the virus had evolved naturally, that would be a reasonable assumption. But if it came from a laboratory (whether it leaked or was deliberately released), the spike protein might actually be the agent of damage. There are several reasons to suspect that this is the case.

The Spike Protein as an Active Pathogen

Back in February, 2020, this article noted that the spike protein was not perfectly optimized to bind to human ACE-2 and put this forward as a proof that “SARS-CoV-2 is not a purposefully manipulated virus.” But if someone were designing the virus to cause harm, the spike protein would be a convenient locus for the damage vector, so the spike might have been designed with twin purposes in mind, binding and toxicity. The spike protein appears in many copies around the “crown” of the coronavirus. Since each copy has a furin cleavage site at its base, many spike proteins will break off into the bloodstream. We now have several reports and hypotheses concerning the spike protein as an active agent of damage. The spike protein is suspected of causing blood clots, of inducing long-lasting neurological damage, and of causing infertility. Many anecdotes describe injuries to un-vaccinated people who have been in close proximity to vaccinated, prompting speculation about “shedding” the spike protein.

“Individuals with COVID-19 experience a vast number of neurological symptoms, such as headaches, ataxia, impaired consciousness, hallucinations, stroke and cerebral hemorrhage. But autopsy studies have yet to find clear evidence of destructive viral invasion into patients’ brains, pushing researchers to consider alternative explanations of how SARS-CoV-2 causes neurological symptoms….


If not viral infection, what else could be causing injury to distant organs associated with COVID-19? The most likely culprit that has been identified is the COVID-19 spike protein released from the outer shell of the virus into circulation. Research cited below* has documented that the viral spike protein is able to initiate a cascade of events that triggers damage to distant organs in COVID-19 patients.

Worryingly, several studies have found that the spike proteins alone have the capacity to cause widespread injury throughout the body, without any evidence of virus.


What makes this finding so disturbing is that the COVID-19 mRNA vaccines manufactured by Moderna and Pfizer and currently being administered throughout the U.S. program our cells to manufacture this same coronavirus spike protein as a way to trigger our bodies to produce antibodies to the virus.” [Global Research article, Feb 2021]

Note: the AstraZeneca and J&J vaccines are also based on the spike protein, and cause the spike protein to be created in the vaccinated person.

* “Research cited below” refers to this study in Nature which reports that the spike protein, injected into mice, crosses into the brain, where it causes neurological damage.

Bigger news came just this week from a study in which researchers from California’s Salk Institute collaborated with Chinese virologists. They have found that the bare spike protein without the virus (injected in mice) can cause damaged arteries of the kind that lead to heart disease and strokes in humans. The original paper was published in Circulation Research, and the Salk Institute issued a news report describing the research.

One of the most credible dangers of the spike protein involves fertility. None of the vaccines were tested in pregnant women, and yet many governments and other authorities are recommending them as safe for pregnant women. VAERS has recorded 174 miscarriages to date after COVID vaccination, and VAERS is notoriously underreported. I find the anecdotes less concerning than the fact that no one is taking this seriously, and research is being actively discouraged in the best-respected science journals.

There is a credible mechanism, in that the spike protein is partially homologous to syncytin. Syncytin, in fact, was originally a retroviral protein, inserted into the mammalian genome many aeons ago, and evolved over the ages to play an essential role in reproduction, binding the placenta to the fetus. An immune response that attacks syncytin might be expected to impose a danger of spontaneous abortion. In ordinary times, this would be a subject that medical researchers would jump on, with animal tests and field surveys to assess the danger. But these are no ordinary times, and the risk is being dismissed on theoretical grounds without investigation. This is especially suspicious in the context of history: a vaccination program promoted by the Gates Foundation in 1995 allegedly caused infertility in young women. (Yes, I know there are many fact-checkers eager to “debunk” this story, but I don’t find them convincing, and some of these fact-checkers are compromised by Gates funding.)

Even doing what the spike protein is supposed to do — tying up ACE2 — can be a problem for our lungs and arteries, which are routinely protected by ACE2.

The most dangerous possibility, suspected but not verified, is that the spike protein causes a prion cascade. Prions are paradoxical pathogens, in that they are misfolded proteins that cause misfolded proteins. Their evolutionary etiology is utterly mysterious, so much so that it took a decade after Stanley Prusiner described the biology of prions before the scientific community would take prion biochemistry seriously. But prions make potent bioweapons, which laboratories can design outside of natural evolutionary dynamics. The possibility of prion-like structures in the spike protein was noted very early in the pandemic based on a computational study. This recent review combines theoretical, laboratory, and observational evidence to make a case for caution. Once again, I find it disturbing that this possibility is being dismissed on theoretical grounds rather than investigated in the lab and the field.

Where did the idea come from that all vaccines are automatically safe? Why do so many journalists dismiss the suggestion that vaccines should be placebo-tested individually, like all other drugs? Why has it become routine to ridicule and denigrate scientists who ask questions about vaccine safety as politically-motivated luddites, or “anti-vaxxers”? How did we get to a situation where the “precautionary principle” means pressuring young people who are at almost no risk for serious COVID to accept a vaccine which has not been fully tested or approved? I don’t have answers, but I do know who benefits from this culture.

Putting together all the evidence

  • Knowledge beforehand
  • Suppression of treatments and cures
  • Toxicity of the spike protein which, if it had been made by nature, should have been benign
  • Inclusion of the toxic spike protein in the vaccines that are supposed to protect us
  • Heavy promotion of these scantily-tested vaccines, and
  • Censorship of scientists and doctors who question the vaccines’ safety

… putting together all this evidence, it is difficult to escape the inference that powerful people and organizations have engineered this pandemic with deadly intent.

Weight and Aging: a Paradox, Part 2

The paradox: In animal models there is a consistent relationship between eating less and living longer. But studies in humans find that people who are a little overweight live longest.

Last week, I introduced this paradox and offered evidence, both that lab animals live longer when they are underfed, and that humans live longer when they are overfed. In the article below, I introduce nuances and confounding factors, but in my opinion, the paradox remains unresolved.

BMI

BMI is an imperfect measure of how fat or thin someone is for his height. That’s because it is calculated with the square of height, but body volume (for a given shape) is proportional to the cube of height. The result is that tall people will have a higher BMI than shorter people with equivalent proportions of body fat. For example, BMI=20 for a person 5 feet tall means a weight of 102 pounds, an average weight for that height; whereas BMI=20 for a person 6 feet tall means a weight of 147, which is borderline emaciated.
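
To make the scaling argument concrete, here is a quick sketch using the standard BMI formula (the pound and foot conversions are the usual constants):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight divided by the *square* of height."""
    return weight_kg / height_m ** 2

LB_PER_KG, M_PER_FT = 2.2046, 0.3048

# Two people at BMI 20, five feet and six feet tall:
for feet, pounds in [(5, 102), (6, 147)]:
    h_m = feet * M_PER_FT
    w_kg = pounds / LB_PER_KG
    print(f"{feet} ft, {pounds} lb -> BMI {bmi(w_kg, h_m):.1f}")

# But weight at fixed body proportions scales with the *cube* of height,
# so a six-footer built like the 102-lb five-footer actually weighs:
print(f"{102 * (6 / 5) ** 3:.0f} lb")   # -> 176 lb, which is BMI ~24
```

The last line is the whole point: identical proportions give the taller person a BMI of about 24, four points higher, purely as an artifact of the formula.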

Short people tend to live significantly longer than tall people, and the effect is substantial.  Males under 5’7” live 7½  years longer than males over 6’ [ref]. This fits with the fact that short people tend to have less growth hormone in their youth. There is a genetic variant in parts of Ecuador that prevents growth hormone from transforming to IGF1 (Laron dwarfism); these people are generally about 4 feet tall and tend to live longer. From domesticated animals, we also know that small dogs live longer than large dogs, small horses longer than large horses. Between species, larger animals live longer, but within a single species, smaller animals live longer.

The height association deepens the weight paradox, because short people will tend to have a lower BMI, which we would expect to skew the association of BMI with longevity downward.

Growth Hormone and IGF1

Growth hormone (which is translated into IGF1 in the body) is genetically associated with shorter lifespan, but we have more of it when we’re young and it promotes a body type with more muscle, less fat. According to this Japanese study, IGF1 increases with weight for people who are thin, but decreases with weight for people who are fat. So maximum longevity is close to maximum IGF1.

Here are some partial explanations for the paradox.

Most variation in weight is explained by genetics, not food intake. The explanation I have proposed in the past is that the CR effect is about food intake, not genetics. And people who are congenitally stout are more likely to be restricting their calories. CR humans are not necessarily especially thin.

The CR effect is proportionately smaller in long-lived humans than in short-lived rodents or shorter-lived worms and flies. [ref] If life extension via CR evolved to help an animal survive a famine, then it seems reasonable that the benefit should be limited to a few years, because that is as long as most famines in nature are likely to last.

The CR effect may be due to intermittent fasting rather than total calorie intake. Traditional CR experiments conflate intermittent fasting with overall calorie reduction, because food is provided in a single daily feeding, and hungry rodents gobble it up, then go hungry for almost 24 hours. More recent experiments attempt to separate the effect of limited-time eating from the effect of calorie reduction, and the general conclusion is that both benefit longevity. It may be that humans who are skinny tend to graze all day, while people with a comfortable amount of fat more easily go for hours at a time without eating. 

Mice carry less fat, have less food craving, and have better gut microbiota if they are fed at night rather than during the day [ref]. Mice are active nocturnally; so translating to humans, it probably means that we should eat in the morning. Conventional wisdom is that eating earlier in the day is better for weight loss and health [ref], but I know of no human data on mortality or life span. This classic study in mice [1986] found caloric restriction itself was the only thing affecting lifespan, and there was no difference whether the mice were fed night or day, in three feedings or one.

Smokers tend to be thinner than non-smokers, but they die younger, for reasons that have to do with smoking, not weight. So this is a partial explanation of why heavier BMI might be associated with longer lifespan. But note that Zheng’s recent Ohio State study claimed there was no change in the best weight for longevity when a correction was introduced for smoking.

Cachexia is a “wasting” disorder that causes extreme weight loss and muscle atrophy, and can include loss of body fat. This syndrome affects people who are in the late stages of serious diseases like cancer, HIV or AIDS, COPD, kidney disease, and congestive heart failure (CHF). [healthline.com] If cachexia subjects are not removed from a sample, they can strongly bias the statistics against low weight, because once cachexia sets in, life expectancy is very short. But the Zheng study was based on Framingham data, collected annually over the latter half of a lifetime; cachexia is not expected to be a significant factor.

Timing artifact – The Framingham study covers a 74-year period in which BMI is increasing and also lifespan is increasing, probably for different reasons. The younger Framingham cohort is living ~4 years longer than the older cohort and is ½ BMI point heavier. This could create an illusion that higher BMI is causing greater longevity. However, the Ohio State study made some effort to pull this factor out. Greater lifespan is associated with gradually increasing BMI, and this is true separately in both cohorts.

Differential effects on CVD and Cancer – This chart (from Zheng) shows how the mortality burden of cardiovascular disease has decreased over the last century, but not so cancer.

But CV disease risk increases consistently with BMI, while cancer risk, not so much (also from Zheng):

These numbers in parentheses are hazard ratios from a Cox proportional hazard model. What they mean is that a person in the Lower-Normal weight group had 20% less chance of getting heart disease than someone of the same age in the Normal-Upward group, but a 60% greater chance of getting cancer. These appear to be large, concerning numbers. But remember that the underlying probabilities are all increasing exponentially with age. Translated into years of lost life, 60% greater probability of cancer is only 1 year of life expectancy at age 50. (60% greater overall mortality would subtract 4½ years from life expectancy.) In my experience, hazard ratios in the range 0.7 to 1.5 don’t necessarily mean anything, because of the difficulties in interpreting the data. The numbers in parentheses after 1.60 in the above table (1.12, 2.30) mean that statistical uncertainty alone spans a range from 1.12 to 2.30. There are plenty of large effects with hazard ratios of 3 or more. For comparison, the hazard ratio for pack-a-day smokers getting lung cancer is 27.
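
To see why a hazard ratio of 1.6 costs surprisingly few years, here is a back-of-envelope sketch assuming Gompertz mortality, i.e., a death rate that doubles roughly every 8 years. The baseline level and doubling time below are illustrative round numbers, not values fitted to Framingham.

```python
import numpy as np

# Gompertz hazard h(t) = a * exp(b*t), with t = years past age 50.
# a = rough annual mortality at 50; b from an ~8-year doubling time.
a, b = 0.005, np.log(2) / 8

def remaining_life_expectancy(hazard_ratio, horizon=70, dt=0.01):
    """Expected years left at 50 when the whole hazard curve is multiplied."""
    t = np.arange(0, horizon, dt)
    cum_hazard = hazard_ratio * (a / b) * (np.exp(b * t) - 1)  # integral of h
    survival = np.exp(-cum_hazard)
    return survival.sum() * dt  # integral of the survival curve

for hr in [1.0, 1.6]:
    print(f"HR {hr}: {remaining_life_expectancy(hr):.1f} years left at age 50")
# The printed gap is roughly five years for an all-cause HR of 1.6;
# a 1.6 ratio on cancer alone (a fraction of total deaths) costs far less.
```

Because mortality grows exponentially, multiplying the hazard by 1.6 merely shifts the whole curve earlier by ln(1.6)/b ≈ 5 years; applied to one cause of death among many, the shift shrinks to about a year.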

Zheng’s study found a longevity disadvantage to being underweight, and it was exclusively due to a higher cancer risk. In fact, incidence of cardiovascular disease among the lowest BMI class was lowest (0.8); but their cancer risk more than made up for it (1.6). 

This means that as time goes on and most Americans are getting heavier, their risk of dying from CVD is blunted by improved technology. The mortality risk from CVD is down by 40% in this century [NEJM], while the cancer risk is unchanged [CDC]. So people are dying of cancer who would have died of CVD in previous generations. 

This means that low BMI has less benefit for longevity than it used to have, and the trend over time tends to exaggerate the appearance that higher weight is protective against all-cause mortality.

Is it true that cancer risk does not go up with BMI?

The Framingham result is puzzling and difficult to reconcile with a well-established relationship between higher BMI and higher cancer risk. This review by Wolin [2010] finds a modest increase in risks of all common types of cancer associated with each 5-point gain in BMI. (The RR numbers are comparable to hazard ratios above.)

Lung cancer is the big exception, and Wolin explains the inverse relationship with BMI by the fact that people smoke to avoid gaining weight. This would suggest a resolution to the conflict with Zheng’s study, but for the fact that Zheng explicitly corrects for smoking status and finds it makes no difference at all — a result which is puzzling in itself.

Alzheimer’s Disease is the third leading cause of death, and the corresponding story is more complicated. Lower weight in middle age seems to be mildly protective, while it is certainly not protective in the older years when AD is most prevalent.

“Hazard ratios per 5-kg/m2 increase in BMI for dementia were 0.71 (95% confidence interval = 0.66–0.77), 0.94 (0.89–0.99), and 1.16 (1.05–1.27) when BMI was assessed 10 years, 10-20 years, and >20 years before dementia diagnosis.”  [ref]

This, too, is unexpected in light of previous consensus. Alzheimer’s Dementia has been recast as Type 3 Diabetes, because of its strong association with insulin metabolism. Overweight is supposed to be the greatest life-style risk factor for diabetes. When this study [2009] out of U of Washington found that high BMI is protective against dementia, the authors were unwilling to draw the standard causal inference, so they conjectured instead that weight loss is a consequence of AD’s early stage.

There may be a better explanation hidden in their data. AD is the most common cause of dementia, but vascular dementia, a separate etiology, accounts for roughly ⅓ of cases in the Kame data set.

There is a suggestion here that higher BMI protects against vascular dementia, but not against AD.

From you, my readers

Here are some of the suggestions offered in the comment section of last week’s blog:

  • Fat people are happier. I don’t doubt that happiness has a lot to do with longevity, but a lot of overweight is due to compulsive eating by people who are not happy with their lives. Obesity is associated with lower socio-economic status, and lower SES is independently associated with shorter lifespan and lower life satisfaction.
  • Higher BMI can mean more muscle mass, not necessarily more fat mass. Good point. I don’t know how big a factor this is.
  • This study [BMJ 2016] found greatest longevity for BMI in the range 20-22.  I take your point that the larger studies with longer follow-up tend to report lower optimal BMI. The BMJ study is a meta-analysis of a huge database covering 9 million subjects.
  • Dean Pomerleau writes at the CR Society web page about brown fat, cold resistance, and greater longevity.
  • Thin people have greater insulin sensitivity, which can lead to glucose going into cells instead of being stored as fat. This is interesting, and deserves more follow-up. But good insulin sensitivity also means lower blood sugar, so it’s not obvious to me which direction the effect ought to go.
  • I was grateful for a pointer to Valter Longo’s recent work, which finds that time-restricted eating becomes counterproductive beyond about 13 hours a day of fasting. Longer fasts several times a year are still highly recommended.
  • Paul Rivas is my go-to authority on weight, and he recommended this 2015 study, which emphasizes the paradox as I describe it.
  • This study out of Emory U [2019] recommends different diets for different BMI groups for minimizing inflammation.

What story does methylation tell?

Aside from mortality statistics, I regard methylation age as the most reliable leading indicator we have. I’ll end by reviewing data on BMI and methylation age.

The Regicor Study [2017] looked for methylation sites associated with obesity. They reported 97 associated with high BMI and an additional 49 associated with large waistline. I compared their lists with my list of methylation sites that change most consistently with age. There was no overlap. What I learn from this is that there is no association between genetically-determined weight and longevity. If you were born with genes that make you gain weight, there is a social cost to be paid in our culture, but there is no longevity penalty.

Horvath [2014] did not discern a signal for obesity with the original 2013 DNAmAge clock, except in the liver where the signal was weak, amounting to just 3 years for the difference between morbidly obese and normal weight. But a few years later with 3 different test groups [2017], a moderate signal was found, as expected, linking higher BMI to greater DNAmAge acceleration. (Age acceleration is just the difference between biological age as measured by the methylation clock and chronological age by the calendar.) 

This study [2019] from the European Lifespan Consortium found a modest increased mortality from obesity, corresponding to less than a year of lost life by most measures, based on two Horvath clocks and the Hannum clock. This Finnish study [2017] found a small association between higher BMI and faster aging in middle-aged adults, but not in old or young adults.

This study from Linda Partridge’s group [2017] found a strong benefit of caloric restriction on epigenetic aging—in mice, not in humans. 

The bottom line

I’ve had a good time with this project, seeking explanations for the paradox, and I’ve passed along some interesting associations, but in the end, the essential paradox remains. I don’t know why the robust association of caloric restriction with longevity doesn’t lead to a clear longevity advantage in humans for a lower BMI. My strongest insight is that the largest determinants of BMI are genetic, not behavioral, and the genetic contribution to weight has no effect on longevity. But what do I make of the fact that life expectancy in the US has risen by a decade over my lifetime [ref] even as BMI has increased 5 points?

Weight and Aging: A Paradox, Part 1

Caloric restriction is the gold standard life extension strategy, validated over thousands of experiments in many animal species. How can we reconcile this with consistent findings that people who are slightly overweight live longer than normal or underweight folks?


The one fact that everyone in the field of aging agrees on is that animals fed less live longer. This is the result that got me interested in the field 25 years ago, and it is still the most robust finding in the field, verified in dozens of species from yeast cells to Rhesus monkeys.

Are humans different from all other animals?

Last month, a study came out of Ohio State U based on the famous Framingham database, including medical and demographic information on 5,000 people and their offspring, tracked over 74 years. The take-home message was that the people who lived longest were average weight when young and gained weight during their middle years. There were not enough people who had actually lost weight to constitute a subgroup, but the group identified as “low-normal weight” all through their lives showed up with 40% higher all-cause mortality than those that gained weight.

I wrote about this subject in my book, and in one of my first posts on ScienceBlog, back in 2012. The post was titled Ideal Weight may be an Illusion, and I concluded that

“For any given individual, it’s probably true that
the less you eat the longer you live.”

The argument went thus: Weight is mostly fixed by genetics, and the genetic component of weight does not affect longevity. It is relative calorie intake that affects longevity, relative to genetics, body type, and metabolism. For example, a study of genetically obese mice found that they had shortened lifespans if they were fed ad libitum; however, if the obese mice were calorically restricted, they actually lived longer than genetically normal mice, and even longer than CR normal mice, despite the fact that they still appeared plump.

This line of reasoning led me to hypothesize that the reason overweight people tend to live longer is that they are motivated to restrict calories, whereas people (like me) who don’t get fat no matter how much we eat feel no social pressure to restrain our gluttony.

I thought at the time that we ought to see this effect much more in women than in men, because overweight women are ostracized in our culture, whereas men are not. What I found, contrary to my prediction, was that the BMI with lowest mortality (in Japan) is 23-25 for men, compared to 21-23 for women [Matsuo, 2012].

So, is it time to consider the possibility that caloric restriction doesn’t extend human life expectancy?

New Ohio State Study

The new study is based on the 74-year-old Framingham cohort, people whose health and daily habits have been followed over time. Also followed was a Framingham Offspring cohort, the children of the original Framingham cohort. Almost all the original cohort have now died (so we have extensive mortality data), but many of the offspring cohort are still alive. The authors treat the two cohorts separately, and get somewhat different results for the two cohorts. Dr Zheng was kind enough to send me the full preprint with supplemental tables, and since it’s not yet available online, I’ve made it available for you to read here on GDrive.

The study looks not just at BMI but also at the change in BMI over mid- to late-life years. They classify the trajectories in seven groups, and analyze them using a Cox model. They find that the group that has lowest mortality had an average trajectory beginning at BMI=22 at age 30, increasing gradually to BMI=27 at age 80. The group was broadly defined, so that initial BMI could be anywhere from 18.5 at the low end to 25 at the high end.

Cox Proportional Hazard Model
This statistical method is standard for studies like this one, evaluating effects on mortality. It is designed to take into account the steep rise in mortality with age, and to weight different deaths according to when they occur. The standard assumption is that the mortality curve with age is changed by a multiplicative factor associated with each variable. The mortality curve retains the same shape across ages, but it slides up or down (on a log scale) according to which factors apply to a given subgroup. For example, having a graduate degree may multiply your risk of dying by 0.9 across the board, and eating red meat may multiply your risk by 1.2, so the model actually derives these numbers by assuming that meat-eaters with a graduate degree have a relative probability of death 1.08 times the control group (1.08 = 0.9 * 1.2), and that this applies at every age. Is this quantitatively realistic? Everyone knows it is not, but it yields a single number which is a good benchmark for different longevity factors, and it allows different studies to report their results in a common format for comparison.
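
For readers who want to see what fitting such a model looks like in practice, here is a minimal sketch using the Python lifelines library on invented toy data (real studies have thousands of subjects); the exp(coef) column of the printed summary holds the fitted hazard ratios.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Invented follow-up data: years observed, whether death occurred
# (0 = still alive at last contact, i.e. censored), and two covariates.
df = pd.DataFrame({
    "years_observed": [28, 35, 20, 40, 16, 31, 24, 38, 22, 33],
    "died":           [1,  1,  1,  0,  1,  1,  1,  0,  1,  1],
    "grad_degree":    [1,  1,  0,  1,  0,  0,  1,  1,  0,  1],
    "red_meat":       [1,  0,  1,  0,  1,  1,  0,  0,  1,  0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_observed", event_col="died")
cph.print_summary()  # exp(coef) = hazard ratio for each covariate

# Under proportional hazards, factors multiply: hazard ratios of 0.9 and
# 1.2 combine to 0.9 * 1.2 = 1.08, applied uniformly at every age.
```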

Division of subjects into seven groups was somewhat arbitrary, and was done to facilitate statistical analysis. The red railroad track represents the midline of the trajectory associated with “longest lifespan”, defined above as the minimum Cox factor. The lowest weight group was associated with a Cox factor of 1.4, meaning 40% more likely to die (at a given age) than those on the red railroad track trajectory.

On the other side

CR extends lifespan in almost every animal model in which it has been tried. I won’t dwell on this, because it’s so well known, but I’ll note that CR works better in short-lived animals, as a percentage of lifespan, and the enthusiastic projections of Roy Walford now seem overstated. I have said that I think CR in humans is good for 3 to 5 years. Do I still think so? There is good evidence for CR in humans:

  • Food shortages during World War II in some European countries were associated with a sharp decrease in coronary heart disease mortality, which increased again after the war ended. [Fontana, 2007]
  • Fontana performed in-depth metabolic profiles of people identified from the Caloric Restriction Society who were disciplining themselves to eat less. Relative to people at a comparable age, he found “a very low level of inflammation as evidenced by low circulating levels of C-reactive protein and TNFα, serum triiodothyronine levels at the low end of the normal range, and a more elastic ‘younger’ left ventricle, as evaluated by echo-doppler measures of LV stiffness.” [2007]
  • There is at least preliminary evidence that weight loss tends to set back the aging clock, as measured by several methylation algorithms [2020]
  • Higher BMI is associated with older methylation age [2019]
  • C-reactive protein in the blood, the most common measure of inflammation, increases with increasing BMI [2003]
  • Loss of insulin sensitivity is a hallmark of aging, driving many age-related diseases. There is a strong correlation between BMI and diabetes [2007]
  • BMI is linked to most common cancers, the #2 source of mortality. Here’s a good review by Wolin [2010].
  • BMI is also a factor in cardiovascular disease, the #1 killer. This study from Malaysia [2017] found a trend of increasing CVD at every BMI level, but — like other studies — also found that all-cause mortality was lowest for BMI 25-30, which has traditionally been called “overweight”.

So, why doesn’t weight gain show up as a risk factor for faster aging?

I will continue this discussion in Part 2, and try to resolve this paradox in part, but (spoiler alert) I remain puzzled, after a month of reading on the subject.

Source: REB Research https://www.rebresearch.com/blog/fat-people-show-less-dementia/

Universal Clock implies Universal Clockwork

A new methylation clock works in 128 different mammal species, using the same methylation signals. This is the latest evidence that at least some of the mechanisms of aging have been conserved by evolution—strong evidence that aging has a useful function in ecology, so that natural selection actually prefers a finite, defined lifespan.


Einstein taught us that time is relative. Indeed, there are rodents that live less than a year, and Bowhead whales that live more than 200 years. Some of this is just about size and has a basis in physics; but it is well-known that size is only part of the story. Bats and mice are the same size, but bats live ten times longer. Humans are much smaller than horses, but live three times as long.

The first time I met Cynthia Kenyon was circa 1998. She offered me a one-line proof that aging is programmed: the enormous range in lifespans found in nature defies any theory about damage accumulation, because no conceivable process of chemical damage could vary so widely in its fundamental rate. (Think mayflies and sequoia trees.) My own one-line proof is that yeast and mammals share in common some genetic mechanisms that regulate aging, though the last common ancestor of yeast and mammals is more than half a billion years old. These mechanisms include sirtuins and the insulin metabolism.

These intuitions about aging rate and evolutionary conservation have recently come to the world of big data. In this new BioRxiv manuscript, Steve Horvath collaborates with an all-star cast of biologists the world over to compile evidence that there is a universal mechanism underlying development and aging in all mammals, and it is a pan-tissue epigenetic program, not a process of chemical damage.

Brief background on methylation: It is increasingly clear that aging has a basis in gene expression. The whole body has the same DNA, and it doesn’t change over time. However, different genes are turned on and off at different times and places. Turning genes on and off is called “epigenetics”, and evolution has devoted enormous resources to this process. One of many epigenetic mechanisms is the presence or absence of a methyl group on Cytosine, which is one of the 4 building blocks of DNA (A, C, T, G). There are over 20 million regulatory sites in human DNA where methyls can appear or not. Of these, several thousand have been found to consistently correlate with age. The correlation is so strong that the most accurate measures of biological age are now based on methylation. There is (IMO) a developing consensus in the community that methylation changes are an upstream cause of aging, though there remains strong resistance to this idea on theoretical grounds. More background here.

The team assembled tissue samples from 59 organs across 128 species of mammals, and looked for commonalities in the progression of methylation that were independent of species and independent of tissue type. They found thousands of methylation sites that fit the bill, attesting to an evolutionarily-conserved mechanism “connected to” aging. It is a short leap to imagine that “connected to” implies a root cause.

How did the authors map age for a mouse onto age of a whale? Just as I might say, “I’m only 10 years old, in dog years,” a year for a whale might be a hundred “mouse years”. The authors took three different approaches. (1) Just ignore it, mapping chronological time directly. (2) Adjust time for the different species based on the maximum lifetime for that species. (3) Adjust time for the different species based on the time to maturity for that species.

Predictably, (1) produced paradoxes; (2) and (3) were similar, but (3) produced the best results. What they didn’t do — but might in follow-on work — was to optimize the age-scaling factor individually for each species to target the best fit with all the other species. Even better would be to choose two independent scaling factors to optimize the fit of each species. Ever since the original 2013 clock, Horvath has divided the lifespan into two regimes, development and aging: In development, time is logarithmic, moving very fast at the beginning and slowing down at the end of development. In the aging regime, time is linear. So it would be natural (optimum, in my opinion) to choose two separate scaling factors that best map each species’s life history course onto all the others. Mathematically, this is (roughly) as simple as matching the slopes of two lines. Horvath has told me he is interested in pursuing this strategy, but for some species the existing data does not cover the lifespan sufficiently to support it.
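
For concreteness, here is the shape of the two-regime transform; the functional form follows Horvath’s original 2013 clock (logarithmic during development, linear afterward, continuous at the age of maturity), with the numbers chosen only for illustration.

```python
from math import log

def age_transform(age, adult_age):
    """Horvath-style two-regime time: logarithmic during development,
    linear after maturity; continuous and smooth at age == adult_age."""
    x = (age + 1) / (adult_age + 1)
    return log(x) if x <= 1 else x - 1

# Each species is mapped onto a common axis using its own age of maturity:
for species, adult_age, ages in [("mouse", 0.5, [0.1, 0.5, 2.0]),
                                 ("human", 20,  [4, 20, 80])]:
    print(species, [round(age_transform(a, adult_age), 2) for a in ages])
```

Notice that on this common axis an old mouse and an old human still land at different values; the per-species scale factors suggested above would let each species’ trajectory be stretched to line up with the others.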

“Cytosines that become increasingly methylated with age (i.e., positively correlated) were found to be more highly conserved (Fig. 1a)  …Interestingly, although there were 3,617 enrichments of hypermethylated age-related CpGs [i.e., increased methylation with age] across all tissues, only 12 were found for hypomethylated [the opposite] ones.”

Interpretation: with age, we (and other mammals) tend to lose methylation, i.e., to turn on genes that shouldn’t be turned on. There are more sites that demethylate with age than that methylate with age. But the sites that gain methylation tend to be more highly conserved between species. I presume a lot of demethylation is stochastic. It’s easy for a methyl group to “fall off”, but attaching one in the right place requires a specialized enzyme (methyl transferase). What we are seeing here is stronger genetic determinism for the process that requires active intervention.

Question: Would it be useful to develop a methylation clock based solely on sites that gain methylation? What we would thereby avoid is the situation where the age algorithm combines a great many large positive numbers with a great many large negative numbers to make a small difference. This characteristic makes the algorithm overly sensitive to bad data from one or a few particular sites. We can see from the figure above that (red) sites from the top half of the plot have stronger evidence behind them than the (blue) sites from the bottom. What we would lose would be diversity in the basis of the measurement. If retaining that diversity is desirable, it would be possible to design a clock algorithm with both red and blue sites in such a way that all coefficients are relatively small, and no one site contributes inordinately to the age calculation, even if data for that site is completely missing.

Speculation for statistics geeks: I think the methodology that has become standard for developing methylation clocks is not optimal. The standard method is to identify N sites (typically a few hundred) where methylation is well-correlated with age, then derive N coefficients such that you can multiply each coefficient by the corresponding methylation, add up the products, and you get an age estimate*. The way I would do it is with a more complicated calculation, from a methodology called “maximum likelihood”. The idea is to choose the age that minimizes the difference between the expected methylation and the measured methylation over the collection of the N sites. To be more specific: minimize the sum of the squares of the z scores for each site, where z is the number of standard deviations by which the measured methylation differs from the expected methylation. It may sound like a complicated calculation to find the age at which this number is a minimum, but it is not. Yes, it’s a guessing game; but the algorithm called “Newton’s method” allows you to make smart guesses, so you home in on the best (minimum Σz²) age within four or five guesses. The calculation is more complicated to program, but it would still execute in a tiny fraction of a second. My proposed method requires maybe 10 or 20 times as many fixed parameters within the algorithm; but the data submitted from each sample is the same.
Caveat – This is all theoretical on my part. I don’t know how much performance would be improved in practice.
————————
*Two footnotes: (1) A constant is also added. (2) In case the subject is young, below the age of sexual maturity, what you get is a logarithm of age, not age itself.
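
Here is a minimal sketch of that estimator in Python. The per-site calibration (slopes, intercepts, standard deviations) would come from training data; below it is simulated. Because expected methylation is modeled as linear in age, Σz² is quadratic in age, so Newton’s method lands on the minimum in a single step; the loop is kept for generality.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 300                                   # number of clock sites

# Simulated calibration: expected methylation at each site = intercept + slope*age.
slopes     = rng.normal(0.0, 0.003, N)
intercepts = rng.uniform(0.2, 0.8, N)
sigmas     = rng.uniform(0.01, 0.05, N)   # per-site standard deviation

def estimate_age(measured, guess=50.0, iters=5):
    """Choose the age minimizing the sum of squared z-scores, via Newton."""
    age = guess
    for _ in range(iters):
        resid = measured - (intercepts + slopes * age)
        grad  = -2.0 * np.sum(resid * slopes / sigmas**2)  # d/d(age) of sum(z^2)
        hess  =  2.0 * np.sum(slopes**2 / sigmas**2)       # second derivative
        age  -= grad / hess
    return age

# Fake sample from a 63-year-old, with per-site measurement noise:
sample = intercepts + slopes * 63 + rng.normal(0, sigmas)
print(estimate_age(sample))   # -> close to 63
```

One design bonus of this formulation: a site with missing or unreliable data can simply be dropped from the sum (or given a large sigma) without retraining the whole clock.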

“Importantly, age-related methylation changes in young animals concur strongly with those observed in middle-aged or old animals, excluding the likelihood that the changes are those involved purely in the process of organismal development.”

These plots are adduced as evidence that aging and development are one continuous process under epigenetic control. They come from EWAS = epigenome-wide association studies. Start by asking which sites on the methylome are most closely correlated with age, across many different animals and different tissues in those animals. Start with just the young animals (different ages, but all before or close to sexual maturity). Arrange all the different sites according to how they change methylation with age (increasing or decreasing), just in this age range. Then repeat the process, re-ordering the sites according to how they change with age during middle age.

The left plot above includes a dot for each methylation site, ordered along the X axis according to how they change during youth, and along the Y axis according to how they change during middle age. The point of the exercise is that it is largely the same sites that increase (or decrease) methylation in youth and in middle age.

The middle plot shows the corresponding correlation between middle age (X axis) and old age (Y axis). The right-hand plot shows the correlation between young (X axis) and old age (Y axis). (I believe the labeling of the figure on the right is a misprint.)
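
A sketch of how such a comparison is assembled: estimate each site’s methylation-vs-age slope separately within two age windows, then correlate the two sets of slopes site by site. All data below are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites = 1000

# Simulate one underlying per-site aging slope operating in both windows.
true_slope = rng.normal(0, 1, n_sites)

def fitted_slopes(ages, noise_sd=5.0):
    """Per-site regression slope of methylation against age."""
    meth = (true_slope[:, None] * ages[None, :]
            + rng.normal(0, noise_sd, (n_sites, len(ages))))
    ages_c = ages - ages.mean()
    return (meth * ages_c).sum(axis=1) / (ages_c ** 2).sum()

young  = fitted_slopes(rng.uniform(0, 5, 40))    # before/near maturity
middle = fitted_slopes(rng.uniform(30, 60, 40))  # middle age

# The EWAS-style result: the same sites gain (or lose) methylation in
# youth and in middle age, so the slopes correlate strongly.
print(np.corrcoef(young, middle)[0, 1])   # -> strongly positive, ~0.85 here
```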

This evidence points to a conceptual framework that views development and aging as one continuous process. Development is a lot more complicated than aging. Consequently, most of the sites in the clock are developmental.  Maybe a clock could be optimized for aging only, and it would be more useful for those of us who are using the clocks to assess anti-aging interventions.

“The cytosines that were negatively associated with age in brain and cortex, but not skin, blood, and liver, are enriched in the circadian rhythm pathway”

Here we see again the intriguing connection between the brain’s daily timekeeping apparatus and the epigenetic changes that drive development and aging.

“The implication of multiple genes related to mitochondrial function supports the long-argued importance of this organelle in the aging process. It is also important to note that many of the identified genes are implicated in a host of age-related pathologies and conditions, bolstering the likelihood of their active participation in, as opposed to passive association with, the aging process.”

Another theme in the set of age-correlated genes that the team discovered is mitochondrial function. Mitochondria have an ancient association with cell death, and a long, conserved history with respect to aging. The simple damage themes associated with the free radical theory have yielded to a more complex picture, in which free radicals can be signals for apoptosis or inflammation or enhanced protective adaptations.

The big picture

“Therefore, methylation regulation of the genes involved in development (during and after the developmental period) may constitute a key mechanism linking growth and aging. The universal epigenetic clocks demonstrate that aging and development are coupled and share important mechanistic processes that operate over the entire lifespan of an organism.”

This is cautiously worded, presumably to represent a consensus among several dozen authors, or perhaps to appease the evolutionary biologists looking over our shoulders. The statement is akin to what Blagosklonny has for years called “quasi-programmed aging”, to wit, there are processes that are essential to development that fail to turn off on time, and cause damage as the organism gets older. In the version put forward in the present manuscript, it is not the gene expression itself but the direction of change of gene expression that carries momentum and cannot be turned off.

Evolutionary theory

Modern evolutionary theory began with Peter Medawar, a Nobel laureate and giant of mid-century biological understanding. (He was 6 foot 5.) Medawar’s 1952 monograph contains the insight that launched all modern theories for evolution of aging. His fundamental idea was that it’s a dog-eat-dog world in which very few animals live long enough for aging to be a factor in their death. The three main branches of evolutionary theory in response to Medawar are called Mutation Accumulation, Disposable Soma, and Antagonistic Pleiotropy. According to Medawar’s thought (and all three theories that followed) old age exists in a “selection shadow” so random processes are at work in old age. It follows that we would expect the aging of a bat and a bowhead whale to be subject to very different random processes. If it is a burden of recently acquired mutations that natural selection has not yet had time to weed out, these should be different for different species. Or if it is about tradeoffs (pleiotropy) between needs of the young animal and the old animal, we would not expect the bat and the whale to be subject to the same tradeoffs.

The Medawar paradigm and its three popular sub-theories all predict that there should be little overlap between the genetic factors involved in aging of species that are adapted so differently. Therefore, the present work documenting a common epigenetic basis of aging is a challenge to the established evolutionary theories of aging.

As I see it, the expression of genes is exquisitely timed for many purposes, so we must view gene expression as subject to tight bodily control. “Accidents” or “mistakes” or “evolutionary neglect” are implausible. For some genes, methylation changes from minute to minute in a way that is adaptive and responsive. Blagosklonny’s idea that there are genes turned on for development and then the body forgets to turn them off doesn’t feel right. The idea that certain genes are turned on (or off) progressively through development and then, after development has ended, the process has a momentum of its own that the body cannot stop is equally implausible. I assume the body is adapted to do exactly what it wants with gene expression, and if the body expresses a combination of genes that causes aging, it’s because that’s what natural selection has designed the body to do. Of course, this looks to be a paradox, as aging is completely maladaptive according to the notion of Darwinian fitness that became accepted in the first half of the 20th century; but evolutionary biologists have broadened the notion of fitness since then, and I’ve written volumes concerning this paradox.

The bottom line

For personal application to individuals who want to know how well they are doing and their future life expectancy, I recommend Horvath’s Grim Age clock as the best available. (Elysium has done a lot of work on their Index product, and it may be as good or better, but it’s impossible to evaluate unless they release their proprietary methodology.) For application to studies of anti-aging interventions (including my own project, DataBETA), the choice of clocks is not clear, because it depends not just on statistics but on theory. We want a clock that is not only accurate, but that is based on epigenetic causes of aging, not epigenetic responses to aging. The multi-species clock is a welcome contribution, precisely because epigenetic processes that are conserved across species are more likely to be linked to the root cause of aging. For the future, I’ve made suggestions above for ways the multi-species clock might be made even better.

A Science of Wholeness Awaits Us

“Just as the melody is not made up of notes nor the verse of words nor the statue of lines, but they must be tugged and dragged till their unity has been scattered into these many pieces, so with the World to whom I say Thou.” – Martin Buber

We creatures of the 21st Century, grandchildren of the Enlightenment, like to think that our particular brand of rationality has finally established a basis for understanding the world in which we live. Of course, we don’t have all the details worked out, but the foundation is solid. 

We might be chastened by the precedent of Lao Tzu and Socrates and Hypatia of Alexandria and Thomas Aquinas and Lord Kelvin, who thought the same thing. I wonder if the foundation of our world-view is really made of more durable stuff than theirs. In fact, founding our paradigm in the scientific method offers us something that earlier sages did not have: we can actually compare in detail the world we observe and the consequences of our physicalist postulates. The results are not reassuring. In recent decades, the science establishment has willfully ignored observations of phenomena that call into question our foundational knowledge.

Reductionism is the process of understanding the whole as emergent from the parts. The opposite of reductionism is holism: understanding the parts in terms of their contribution to a given whole. It’s fair to say that all of science in the last 200 years has been reductionist. Physical law is the only fundamental description of nature. Chemistry could, in principle, be derived from physics (if only we could solve the Schrödinger equation for hundreds of electrons); living physiology could be understood in terms of chemistry; and ecology could be modeled in terms of individual behaviors. 

Curiously, there are holistic formulations of physics that are mathematically equivalent to the reductionist equations, but in practice, physicists use the differential equations, which are the reductionist version. 

Biological functions are explained by the process of evolution through natural selection that made them what they are. Holism in evolution is called “teleology”, and is disparaged as unscientific. But when features of physics appear purposeful, there is no agreement among scientists on how to explain them. Most physicists would avoid invoking a creator or embedded intelligence, even at the cost of telling stories about vast numbers of unobservable universes outside our own. This is the most common explanation for the fact that the rules of physics and the very constants of nature—things like the charge on an electron and the strength of the gravitational force—seem eerily to have been fine-tuned to offer us an interesting universe; most other choices for the basic rules of physics might have produced dull uniformity, without stars or galaxies, without chemistry, without life.

But I am racing ahead of the story. The question I want to ask is whether we are missing something in reasoning exclusively from the bottom up, explaining all large-scale patterns as emergent results of small-scale laws. I want to suggest that this deeply-ingrained pattern of thought may be holding science back. Are there large-scale patterns waiting to be discovered? Are there destined outcomes that help us understand the events leading to a predetermined denouement? Even formulating such questions is controversial; and yet, we see hints pointing in just this direction, both from micro-science of quantum mechanics and from studies of the Universe on its largest scale.


Science is all about observing nature and noticing patterns which might be articulated as theories or laws. When these patterns connect nearby events that can be observed at one time by one person, they are easy to spot. When the patterns involve distant events and stretch over time and space, they may go undetected for a long while. This can lead to an obvious bias. Scientists are more inclined to formulate laws of nature that connect contiguous events than laws that connect events that are separated spatially and temporally, just because these global patterns are harder to see.

The physical laws that were formulated and tested in the 19th and 20th century were all mediated by local action. The idea that all physical action is local was formalized by Einstein, and has been baked into our theories ever since. But there is a loophole, defined by quantum randomness. Roughly speaking, Heisenberg’s Uncertainty Principle says that we can only ever know half the information we need to predict the future from the past at the microscopic level. Is the other half replaced by pure randomness, devoid of any patterns that science might discern? Or might it only appear random, because the patterns are spread over time and space, and difficult to correlate? In fact, the existence of such patterns is an implication of standard quantum theory. (This is one formulation of the theorem about quantum entanglement, proved by J.S. Bell in 1964.) Speculative scientists and philosophers relate this phenomenon to telepathic communication, to the “hard problem” of consciousness, and to the quantum basis of life.

I hope to explore this topic in a new ScienceBlog forum beginning in 2021. Here are four examples of the kinds of phenomena pointing to a new holistic science.

1. Michael Levin and the electric blueprint for your body

We think of the body as a biochemical machine, proteins and hormones turned on in the right places at the right times to give the body its shape. Levin is clear and articulate in making the case that the body develops and takes shape under a global plan, a blueprint, and not just a set of instructions. This is true for humans and other mammals, but it is easier to prove it for animals that regenerate. Humans can grow back part of a liver. An octopus can grow a new leg; a salamander can grow a new leg or tail; a zebrafish can grow back a seriously damaged heart; starfish and flatworms can grow back a whole body from a small piece.

Consider the difference between a blueprint and an instruction set. An instruction set says

1. Screw the left side of widget A onto the right side of gadget B.
2. Take the assembly of widget+gadget and mount it in front of doodad C, making sure the three tabs of C fit into the corresponding holes in B

A blueprint is a picture of the fully assembled object, showing the relationship of the parts.

Ikea always gives you both. With the instructions only, it is possible to complete the assembly, but only if you don’t make any mistakes. And if the finished object breaks, the instruction set will not be sufficient to repair it. The fact that living things can heal is a strong indication that they (we) contain blueprints as well as instruction sets. The instruction set is in the genome, together with the epigenetic information that turns genes on and off as appropriate; but where is the blueprint?

Prof Michael Levin of Tufts University has been working on this problem for almost 30 years. The answer he finds is in electrical patterns that span across bodies. One of the tools he pioneered is voltage reporter dyes that glow in different colors depending on the electric potential. Here is a map of the voltage in a frog embryo, together with a photomicrograph.

from Levin’s 2012 paper

Levin’s lab has been able to demonstrate that the voltage map determines the shape that the tadpole grows into as it develops. Working with planaria flatworms, rather than frogs, their tour de force was to modify these voltage patterns “by hand”, creating morphologies that are not found in nature, such as a worm with two heads and no tail.

This is stunning work, documenting a language in biology that is every bit as important as the genetic code. Of course, I am not the first to discover Dr Levin’s work; but it is underappreciated because the vast majority of smart biologists are focusing on biochemistry and it is a stretch for them to step out of the reductionist paradigm.

(I wrote more about Levin’s work two years ago. Here is a video which presents a summary in his own words.)

2. Cold Fusion

Two atomic nuclei of heavy hydrogen can merge to create a single nucleus of helium, and tremendous energy is released. This process is not part of our everyday experience because the hydrogen nuclei are both positively charged, and the energy required to push them close enough together to fuse is enormous. So fusion can happen in the middle of the sun, where temperatures are in the millions of degrees, and fusion can happen inside a thermonuclear bomb. But it’s hard as hell to get hydrogen to fuse into helium, and, in fact, physicists have been working on this problem for more than 60 years without a viable solution.

Except that in 1989, the world’s most eminent electrochemist, Martin Fleischmann (not exactly a household name), announced that he had made fusion happen on his laboratory bench, using the metal palladium in an apparatus about as complicated as a car battery.

Six months later, at an MIT press conference, scientists from prestigious labs around the world lined up to announce that they had tried, without success, to duplicate what Fleischmann had reported. The results were irreproducible. Cold Fusion was dead, and the very word was to become a joke about junk science. Along with the vast majority of scientists, I gave up on Cold Fusion and moved on. 22 years passed. Imagine my surprise when I read in 2011 that an Italian entrepreneur had demonstrated a Cold Fusion boiler, and was taking orders!

The politics of Cold Fusion is a story of its own. I wrote about it in 2012 (not for ScienceBlog). The Italian turned out to be a huckster, but the physics is real.

I began reading, and I became hooked when I watched this video. I visited Cold Fusion labs at MIT, Stanford Research Institute, Portland State University, University of Missouri, and a private company in Berkeley, CA. I went to two Cold Fusion conferences. I concluded that some of the claims were dubious, but others were solid. There is no doubt in my mind that Cold Fusion is real.

Physicists were right to be skeptical. The energy for activation is plentiful enough, even at room temperature, but the problem is to concentrate it all in one pair of atoms. Left to its own devices, energy will spontaneously spread itself out — that’s what the science of thermodynamics is all about. To concentrate an eye-blink’s worth of energy in just two atoms is unexpected and unusual. But things like this have been known to happen, and a few times before they’ve taken physicists by surprise. Quantum mechanics plays tricks on our expectations. A laser can concentrate energy, as billions of light particles all march together in lock step. Superconductivity is another example of what’s called a “bulk quantum effect”. Under extraordinary circumstances, quantum mechanics can leap from the tiny world of the atom and hit us in the face with deeply unexpected, human-scale effects that we can see and touch.

There are now many dozens of labs around the world that have replicated Cold Fusion, but there is still no theory that physicists can agree on. What we do agree on is that it is a bulk quantum effect, like superconductivity and lasers. When the entire crystal (palladium deuteride) acts as one quantum entity, strange and unexpected things are possible.

For me, the larger lesson is about the way the science of quantum mechanics developed in the 20th Century. The equations and formalisms of QM are screaming of connectedness. Nothing can be analyzed on its own. Everything is entangled. The quantum formalism defies the reductionist paradigm on which 300 years of previous science had been built.

And yet, physicists were not prepared to think holistically. We literally don’t know how. If you write down the quantum mechanical equations for more than two particles, they are absurdly complex, and we throw up our hands, with no way to solve the equations or even to reason about the properties of the solutions. The many-body quantum problem is intractable, except that progress has been made in some highly symmetrical situations. A laser consists of a huge number of photons, but they all have a single wave function, which is as simple as a wave function can be. Many-electron atoms are conventionally studied as if the electrons were independent (but constrained by the Pauli Exclusion Principle). Solid state physics is built on bulk quantum mechanics of a great number of electrons, and ingenious approximations are used in combination with detailed measurements to reason about how the electrons coordinate their wave state.

Cold Fusion presents a huge but accessible challenge to quantum physicists. Beyond Cold Fusion lie a hierarchy of problems of greater and greater complexity involving quantum effects in macroscopic objects.

In the 21st Century, there is a nascent science of quantum biology. It is my belief that life is a quantum state.

3. Life coordinates on a grand scale

There are many examples of coordinated behaviors that are unexplained or partially explained. This touches my own specialty, evolution of aging. The thesis of my book is that aging is part of an evolved adaptation for ecosystem homeostasis, integrating the life history patterns of many, many species in an expanded version of co-evolution. My thesis is less audacious than the Gaia hypothesis.

  • Monarch butterflies hibernate on trees in California or Mexico for the winter. In the spring, they migrate and mate and reproduce, migrate and mate and reproduce, 6 or 7 times, dispersing thousands of miles to the north and east. Then, in the fall, the great-great-grand-offspring of the spring Monarchs undertake the entire migration in reverse, and manage to find the same tree where their ancestor of 6 generations spent the previous winter. [Forest service article]
  • Zombie crabs have been observed in vast swarms, migrating hundreds of miles across the ocean floor. Red crabs of Christmas Island pursue a similar overland migration.

  • Sea turtles from all over the world arrange for a common rendezvous once a year, congregating on beaches in the Caribbean and elsewhere. Their navigation involves geomagnetism, but a larger mystery is how they coordinate their movements.
  • Murmuration behavior in starlings has been modeled with local rules, where each bird knows only about the birds in its immediate vicinity; but I find the simulations unconvincing, and believe our intuition on witnessing this phenomenon: that large-scale communication is necessary to explain what we see.
  • Monica Gagliano has written about plants’ ability to sense their biological environment and coordinate behaviors on a large scale. This is her more popular book.

4. The Anthropic Coincidences, or the Improbability of Received Physical Laws

For me, this is the mother of all scientific doors, leading to a radically different perspective from the reductionist world-view of post-enlightenment science. Most physicists believe that the laws of physics were imprinted on the universe at the Big Bang, and life took advantage of whatever they happened to be. But since 1973, there has been an awareness, now universally accepted, that the laws of nature are very special, in that they lead to a complex and interesting universe, capable of supporting life. The vast majority of imaginable physical laws give rise to universes that are terminally boring; they quickly go to thermodynamic equilibrium. Without quantum mechanics, of course, there could be no stable atoms, and everything would collapse into black holes in short order. Without a very delicate balance between the strength of electric repulsion and the strong nuclear force, there would be no diversity of elements. If the gravitational force were just a little weaker, there would be no galaxies or stars, nothing in the universe but spread-out gas and dust. If our world had four (or more) dimensions instead of three, there would be no stable orbits and no solar systems, because planets would quickly fly off into space or fall into the star; but a two-dimensional world would not be able to support life because (among other reasons) interconnected networks on a 2D grid are very limited in complexity.

  • Stanford Philosophy article
  • 1995 book by Frank Tipler and John Barrow
  • Just Six Numbers by Martin Rees

Most scientists don’t take account of this extraordinary fact; they go on as if life were an inevitability, an accident waiting to happen. But those who have thought about the Anthropic Principle fall in two camps:

The majority opinion:  There are millions and trillions and gazillions of alternative universes. They all exist. They are all equally “real”. But, of course, there’s no one looking at most of them.  It’s no coincidence that our universe is one of the tiny proportion that can support life; the very fact that we are who we are, that we are able to ask this question, implies that we are in one of the extremely lucky universes.

The minority opinion:  Life is fundamental, more fundamental than matter.  Consciousness is perhaps a physical entity, as Schrödinger thought; or perhaps it exists in a world apart from space-time, as Descartes implied 300 years before Schrödinger; or perhaps there is a Platonic world of “forms” or “ideals” [various translations of Plato’s είδος] that is primary, and that our physical world is a shadow or a concretization of that world.  One way or another, it is consciousness that has given rise to physics, and not the other way around.

If you like the multi-universe idea, you will want to listen to the recent Nobel Lecture of Roger Penrose. He races through a summary of his life’s work on General Relativity, ending the lecture with evidence, in maps of the Cosmic Microwave Background, of fossils of black holes from a previous universe, before our own beloved Big Bang.

I prefer the minority view, not just because it provides greater scope for the imagination [Anne of Green Gables]; there are scientific reasons that go beyond hubristic disregard of Occam’s razor in postulating all these unobservable universes.

  • Quantum mechanics requires an observer.  Nothing is reified until it is observed, and the observer’s probes help determine what it is that is reified.  Physicists debate what the “observer” means, but if we assume that it is a physical entity, paradoxes arise regarding the observer’s quantum state; hence the “observer” must be something outside the laws that determine the evolution of quantum probability waves.  Cartesian dualism provides a natural home for the “observer”.
  • Parapsychology experiments provide a great many indications that awareness (and memory) have an existence apart from the physical brain.  These include near-death experiences, telepathy, precognition, and clairvoyance.
  • Moreover, mental intentions have been observed to affect reality.  This is psychokinesis, from spoon-bending to shading the probabilities dictated by quantum mechanics.

Finally, the idea that consciousness is primary connects to mystical texts that go back thousands of years. 

Dao existed before heaven and earth, before the ten thousand things.  It is the unbounded mother of all living things.

                     — from the Dao De Jing of Lao Tzu


Please look for my new page at ExperimentalFrontiers.ScienceBlog.com, coming soon.

What to Look For in a Biological Clock

In this article, I’m reporting on

  • a new proteomic clock from Adiv Johnson and the Stanford lab of Benoit Lehallier
  • a new methylation clock developed with “deep learning” algorithms by an international group from Hong Kong
  • the advanced methylation clock developed by Morgan Levine, Len Guarente, and Elysium Health

Prelude

Aging clocks are algorithms that compute biological age from a set of measurable markers. Why are they interesting to us? And what makes one better than another?

The human lifespan is too long for us to do experiments with anti-aging interventions and then evaluate the results based on whether our subjects live longer. The usefulness of an aging clock is that it allows us to quickly evaluate the effects on aging of an intervention, so we can learn from the experiment and move on to try a variant, or something different.

Many researchers are skeptical about using clock algorithms to evaluate anti-aging interventions. I think they are right to be asking deep questions; I also think that in the end the epigenetic clocks in particular will be vindicated for this application.

It may seem obvious that we want the clock to tell us something about biological aging at the root level. We are entranced by the sophisticated statistical techniques that bioinformaticists use to derive a clock based on hundreds of different omic factors. But all that has to start with a judgment about what’s worth looking at.

Ponder this: The biostatisticians who create these clocks are optimizing them to predict chronological age with higher and higher correlation coefficient r. But if they achieve a perfect score of r=1.00, the clock becomes useless. It cannot be used to tell a 60-year-old with the metabolism of a 70-year-old from another 60-year-old with the metabolism of a 50-year-old, because both will register 60 years on this “perfect” clock.
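Here is a toy illustration of the point, in Python (my own sketch; the subjects and numbers are invented). The biologically useful quantity is the residual, not the prediction:

```python
# Two hypothetical 60-year-olds with very different underlying biology.
calendar_age = 60
biological_age_A = 70   # a fast ager
biological_age_B = 50   # a slow ager

def perfect_clock(age):
    """A clock optimized to r = 1.00 against calendar age
    can only ever return calendar age."""
    return age

# Both subjects get the identical reading; the clock is blind to biology.
print(perfect_clock(calendar_age), perfect_clock(calendar_age))  # 60 60

# A useful clock leaves room for "age acceleration":
#     acceleration = predicted_age - calendar_age
# For the "perfect" clock this is identically zero for everyone.
```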

It’s time to back up and ask what we think aging is and where it comes from, then optimize a clock based on the answer. As different people have different answers, we will have different clocks. And we can’t objectively distinguish which is better. It depends on whose theory we believe.

Straw man: AI trained to impute age from facial photos now has an accuracy of about 3½ years, in the same ballpark as methylation clocks. If we used these algorithms to evaluate anti-aging interventions, we would conclude that the best treatments we have are facelifts and hair dye.

Brass tacks: People with different positions about the root cause of aging all agree that (a) aging manifests as damage, and (b) methylation and demethylation of DNA take place under the body’s tight and explicit site-by-site regulation.

But what is the relationship between the methylation and the damage? There are three possible answers.

  1. (from the “programmed” school) Aging is programmed via epigenetics. The body downregulates repair mechanisms as we get older, while upregulating apoptosis and inflammation to such an extent that they are causes of significant damage.
  2. (from the “damage” school) The body accumulates damage as we get older. The body tries to rescue itself from the damage by upregulating repair and renewal pathways in response to the damage.
  3. (also from the “damage” school) Part of the damage the body suffers is dysregulation of methylation. Methylation changes with age are stochastic. Methylation becomes more random with age.

My belief is that (1), (2), and (3) are all occurring, but that (1) predominates over (2). The “damage” school of aging would contend that (1) is excluded, and there are only (2) and (3).

How can these three types of changes contribute to a clock? 

(3) makes a crummy clock, because, by definition, it’s full of noise and varies widely from person to person and from cell to cell. There is no dispute that a substantial portion (~50%) of age-related changes in DNA methylation are stochastic. But these changes are not useful and, in fact, most of the algorithms used to construct methylation clocks tend to exclude type (3) changes. I won’t say anything more about stochastic changes in methylation, but I’ll acknowledge that there is more to be said and refer you to this article if you’re interested in methylation entropy.

If you are from the “damage” school, you don’t believe in (1), so this leaves only type (2). If changes in methylation are the body trying to rescue itself, then any intervention that makes the body’s methylation “younger” is actually dialing down protection and repair. You expect that reducing methylation age will actually hasten aging and shorten life expectancy. You have every reason to distrust a clinical trial or lab experiment that uses methylation age as a criterion for success.

White cell count is a reliable indicator of some cancers (leukemias in particular). As the cancer progresses, white cell count increases. The higher a person’s white cell count, the closer he is to death. So let’s build a “cancer clock” based on white blood count, and let’s use it to evaluate anti-cancer interventions. The best intervention is a chemical agent that kills the most white blood cells. It reliably sets back the “cancer clock” to zero and beyond. But we’re puzzled when we find that people who get this intervention die rapidly, even though the cancer clock predicted that they were completely cured. The problem is that white blood cells are a response to the cancer, not its cause.

If you are from the “programmed” school, you think that (1) predominates, and that a clock can be designed to prefer type (1) changes to (2) and (3). Then methylation clocks measure something akin to the source of aging, and we can expect that if an intervention reduces methylation age, it is increasing life expectancy.

The fact that methylation clocks trained on chronological age alone (with no input concerning mortality or disease state) turn out to be better predictors of life expectancy than age alone is a powerful validation of methylation technology. But only if you believe (for other reasons) that methylation is an upstream cause of aging. You could expect this from either type (1) or type (2) methylation changes.

I believe that aging is an epigenetic life program, and that methylation is one of several epigenetic mechanisms by which it is implemented. That’s why I have faith in methylation clock technology.

Conversely, people who believe that the root cause of aging is accumulated damage are right to discount evidence from epigenetic clocks as it pertains to the efficacy of particular treatments. As in the cancer example above, treatments that create a younger methylation age can actually be damaging.

The basis for my belief that aging is an epigenetic program is the subject of my two books, and was summarized several years ago in this blog. I first wrote about methylation as a cause of aging in this space in 2013. For here and now, I’ll just add that we have direct evidence for changes of type (1). Inflammatory cytokines are up-regulated with age. Apoptosis is upregulated with age. Antioxidants are downregulated with age. DNA repair enzymes and autophagy enzymes and protein-folding chaperones are all down-regulated with age. All these are changes in gene expression, presumably under epigenetic control.

Which is more basic, the proteome or the methylome?

For reasons I have elaborated often in the past, I adopt a perspective on aging as an epigenetic program. I think of methylation clocks as close to the source, because methylation is a dispersed epigenetic signal. But the proteome is, by definition, the collection of all signals transmitted in blood plasma, including all age signals and transcription factors that help to program epigenetics cell-by-cell. The proteome is generated by transcription of the DNA body-wide, which transcription is controlled by methylation among other epigenetic mechanisms. So one might argue from this that the methylome is further upstream than the proteome. On the other hand, methylation is just one among many epigenetic mechanisms, and the proteome is the net result of all of them. On this basis, I would lean toward a proteomic clock as being a more reliable surrogate for age in clinical experiments, even better than methylation clocks. It is a historic fact, however, that methylation clocks have a 6-year head start. Methylation testing is entering the mainstream, with a dozen labs offering individual readings of methylation age, priced to attract end-users.

Let’s see if proteomic clocks can catch up. The new technology is based on SOMAscan assays, and so far is marketed to research labs, not individuals or doctors, and it is priced accordingly. The only company providing lab services is SOMAlogic.com of Boulder, CO. “SOMAscan is an aptamer-based proteomics assay capable of measuring 1,305 human protein analytes in serum, plasma, and other biological matrices with high sensitivity and specificity.” [ref] As I understand it, they have a microscope slide with 1305 tiny dots, each containing a different aptamer attached to a fluorescent dye. An aptamer is like an engineered antibody, optimized by humans to bind to a particular protein. Thus 1305 different proteins can be measured by applying a sample (in our case, blood plasma) to the slide, chemically processing the slide to remove aptamers that have not found their targets, then photographing the slide and analyzing the readout from the fluorescent dye.
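In code, the readout step might look something like this (a minimal sketch of my understanding; the variable names and the normalization are mine, not SOMAlogic’s actual pipeline):

```python
import numpy as np

N_APTAMERS = 1305  # one spot per protein target on the slide

# Hypothetical fluorescence intensities photographed from one slide
rng = np.random.default_rng(0)
raw_intensity = rng.lognormal(mean=5.0, sigma=1.0, size=N_APTAMERS)

# A simple median normalization, so that readings from different
# slides (or different days) are comparable
normalized = raw_intensity / np.median(raw_intensity)

# Map each spot back to its protein target
protein_levels = {f"protein_{i}": level
                  for i, level in enumerate(normalized)}
```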

Aptamers are synthetic molecules that can be raised against any kind of target, including toxic or non immunogenic ones. They bind their target with affinity similar or higher than antibodies. They are 10 fold smaller than antibodies and can be chemically-modified at will in a defined and precise way. [NOVAPTech company website]

Curiously, aptamers are not usually proteins but oligonucleotides, cousins of RNA, simply because the chemical engineers who design and optimize these structures have had good success with the RNA backbone. The SOMA in SOMAlogic stands for “Slow Off-rate Modified Aptamers”, meaning that the aptamers have been modified to make them stick tight to their target and resist dissociating.

An internal proteome-methylome clock?

It’s possible that there is a central clock that tells the body “act your age”. I have cited evidence that there is such a clock in the hypothalamus, and that it signals the whole body via secretions [2015, 2017].

Another possibility is a dispersed clock. The body’s cells manufacture proteins based on their epigenetic state, the proteins are dispersed in the blood, some of these are received by other cells and affect the epigenetic state of those cells. This is a feedback loop with a whole-body reach, and it is a good candidate for a clock mechanism in its own right.

I’m interested in the logic and the mathematics of such a clock in the abstract. Any feedback loop can be a time-keeping mechanism. Such a mechanism is
    Epigenetics ⇒ Protein secretion ⇒ Transcription factors ⇒ Epigenetics
This is difficult to document experimentally, but it is an attractive hypothesis because it would explain how the body’s age can be coordinated system-wide without a single central authority, which would be subject to evolutionary hijacking, and might be too easily affected by individual metabolism, environment, etc. But the body’s aging clock must be both robust and homeostatic. If it is thrown off by small events, it must return to the appropriate age.  So my question—maybe there are readers who would like to explore this with me—is whether it is logically possible to have a timekeeping mechanism that is both homeostatic and progressive, without an external reference by which it can be reset.
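As a first stab at the question, here is a toy model (entirely my own construction, in Python) of a dispersed clock: each cell relaxes toward the body-wide consensus it reads from the blood, while all cells drift forward together.

```python
import numpy as np

n_cells, k, drift, dt = 1000, 0.1, 0.01, 1.0
e = np.zeros(n_cells)          # "epigenetic age" of each cell

for t in range(500):
    s = e.mean()               # dispersed signal: secreted proteins in blood
    e += dt * (k * (s - e) + drift)   # feedback toward consensus + slow drift
    if t == 250:
        e[:100] -= 5.0         # perturb 10% of cells ("local injury")

# Perturbed cells are pulled back toward the consensus (homeostasis),
# and the consensus keeps advancing (progression). But note: the consensus
# itself absorbs part of the perturbation and never recovers it -- the
# resetting problem, since nothing external anchors the clock.
print(e.mean(), e.std())
```

In this toy version, the answer to my question is “no”: the clock is homeostatic cell-to-cell, but the body-wide age can be shifted by a large enough insult, because there is no external reference to reset against.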

Last year, Lehallier and a Stanford-based research group jumpstarted the push toward a proteomic aging clock with this publication [my write-up here]. The same group has a follow-up, published a few weeks ago. The new work steps beyond biologically agnostic statistics to incorporate information about known functions of the proteins that they identified last year. The importance of this is twofold: It suggests targets for anti-aging interventions. And it supports the creation of a clock composed of upstream signals that have been verified to have an effect on aging. I argued in the long Prelude above that this is exactly what we want to know in order to have confidence in an algorithmic clock as a surrogate to evaluate anti-aging interventions.

They work with a database I had not known about before: the Human Ageing Genomic Resources Database.  HAGR indexes genes related to aging and summarizes studies that document their functions. Some highlights of the proteins they identified:

  • Inflammatory pathways are right up there in importance. No surprise here. But if you can use inflammatory proteins to make an aging clock, you have a solid beginning.
  • Sex hormones that change with age turn out to be even more prominent in their list. The first several involve FSH and LH. These are hormones connected with women’s ovarian cycles; but after menopause, when they are not needed, their prominence shoots up, and not just once-a-month, but always on. Men, too, show increases in LH and FSH with age, though they are more subtle. I first became aware of LH and FSH as bad actors from the writings of Jeff Bowles more than 20 years ago.
  • “GDF15 is a protein belonging to the transforming growth factor beta superfamily. Under normal conditions, GDF-15 is expressed in low concentrations in most organs and upregulated because of injury of organs such as liver, kidney, heart and lung.” [Wikipedia]  “GDF15 deserves a story of its own. The authors identify it as the single most useful protein for their clock, increasing monotonically across the age span. It is described sketchily in Wikipedia as having a role in both inflammation and apoptosis, and it has been identified as a powerful indicator of heart disease. My guess is that it is mostly Type 1, but that it also plays a role in repair. GDF15 is too central a player to be purely an agent of self-destruction.” [from my blog last year]
  • Insulin is a known modulator of aging (through caloric restriction and diabetes).
  • Superoxide Dismutase (SOD2) is a ubiquitous antioxidant that decreases with age, leaving the body open to ROS damage.
  • Motilin is a digestive hormone. Go figure. Until we understand more, my recommendation would be to leave this one out of the aging clock algorithm.
  • Sclerostin is a hormone that regulates bone growth (it acts to inhibit bone formation). It is implicated in osteoporosis, and well worth inclusion.
  • RET and PTN are called “proto-oncogenes” and are important for development, but associated with cancer later in life.

Which proteins are most relevant?

The Horvath clocks have been created using “supervised” optimization, which involves human intelligence that oversees the application of sophisticated algorithms. But what happens if you automate the “supervised” part? On the one hand, you must expect mistakes and missed opportunities that you wouldn’t have with human supervision. On the other hand, once you have a machine learning algorithm, you can apply it over and over to different subsets of the data, produce hundreds of different clocks, and choose those that perform best. That’s what Johnson and co-authors have done in the current paper. They describe creating 1565 different clocks based on different subsets of a universe of 529 proteins. In my opinion, their most important work combines biochemical knowledge with statistical algorithms. The work using statistical algorithms alone is much less interesting, for reasons detailed in the Prelude above.
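For the curious, the brute-force half of that strategy might look something like this (a sketch with simulated data, not the authors’ code):

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_subjects, n_proteins = 400, 529

# Simulated plasma proteome: only the first 20 proteins carry an age signal
X = rng.normal(size=(n_subjects, n_proteins))
age = 40 + 3 * (X[:, :20] @ rng.normal(size=20)) \
         + rng.normal(scale=4, size=n_subjects)

X_tr, X_te, y_tr, y_te = train_test_split(X, age, random_state=0)

best_score, best_subset = -np.inf, None
for _ in range(50):                  # the paper reports 1565 clocks
    subset = rng.choice(n_proteins, size=100, replace=False)
    clock = ElasticNetCV(cv=3).fit(X_tr[:, subset], y_tr)
    score = clock.score(X_te[:, subset], y_te)   # R^2 on held-out subjects
    if score > best_score:
        best_score, best_subset = score, subset

# Caveat: picking the winner on the same held-out set introduces its own
# selection bias; a third, untouched validation set would be needed.
print(best_score)
```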

Summary

This new offering from Lehallier and Johnson is a great step forward in that

  • proteins in the blood are a broader picture of epigenetics than methylation alone
  • specific proteins are linked to specific interventions that are reliably connected to aging in the right direction. Crucially, the clock is designed to include type (1) changes (from the Prelude above) and to exclude type (2).

Next steps

  • to calibrate the clock not with calendar age but with future mortality. This would require historic blood samples, and it is the basis of the Levine/Horvath PhenoAge clock.
  • to optimize the clock separately for different age ranges or, equivalently, to use non-linear fitting techniques in constructing the clock algorithm
  • to commercialize the aptamer technology, so that it is available more widely and more cheaply

Elysium Index

Elysium is a New York company advised by Leonard Guarente of MIT and Morgan Levine (formerly Horvath’s student, now at Yale). They have an advanced methylation clock available to the public, which they claim is more accurate than any so far. Other clocks are based on a few hundred CpG sites that change most reliably with age, but the Index clock uses 150,000 separate sites (!) which, they claim, offers more stability. The Horvath clocks can be overwhelmed by a single CpG site that is measured badly. (I have personal experience with this.) Elysium claims that variations from one day to the next or one lab slide to the next tend to average out over such a large number of contributions. On the other hand, as a statistician, I have to wonder about deriving 150,000 coefficients from a much smaller number of individuals. The problem is called overfitting, and the risk is that the function doesn’t work well outside the limited data set from which it was derived.
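The worry is easy to demonstrate. In this sketch (scaled down for speed), the predictors are pure noise, yet a linear model with more coefficients than subjects fits the training data perfectly and predicts nothing about new subjects:

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(2)
n_subjects, n_sites = 50, 1500        # many more sites than subjects

X_train = rng.normal(size=(n_subjects, n_sites))   # pure-noise "methylation"
age_train = rng.uniform(20, 80, size=n_subjects)   # no real signal at all

coef, *_ = lstsq(X_train, age_train, rcond=None)
print(np.allclose(X_train @ coef, age_train))      # True: a perfect "fit"

# The same coefficients applied to new subjects predict nothing:
X_test = rng.normal(size=(n_subjects, n_sites))
age_test = rng.uniform(20, 80, size=n_subjects)
print(np.corrcoef(X_test @ coef, age_test)[0, 1])  # ~0: no generalization
```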

In connection with the DataBETA project, I have been talking to Tina Hu-Seliger, who is part of the Elysium team that developed Index. I am impressed that they have done some homework that other labs have not done. They compare the same subject in different slides. They store samples and freeze them and compare results to fresh samples. They compare different clocks using saliva and blood.

I wish I could say more but Elysium Index is proprietary. There is a lot I have not been told, and there is more that I know that I have been asked not to reveal. I don’t like this. I wish that all aging research could be open sourced so that researchers could learn from one another’s work.

Two other related papers

DeepMAge is a new methylation clock, published just this month, based on more sophisticated AI algorithms instead of the standard 20th-century statistics used by Horvath and others thus far. Galkin and his (mostly Hong Kong, mostly InSilico) team are able to get impressive accuracy in tracking chronological age. This technology has forensic applications, in which evidence of someone’s calendar age is relevant, independent of senescence.  And the technology may someday be the basis for more accurate predictions of individual life expectancy. But, as I have argued above, a good clock for evaluating anti-aging measures must look at more than statistics. Correlation is not the same as causation, and only detailed reference to the biochemistry can give confidence that we have found causation.

Biohorology is a review paper from some of this same InSilico team together with some prominent academics, describing the latest crop of aging clocks. The manuscript is long and detailed, yet it never addresses the core issue that I raise in the Prelude above, about the need to distinguish upstream causes of aging from downstream responses to damage.

The beginning of the ms contains a gratuitous and outdated dismissal of programmed aging theories.

“Firstly, programmed aging contains an implicit contradiction with observations, since it requires group selection for elderly elimination to be stronger than individual selection for increased lifespan.”

Personally, I bristle at reading statements like this, which ignore an important message of my own work and, more broadly, ignore the broadened understanding of evolution that has emerged over the last four decades.

“Secondly, in order for the mechanism to come into place, natural populations should contain a significant fraction of old individuals, which is not observed either (Williams, 1957).” 

This statement was the basis not just of Williams’s 1957 theory, but more explicitly of the Medawar theory 5 years earlier. Neither of these eminent scientists could have known that their conjecture about the absence of senescence in the wild would be thoroughly disproven by field studies in the 1990s. The definitive recent work on this subject is [Jones, 2014].

Take-home message

For the purpose of evaluating anti-aging treatments, the ideal biological clock should be created with these two techniques:

  • It should be trained on historic samples where mortality data is available, rather than current samples where all we know is chronological age, and
  • Components should be chosen “by hand” to assure all are upstream causes of aging rather than downstream responses to damage (type (1) from the analysis above).

DeepMind Knows how Proteins Fold

This week, DeepMind, a London-based Google company, claims to have solved the most consequential problem in computational biochemistry: the protein-folding problem. If true, this could be the start of something big.


What does it mean, and why is it important? Let’s start with signal transduction. This is a term for the body’s chemical computer. The nervous system, of course, constitutes a signal-processing and decision-making engine; and in parallel, there is a chemical computer. The body has molecules that talk to other molecules that talk to other molecules, sending a cascade of ifs and thens down a chain of logic. The way molecules with very complex shapes fit snugly together is the language of the chemical computer. These molecules with intricate shapes are proteins, and they are not formed in 3D. Rather, DNA encodes a linear peptide sequence, which ribosomes (present in every cell) translate into a chain of amino acids, chosen from a canonical set of 20. Each peptide chain folds into a protein with a characteristic shape, and it is these shapes that constitute the body’s signaling language. Most age-related diseases can be traced to an excess or a deficiency of these protein signal molecules.

So signal proteins are targets of medical research. Pharmaceutical interventions may modify signal transduction, perhaps by goosing signaling at some juncture, or by siphoning off a particular signal with another chemical designed to fit perfectly into its bumps and hollows. Up until now, there has been a lot of trial and error in the lab, looking for chemicals with complementary shapes. Imagine now that the Deep Mind press release is not exaggerating, and they really can reliably predict the shape that a peptide will take once it is folded. Then many months of laboratory experiments can be replaced with many hours of computation. All the trial-and-error work can be done in cyberspace. An inflection point in drug development, if it’s true.

Why it’s a Hard Problem

Computers solve large problems by breaking them down into a great many small ones. But protein folding can’t be solved by looking separately at each segment of the protein molecule. Everything affects everything else, and the optimal shape is a property of the whole. Proteins are typically huge molecules, with hundreds or thousands of amino acids chained together. The peptide bonds allow for free rotation. So the number of shapes you can form with a given chain is truly humongous. The sheer number of possibilities would overwhelm any computer program that tried to deal with the different shapes one at a time.
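To put a number on “humongous”, here is my own back-of-envelope version of what’s known as Levinthal’s paradox, assuming a conservative 3 allowed rotations per backbone linkage:

```python
# A modest protein of 150 amino acids has 149 backbone linkages.
states_per_bond = 3
n_bonds = 149

conformations = states_per_bond ** n_bonds
print(f"{conformations:.2e}")   # ~1.2e71 -- hopeless to enumerate one at a time
```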

The thing that stabilizes a given shape is hydrogen bonding. Nominally, each hydrogen atom can form only one bond to a carbon or oxygen, but every hydrogen is a closet bigamist, and it longs to couple with a nearby carbon or (better still) oxygen atom even as it is bound primarily to its LTR partner. Every twist and bend in the molecular chain allows some new opportunities for hydrogen bonding, while removing others. The breakthrough in computing came from 1% inspiration, 99% perspiration (Edison’s recipe). A key input was to map the structure of 170,000 known, natural proteins, and to train the computer to be able to retrodict the known results. Then, when working with a new and unknown shape, the computer makes decisions that are based on its past success.

How does it make the decisions? No one knows. One of the most successful techniques in artificial intelligence uses generic layers of input and output with programmable maps, and the maps are trained to give the right answer in known cases. But the fundamental logic that drives these decisions remains opaque, even to the programmers. 

 

It gets more complicated

Many proteins don’t have a unique folded state. They are in danger of folding the wrong way. So there are proteins called chaperones that help them to get it right. These chaperones don’t explicitly dictate the protein’s final structure, but rather they place the protein in a protected environment. There are 20,000 different proteins needed in the human body, but only a handful of different chaperones.


Factoid: Most inorganic chemical reactions take place on a time scale of billionths of a second. Organic reactions are somewhat slower. But protein folding happens on a human time scale of seconds, or even minutes.


The AI that finds a protein’s ultimate structure must have knowledge of the environment in which the protein folds. It is not merely computing something intrinsic to the sequence of amino acids that makes up the nascent protein. To underscore this problem, proteins fold incorrectly almost as often as they fold correctly. There is an army of caretaker proteins that inspect and correct already-folded proteins. Misfolded proteins tend to clump together, and there are chemicals specialized in pulling them apart. For the lost causes, there are proteasomes, which break the peptide bonds and recycle a damaged protein into constituent parts. The name ubiquitin derives from the fact that these protein recyclers are found in every part of every cell.

The question arises, how do these caretaker proteins know what is the correct shape and what is a misfolded shape? Remember that the number of chaperones and caretakers is vastly smaller than the number of proteins that they attend to, so they cannot contain detailed information about the proper conformation of each protein they service. And this leads to a deep question for AI: It’s hard enough to know how a particular protein chain will fold into a conformation that is thermodynamically optimized. But the conformation optimized for least energy may or may not be the one that is useful to the body.

Prions are mysterious

In the late 1970s, a young neurologist named Stanley Prusiner began to suspect that misfolded proteins could be infectious agents. He coined the term prion for a misfolded protein that could cause other proteins to misfold. This idea defied prevailing ideas about how pathogens evolve, and in particular ran afoul of Francis Crick’s Central Dogma of Molecular Biology, which said that information was always stored in DNA and transferred downstream to proteins.

The evolutionary provenance of prions remains a mystery, but it is now well-established that certain misfolded proteins can cause a chain reaction of misfolding. The process is as mysterious as it is frightening. Neil Ferguson, who has become infamous this year for his apocalyptic COVID contagion models, frightened the UK in an earlier episode into slaughtering and incinerating more than 6 million cows and sheep, in a classic example of panic leading to overkill.

Prusiner had to wait less than 20 years before the medical community acceded to his heresy. He was awarded the Nobel Prize in 1997.

Example and Teaser

This example is from a review I am preparing for this space next week. I am reading two recent papers about proteins in the blood that change as we age. Assuming that these signals are drivers of aging, what can be done to enhance the action of those that we lose, or suppress the action of those that increase with age? The connection to the present column is that knowledge of protein folding can be used to engineer proteins that redirect the body’s chemical signal transduction at a given intervention point. For example, FSH (follicle-stimulating hormone) is needed for just a few days of a woman’s menstrual cycle, but FSH levels rise late in life, with disastrous consequences for health. FSH shoots up in female menopause, and in males it rises more gradually.

FSH drives the imbalance in blood lipids associated with heart disease and stroke. In lab rodents, FSH can be blocked with an antibody, or by genetic engineering, with consequent benefits for cardiovascular health [ref] and protection against loss of bone mass [ref]. The therapy also reduces body fat: “Here, we report that this antibody sharply reduces adipose tissue in wild-type mice, phenocopying genetic haploinsufficiency for the Fsh receptor gene Fshr. The antibody also causes profound beiging*, increases cellular mitochondrial density, activates brown adipose tissue and enhances thermogenesis.” [ref] In the near future, we may be able to use computer-assisted protein design to create a protein that blocks the FSH receptor and do safely in humans what was done with genetic engineering in mice.
_______________
*Beiging is turning white adipose tissue to brown. Briefly, white fat stores energy and is associated with metabolic disease, while brown fat is burned for fuel.

Hyperbaric Hyperbole

An Israeli study came out last week that has been described as rejuvenation via hyperbaric oxygen. I’m not taking it very seriously, and I owe you an explanation why.

  • The main claim is telomere lengthening. I used to think of telomeres as the primary means by which aging is programmed, but since the Danish telomere study [Rode 2015], I think that telomeres play a minor role.
  • I think that methylation age is a far better surrogate than telomere length. The study doesn’t mention methylation age, but reading between the lines…
  • I think the study’s results can be explained by elimination of senescent white blood cells. This might explain the observed increase in average telomere length, even without expression of telomerase. 
  • Are there signs of senolytic benefits in other tissues? That’s the big question going forward.

A study was published in Aging (Albany) last week claiming to lengthen telomeres and eliminate senescent cells in a test group of 20 middle-aged adults using intermittent hyperbaric oxygen treatment. It was promoted as age reversal in popular articles [for example], apparently with the encouragement of Tel Aviv University.

Telomeres as a surrogate marker for aging

Several years ago, I was enthusiastic about the use of telomere length as a measure of biological age. Telomeres shorten progressively with age, and I thought this mechanism provided a good candidate for a mechanism of programmed aging. But when the Rode study came out of Copenhagen (2015), I saw that the scatter in telomere length was too large for this idea to be credible.

I came to think that telomere shrinkage plays a minor role in aging. Around the same time, I became enthusiastic about methylation clocks. Methylation changes correlate with age far more strongly, with much less scatter.

So I think that methylation is plausible as a primary cause of aging, and telomere shrinkage, less so.

Telomere length vs age, new data

The Treatment

The air we breathe is only 21% oxygen. Breathing pure oxygen, five times as concentrated as in air, is a temporary therapy (hours at a time, but not days) for people who have impaired lungs. But prolonged exposure to pure O2 can injure the lungs and other tissues as well. Oxygen is highly reactive, and the body’s antioxidant system is gauged to the environments in which we evolved, so oxygen therapy is not to be taken lightly.

Hyperbaric Oxygen Therapy (HBOT) is oxygen at double strength: the patient breathes pure oxygen at twice atmospheric pressure. If you just put a tube in your mouth with that much pressure, you wouldn’t be able to hold it, or to exhale. But the body can withstand high pressures as long as the pressure is all around, not just inside the lungs. If you SCUBA dive, at about 33 feet below the surface the ambient pressure is two atmospheres, and SCUBA tanks adjust to feed air into your mouth at a pressure that is matched to the surrounding water.

(Incidentally, pressure varies a lot with altitude, so that in Denver it’s about 20% lower than in New York. Two years ago, I trekked in the Himalayas at 17,000 feet, where the air pressure is only half the standard (sea level) value, and of course there is only half as much oxygen.)
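The arithmetic behind these numbers is simple enough to sketch (rules of thumb, not precision atmospherics):

```python
P0 = 1.0                          # sea-level pressure, atmospheres

# Seawater adds roughly one atmosphere per 33 feet of depth.
def pressure_at_depth(feet):
    return P0 + feet / 33.0       # absolute pressure, atm

print(pressure_at_depth(33))      # 2.0 atm -- the HBOT operating point

# Atmospheric pressure falls off roughly exponentially,
# halving about every 18,000 feet of altitude.
def pressure_at_altitude(feet):
    return P0 * 0.5 ** (feet / 18_000)

print(pressure_at_altitude(5_280))    # Denver: ~0.8 atm
print(pressure_at_altitude(17_000))   # Himalayan trek: ~0.5 atm
```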

HBOT requires higher ambient pressure, not just higher pressure in the oxygen line. The patient has to be enclosed in a chamber where the ambient pressure is twice atmospheric pressure. Pure oxygen is expensive enough that the ambient air is just normal air at high pressure, and the patient is given oxygen to breathe from a tank. The patient can be in a pressurized room or lying in a personalized chamber.

HBOT has been around for a century, and standard medical uses are for detoxification (carbon monoxide poisoning, for example), gangrene, and chronic infections. More recently, HBOT has been used with success for traumatic injury, especially nerve damage. There are studies in mice in which HBOT in combination with a ketogenic diet has successfully treated cancer.

In the new Israeli study, subjects received 90 minutes of HBOT therapy 5 days a week for 12 weeks. For 5 minutes of every 20, patients breathed ordinary 21% air. The intermittent treatment was described as inducing some hypoxia adaptations. Apparently, the body adjusts to the high oxygen environment, and then it senses (relative) oxygen deprivation for those 5 minutes.

How does it work?

There is no accepted theory for how HBOT works, so I feel free to speculate. The primary role of a highly oxidative environment is to destroy. That’s probably how HBOT treats infections, since bacteria are generally more vulnerable to oxidative damage than cells of our bodies. Another thing that HBOT does well is to eliminate necrotic tissue, and I wouldn’t be surprised if it turns out to be an effective cancer treatment, since tumor cells thrive in an anaerobic environment. But the body also uses ROS (reactive oxygen species) such as H2O2 as distress signals that dial up chemical protection and repair. This is akin to hormesis, and I’m inclined to think that when HBOT promotes nerve growth, it is via a distress signal.

Results

Authors of the new study make two claims: that telomeres are lengthened in several classes of white blood cells, and that senescent white blood cells are eliminated. Let’s take them in reverse order.

Elimination of senescent cells has been a promising anti-aging therapy since the pioneering work of van Deursen at the Mayo Clinic. A quick refresher: telomeres get shorter each time cells replicate, and in our bodies, some of the cells that replicate most (stem cells and their offspring) develop short telomeres late in life that threaten their viability. Cells with short telomeres go into a state of senescence, in which they send out signals (inflammatory cytokines) that increase levels of inflammation in the body and can also induce senescence in adjacent cells, in a chain reaction. Senescent cells are a tiny proportion of all cells in the body, and van Deursen showed that the body is better off without them. Just by selectively killing senescent cells in a mouse model, he was able to extend their lifespan by about 25%. But to do the experiment, he had to genetically engineer the mice in such a way that the senescent cells would be easy to kill selectively. Ever since this study, the research community has been looking for effective senolytic agents that could kill senescent cells and leave regular cells alone (without having to genetically engineer us ahead of time).

The new Israeli study demonstrates that senescent white blood cells have been reduced. (Red blood cells have no chromosomes, so they can’t have short telomeres and can’t become senescent in the same way. They just wear out after a few months.) The effect continued after the 60 hyperbaric sessions were over, suggesting that HBOT kills the cells slowly, or damages them so that they die later. Apparently, the reduction was measured by separating different cell types and counting them. There was a great deal of scatter from one patient to the next.

The first claim is that average telomere length was increased in some populations of white cell sub-types. Again, there was a great deal of scatter in the data, with some of the subjects decreasing telomere length and others increasing. For example, when they say that B cell telomeres increased by 22% ± 40%, I interpret that to mean that the mean telomere length increased by 22%, but the combined standard deviation from the before and after measurements was 40% of the original length. Hence, a great deal of scatter.

Aside about statistics (With apologies — this is from my geeky side)

First, what does 22% ± 40% mean? How can that be statistically significant? Answer: The standard deviation of a set of measurements is a measure of the scatter. It tells you how broadly they differ from one another. If you’re looking for the average of that distribution, you can be pretty sure that the average isn’t out at the edges, so the uncertainty in the average is a lot smaller than the standard deviation. How much smaller? The answer is the square root of N rule. The “standard error of the mean”, or SEM, is the standard deviation divided by the square root of the number of points, or √N. So the 40% standard deviation gets divided by the square root of the number of subjects in the study, √26=5.1, and “22% ± 40%” should really be reported as 22% ± 8%. The mean is 22% and the uncertainty in that 22% is 8%.
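In code, this is just the arithmetic from the paragraph above:

```python
import math

sd_percent = 40.0                 # scatter across the 26 subjects
n_subjects = 26

sem = sd_percent / math.sqrt(n_subjects)
print(f"SEM = {sem:.1f}%")        # about 8%, so the result reads 22% ± 8%
```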

The way this group did the statistics was based on

  • Finding the average telomere length among 26 subjects after the study
  • Dividing by the average telomere length among 26 subjects before the study

First they average, then they divide.

But it’s well-known (to statisticians) that the most sensitive test is to reverse the operations. First divide, then average. In other words, compare each subject’s telomeres after the study with the same subject before the study. If you do the statistics this way, then the original scatter among the different subjects all cancels out. You can start with subjects of vastly different telomere lengths, and it doesn’t matter to the statistics, so long as each one of them changes in a consistent way.

If you average first (before dividing), the scatter among the initial group imposes a penalty in statistical significance, even though that has nothing to do with effectiveness of the treatment.
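Here’s a simulated illustration (my own, in Python) of how much the order of operations matters when baselines vary wildly but each subject changes consistently:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 26
before = rng.normal(loc=100, scale=40, size=n)    # huge between-subject scatter
after = before * rng.normal(loc=1.05, scale=0.02, size=n)  # consistent ~5% gain

# Average first, then compare: the baseline scatter buries the effect.
print(stats.ttest_ind(after, before).pvalue)      # typically far from significant

# Divide first, then average: each subject is its own control.
ratios = after / before
print(stats.ttest_1samp(ratios, popmean=1.0).pvalue)   # hugely significant
```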

So this raises the question: Why did the authors do the statistics this less-sensitive way? They hint at an answer: “repeated measures analysis shows a non-significant trend (F=4.663, p=0.06).” They seem to be saying that the test which normally gives a better p value, in this case gives a worse p value.

That can only happen if the people who had the longest telomeres at the end of the study were not the same as the people who had the longest telomeres at the beginning.

Here’s what I think is really going on

Telomerase is the enzyme that increases telomere length. We think of telomerase as anti-aging, and supplements such as astragalus and gotu kola and silymarin are gobbled up for their telomerase activation potential. When we think of longer telomeres as a result of a study, we imagine that telomerase has been activated.

But in this case, I think that the average has gone up simply because the cells with short telomeres have been killed off. The authors are telling us that there are fewer senescent cells as a result of the treatment. Senescent cells are the ones with the shortest telomeres. At the beginning, the average telomere length is an average over a wide range of cells with long and short telomeres. At the end, you have the same long telomeres in the average, but the shortest ones are gone, so the average has increased.

I’m suggesting that telomerase has not been activated. There has been no elongation of telomeres, but the average length has increased because cells with the shortest telomeres have been eliminated.

It’s only a hypothesis, but it might help explain why the people who had the longest average telomere length at the beginning were not the same as the people who had the longest average telomere length at the end. The senescent cells that were being eliminated had no relationship to the telomere length in other cells.
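A toy calculation makes the point (again my own sketch; the numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
telomeres = rng.normal(loc=100, scale=25, size=10_000)   # arbitrary units

# "Senolysis": remove the 10% of cells with the shortest telomeres.
survivors = telomeres[telomeres > np.percentile(telomeres, 10)]

print(telomeres.mean())   # ~100 before treatment
print(survivors.mean())   # ~105 after -- yet no telomere got any longer
```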

Next steps

One thing I’d like to know is whether the HBOT treatment affected methylation age by any of the Horvath clocks. I’ve written to the authors with this question, and haven’t received a response. Maybe they did the methylation testing and didn’t report the results because they were negative—just a guess.

But even without reprogramming methylation, the therapy can be valuable if it is eliminating senescent cells generally, and not just in white blood cells. An easy first test would be whether inflammatory cytokines in the blood decreased after the treatment. Confirmation would come from the kind of test van Deursen did, assaying senescent cells in different tissues.

If hyperbaric oxygen can be shown to decrease methylation age, that would be a promising finding. If not, but the treatment has general senolytic effects (not just in white blood cells), it may yet have value as an anti-aging treatment. Maybe the authors already know the answers to these questions; if not, they should be busy finding out.

Ten Elements of the False COVID Narrative (last 5)

I am heartened that the tide seems to be turning. The Great Barrington Declaration is attracting thousands of scientists’ signatures each day. And the World Health Organization’s COVID spokesperson has done an about-face and come out in opposition to lockdowns, recognizing explicitly the suffering, the poverty, and the health implications of the policy most of the world has pursued these 6 months.

The global response to COVID claims science for its foundation, and my aim in this series is to show that what is being done does not represent a scientific consensus, and departs sharply from past public health practice. I don’t understand who is behind this, but I suspect that it is not mere incompetence or bureaucratic inertia; this suspicion is based on

  • Fraudulence of chloroquine trials
  • Suppression of scientific dissent
  • Evidence that SARS-CoV-2 originated in a lab, and suppression of this evidence in the scientific literature and in the press
  • Secrecy in planning the political response to COVID
  • Neglect of all the ancillary harms from lockdown in deciding on a response. (This warning published last March in the NYTimes by a senior epidemiologist from Yale probably could not be published in October.)
  • Well-established, safe and effective treatments for COVID are being bypassed to hang the world’s future on the mirage of a vaccine, though vaccines are (1) far more expensive and (2) much harder to prove safe and effective [See #10 below]
  • Public announcements and even the way the numbers are calculated are inciting widespread fear in the public. I think this fear is far more than is warranted, and I suspect that this is by design.

The method behind this madness remains elusive to me. But political journalists outside the established media are emphasizing the military connection. One investigative journalist whom I respect for her courage and her diligence is Whitney Webb. Here, she shows us that Operation Warp Speed is a military project much more than a public health project. It is plausible to me that COVID originated in a bioweapons research lab. And from the beginning, the US response was planned not by public health experts but by secret meetings of military leaders.

I hope you will explore these connections and come to your own conclusions. My more modest goal in this series is to establish that “science” cannot be invoked to justify the lockdowns, the masking, the secrecy, the closure of schools and churches and cultural institutions. Least of all can “science” justify censorship, because the process by which science reaches for truth depends on open debate from a diversity of perspectives.


6. “New cases of COVID are expanding now in a dangerous Second Wave”

We’re concerned not for the virus but for the suffering and death that it causes. In March and April, we were frightened by the rising numbers of COVID deaths. But in May, CDC stopped reporting daily deaths and switched to reporting daily cases.

Traditionally, “cases” are defined as people who become seriously ill, and for a short while that was the definition in use. Then it became “people who test positive for the virus”. On May 19, CDC started adding people who tested positive for antibodies to the virus as “cases”. We’re told that there is a troubling increase in COVID cases lately. If people really were getting sick, this would be disturbing. But if it is an increase in perfectly healthy people testing positive for antibodies, it is a wholly good thing. It’s called “herd immunity”.

No test is infallible, and invariably there are people who test positive who don’t really have the virus. These are false positives. As the prevalence of COVID has dropped with summer weather and more of the population already exposed (herd immunity), the rates are so low in many urban areas that false positive tests are swamping the true positives, and we really can’t say anything about trends. This recent article concludes that the quality of available data is no longer a reliable basis for policy decisions.

https://www.nytimes.com/2020/08/29/health/coronavirus-testing.html

The low death rates are, of course, a good thing. The problem is that the false positives are being reported without explanation as though they were meaningful data about prevalence of COVID.
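To make the arithmetic concrete, here is a minimal calculation of how the meaning of a positive test collapses as prevalence falls. The sensitivity and specificity figures are illustrative assumptions of mine, not values from any particular test:

    # Fraction of positive results that reflect true infections (Bayes' rule).
    # Sensitivity and specificity below are illustrative assumptions.

    def positive_predictive_value(prevalence, sensitivity, specificity):
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    for prevalence in (0.20, 0.05, 0.005):
        ppv = positive_predictive_value(prevalence, sensitivity=0.95, specificity=0.97)
        print(f"prevalence {prevalence:6.1%}: {ppv:5.1%} of positives are real")

    # prevalence  20.0%: 88.8% of positives are real
    # prevalence   5.0%: 62.5% of positives are real
    # prevalence   0.5%: 13.7% of positives are real

With the very same test, a positive result goes from nearly nine-in-ten reliable to mostly noise as the infection recedes; this is the sense in which false positives swamp true positives.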

COVID is no longer among the top 5 causes of death in America. Why is our government slanting the reports in ways that keep us scared? I don’t have an answer to this question. I know there is a great deal of money riding on vaccines, and that by any sane criterion, COVID vaccines are past their usefulness, even if we had reason to believe they were safe. But I don’t think this fully explains the fear campaign. I suggest that it’s my job and yours to keep asking questions.

7. “Dr Fauci and the CDC are guiding our response to COVID according to the same principles of epidemic management that have protected public health in the past.”

On the contrary, standard public health procedure is to quarantine the sick and protect the most vulnerable. Telling a whole country full of healthy people to stay at home is entirely new, unstudied, a sharp departure from previous practices.

Closing down manufacturers, offices, stores, churches, concert halls, theaters, even closing private homes to social and family guests—all this is a radical new experiment. There are no scientific studies to justify it, because it has never been done in the past.

Containment of the virus is feasible if it is begun very early, when the virus is geographically contained and the number of cases is small enough that every case can be accounted for. It’s then possible for severe isolation to halt the virus in its tracks. (This was the strategy pursued by China.) Once there are thousands of cases, it is feasible to slow the spread, but not to change the fact that eventually, everyone in the population will be exposed.

Dr Fauci was clearly aware of this, because when he made his March announcement, he was asking America to isolate only for a few weeks. His goal was explicitly to “flatten the curve”, meaning to make sure the disease didn’t spread so rapidly that hospital ICUs would be overwhelmed. At the beginning, he (quite reasonably) did not claim that the measures he prescribed to America would contain the virus, but only slow its spread.

It worked. Except in a few isolated regions, there was never a shortage of hospital beds. But six months later, we are still masking and social distancing, long after the original justification for these measures has lapsed and, it seems, been forgotten.

8. “Asymptomatic carriers are an important vector of disease transmission, which must be isolated if we are to stop the spread of COVID”

The justification for separating healthy people from other healthy people is the idea that we never know who is really healthy. We know from past experience that people with colds and flu become contagious a day or two before they have symptoms, though the viral load they transmit is greatly increased once the virus has taken hold and they are coughing and sneezing.

Extending quarantine from its traditional application (people who are obviously sick) to the general population is a huge innovation, imposing tens of trillions of dollars in lost productivity worldwide, as well as social and psychological hardship. Isolation kills. It could only be justified by evidence that the virus cannot be contained by the same methods that have been used for all previous epidemics. Where is the evidence that asymptomatic carriers are a critical link in the chain of transmission?

Dr Fauci got it right at first when he said, “In all the history of respiratory-borne viruses of any type, asymptomatic transmission has never been the driver of outbreaks. The driver of outbreaks is always a symptomatic person.” [Jan 28] Subsequently, there were anecdotal articles documenting particular cases in which asymptomatic transmission did occur [one, two, three]. How can we know whether asymptomatic carriers are an important part of the dynamic spread of the disease? This paper is the only attempt I have found to study the question with a detailed mathematical model; but, in the end, it just calculates unknowns from unmeasurables, and reaches no conclusion. We are left with common sense, which says that patients with symptoms have much higher viral levels (that’s why they are sick). They are also coughing and aspirating more of the virus (that’s why the virus evolved to make us cough). When Maria van Kerkhove, speaking for the WHO, stated that asymptomatic transmission was not important, she was reined in by those who control the narrative, and she walked back the statement the next day.

9. “The lower death rates now compared to April are due to protective measures such as social distancing, mask-wearing, and limited travel.”

Why would we expect lower death rates? From measures intended to limit social contact and spread of the virus, we should expect lower infection rates. But that’s not happening; instead, we have higher case rates coupled with lower death rates. This can reasonably be explained by (1) changes in definition of what constitutes a “case” (see #6 above), (2) wider testing, (3) the virus evolving, as most viruses tend to do, toward higher infectivity and lower fatality, and (4) fall weather.

10. “With enough resources, pharmaceutical scientists can develop a vaccine in a matter of months, and provide reasonable assurance that it is safe.”

This is the most dangerous of all the fictions and, not incidentally, the one most closely related to $6 billion in NIH investments and tens of billions in projected corporate profits.

The subject of vaccines is highly polarizing. On the one hand, the mainstream press, especially the scientific press, has been hammering with singular purpose the message that vaccines are safe and effective and necessary not just for individual protection but for public health. On the other hand, about one third of the American public distrust what they hear about vaccines, enough so that they will refuse a vaccine (if not coerced). [Updated to half of Americans, according to a recent Pew survey]

So much has been written about vaccine safety that I would not presume to try to convince you one way or the other in a few paragraphs. I can tell you that my own attitude changed when I had a bad reaction four years ago to a pneumonia vaccine (PCV13), and learned that there is no corporate liability for vaccine injuries. An act of Congress in 1986 exempted vaccines from the standard testing for safety and efficacy that other medications must pass, and also indemnified vaccine companies from all liability for harm caused by either design or manufacture. In my opinion, this is a dangerous situation, as it removes all motivation for companies to make a safe product. Recent amendments to the 2005 PREP Act take the extraordinary extra step, for COVID vaccines only, of absolving the companies of liability in advance for fraud and intentional infliction of harm. [I thought this was true when I wrote it in October.]

I’ll close this series by defending my claim above that, compared to treatments, vaccines are (1) far more expensive and (2) much harder to prove safe and effective.

  1. One reason that vaccines are more expensive for the public (and correspondingly more profitable for the industry) is that vaccines are for everyone, while treatments are only for the less than 1% of the population that becomes sick enough to need them. There is a race to patent a vaccine, a race for billions of dollars in private profits that derive from spending public research funds, and the profit potential is distorting our public priorities. The best treatment we have is hydroxychloroquine, which is out of patent, has a 65-year safety record, and costs pennies per dose. FDA can legally authorize vaccines on an emergency, fast-track basis only if it finds that no viable treatments are available. This is ample explanation for the campaign to discredit chloroquine and other effective treatments.
  2. Because a vaccine is given to 100 times as many people, it must be 100 times safer in order to impose the same health burden from side effects. COVID is only life-threatening for people who are old and/or disabled; so to establish the safety of a vaccine, clinical trials must include people who are old and/or disabled. The relevant question is: are people who receive the vaccine dying at a lower rate than people who received a placebo? But none of the trials are being designed to ask this question. (A back-of-envelope calculation after this list shows why they could not be.)

    There is a reason why vaccines are tested over many years, and why “warp-speed” testing cannot tell us what we need to know. Though a vaccine is always designed with one particular pathogen in mind, the effects of vaccination, beneficial and detrimental, extend to the immune system generally. This is the complex subject of cross-immunity [ref, ref, ref, ref]. It is generally true that live-virus vaccines tend to confer cross-immunity toward non-target viruses, while vaccines made from protein fragments tend to impair immunity to non-target infections. Only one of the candidate vaccines is derived from live, attenuated virus. The new class of RNA vaccines [Moderna] is entirely untested, and we have no idea what the long-term effects will be, but initial results give us pause.
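Here is the back-of-envelope power calculation promised above, written as a small Python sketch. Every rate in it (the attack rate, the infection fatality rate, the assumed halving of death risk) is an assumption of mine chosen only to show the order of magnitude, not a figure from any trial protocol:

    # How many participants would a trial need to detect a halving of
    # COVID deaths (not just symptomatic cases)?  All rates below are
    # illustrative assumptions.

    z_alpha = 1.96          # two-sided 5% significance
    z_beta = 0.84           # 80% power

    attack_rate = 0.02      # assume 2% of participants infected during the trial
    ifr = 0.005             # assume 0.5% of infections are fatal
    p1 = attack_rate * ifr  # death risk in the placebo arm
    p2 = p1 / 2             # suppose the vaccine halves the death risk

    # standard two-proportion sample-size formula
    n_per_arm = (z_alpha + z_beta)**2 * (p1*(1 - p1) + p2*(1 - p2)) / (p1 - p2)**2
    print(f"participants needed per arm: {n_per_arm:,.0f}")   # ~470,000

Under these assumptions, roughly 470,000 participants per arm would be needed, an order of magnitude beyond the tens of thousands enrolled in the announced Phase 3 trials. That is why the trials are powered to count symptomatic cases rather than deaths.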

If you are open to an honest and competent criticism of vaccine science and politics, I recommend Robert F. Kennedy Jr.’s web site.


The Bottom Line

The story that we are being told about an ultra-lethal virus that “jumped to humans” and the scientific community converging on a response proportional to the threat—this story is unraveling, as more and more doctors and public health professionals are adding their voices to a global movement to restore sanity and integrity in the pandemic response.

Ten Elements of the False COVID Narrative (first 5)

Last week, I called for scientists to come forward and make a public statement that the world’s response to COVID is not consistent with best public health practices. As if in answer to my prayer, a meeting was held at Great Barrington, MA, from which emerged this statement, signed by doctors and professors from the world’s most prestigious institutions, as well as hundreds of professionals and thousands of others. You can sign, too. In this video, the three main authors present their message.

Their proposed strategy is to protect the old and most vulnerable and quarantine people with COVID symptoms, while allowing the young and strong to go back to school, go back to work, and acquire herd immunity for the benefit of everyone. This is fully aligned with past practice, and is just what Dr David Katz (Yale School of Public Health) proposed in the New York Times and in a video presentation back in March.

What they didn’t say

The authors of the statement were cognizant of politics and avoided judgment and recrimination. I agree, this was wise. They avoided talking about the evidence that the virus was laboratory-made. I agree, this was wise. They avoided mentioning the ineffectiveness of face masks. I agree, this was wise. They avoided mentioning effective treatment strategies, of which chloroquine is the best we have. I think this was a political judgment with which I disagree. Their statement would have been so much stronger if they had been able to say that the limited risk they proposed for the young and healthy would be that much lower because effective early and preventive treatment is available.


Here are ten messages that are essential pieces of the standard COVID narrative, but which are unfounded in actual science, and the promised rebuttals to each.

  1. “The origin of the SARS-CoV-2 virus was one of many random events in nature in which a virus jumps from one species to another.”
  2. “Chloroquine kills patients and is too dangerous to use against COVID”
  3. “The Ferguson model warned us of impending danger in time to take action and dodge a bullet.”
  4. “American deaths from COVID: 200,000 and counting”
  5. “Masks and social distancing are keeping the virus in check in our communities”
  6. “New cases of COVID are expanding now in a dangerous Second Wave”
  7. “Dr Fauci and the CDC are guiding our response to COVID according to the same principles of epidemic management that have protected public health in the past.”
  8. “Asymptomatic carriers are an important vector of disease transmission, which must be isolated if we are to stop the spread of COVID”
  9. “The lower death rates now compared to April are due to protective measures such as social distancing, mask-wearing, and limited travel.”
  10. “With enough resources, pharmaceutical scientists can develop a vaccine in a matter of months, and provide reasonable assurance that it is safe.”

Detailed rebuttals and references

1. “The origin of the SARS-CoV-2 virus was one of many random events in nature in which a virus jumps from one species to another.”

Strong but not dispositive evidence points to genetic engineering as the most probable origin of the virus. I wrote about this in detail last April in two installments [Part 1, Part 2].

There is no credible path by which a virus with the characteristics of SARS-CoV-2 could have appeared naturally in Wuhan last December. The “wet market” hypothesis died while no one was looking. The bats that harbor SARS-CoV-2’s closest cousin virus live 1,000 miles west of Wuhan, and the pangolins whose viruses match another part of the genome live 1,000 miles east of Wuhan. The SARS-CoV-2 genome includes a furin cleavage site and a spike protein matched to the human ACE-2 receptor. These very modifications to bat coronaviruses were the subject of published research, sponsored by our own NIAID and conducted at the Univ of NC and the Wuhan Institute of Virology.

2. “Chloroquine kills patients and is too dangerous to use against COVID”

Evidence for the effectiveness of chloroquine + zinc is overwhelming. It was the drug of choice to treat the first SARS epidemic in 2003. Countries in which chloroquine is used have COVID death rates typically four times lower than countries in which use is restricted.

source: HCQtrial.com

Dozens of credible studies have found major benefits of chloroquine, especially if it is used early and especially if it is accompanied by zinc supplementation. (Apparently, the mechanism of action is to open cell membranes to allow infected cells to be flooded with zinc, which effectively stops the virus from replicating. Quercetin is an over-the-counter supplement which has the same effect of opening cell membranes to zinc ions, and there are a few studies of quercetin for COVID [for example, one, two, three].)

Suppression of chloroquine treatment has defied historic precedents, and represents the most extreme denial of real science on this list of 10. Chloroquine is a cheap, widely-used drug with a 65-year history of use by millions of patients. It has a well-studied safety profile, since it is routinely prescribed not only for malaria treatment but as prophylactic protection for people traveling to areas where they are at risk of malaria exposure. It is also a standard treatment for lupus.

For the first time, doctors have been restricted in the off-label prescription of a drug. (Why aren’t they screaming about this?) With the combined effects of intimidation of doctors, actual restrictions, and policies of pharmacies, chloroquine treatment is effectively unavailable in most US states.

In May, a major study was published prominently in The Lancet, claiming that among 100,000 COVID patients on three continents, the death rate of those taking chloroquine was three times higher than that of those who did not receive chloroquine. Many smaller studies around the world were immediately canceled and never re-started. But when the authors could not produce the data to support their calculations, the study was retracted without comment. I am not alone in calling the Lancet study a major scientific fraud, but neither the authors of the study nor the editors of The Lancet have been held accountable to date.

Smaller frauds are perpetrated with studies that are designed to fail. (Anyone who has epidemiological experience knows how much easier it is to design a study to fail than to design a study that can succeed.) There are three ways this is usually done:

  • Failure to incorporate zinc supplementation.
  • Starting late. Once patients are in the hospital, treatment with HCQ is less effective, and by the time they are dying from a cytokine storm, HCQ is useless.
  • Using toxic dosages, up to 4x the standard chloroquine dose, which triggers heart arrhythmias in some patients.

Some of these “designed to fail” studies actually showed significant benefit, and were reported in such a way as to understate their significance. (Anyone with experience in reading pharmacology studies has seen that almost always, the authors put their best results out front at the risk of overstating their significance.) Here’s an example of doublespeak in a recent review:

“Trials show low strength of evidence for no positive effect on intubation or death and discharge from the hospital, whereas evidence from cohort studies about these outcomes remains insufficient.”

Is this sentence intended deliberately to confuse with double negatives? “Low strength of evidence for no positive effect?” What they really found was “overwhelming evidence for YES positive effect”. In the only large study among the eight reviewed, the death rate of patients receiving chloroquine was half the death rate among controls, despite the fact that all patients were started on chloroquine much later than optimal, and without supplemental zinc.

3. “The Ferguson model warned us of impending danger in time to take action and dodge a bullet.”

Neil Ferguson is a leading member of UK-SAGE, the Scientific Advisory Group for Emergencies. Ferguson and his team at Imperial College have made draconian predictions that failed to materialize on many occasions in the past.

In 2002, he calculated that mad cow disease would kill about 50,000 British people and another 150,000 once it was transmitted to sheep. There were only 177 deaths. In 2005, he predicted that the bird flu would kill 65,000 Britons. The total was 457 deaths…[Ferguson], true to his alarmist mindset, predicted with his “mathematical model” that 550,000 British people would die from Covid, as well as more than 2 million Americans, if a fierce lockdown did not come into effect. (Benjamin Bourgeois)

Subsequently, the population death rate of COVID-19 was discovered to be an order of magnitude smaller than what Ferguson was assuming, the lockdown was shown to be ineffective (see below), and still the death tolls in Britain and the US were not close to Ferguson’s predictions.

Ferguson predicted that without a lockdown, Sweden would suffer 100,000 deaths through June, 2020. In reality, the COVID death count for Sweden is 5,895 (as of 1 October), and the death rate is below one per day.

Was Ferguson the most credible biostatistician that the European governments could find in planning a response to COVID last winter, or was he only the most terrifying? Why were no other experts consulted?

4. “American deaths from COVID: 200,000 and counting”

At every turn, the COVID death count has been overestimated.

  • Hospitals were incentivized to add COVID to diagnosis and death certificates.
  • In an unprecedented departure from past practice, CDC instructed doctors to report COVID as the cause of death whenever patients seemed to have symptoms consistent with COVID, or if they tested positive for COVID and died of something else. Accounts of motorcycle accidents reported as COVID deaths are no joke.
  • The tests themselves have a high false positive rate. PCR tests were previously used only for laboratory research, not for diagnosis. They involve making 35 trillion copies (based on 45 amplification stages) of every stretch of RNA in a sample from a patient’s nose or mouth and looking for some that match a stretch from the COVID genome.
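The quoted figure is just the arithmetic of repeated doubling. A quick sketch, assuming the idealized case where every cycle exactly doubles the target (real reactions only approximate this):

    # Each PCR cycle roughly doubles the target sequence, so n cycles
    # turn one starting fragment into about 2**n copies (idealized).

    for cycles in (25, 35, 40, 45):
        print(f"{cycles} cycles -> ~{2**cycles:.3e} copies per starting fragment")

    # 45 cycles -> ~3.518e+13, i.e. about 35 trillion

The flip side of this exponential arithmetic is that a sample needing 40 or more cycles to register started from a vanishingly small amount of RNA, which is one reason high cycle-threshold positives are so hard to interpret.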

It is impossible to know what the real death count has been, but three weeks ago CDC released the bombshell that deaths from COVID alone, with no pre-existing chronic diseases, accounted for only 6% of the reported total.

5. “Masks and social distancing are keeping the virus in check in our communities”

Wearing a mask is perceived as an act of caring by a large proportion of Americans. But the actual benefit in slowing spread of the virus is small enough that no benefit has been detected in the overwhelming majority of studies to date. Here is a bibliography of 35 historic studies showing that face masks have no meaningful effect on the spread of viruses, and 7 more studies that document health hazards from masks. Yes, wearing masks for long periods of time imposes its own health risks, especially when the masks are not removed and washed frequently. This is certainly significant for people required to wear them many hours at a stretch.

Here is the conclusion of one meta-analysis from the CDC web page. The authors find that the benefit is too small to rise to statistical significance even in a compilation of ten studies:

In our systematic review, we identified 10 RCTs that reported estimates of the effectiveness of face masks in reducing laboratory-confirmed influenza virus infections in the community from literature published during 1946–July 27, 2018. In pooled analysis, we found no significant reduction in influenza transmission with the use of face masks (RR 0.78, 95% CI 0.51–1.20; I2 = 30%, p = 0.25)
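For readers who want to check what those numbers imply, the z-statistic and p-value can be recovered from the reported risk ratio and its confidence interval by a standard log-scale calculation; the figures below are taken directly from the quotation:

    # Recover the approximate p-value from the reported pooled risk ratio
    # and its 95% confidence interval (standard log-scale method).
    from math import log, sqrt, erf

    rr, ci_lo, ci_hi = 0.78, 0.51, 1.20          # figures quoted above

    se = (log(ci_hi) - log(ci_lo)) / (2 * 1.96)  # standard error of log(RR)
    z = log(rr) / se                             # z-statistic
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value

    print(f"z = {z:.2f}, p = {p:.3f}")           # z = -1.14, p = 0.255

This reproduces the p = 0.25 the authors report: even pooled across ten randomized trials, the measured benefit of masks does not approach statistical significance.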

In recent months, several studies have been published that contradict the historic findings, and seem to justify the use of masks. Here is one that is prominently published (PNAS) and highly cited:

Our analysis reveals that the difference with and without mandated face covering represents the determinant in shaping the trends of the pandemic. This protective measure significantly reduces the number of infections.

Here’s how this conclusion is reached: In three locations where face masks were introduced (Wuhan, Italy, NYC), the authors note a linear rise in incidence of COVID, followed by the curve bending over later on. Their estimate of effectiveness is derived by subtracting the number of actual cases from the number of cases which would have occurred if the linear increase had continued through the period of observation.

An obvious objection to this analysis is that the curve always bends over. The initial rise is exponential as the virus expands into an unexposed population, and then it bends over and eventually falls, as the virus runs out of susceptible people to infect. For a short stretch after the exponential phase, the curve may look like a straight line, but inevitably the curve is destined to decline as the population gradually develops herd immunity. The authors of this study make no attempt to separate the effect of herd immunity from the effect of masking. To do the comparison correctly, they should have compared these three regions to controls, regions in which no masking requirement was decreed. Did the curve turn over more quickly in locations with masks compared to locations without?
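The objection can be made vivid with a toy epidemic model. The sketch below runs a standard SIR model with no intervention whatsoever; the parameters are assumptions of mine, not fitted to any real outbreak. The incidence curve rises exponentially, looks briefly linear, then bends over and falls on its own as susceptibles are depleted:

    # Toy SIR epidemic with NO intervention: incidence bends over on its
    # own as susceptibles are depleted.  Parameters are assumptions.

    def daily_new_infections(beta=0.25, gamma=0.1, days=300, n=1_000_000, i0=10):
        s, i = n - i0, i0
        curve = []
        for _ in range(days):
            new = beta * s * i / n      # new infections today
            s -= new                    # susceptibles depleted
            i += new - gamma * i        # infections added, recoveries removed
            curve.append(new)
        return curve

    curve = daily_new_infections()
    peak = curve.index(max(curve))
    print(f"incidence peaks on day {peak} and declines thereafter,")
    print("with no masks and no distancing anywhere in the model")

Subtracting the real curve from a straight-line extrapolation of its early phase would therefore “detect” a large effect of masking even where nothing was done. Unmasked control regions are the only way to separate the effect of masks from the natural turnover of the epidemic.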

This objection and others were voiced by Paul Hunter, Louise Dyson, and Ed Hill in (separate) responses to the study on the UK Science Media Center website. They point out that the kind of shoddy science published in PNAS would never have received such prominent attention in an unpoliticized environment.

Viruses are spread either by aerosols or by droplets. Droplets are exhaled water that contains virus particles, and masks can trap droplets. They are the dominant mode of spread when people are in very close contact, as in a doctor-patient relationship. But droplets fall quickly from the air, especially in humid summer weather, and droplets don’t penetrate deep into the lungs, where viruses are most dangerous. Aerosols are far finer particles, too small to be stopped by a mask. They are the predominant form of virus spread, and outdoors they are the only way the virus spreads.

In urban environments, there are always tiny quantities of prevailing viruses in the air, and for the great majority of people this is a benefit. It means that just going about their business, they are exposed to tiny quantities of virus that educate their immune systems without accumulating to a load sufficient to cause disease. The best outcome for populations—indeed, the normal outcome for every flu season in the past—is that most people acquire T-cell immunity in this way, and then the virus can no longer spread through the population. By imposing lockdown and social distancing, governments the world over have curtailed this well-known, natural process for acquisition of herd immunity.

What is the rationale for slowing spread of the virus? Originally, the stated goal was to “flatten the curve”, so that hospitals would not be overwhelmed by a sudden burden of severe cases all at once. If there was ever any danger of this, it passed back in April. So, at this point, slowing the spread of the virus is only important if we hope to stop the spread at some future date. This relies on the promise of a vaccine, which, I will argue in part 3, cannot be adequately tested in a relevant time frame. Hence, even on the most optimistic assessment, masks and social distancing will not save lives, but only delay deaths by a few months.

NYU Prof. Mark Crispin Miller’s extended essay on masking cites copious evidence for masks’ ineffectiveness, as well as more stories than you want to read about recent violence that has erupted between masked and unmasked factions, or between law enforcement officials and unmasked civilians.

Tentative conclusions

It was four years after 9/11 that I finally considered the possibility: this was never about brown-skinned men with boxcutters who hijacked airplanes; it was about restrictions on travel and free expression and a new Federal bureaucracy gathering information about our whereabouts and our contacts, all imposed in the name of keeping us safe. This time, I am a little less slow on the uptake, and I am beginning to suspect that COVID 19 is not about a viral pandemic; it is about restrictions on travel and free expression and a new Federal bureaucracy gathering information about our whereabouts and our contacts, all imposed in the name of keeping us safe.

END OF PART 2

Link to Part 3
Link to Part 1