Osteo Options

How do you tell if an osteoporosis treatment is working? Stress-testing your femur until it breaks is not an option most patients would accept.  “Bone Mineral Density” (BMD) has been the classical surrogate measure of whether a drug is working. But then it was discovered that while Fosamax increases BMD, in the long run it might actually make bones weaker.  Fosamax has many side effects, some of them risks and some, possibly, major health benefits. There is also a new generation of drugs that increase BMD, but do they actually strengthen bones?  

My introduction to osteoporosis came almost twenty years ago when a friend fell off his bike and (yes, he was wearing a helmet) his skull shattered, beginning a brush with death and a week in the ICU.  He was 46 years old, and had taken steroids all his life to control asthma.  No one had told him that steroids leach calcium from the bones.  After precision medical care and an extraordinary act of will, he recovered fully.

He had never been tested for osteoporosis, but after the accident his doctors prescribed Fosamax (=alendronate, similar to Actonel, Boniva), which was standard treatment at the time.  Fosamax increases bone mineral density (BMD), the calcium content of bones, which had been the standard diagnostic for osteoporosis.  For the first few years, Fosamax seems to improve bone strength. But recently, it has been discovered that the chronic effect of low bone turnover is actually to weaken the bone, even as the BMD remains higher.  Fosamax can increase the risk of hip fracture in patients who take the drug long-term. Fosamax is a risk factor for esophageal cancer, but reduces the risk of breast cancer.  Other Fosamax side effects include heart arrhythmia, which could be merely uncomfortable or could lead to congestive heart failure, and a rare degeneration of the jaw.  In some patients, side effects include nausea, vomiting, diarrhea, and stomach cramps.  In recent years, Merck has defended several thousand individual lawsuits over fractures suffered by people taking Fosamax.  (There is as yet no class action.)

Fosamax and other bisphosphonate drugs seem to be on their way to being replaced, and for several good reasons.  But then came an intriguing 2011 study suggesting that Fosamax could be a life extension drug.  More on this below.

 

Non-prescription Prevention

Bone loss is common – some would say universal – as we age.  Men get it as often as women, though women fracture hips with twice the frequency of men.

We should all be taking precautions that will delay and lessen the impact of osteoporosis.  Two of the best things we can do are strength exercise and keeping our vitamin D levels up.  Both measures have multiple benefits beyond bone health.  Exercise is the best anti-aging tonic, the best anti-depressant, and contributes to every aspect of health and well-being.  High blood levels of vitamin D help prevent infectious disease and cancer.  Next time you have blood drawn, find out your vitamin D level, and target 60 ng/ml or higher, 100 or more if you have osteoporosis.  Few people have optimal vitamin D levels, and (depending on your personal metabolism and ability to absorb it, and also how much time you spend in the sun) it may take supplementation with tens of thousands of IU daily to get up to these levels.

Plentiful calcium in the diet may be necessary, but it is not sufficient.  Curiously, calcium supplements alone have been found to be only marginally useful for osteoporosis.  Foods that are high in calcium include dairy, green leafy vegetables, and shellfish.  Be sure to get enough vitamin K2.  For people with celiac tendencies, avoiding gluten is helpful.  Here are Dr Mercola’s recommendations.

For post-menopausal women, hormone therapy lowers the risk of osteoporosis, but women and their doctors consider multiple pros and cons in making this decision.  Some say that bioidentical hormones offer the benefits of HRT with a lower downside.

 

Osteoporosis and Caloric Restriction

People on low-calorie diets generally experience robust health: they are stronger, more energetic, and less susceptible to viral and bacterial infections.  As a group, they (I should say “we”) have but two complaints: we chill easily, and we are prone to osteoporosis.  Conversely, carrying extra weight may be bad for every other aspect of health, but it helps to promote bone growth.  This effect is not large, however, and I don’t recommend gaining weight as a remedy for osteoporosis.

 

Bone metabolism

Bones are alive, and far from static.  They are continuously being broken down and reformed within the body. When it works, this dynamic process keeps the bones strong and healthy.  Bones contain two kinds of cells, whose names differ by a single letter: osteoclasts and osteoblasts.  They are the yin and the yang of bone remodeling.  The blasts build new bone and the clasts tear down old bone.

The traditional view has been that loss of bone mineral density (BMD) comes from an imbalance between these two processes.  Most present drugs target the clasts, seeking to lessen bone resorption, rather than accelerating the formation of new bone by the blasts.  (The exception is Forteo, which promotes new bone growth hormonally, and has its own downsides.)  But the two processes are metabolically linked, and it may be that maintaining turnover is vital to bone health, independent of the measured BMD.

Prescription drugs

Fosamax is in a class of drugs called bisphosphonates, which function by poisoning the clasts. Prolia (=denosumab=Xgeva) prevents creation of new clasts, but does not interfere with the old ones.  It requires delivery by injection, usually just twice a year.  It seems to work better than Fosamax and the other bisphosphonates, with fewer side effects. Prolia has been used in treatment of cancer, and may reduce the risk of cancer before the fact. The mechanism of Prolia is to block RANKL, the signal that activates NF-kappa-B, which, in turn, regulates gene expression and apoptosis (cell suicide).  So Prolia may be expected to have a cascade of effects through the metabolism.  The worst of these might be to impair the immune function of white blood cells, since both T and B cells require NF-kappa-B in order to mature.  Despite this caution, there is reason to believe that the net side effects might be anti-aging and anti-inflammatory.  Curcumin and caloric restriction are both good things, and both inhibit NF-kappa-B.

On the other side, here is an article by Lara Pizzorno, who is down on both bisphosphonates and Prolia.  She points out that Prolia has only been licensed since 2010, and there have not been enough people taking the drug for long enough to know whether some of them will experience the same “atypical” hip fractures that have plagued users of Fosamax.  She thinks that bones are strengthened by the ongoing process of remodeling, and that bone resorption is half of the yin and yang of that process.  Thus any drug that works by inhibiting bone resorption can be expected to behave as the bisphosphonates do, improving nominal BMD while actually weakening the bones.  Time will tell if she is right.

The up-and-coming drug is called odanacatib, and it has already undergone more than 15 years of tests, but it still has not been submitted for FDA approval.  Odanacatib can be taken orally (every two weeks); it inhibits an enzyme called cathepsin K, a collagenase, and thereby reduces bone resorption.  It is reported to have less severe side effects than Fosamax or Prolia, but again it works by inhibiting resorption, so by Pizzorno’s theory it might turn out to have the same fatal flaw.

Forteo (teriparatide) is the only medication that promotes new bone growth rather than inhibiting resorption.  It is a synthetic variation on human parathyroid hormone, and is usually self-injected on a daily schedule.  It has uncomfortable side-effects, and is associated with bone cancer.  For this reason, it is approved for treatment courses of two years or less.  Forteo’s benefits disappear quickly when you stop taking it.

 

The Insulin Connection

And so we come to a favorite theme of mine: the role of the insulin metabolism in aging.  The simple view is that the body can operate in either of two modes.

  1. Food is plentiful.  Life is easy.  Time to have kids and die.
  2. Food is scarce.  Life is tough.  Not a good time to have kids.  Focus on surviving.

Mode #1 corresponds to rapid aging, self-destruction, poor health.  “Metabolic syndrome” is another name for Mode #1.  Mode #2 is slow aging and robust health.  Insulin is a signal that switches the body between these two modes.

What I have just learned is that osteoporosis is plugged into this system.  The osteoblasts that build bone are sensitive to insulin.  Insulin signals the body to add bone mass, but it also dials up the self-destruction of aging.  The osteoblasts, in turn, secrete a hormone called osteocalcin, which helps to keep the body in mode #2 (where we want to be). Osteocalcin is activated in the process of bone resorption.  If this is correct, then it may be unwise to interfere with bone resorption, and the premise behind most osteoporosis pharmacology is called into question.  In the short run, we may be able to tip the balance between bone anabolism and bone resorption by dialing down the latter, but we do so at the price of accelerated aging, which will get us in the end. On the other hand…

Fosamax and Mortality:  Can bisphosphonate drugs protect more than bones?

It was after writing the above that an astute reader of this column called my attention to several studies published since 2011 claiming that people who take bisphosphonate drugs experience benefits that go beyond protection from bone fractures.  Risk of cardiovascular disease is decreased.  Risk of breast cancer is also down.  And the risk of mortality from any cause has been found to be lower in several different studies [Ref #1, Ref #2, Ref #3, Ref #4].  This effect is large – much too large to be accounted for by merely reducing the deaths associated with falls and fractures.  The studies measure a reduction in all-cause mortality ranging from 30% all the way to 80%.  This has been confirmed enough times that I take it seriously; on the other hand, the idea is new and unexpected enough that it has not yet been incorporated into clinical thinking in the field.

One possibility is that the reasoning in the above section on insulin is wrong, and that the effect of bisphosphonate drugs is to increase insulin sensitivity, possibly through osteocalcin.  But in any case, if Fosamax is a life extension drug, this must weigh into clinical practice, and into everyone’s individual decision whether to take it.

Big Questions

Concerning prescription drugs, there is great promise and great uncertainty.  Do bisphosphonates (including Fosamax) have a beneficial effect on the metabolism far wider than bone strength?  If so, what other drugs share this advantage?  At what point in time does the action of bisphosphonates tend to stop strengthening bones and start weakening them?  And does this tend to happen for every patient, or only for some people?  Can short courses of bisphosphonates be combined safely with short courses of Forteo?  This is a situation where an expert doctor who can think about your individual case will be most essential.

The Natural Approach

The low-calorie diet that I often recommend as your best path to staying young longer adds to your risk of osteoporosis.  Perhaps this provides more incentive to pursue a non-prescription path, and to start earlier.  Quoting Pizzorno:

Published in the prestigious Journal of Environmental and Public Health, the COMB study (Combination of Micronutrients for Bone) demonstrated unequivocally that providing our bones with the nutrients they need along with regular weight-bearing exercise is as or more effective than any of the bisphosphonates or strontium ranelate (the unnatural drug version of strontium). And a lot less expensive!

What was the protocol utilized in the COMB Study?  Daily vitamin D3 (2,000 IU), DHA (250 mg), K2 (in the form of MK-7, 100 mcg), strontium citrate (680 mg), magnesium (25 mg), and dietary calcium. In addition, daily impact exercise was encouraged.

As one of the lead researchers, aptly named Dr. Stephen Genius, noted, not only was this combination of nutrients that bones require “at least as effective as bisphosphonates or strontium ranelate in raising BMD levels in hip, spine, and femoral neck sites,” but the nutrient supplement regimen was also effective “in individuals where bisphosphonate therapy was previously unsuccessful in maintaining or raising BMD.”

Evolution of Evolution, and Evolution of Death

Evolution has bootstrapped its own process,  creating the conditions that lead to more efficient evolution.  Some biologists find this surprising, but it has undoubtedly occurred.  Among the traits that lead to more efficient evolution is aging.  Is this the basis on which aging has evolved?  This is a theory that several biologists have promoted.  I embrace it with a qualified “yes” – I think that aging evolved first to promote demographic stability.  

Last week, I told you that some features of life evolved though they had nothing to do with what we think of as Darwinian fitness. They contribute instead to the process of evolution itself. You can think of these traits as promoting the increase in fitness over evolutionary time, but not to the present fitness of the individual or community that bears the trait.

Examples of evolvability traits include

  • Sex, the mixing of genes, permits evolution to experiment in parallel, trying many different combinations of traits at once.
  • The hierarchical organization of the genome allows for modularity in development, so that organs and appendages can be shuffled, added and deleted without having to re-invent them in their entirety each time.
  • Different rates of mutation in different parts of the genome mean that the core metabolism can be protected from disastrous tinkering, while more contingent details of biochemistry are subject to experimentation.
  • Population diversity is an evolvability trait. A population with no diversity at all is not subject to natural selection. Back in the 1920s, R.A. Fisher proved the Fundamental Theorem of Natural Selection, which says that the evolutionary rate of increase of fitness in a population is proportional to the variance of the fitness within the population (see the equation just after this list).
  • Aging (and a shorter life span) contribute to evolvability because (1) generation time is shorter; the population turns over more rapidly; all evolutionary change happens that much more quickly, and (2) diversity is promoted because no individual gets to go on reproducing for too long, dominating the next generation with its own progeny.
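
For readers who like to see the math, here is the standard textbook statement of Fisher’s theorem (my notation; it is not spelled out in the original post). The per-generation increase in mean fitness attributable to selection is the additive genetic variance in fitness divided by the mean fitness,

$$\Delta \bar{w} \;=\; \frac{\operatorname{Var}_A(w)}{\bar{w}},$$

so a population with zero variance in fitness cannot increase its fitness at all – which is the sense in which diversity is the raw material of evolution.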

It is undeniable that evolvability has been a product of evolution. And yet, this doesn’t jibe with the mainstream of evolutionary science. The body of evolutionary theory developed in the 20th century implies that natural selection ought to be nearsighted. The success of a gene in penetrating a population ought to depend largely on its consequences for the viability and reproductive success of its individual bearer. The gene’s long-term effect on progeny in an entire community or species ought to be a far less potent influence on natural selection. This is because mutations appear first in a single individual. The first test a mutation must pass: can it spread to dominate a local deme*? Only a gene that succeeds at this level can ever be tested for its long-range effect on the population.

It’s a fact that evolvability has evolved. We may not be able to describe the mechanism, or explain how evolvability traits survived the “first test”, but somehow the process must have worked.

Aging

Much about aging in the biosphere points to the inference that aging has been positively selected**. Nevertheless, evolutionary biologists have resisted this interpretation because it is theoretically implausible. Aging is bad for individual fitness, and the individual counts for more than the community when it comes to natural selection. It is considered inconceivable that aging could have passed the “first test” and come to dominate a local population.

It’s even worse than that. In models based on kin selection, a trait can be selected despite a cost to the individual if its benefits are focused on others that are likely to bear the same gene. But the only benefit of aging is that it leads to a death that creates a vacancy in the niche. That vacancy could be filled by a close relative, a distant relative, or no relative at all – an animal that doesn’t age, or even an animal of a different species that shares some of the same food species. Neither kin selection nor even multi-level selection (MLS) models are promising for the evolution of aging.

But when we realize aging contributes to evolvability, and that other evolvability attributes managed to evolve, we may ask, “why not aging, too?” We may not be able to imagine exactly how aging was affirmatively selected, but the same may be said for sex and organization of the genome. Whatever mechanism served to evolve these things might have worked equally well to evolve aging.

This lends credibility to the oldest hypothesis (attributed to Weismann, 1892) about how aging might have evolved. It doesn’t resolve the “first test” objection, but it discredits the objection by association.  Several of my colleagues have promoted this theory of aging, for example V. Skulachev, G. Libertini, and J. Bowles.  A. Martins has published a computer simulation of how it might work.

What do I think of this idea?  I think it’s part of the picture, but not the first part. Let me explain.

The need for demographic stability

I wrote a few weeks ago about the Demographic Theory of Aging. The punch line is that no community of animals can afford to trash its own ecosystem by eating everything in sight and reproducing without restraint. Simulations, theory, and field observations all agree. The consequences of overpopulation are swift and devastating. I told you the story of the Rocky Mountain Locust. Here’s another story, about reindeer introduced in 1944 to St. Matthew Island in the Bering Sea, which had no large animals until man brought them there.

The reindeer flourished, their population growing by about a third each season. That may sound like an extraordinary rate, but the ability of a population to expand rapidly is adaptive in an empty niche, and may be a life-saver after a natural disaster. So the reindeer population followed a trajectory typical of an exotic species that is successfully introduced, growing exponentially. Naturalists estimate the carrying capacity of the island at about 2,000 reindeer, and the population crossed that threshold around 1960.

Such is the relentless logic of exponential growth that just four years later, the population was 6,000 reindeer. The winter of 1964 was severe – not a dramatic departure from what the reindeer expected, but more snow than usual. By the end of the winter, nearly the entire population had starved to death. An expedition the following year counted 42 stragglers (and shot 10 of them in the name of sport and science). Reindeer typically live 18-22 years, so the entire saga had unfolded within the lifetime of a single reindeer.
(from Suicide Genes, forthcoming by Josh Mitteldorf)
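
As a quick consistency check on those figures (my arithmetic, using the one-third-per-year growth rate quoted above): a population that crosses 2,000 around 1960 and keeps multiplying by 4/3 each year reaches

$$2000 \times \left(\tfrac{4}{3}\right)^{4} \approx 2000 \times 3.16 \approx 6{,}300$$

animals by 1964 – right in line with the 6,000 reindeer counted just before the crash.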

Population overshoot can wipe out a population swiftly and efficiently. It is the most potent, most credible and most direct form of group selection. It is also the perfect counterpoise to the “selfish gene”, which measures fitness according to individual reproduction rate.

So here’s the story, as I see it: Population control is an essential function of animal life. (Less so for plants – the higher up the food chain you go, the stronger is the pressure to preserve the ecosystem that you are sitting on.) Population control is essentially a group function. It is the reason that the selfish gene provides such a distorted picture of reality. Some plants may be evolved for maximal reproduction, but animals are evolved for a flexible rate of reproduction matched to the overall death rate.

Aging fits well within this picture. Aging tempers population growth and does so in a way that responds flexibly to demographic conditions. In other words, when everybody is starving, no one is dying of old age. But even better: the body responds to conditions of starvation by becoming stronger and more robust, slowing the aging process, doing everything the metabolism can do to survive through the famine.

————-

Putting it all together

It is easy to understand population control, and how it evolved. It is harder to understand how evolvability arose, and how natural selection has favored it – but we know for a fact that it did evolve. My hypothesis is that both are involved in the evolution of aging. First to arise was population control. The race to reproduce as fast as possible was regulated and reined in. Individuals learned to temper their predation and their reproduction in order to protect a common food supply.

Population control can be achieved either by limiting fertility or by limiting life span, or by any combination of the two. So stable ecosystems might have been achieved without any aging at all, solely via a flexibly responsive birth rate. But the choice between lowering the birth rate and raising the death rate is tipped by the need for evolvability. From the standpoint of evolvability, a short life span with high fertility is much better than a long life span with low fertility. In fact, the pace of evolutionary change is directly proportional to the rate of population turnover. More births with a shorter life span is much to be preferred.

So population control evolved as a mixture of limited fertility and limited life span. But once this was established, and rules that prevented unrestrained reproduction were stamped into the genome, there was room for natural selection to be responsive to more subtle considerations, including those that act over long time scales. This resolves the mystery of why evolvability and other group-selected adaptations have been so effective. Once the tyranny of the selfish gene was tempered by the powerful and immediate need for stable ecosystems, there was room for more subtle selective forces, acting over a longer time frame. There was room for evolvability to emerge, in a self-reinforcing positive feedback loop that has made the evolutionary process itself so surprisingly effective.

—————

* A deme is a local breeding population – a community of a single species whose members share genes with one another.

**For example, the body seems to be able to slow down aging when stressed, indicating that aging is metabolically avoidable. For example, there are affirmative mechanisms for self-destruction at the cellular level (apoptosis and telomere attrition) that are associated with aging of the whole body. For example, there are genes that regulate aging that have been preserved by evolution at least since the Cambrian Explosion half a billion years ago. I’ve written a great deal on this subject, including a forthcoming book and a recent book chapter.

E squared – the Evolution of Evolution

Darwin’s prescription for evolution involved just blind variation + natural selection, as if evolution were inevitable, and all that was required was a collection of objects able to reproduce themselves imperfectly.  We know now that evolution is not at all inevitable.  The mode of variation is crucially important to making evolution possible.  Some systems can support evolution while others cannot.  In real biological systems, evolution works unaccountably well.  Is this just a lucky accident?

For example, imagine trying to “evolve” a software program to alphabetize lists of words.  Say you have a program that does the job tolerably well – it works, but it’s slow and inefficient.  To simulate mutation, we change one letter of the program at a time and we ask “better or worse?”  If the program now works better, we keep the change; otherwise we keep the original.  If we do this long enough, can we make a better and better computer program?

It may not surprise you to hear that for all standard computer languages, this procedure won’t work at all.  So we might try to enhance the workability of the model by simulating sex.  Imagine breaking apart and recombining pieces from a number of very similar programs, all of which can alphabetize a word list. But this is not a practical way to create a better computer program, either.  Even if this whole evolutionary process is realized in software that runs at many gigaflops, the program could go on for many times the life of the Universe without ever creating a better algorithm.*
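
To make the thought experiment concrete, here is a toy sketch of the one-letter-at-a-time procedure described above (my own illustration, not from the original text; the word list and the starting “program” are invented). Running it shows that essentially every single-character mutant of even a trivially short working program fails outright – the brittleness on which the argument turns.

```python
import random
import string

WORDS = ["pear", "apple", "fig", "banana"]
TARGET = sorted(WORDS)

def works(source: str) -> bool:
    """A candidate 'program' passes only if evaluating it alphabetizes WORDS."""
    try:
        return eval(source, {"WORDS": WORDS}) == TARGET
    except Exception:           # syntax errors, name errors, etc. all count as failure
        return False

program = "sorted(WORDS)"       # a working program: our starting point
alphabet = string.ascii_letters + string.digits + string.punctuation + " "

trials, survivors = 10_000, 0
for _ in range(trials):
    i = random.randrange(len(program))                                # pick one character
    mutant = program[:i] + random.choice(alphabet) + program[i + 1:]  # change it at random
    if works(mutant):
        survivors += 1

print(f"{survivors} of {trials} single-character mutants still alphabetize the list")
# Aside from mutations that happen to hit a character with its original value,
# the survivor count is essentially zero: program text is not an evolvable substrate.
```

As the footnote below notes, genetic programming works around this only by carefully constraining how pieces of programs are mutated and recombined.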

Computer languages do not constitute an evolvable system.  Living systems, on the other hand, have evolvability.  How lucky for us!

 

“Luck?” 

We might not be satisfied attributing the evolvability of life to “luck”.  Perhaps at the dawn of life, a lot of proto-living systems began in many different forms, but it was only a few that happened to be evolvable, and those are the ones that survived.  In other words, evolvability evolved.  But the truth is larger than this and far stranger.  The evolution of evolvability has been an ongoing process, interwoven with the “normal” evolution of fitness, and continuing all through the history of life.

We know this because there are traits that are obviously highly-evolved, but they offer no selective advantage whatsoever, in the traditional sense of survival and reproduction – their only advantages are in the long-range prospects for adaptive change.  How did evolvability traits manage to evolve, without ever offering a selective advantage to the individual carrying that trait, but only to its great, great grandchildren?

The genome is organized like a bureaucracy, with command-and-control genes at the top and implementation genes underneath.  In the 1990s, it was discovered that a single gene could be inserted into a fruit fly’s DNA that would cause the ectopic appearance of an entire eye or a wing or a leg on a part of the body where it does not belong.  The term coined for these master control genes was Hox genes, and they perform a function similar to calling a subroutine in a computer algorithm, or a homeowner hiring a contractor to work on his house, or a general issuing an order down the chain of command.

How did the genome come to be organized hierarchically?  This feature offers no advantage in fitness for the individual.  It does, however, contribute to the rate of increase of fitness over evolutionary time.

The advantage of such a system is not that it makes it easier for the body to construct an eye or a leg – it doesn’t.   The advantage is that it permits evolutionary experimentation.  Using Hox genes, the placement of limbs or organs can be optimized in an evolutionary trial-and-error process.  Without having to re-invent the eye or the kidney each time, different body parts can be moved around to create the “endless forms most beautiful and most wonderful” that Darwin described.  As a way to design any particular organ for one animal, it is very inefficient; but as a system that can flexibly experiment with legs or wings or eyes or kidneys, Hox genes are a brilliant invention.  Did I say “invention”?  Of course, they’re not an invention at all – merely a product of evolution.  But this is a kind of evolution that expands on the traditional “survival of the fittest”.  The idea that ‘evolution = blind variation + natural selection’ has become untenable.

How does evolution manage to give the impression of being “smart”?  There is a chicken-and-egg problem here.  You need an evolvable system to get started with evolution.  You need a highly evolvable system in order to select for evolvability.  So evolvability is a property that is needed in order to create itself.  Think “bootstrapping”.

Besides hierarchical organization of the genome, there are additional ways in which life is optimized for evolution.  The most obvious and prominent is sex, to which we’ll return presently (gives me something to look forward to).  Some places in the DNA are thousands of times more likely to mutate than others, and these hot spots always correspond to opportunities for experimentation.  Meanwhile, genes that control the core metabolism common to all life are tucked away safely beyond the reach of mutation.

Genes are not coded into the DNA as contiguous segments**, but are spread out over smaller units (“exons”) that have to be cut and spliced together to make each single protein.  This is a complex and inefficient process, adding time and energy and potential for errors.  The benefit is that this system promotes evolvability, because functional segments of protein can be cut and spliced in new ways to try out new possibilities without having to evolve them from scratch.

The maintenance of diversity is a major ingredient in evolvability, and it is predominantly appropriate and useful diversity that persists.  How does this come about?

Darwin and the Sources of Variability

Throughout Darwin’s career, the missing piece in his theory – the mystery that he recognized but never resolved – was the maintenance of diversity.  Natural selection cannot work in a uniform population.  It requires diversity as a kind of raw material, which it “consumes” as the less-fit are selected out.

Variability is governed by many unknown laws, more especially by that of correlation of growth. Something may be attributed to the direct action of the conditions of life. Something must be attributed to use and disuse. The final result is thus rendered infinitely complex…These facts seem to be very perplexing, for they seem to show that this kind of variability is independent of the conditions of life. (Origin of Species, First Edition, 1859)

A partial response to this mystery came with Mendel’s understanding of genetics and the mechanism of sexual inheritance.  But it remains true in the 21st Century that when we estimate the rate at which selection collapses diversity and the rate at which useful new diversity is generated by mutation and recombination, we cannot escape concluding that the gain in diversity ought to fail by many orders of magnitude to keep up with its loss.  150 years after Darwin, we still fail to account for the maintenance of diversity in nature.

Evolvability and Sex

The vast majority of species share genes between consenting adults, mixing and matching in a never-ending quest for new combinations.  Bacteria are promiscuous, floating their genes out into the environment in the form of “plasmids”, and constantly picking up new genes without regard to their origin.  Single-celled protists swap genes through a process of “conjugation”, actually merging and re-shuffling their genetic identities.  This is sex without reproduction, in which two individuals come together and scramble their genomes.  The two individuals that emerge from the process are re-shuffled combinations of the two original cells.

Almost all multi-celled organisms include some kind of sexual reproduction.  And yet, sex is not at all adaptive in the traditional sense.  For individual fitness, sex is a disaster.  If we cloned ourselves instead of requiring male + female to reproduce, we could be (at least) twice as fit.  The most efficient way to reproduce is simple cloning, and if the most successful individuals reproduced (rapidly!) via cloning, the entire population would, within a few generations, consist of copies of this one type alone.  The advantage of sex lies solely in its contribution to evolvability.
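
The “(at least) twice as fit” figure is the classic twofold cost of sex – standard textbook reasoning that I am filling in here, not an argument spelled out in the post. If every female raises $k$ offspring, an asexual female leaves $k$ daughters, all of whom reproduce, while a sexual female leaves on average only $k/2$ daughters, the other half being sons:

$$\frac{\text{asexual daughters per female}}{\text{sexual daughters per female}} \;=\; \frac{k}{k/2} \;=\; 2 .$$

Other things being equal, a gene for clonal reproduction should roughly double in frequency every generation.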

By chance, I was witness to the dawn of evolvability theory.

In 1980 I was a grad student and teaching assistant, working for physics Prof David Layzer of the Harvard Astrophysical Observatory.  Layzer is a broadly-cultured man, a musician and a scholar of many sciences.  The course that I taught with him that year was offered to non-science majors, tying together ideas about the behavior of collections of similar objects, from molecules in a gas to animals in a population to galaxies in the Cosmos.  That same year, Layzer wrote a paper entitled Genetic Variation and Progressive Evolution, which he succeeded in getting published in the high-profile journal, American Naturalist.  Layzer was writing for biologists, while thinking like a physicist.  Suppose there were a gene, he mused, that offered no fitness advantage whatever, but which promoted the gradual increase in fitness of offspring and offsprings’ offspring over evolutionary time.  Could such a gene be selected in a Darwinian process?  “Yes”, was what Layzer concluded, and he proffered a mathematical proof.

Layzer’s paper and the ideas within it were roundly ignored, both because he was ahead of his time and because he didn’t speak the language of biologists.  It was not until sixteen years later that a Yale biologist and an AI expert from Hawaii paired up to describe the same ideas in language that a biologist might appreciate.  They were not aware of Layzer’s precedent, and arrived at their ideas completely independently.  This seminal paper of Gunter Wagner and Lee Altenberg put evolvability on the map, and sparked a revolution in evolutionary thinking.  Well, perhaps I overstate the situation; though the paper has been widely cited and the issue recognized, these ideas have yet to affect the foundations of evolutionary theory in the way that logically must follow.

Bootstrapping 

There can be no doubt that without some pre-existing evolvability adaptations, further evolvability could never have evolved.  In other words, evolvability promotes itself in a positive feedback loop, or bootstrapping process.  The further evolution of evolvability progresses, the more rapid is further progress in evolvability [sic].

This idea gives us greater respect for evolution, the foundation and basis for life.  Evolution is not a simple process that is bound to happen, beginning whenever some chemical happens to catalyze its own synthesis and proceeding inexorably onward and upward from there.  Evolution as we know it has required this further action of exponentially increasing its own effectiveness, a process that modern evolutionary science can barely describe, let alone understand.

Evolvability and Group Selection

Most evolutionary biologists strain at the gnat of ‘group selection’ but they swallow whole the camel of evolvability.  What I mean by this is that multi-level selection theory (MLS) is well-grounded in traditional evolutionary theory, and requires only a modest theoretical step beyond kin selection.  For historic and cultural reasons going back to the 1960s, many evolutionary biologists categorically dismiss the body of MLS research, insisting that the “selfish gene” is a one-size-fits-all explanation for all evolutionary processes.

Evolvability, in contrast, is an irreducibly radical concept.  It requires group selection on a vast scale that dwarfs MLS accounts.  Evolution of evolvability is a story of how evolution came to be smart, or at least to give the illusion of being smart.

A simple yet controversial idea from MLS is that local geography ties together the fate of a local animal community, which can be described as having a collective fitness, and which experiences Darwinian selection as a unit.  But evolution of evolvability (E2) goes far beyond this, requiring that selection work on entire lineages over the many generations required for significant evolution to take place.  Somehow, during all that time, the fittest individuals don’t manage to crowd out those that are collectively good evolvers, though much less fit (by the traditional definition).

Evolvability and Aging

You’ll have to wait until next week.

—————
* There’s a science devoted to evolving computer programs in this way; it is called genetic programming, a branch of the broader field of genetic algorithms. The process can work when the rules are carefully defined to make sure that pieces of different programs must fit together in a way that makes logical sense.

**in higher life, but not bacteria

Cell phones and cancer

In the 21st Century, we live in a sea of radio waves.  No one wants to think that this might have consequences for our health – there are enough things to worry about that are more within our power to change.  We get plenty of encouragement not to think about the subject from the news media which, come to think of it, are not so easily distinguished from the telecomm companies.  The danger is real, if not so easy to quantify, and common sense suggests we might mitigate the largest sources of risk with minimal inconvenience.

It was twelve years ago that a neighbor asked me if she should worry about exposing her teenage children to cell phone radiation.  I put on my physicist’s hat and patiently explained to her the difference between ionizing and non-ionizing radiation.  Radiation comes in little quantum packets, and the type of radiation corresponds to how much energy is in each packet.  UV, X-rays, gamma rays and cosmic rays all pack enough energy in each photon that they can damage the complex and delicate chemicals, including DNA, on which our life depends.  Radio waves are low-energy radiation.  Each single packet lacks the punch to break a chemical bond, and so the only way they can affect our chemistry is if many of them act together.  This (I explained) is called “heating”.  Unless a radio signal is strong enough to change our temperature, then it can’t be doing any damage.
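
Here is the back-of-the-envelope version of that argument, with numbers I am supplying for a typical cell-phone band around 2 GHz (they are not in the original). The energy per photon is Planck’s constant times the frequency,

$$E = h\nu \approx (4.14\times10^{-15}\ \mathrm{eV\,s})\times(2\times10^{9}\ \mathrm{Hz}) \approx 8\times10^{-6}\ \mathrm{eV},$$

compared with a few eV needed to break a typical covalent bond (a UV photon at 300 nm carries about 4 eV). Packet for packet, radio waves fall short by roughly five orders of magnitude – which is why they were presumed harmless.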

I was not alone in accepting this theoretical “proof” that non-ionizing radiation can’t hurt us.  Throughout the 20th Century, as radio technology was being developed with wider applications in more bands of the spectrum, scientists and regulators universally assumed it was (biologically) a benign technology.

No one, in good conscience, can think that way any more.  

Studies of health hazards linked to microwaves have left the threshold of plausible deniability far behind, and it is only through an extensive program of censorship and scientific disinformation that the subject has been kept from the mainstream of public discourse.  Devra Davis has worked tirelessly to advocate and educate on the connection between cell phone use and brain cancer.

So far, there has been no public health catastrophe, but (as Davis explains), there is a time lag of up to ten years before cancer develops, and the rapid rise in the use of cell phones may take a much larger toll in the coming decade.

Three important and very different questions arise:

  • What can we learn about fundamental cell science from the fact that biological systems are sensitive to radio frequency?

  • What practical measures can we take in our daily lives to mitigate the risk of radio waves?

  • What policies and regulations should government be promoting to guide broadcasters and manufacturers of consumer devices toward safer technologies?

I’ll say a few words about the first two, and refer you to the EMR Policy Institute for the third.

What can we learn about biology?

Strong interactions between radio frequency (RF) radiation and living cells are surprising to a physicist, but perhaps not utterly mysterious.  In my opinion, the most plausible theories involve resonance.

When you tune your radio to a particular station, you are programming the receiver to focus on one particular frequency, corresponding to an exact amount of energy in each quantum packet.  A large number of packets confined to a particular frequency is characteristic of the way that radio communication works (including cell phones, broadcast radio, wifi and bluetooth, etc). Back in the 1970s and 80s, a German-British physicist named Herbert Fröhlich wrote some far-sighted theoretical papers about ways in which biomolecules might respond to RF radiation that happened to resonate with their vibrational frequencies.  When the frequency of a radio wave corresponds to a vibrational mode for a molecule, the interaction is extra-strong, and it may be that the molecule is induced to shake violently. There are so many biomolecules and so many different broadcast channels that resonances are bound to occur by chance.  If this is indeed the reason for biological effects of RF radiation, then it may be that the radio communications that surround us could be made far safer simply by prohibitions against broadcast at certain critical frequencies.

There is a related theory that RF radiation disrupts the membranes on which cells depend to maintain their structure and to keep different chemical constituents separated in different compartments.

A bit of research has been done (in Croatia and India!) to look for signs of ways that cells respond to radio waves, starting from a purely observational approach without a theory.  This ought to pique the interest of every cell biologist, and new experiments should be devised to search for fundamental new mechanisms.  It is a certainty that profoundly new biology remains to be discovered if this thread is pursued.

(I have difficulty explaining why this isn’t a hot field for new research, except when I think of all the monied interests that feel threatened by research in the field.)  It may be that the effects are all weak and become manifest only over longer periods of time, and this will make the phenomena a bit harder to study. But this research promises to open a whole new field of knowledge – what are we waiting for?

What can we do to protect ourselves?

So far, we know little about which frequencies might have more effect than others.  Without that information, it makes sense to look just at RF power and compare exposure from different sources – especially those over which we exert some individual control.

Radio power is measured in watts, and power density in watts per square centimeter.  Think of the power from a transmitter rippling outward in an expanding sphere.  As you move away from a source of RF, the power gets rapidly diluted over a larger and larger sphere.  Power density is computed as the output power of the transmitter divided by the area of the sphere.  I assume that it is the power density that dictates the danger, and that we would be prudent to avoid the RF sources in our environment that have the highest power density.
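
In symbols (assuming an idealized isotropic source – real antennas are directional, so treat this as an order-of-magnitude guide):

$$S = \frac{P}{4\pi r^{2}},$$

where $P$ is the transmitter power in watts, $r$ is your distance from it in centimeters, and $S$ is the power density in watts/cm2.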

Cell phones – The problem with cell phones arises from the facts that (1) they need to broadcast with enough power to reach a cell phone tower up to 10 miles away, and (2) people hold them close up against their heads.  The transmission power of a cell phone is limited by the FCC to 2 watts, and the distance to the brain is less than 1 cm.  So power densities inside your head can be as high as 1 watt/cm2.

Microwave ovens – I became aware that microwave ovens leak when I noticed that I couldn’t make a Skype call or watch a Youtube video from my kitchen while the microwave oven was operating.  Apparently it leaked enough to interfere with the wifi in the house.  Some ovens are much leakier than others. But manufacturers don’t list leakage in their specifications, and there are no government or consumer web sites where you can find a comparison.  Meanwhile, there is a thriving market in microwave meters.  Meters that measure just the particular frequency from ovens are inexpensive; meters that cover a broader frequency spectrum, suitable for wifi and cell phones as well, run a few hundred dollars.

But compared to cell phone emissions, microwave oven leakage tends to be much smaller.  The meters read in units of 0.001 watts/cm2, which is hundreds of times smaller than what you receive from holding a cell phone next to your head.

The most practical and effective thing you can do to minimize your RF exposure is to carry a cell phone in a purse or backpack rather than in your pocket, and use a wired headset rather than hold the phone up to your ear.

Laptop computer – Typical wifi power from a laptop computer is 0.1 watt.  If you work with one of these all day long and you hold it close to your body, your computer can be the second most powerful source of RF radiation in your daily life.  The remedy is to turn off the wifi in your laptop and run a network cable to your network hub instead.  This can be quite practical at your desk or other work area where you habitually use the computer.

WiFi – Typical home wifi systems radiate about 1 watt.  If you sit right next to the unit while you work, you could be exposing your head and body to a few milliwatts/cm2, comparable to sitting next to a leaky microwave oven.

The exposure from wifi is spread over your whole body, and is constant throughout the day spent in your home or office.  How does that compare to a much higher exposure concentrated at your head for the few minutes a day that you use a cell phone?  This is a big question for epidemiologists, and to my knowledge there are no reported data and no one is doing such studies at present.  Tentatively (based only on fuzzy theory), I would focus on the acute, high-intensity exposure and ignore the low-intensity, chronic exposure until better data become available.

Microwave and cell phone towers – Typically, they radiate ~300 watts.  If you live right next to one, say 100 meters away, then your exposure all day long is still only a few microwatts/cm2, which is 1000 times less than the exposure from sitting next to your wifi modem or your microwave oven.  These, in turn, are hundreds of times smaller than the exposure from your cell phone.

Commercial radio broadcasts – The largest of these may broadcast at 50,000 watts.  If you happen to be within 1 km of the Empire State Building or Twin Peaks in San Francisco, you could be receiving a few microwatts/cm2 of exposure.
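
To make the comparisons above easy to reproduce, here is a minimal sketch (my code; the powers are the figures quoted above, and the rough distances are my own assumptions about typical use). It applies the inverse-square formula to each source, assumes an isotropic radiator, and ignores near-field effects and antenna directionality, so the results are order-of-magnitude only.

```python
from math import pi

def power_density_w_per_cm2(power_watts: float, distance_cm: float) -> float:
    """Power density of an idealized isotropic source, spread over a sphere."""
    return power_watts / (4 * pi * distance_cm ** 2)

# (power in watts, rough distance in cm) -- powers from the text above,
# distances are my assumptions about typical use.
sources = {
    "cell phone held at the ear":    (2.0,    1.0),
    "laptop wifi in your lap":       (0.1,   10.0),
    "wifi router across the desk":   (1.0,   10.0),
    "cell tower 100 m away":         (300.0, 1.0e4),
    "50 kW broadcast tower at 1 km": (5.0e4, 1.0e5),
}

for name, (power, distance) in sources.items():
    density = power_density_w_per_cm2(power, distance)
    print(f"{name:32s} {density:9.1e} W/cm^2")

# Close-up sources dominate by orders of magnitude: distance trumps power.
# (Microwave-oven leakage is omitted because manufacturers don't publish it.)
```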

The bottom line

Distance trumps power.  (This is the physics of the inverse-square law.)  Beware the close-up sources and don’t sweat the more powerful ones far away.  Carry your cell phone away from your body, and use a wired headset.  The next level of protection is provided by turning off the wifi in your laptop computer.  Beyond this, you might want to stay a few feet away from your microwave oven and your wifi hub.  Remember that exposure from these is likely to be ~100 times smaller than from your cell phone.  Don’t worry about big broadcast towers, which expose you to radiation intensities that are ~100,000 times smaller than your cell phone.

Meanwhile, there’s an urgent need to study whole-body effects and a possible frequency dependence for the biological effects of radio waves.  My suspicion is that such research is being suppressed by the telecomm industry and its political influence.

The Demographic Theory of Aging

Aging destroys fitness.  How could aging have evolved?  Below is my answer to this question.  This is mainstream science from peer-reviewed journals [Ref 1, Ref 2, Ref 3], but it is my science, and as Richard Feynman warned us*, I’m the last one who can be objective about the merits of this theory.

Too fit for its own good

In 1874, a swarm of Rocky Mountain Locusts descended on the American midwest. They covered the sky and shadowed the earth underneath for hundreds of miles. A single cloud was larger than the state of California. Once on the ground, they ate everything that was green, leaving behind a dust bowl. The earth was thick with egg masses, ready to renew the plague the following year.

Laura Ingalls Wilder wrote in her childhood memoir (in the third person):

Huge brown grasshoppers were hitting the ground all around her, hitting her head and her face and her arms. They came thudding down like hail. The cloud was hailing grasshoppers. The cloud was grasshoppers. Their bodies hid the sun and made darkness. Their thin, large wings gleamed and glittered. The rasping, whirring of their wings filled the whole air and they hit the ground and the house with the noise of a hailstorm. Laura tried to beat them off. Their claws clung to her skin and her dress. They looked at her with bulging eyes, turning their heads this way and that. Mary ran screaming into the house. Grasshoppers covered the ground, there was not one bare bit to step on. Laura had to step on grasshoppers and they smashed squirming and slimy under her feet.

The locusts returned in several more seasons, but the last reported sighting of a Rocky Mountain locust was in 1902. There are preserved specimens in museums and laboratories today, but no living locusts. Entomologists interested in the locust’s rise and fall travel to the glaciers of Wyoming, mining hundred-year-old ice for carcasses that they might study.

Where did they go?  The Rocky Mountain Locust drove itself to extinction by overshooting its sustainable population.

Every animal species is part of a food web, and depends on an ecosystem to survive. If the ecosystem collapses, it takes down every species and every individual with it. Because of their mobility, the locusts were able to devastate many ecosystems, denuding one landscape, then flying hundreds of miles to deposit their children in a fresh location.  Animals that can’t fly become victims of their own greed much more quickly than the locust. If the lions killed every gazelle on the Serengeti, how long would it be before the lions were gone, too?

Evolution of Individuals and Groups

How would an evolutionary biologist describe this situation? Were the locusts too fit for their own good? To capture this story, you have to distinguish between individual fitness and collective fitness. Individually, these locusts were super-competitors. Collectively, they were a circular firing squad.  The science of individual fitness and collective fitness is called Multi-level Selection Theory, and it has been spearheaded by David S Wilson of Binghamton University, based on theoretical foundations by George Price.  MLS is regarded with suspicion by most evolutionary biologists, but embraced by a minority as sound science.

Selfish organisms that consume as much of the available food species as possible may thrive for a time. They may crowd out other individuals of the same species that compete less aggressively.  But as soon as their kind grows to be the majority, they are doomed – they wipe out the food source on which their children depend.

Animals are evolved to be “prudent predators”†.  Species that exploited their food sources too aggressively, or that reproduced too fast, have become extinct in a series of local population crashes.  These extinctions have been a potent force of natural selection, counterbalancing the better-known selective pressure toward ever faster and more prolific reproduction.

How did Evolutionary Theory go Wrong?

This is an idea that has common-sense appeal to anyone who thinks logically and practically about evolutionary science. In order not to appreciate this idea, you need years of training in the mathematical science of evolutionary genetics. Evolutionary genetics is an axiomatic framework, built up logically from postulates that sound reasonable, but the conclusions to which they lead are deeply at odds with the biological world we see. This is the “selfish gene” theory that says all cooperation in nature is a sort of illusion, based on a gene’s tendency to encourage behaviors that promote the welfare of other copies of the same gene in closely-related individuals.

The “selfish gene” is an idea that should have been rejected long ago, as absurd on its face. Yes, there is plenty of selfishness and aggression in nature.  But nature is also rich with examples of cooperation between unrelated individuals, and even cooperation across species lines, which is called “co-evolution”.  Species become intimately adapted to depend on tiny details of the other’s shape or habits or chemistry.  Examples of this are everywhere, from the bacteria in your gut to the flowers and the honeybees.  In the same way, predators and their prey (I’m using this word to include plant as well as animal food sources) adapt to be able to co-exist for the long haul.  It is obvious to every naturalist that there is a temperance in nature’s communities, that when ecosystems are out of balance they don’t last very long.

It makes good scientific sense that extinctions from overpopulation are a powerful evolutionary force, and it is part of Darwin’s legacy as well. Natural selection is more than merely a race among individuals to reproduce the fastest. The very word “fitness” came from an ability to fit well into the life of the local community.

But beginning some forty years after Darwin’s death, mathematical thinking led the evolutionary theorists astray. They forgot the first principle of science, which is that every theory must be validated by comparing its predictions to the world we see around us. Predictions of the selfish gene theory work well in the genetics lab, but as a description of nature, they fail spectacularly.

Understanding Aging based on Multi-level Selection

If we are willing to look past the “selfish gene” and embrace the science of multi-level selection, we can understand aging as a tribute paid by the individual in support of the ecosystem.  If it weren’t for aging, the only ways that individuals would die would be starvation, disease, and predation.  All three of these tend to be “clumpy” – that is to say, either no one is dying or everyone is dying at once. Until food species are exhausted, there is no starvation; but then there is a famine, and everyone dies at once. If a disease strikes a community in which everyone is at the peak of their immunological fitness, then either everyone can fend it off, or else everyone dies in an epidemic.  And without aging, even death by predation would be very clumpy.  Many large predators are just fast enough to catch the aging, crippled prey individuals.  If this were not so, then either all the prey would be vulnerable to predators, or none of them would be.  There could be no lasting balance between predators and prey.

Aging helps to level the death rate in good times and bad. Without aging, horde dynamics would prevail, as deaths would occur primarily in famines and epidemics. Population would swing wildly up and down. With aging comes the possibility of predictable life spans and death rates that don’t alternately soar and plummet.  Ecosystems can have some stability and some persistence.

Aging is plastic, providing further support for ecosystem stability

This would be true even if aging operated on a fixed schedule; but natural selection has created an adaptive aging clock, which further enhances the stabilizing effect. When there is a famine and many animals are dying of starvation, the death rate from old age is down, because of the Caloric Restriction effect.  In times of famine and other environmental stress, the death rate from aging actually takes a vacation, because animals become hardier and age more slowly.

When we ask “Why does an animal live longer when it is starving?” the answer is, of course, that the ability to last out a famine and re-seed the population when food once again becomes plentiful provides a great selective advantage.  This may sound like an adaptation for individual survival, consistent with the selfish gene.  But we might ask the same question conversely: “Why does an animal have a shorter life span when there is plenty to eat?”  When we look at it this way, it is clear that tying the rate of aging to food availability cannot be explained in terms of the selfish gene.  In order to be able to live longer under conditions of starvation, animals must be genetically programmed to hold some fitness in reserve when they have plenty to eat, and this offers an advantage only to the community, not to the individual.

Hormesis is an important clue concerning the evolutionary meaning of aging. This word refers to the fact that when an individual is in a challenging environment, its metabolism doesn’t just compensate to mitigate the damage, but it overcompensates. It becomes so much stronger that it lives longer with challenge than without. The best-known example is that people (and animals) live longer when they’re underfed than when they’re overfed. We also know that exercise tends to increase our life expectancy, despite the fact that exercise generates copious free radicals (ROS) that ought to be pro-aging in their effect.

Without aging, it is difficult for nature to put together a stable ecosystem. Populations are either rising exponentially or collapsing to zero. With aging, it becomes possible to balance birth and death rates, and population growth and subsequent crashes are tamed sufficiently that ecosystems may persist.  This is the evolutionary meaning of aging:  Aging is a group-selected adaptation for the purpose of damping the wild swings in death rate to which natural populations are prone.  Aging helps to make possible stable ecosystems.

___________

* “The first principle is that you must not fool yourself, and you are the easiest person to fool.” – R P Feynman (from his 1974 Caltech commencement address, “Cargo Cult Science”)

† Here “predator” can mean herbivore as well as carnivore.  This is the common usage in ecology.