Insight @Guelph

JUST HOW NIGH IS THE END?


"Eon after eon, how did the vertebrates escape being wiped out by something utterly lethal?"
BY JOHN LESLIE
Last year, my book The End of the World: The Science and Ethics of Human Extinction appeared in paperback. Humans, it argued, could quite easily disappear during the next few centuries. This suggests itself when we consider the various dangers facing us, then look at the doomsday argument.

  The doomsday argument, discovered by the Cambridge cosmologist Brandon Carter, uses the "anthropic principle" that Carter formulated in the 1970s. The anthropic principle reminds us that we may well live in a highly unusual place, time or universe. Even if almost all places, times or universes were hostile to life, we intelligent living beings would necessarily find that our place, time and universe were life-permitting. But while it can in this way encourage us to think our location exceptional, "anthropic" reasoning can also warn us against thinking it more exceptional than is necessary for us to find ourselves there.

  Suppose, for instance, that a million technological civilizations, all roughly the same size, will have evolved during the lifetime of our universe. Don't hurry to believe that you live in the very earliest! If all individuals in these civilizations believed such a thing, only about one in a million would be right.
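
  A minimal sketch of the arithmetic, under the paragraph's own assumption of a million roughly equal-sized civilizations: with equal sizes, a randomly chosen observer is equally likely to belong to any civilization, so the winning fraction can be computed directly.

```python
# Every observer bets "mine is the very earliest civilization."
# With a million equal-sized civilizations, only the observers in the
# first one win, and they make up 1/N_CIV of everybody.
N_CIV = 1_000_000  # civilizations over the universe's lifetime (article)

winning_fraction = 1 / N_CIV
print(f"Fraction of 'I am earliest' bets that win: {winning_fraction:.0e}")
# prints 1e-06 -- about one bettor in a million is right.
```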

  Well, now, what if the human race became extinct soon? You and I would then have been rather ordinarily located in human population history. Because of the population explosion, roughly one in 10 of all humans who have entered the world are still alive with you and me. But what if humans instead survived for many more centuries, perhaps managing to colonize their entire galaxy? You and I would then have been in an exceptionally tiny class. Perhaps fewer than one in a billion would have lived when we did. Now, we ought to have some reluctance to think ourselves so exceptional, when it is so easy to believe instead that humans will soon be extinct.

  That, in a nutshell, is the doomsday argument. It uses the principle, central to Bayesian probability theory, that hypotheses are weightier when they would lead one to expect what is actually observed. If the human race suffered extinction shortly, you'd have been fairly ordinarily placed in the population history of your species. If extinction were much delayed, then you'd have been placed very extraordinarily. Draw your conclusions!
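
  To see the Bayesian step in symbols, here is a minimal sketch. It assumes, as standard formalizations of the argument do, that your birth rank r is equally likely to be any of the N ranks that will ever exist under a given hypothesis; the two totals used in the illustration are round numbers of my choosing, not figures from this article.

```latex
% Posterior odds for Doom Soon versus Doom Delayed, given birth rank r,
% when r is uniform on {1, ..., N} under each hypothesis (and r is a
% rank that occurs under both):
\frac{P(\mathrm{Soon} \mid r)}{P(\mathrm{Delayed} \mid r)}
  = \frac{P(\mathrm{Soon})}{P(\mathrm{Delayed})}
    \times \frac{1/N_{\mathrm{soon}}}{1/N_{\mathrm{delayed}}}
  = \frac{P(\mathrm{Soon})}{P(\mathrm{Delayed})}
    \times \frac{N_{\mathrm{delayed}}}{N_{\mathrm{soon}}}
% Illustration: N_soon = 10^11 and N_delayed = 10^14 multiply the
% prior odds on Soon by a factor of 1,000.
```

Even a modest prior probability of Doom Soon, multiplied a thousandfold, comes to dominate.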

  Although the risk of Doom Soon might at first seem small, the doomsday argument could make us view it as disturbingly large. The argument has one possible flaw. Perhaps the world is truly indeterministic instead of just unpredictable by us. If so, then there wouldn't yet be any suitably guaranteed "fact of the matter" of how long the human race will survive. Bayesian reasoning, encouraging us to see our temporal position as none too exceptional, would be hindered by this. The doomsday argument would operate smoothly only when used for attacking great confidence in the future of humankind - confidence that a lengthy future "is as good as determined."

  Other supposed flaws are illusory. For instance, we mustn't reason that there could be nothing unusual in finding ourselves alive as early as today, since the people of a heavily populated future aren't alive yet and therefore cannot find themselves anywhere. To see the weakness of such reasoning, consider the following scenario. In one century, emeralds would be given to three humans. In some much later century, five thousand humans would likewise get emeralds. Told only these facts, you get one of the emeralds. Suppose you said to yourself that if yours were the earlier century, then the emerald-getters of the later century "wouldn't be alive yet," so you couldn't be among them. Suppose you then concluded that you might just as well bet that you lived in the earlier century. If every emerald-getter bet like that, only three would win their bets.
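
  The same arithmetic, as a minimal sketch with the paragraph's own numbers:

```python
# The emerald bet: 3 getters in the earlier century, 5,000 in the
# later one, and every getter betting "I live in the earlier century."
EARLY, LATE = 3, 5_000
total = EARLY + LATE

winners = EARLY  # only the three early getters turn out to be right
print(f"Winning bets: {winners} of {total}")                # 3 of 5003
print(f"Chance a random getter wins: {EARLY / total:.2%}")  # about 0.06%
```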

  The doomsday argument has made me much less optimistic. Inclined at first to say that the risk of Doom Soon was only about five per cent, I found myself changing this to 30 per cent. Where had the figure of five per cent come from? Although a "guesstimate" only, it wasn't pulled out of thin air. Consider nuclear war, for a start. The collapse of the Soviet Union hasn't made the bombs vanish, and the danger of accidental war may actually be higher than ever. Biological warfare has become very threatening through advances in genetic engineering. Essentially, new diseases can be developed by gene splicing, then distributed in aerosol form.

  The pollution crisis continues to grow. Damage to the ozone layer has highlighted the vulnerability of oceanic phytoplankton, crucial to the health of our planet. And greenhouse warming, produced by human-made increases in carbon dioxide, methane and other gases, could perhaps be fatal. Failure to cut greenhouse-gas emissions could mean quite a large chance of a greenhouse runaway. Methane - molecule for molecule some 30 times as powerful a warmer as carbon dioxide - could be released in huge amounts from warming lands and continental shelves. More warming, more methane; more methane, more warming - and then other factors join in, temperatures shooting upwards calamitously.
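
  The runaway worry is, at bottom, a claim about feedback gain. Here is a toy sketch, purely illustrative: the gain figures are arbitrary numbers chosen for the demonstration, not measured climate sensitivities.

```python
# Toy feedback loop: each increment of warming releases methane that
# causes GAIN times that much further warming. A gain below 1 levels
# off (a geometric series); a gain at or above 1 runs away.
def total_warming(initial_warming, gain, rounds):
    total, increment = 0.0, initial_warming
    for _ in range(rounds):
        total += increment
        increment *= gain  # more warming -> more methane -> more warming
    return total

print(f"gain 0.5: {total_warming(1.0, 0.5, 60):.2f} units")  # settles near 2
print(f"gain 1.1: {total_warming(1.0, 1.1, 60):.2e} units")  # shoots upward
```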

  The poisoning of air, land and water by the thousands of chemicals now synthesized industrially, the destruction of habitats such as rain forests and wetlands, and the concentration of modern agriculture on just a few species might culminate in environmental collapse. Even 10 years ago, humans took a quarter of the photosynthetic product of their planet.

  Population and economic activity have since grown rapidly. The population doubling time is now as short as 40 years. In huge cities, diseases quickly develop dozens of new strains, which are then spread worldwide by travellers. On first reaching Australia, myxomatosis killed all but two in every thousand infected rabbits. Mightn't something like that hit us humans soon? How lucky are we that AIDS isn't spread by coughing, like the related visna virus that infects sheep?
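
  For scale, that doubling time can be written as a formula; a minimal sketch, with the 120-year horizon chosen purely for illustration:

```latex
% Growth implied by a fixed 40-year doubling time:
P(t) = P_0 \cdot 2^{\,t/40}
% e.g. three doublings in 120 years: P(120) = 2^3 P_0 = 8 P_0.
```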

  Eon after eon, how did the vertebrates escape being wiped out by something utterly lethal? Well, if disease had exterminated all intelligent life on 99 per cent of all life-bearing planets, we intelligent living beings would necessarily find ourselves on a planet from the remaining one per cent.

  More exotic dangers are also worth a mention. You may know that a supernova could destroy Earth's ozone layer or that we could suffer a large comet or asteroid impact, but have you heard of the quark matter and vacuum instability disasters that might result from experiments in high-energy physics? In the first, a tiny piece of quark matter changes all that it touches into more of itself until the entire Earth is consumed. Admittedly, people think that any quark matter would instead repel ordinary matter, but nobody can yet be sure. And the currently popular argument that cosmic rays would already have seeded a quark matter disaster, if one were physically possible, perhaps forgets that the rays would be producing quark matter only in minuscule lumps and in the upper atmosphere. The lumps could all decay radioactively long before they touched enough ordinary matter to allow them to grow.

  Again, although even head-on collisions between cosmic rays - some of them pack as much punch as rifle bullets - haven't reached the energies needed for a vacuum instability disaster, some think that physicists could attain such energies during the next few centuries.

  Now, a vacuum instability disaster has justly been called "the ultimate ecological catastrophe." The idea here is that space as we know it, "our vacuum," may be filled with a field (technically speaking, a scalar field) that is like a ball trapped in a hollow, a ball unable to run downhill unless given a violent shove. Physicists might produce such a shove, creating a tiny bubble in which the field had "dropped to the valley." The bubble would expand at nearly the speed of light, destroy the whole galaxy and then keep going.
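
  For readers who want the picture in symbols: the trapped ball has a standard textbook form, sketched schematically here rather than taken from the papers cited below.

```latex
% A tilted double-well potential for a scalar field \phi -- the usual
% schematic of a metastable ("false") vacuum:
V(\phi) = \lambda \left( \phi^2 - v^2 \right)^2 - \epsilon \, \phi ,
\qquad \epsilon > 0 \ \text{small.}
% The minimum near \phi = -v (our vacuum, the "hollow") sits slightly
% above the true minimum near \phi = +v (the "valley"). A violent
% enough shove, or quantum tunnelling, nucleates a bubble of the lower
% state, which then expands at nearly the speed of light.
```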

  Incredulous? Consult Physics Letters B, Dec. 13, 1990, on vacuum instability. On quark matter, try Physical Review D, Dec. 1, 1984.

  To turn to something less bizarre, there are the possible dangers of genetic engineering for peaceful purposes. Conceivably this could end in tragedy - for instance, in a "green scum disaster" in which some ingeniously modified organism choked out everything else. Or a rather similar "gray goo calamity" might one day be caused by nanotechnology: the use of very tiny machines able to produce all sorts of things, including more of themselves, because they carry their own miniaturized computers.

  Not miniaturized computers, however, but full-sized ones are what pose the main threat in the near future. They control nuclear weapons to a degree kept secret from us. They govern increasingly important supply systems, which might break down catastrophically. And people working on artificial intelligence often expect computers to surpass humans shortly. Several of them suggest that it might then be good if all of us were replaced by computers, which could lead happier and longer lives, as well as being smarter. At least one expert has said he is working towards this. Might the computers soon be working towards it themselves?

  Bad philosophy might be playing a role here, because it is far from clear that computers would ever be truly conscious and therefore have lives worth living. And bad philosophy is definitely present in ethical theories that tell us the extinction of all intelligent life would be no tragedy. Numerous philosophers now think of ethics as just a way of reducing conflicts between people, if there are any people. They teach that merely possible people of future generations, people who would never exist if, say, we lost the ozone layer, cannot possess anything as real as a right to be born. If anyone listened to philosophers, this doctrine would be yet another danger.

  Imagine that, of intelligent species scattered through the universe, at least one in 10 survived nuclear bombs, industrial pollution, etc., then colonized their galaxies. Wouldn't the vast majority of intelligent beings find themselves in species that had indeed colonized their galaxies? But if so, why do we find ourselves in a species that hasn't?
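
  A minimal sketch of that closing arithmetic. The one-in-10 figure is the paragraph's; the ratio of observers in a colonizing species to observers in one that stays home is an illustrative assumption of mine, not a number from this article.

```python
# Observation-selection arithmetic for the closing question.
P_COLONIZE = 0.10      # "at least one in 10" species colonizes (article)
OBS_RATIO = 1_000_000  # assumed observers per colonizing species,
                       # relative to a species that never spreads

colonizer_share = P_COLONIZE * OBS_RATIO
other_share = (1 - P_COLONIZE) * 1.0
fraction = colonizer_share / (colonizer_share + other_share)
print(f"Fraction of observers inside colonizing species: {fraction:.6f}")
# ~0.999991 -- a random observer should overwhelmingly expect to find
# that its species has colonized its galaxy. We do not.
```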

Retired philosophy professor John Leslie, FRSC, is author of The End of the World: The Science and Ethics of Human Extinction, originally published by Routledge in 1996 and released in paperback last year.