Philosopher Nick Bostrom considers the discovery of life on Mars bad news. As in really bad news -- so bad, in fact, that finding even relatively primitive organisms eking out an existence among the ice would entail nothing less than our ultimate doom.
Bostrom sets the theoretical stage this way:
The more complex the life-form we found, the more depressing the news would be. I would find it interesting, certainly -- but a bad omen for the future of the human race.
How do I arrive at this conclusion? I begin by reflecting on a well-known fact. UFO spotters, Raëlian cultists, and self-certified alien abductees notwithstanding, humans have, to date, seen no sign of any extraterrestrial civilization.
By dismissing a significant body of evidence suggestive of some form of nonhuman contact, Bostrom tilts the playing field in such a way that he can argue essentially anything he likes. Evidently Bostrom expects the lay reader to buy into his daft notion that the UFO phenomenon has something to do with Raëlians. And his smearing of "self-certified alien abductees" is the stuff of rabid pseudo-debunkery.
Bostrom goes on to illustrate the concept of a "Great Filter" -- a kind of evolutionary black hole through which a potential extraterrestrial intelligence must pass in order to fulfill its destiny. (Bostrom's hypothetical ETs are a conspicuously anthropomorphic lot, but I'll cut him some slack; given the vastness of the observable universe, is it that bizarre to expect that a relatively tiny number will possess traits in keeping with our own?)
Pondering the sort of threat necessary to silence a candidate ET civilization, Bostrom writes:
The Great Filter, then, would have to be something more dramatic than run-of-the-mill societal collapse: it would have to be a terminal global cataclysm, an existential catastrophe. An existential risk is one that threatens to annihilate intelligent life or permanently and drastically curtail its potential for future development. In our own case, we can identify a number of potential existential risks: a nuclear war fought with arms stockpiles much larger than today's (perhaps resulting from future arms races); a genetically engineered superbug; environmental disaster; an asteroid impact; wars or terrorist acts committed with powerful future weapons; superintelligent general artificial intelligence with destructive goals; or high-energy physics experiments. These are just some of the existential risks that have been discussed in the literature, and considering that many of these have been proposed only in recent decades, it is plausible to assume that there are further existential risks we have not yet thought of.
A bit later, Bostrom cuts to the chase:
If the Great Filter is ahead of us, we have still to confront it. If it is true that almost all intelligent species go extinct before they master the technology for space colonization, then we must expect that our own species will, too, since we have no reason to think that we will be any luckier than other species. If the Great Filter is ahead of us, we must relinquish all hope of ever colonizing the galaxy, and we must fear that our adventure will end soon--or, at any rate, prematurely. Therefore, we had better hope that the Great Filter is behind us.
I must admit that I'm taken aback by Bostrom's assumption that "colonizing the galaxy" is necessarily the raison d'être of a technologically robust ETI. Although he cites the possibility of less aggressively materialistic aliens early in his piece, it's almost as if he wishes we'd forget about them.
What has all this got to do with finding life on Mars? Bostrom explains:
Consider the implications of discovering that life had evolved independently on Mars (or some other planet in our solar system). That discovery would suggest that the emergence of life is not very improbable. If it happened independently twice here in our own backyard, it must surely have happened millions of times across the galaxy. This would mean that the Great Filter is less likely to be confronted during the early life of planets and therefore, for us, more likely still to come.
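The probabilistic core of this argument can be sketched as a toy Bayesian update. This is my own illustration, not Bostrom's math: the list of developmental steps and the uniform prior are invented for the sake of the example.

```python
# Toy illustration of the Great Filter update. The step names and the
# uniform prior are assumptions made for this sketch; Bostrom's essay
# gives no such numbers.

steps = [
    "abiogenesis",             # life arising from non-life
    "complex cells",           # eukaryote-like complexity
    "intelligence",            # tool-using minds
    "our current stage",       # where humanity stands now
    "interstellar expansion",  # still ahead of us
]

# Uniform prior: the Filter is equally likely to sit at any step.
prior = {s: 1 / len(steps) for s in steps}

def update_on_easy_step(dist, easy_step):
    """If independent Martian life were found, 'abiogenesis' happened
    twice in one solar system, so it can't be the wildly improbable
    step. Zero out its probability and renormalize the rest."""
    post = {s: (0.0 if s == easy_step else p) for s, p in dist.items()}
    total = sum(post.values())
    return {s: p / total for s, p in post.items()}

posterior = update_on_easy_step(prior, "abiogenesis")

ahead = ["interstellar expansion"]
print("P(filter ahead) before:", sum(prior[s] for s in ahead))      # 0.2
print("P(filter ahead) after: ", sum(posterior[s] for s in ahead))  # 0.25
```

Ruling out an early step shifts probability mass toward the steps we have not yet passed, which is exactly why, on Bostrom's logic, finding Martian microbes would be bad news. Note how much the conclusion depends on the prior and on the choice of steps, neither of which we have any empirical handle on.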
By now you get the idea: if life is commonplace, we can expect to encounter an insurmountable existential hurdle at some point in the future -- specifically, before we're able to announce our presence to the galaxy (assuming we'd want to, and there are a host of arguments suggesting that it might not be the bright idea we're tacitly assured it is). Bostrom's argument is tantalizing and, at first glance, impressive. But it hinges on so many anthropocentric conceits that it reduces itself from a legitimate "either/or" to a merely interesting philosophical conjecture.
It's equally clear that Bostrom is most likely in for a dose of ennui; our solar system abounds with the ingredients for life, from Mars to Europa and beyond. Indeed, we may have already found it.
But none of this bothers me nearly so much as the fatalism at the core of Bostrom's thesis, which purports to reveal the role of intelligence in the universe but delivers little more than a litany of uncertainties dressed in racy new clothes.
Bostrom is, of course, perfectly free to quake with dread when we finally confirm the existence of extraterrestrial life. Meanwhile, I'll be breaking open the champagne.