Wednesday, February 18, 2009

The "Great Filter"

Oxford philosopher Nick Bostrom recently wrote an article for "Technology Review" championing the concept of a "Great Filter" -- a sort of existential black hole assumed to either preclude the emergence of complex life or else destroy advanced civilizations (thus SETI's failure to detect intelligent signals). Bostrom posits that we should hope to find our own solar system devoid of primitive life, since such a discovery would indicate that the Great Filter lies in our future instead of our past, thereby effectively condemning humanity to extinction before we're able to announce our presence to the galaxy (assuming we'd want to).

If Bostrom's right, then he's managed to neatly encapsulate the Fermi Paradox within a cautionary philosophical framework -- no mean feat. But his argument is boundlessly porous, imposing anthropocentric logic on extraterrestrials about which we know nothing. As I've argued in a previous post, there's little reason to suggest that Enrico Fermi's famous "paradox" is anything of the sort. Unfortunately, Bostrom's acceptance of the Fermi Paradox as a cosmic directive, rather than an engaging scientific challenge, constitutes a glaring failure of imagination.

It probably goes without saying that Bostrom ignores the UFO controversy and its implications for ETI. But one doesn't need to accept UFOs as evidence of visitation to discern grave problems in Bostrom's notional "Great Filter." The most daunting problem lies in his presumption that technologically inclined ETs will necessarily make themselves known, whether through electromagnetic pollution or works of astro-engineering. Of course, Bostrom's conjecture only makes sense if the aliens are essentially like us, driven by a form of galactic colonialism. He all but ignores the possibility that advanced ET societies might possess vastly different imperatives, perhaps eschewing the harsh realities of deep space for other, no less ambitious ventures.

Galaxy-conquering civilizations might set the stage for science fiction novels (Isaac Asimov's enduring, if dated, "Foundation" series springs to mind), but in light of our failure to readily detect their broadcasts they seem more than a little like quaint extrapolations of our own technology-fixated era. Absence of evidence isn't evidence of absence; in his haste to praise the merits of the Great Filter, Bostrom presents a scenario that seems almost deliberately contrived to engage academics at the expense of genuine inquiry.

Which isn't to say we needn't fear the very real threats facing our species. If ET civilizations are common, it beggars belief to assume that all of them survive indefinitely. On the other hand, perhaps some of them manage to reach a technological "island of stability" in which their social structures exist in relative harmony with their technological potency. For example, astronomer Milan Ćirković has argued in favor of ET "city-states" that, aside from harboring sustainable civilizations, would fail to be easily detected -- an idea that makes at least as much sense of the Fermi Paradox as Bostrom's fatalism.

This piece originally appeared at

1 comment:

TJ said...

My favourite Fermi paradox solution is from Charles Stross' Accelerando.

Basically, as the speed of thought of your species increases through mind uploads and intelligence augmentation, you get to the point where the perceived length of time it takes to broadcast information even over planetary distances becomes intolerable.

As such, space exploration -- because of the sheer lengths of time involved -- becomes something posthumans (or post-ETs) simply don't have the attention span for.