The Great Filter Hypothesis – Where Are All The Alien Civilizations?

It’s difficult not to be pessimistic when considering humanity’s future prospects. Many people would agree that it’s more likely than not that we’ll eventually do ourselves in. And in fact, some astrobiologists theorize that all advanced civilizations hit the same insurmountable developmental wall, one we may be approaching ourselves. They call it the Great Filter. It’s a notion that’s often invoked to explain why we’ve never been visited by extraterrestrials.

With no evidence of intelligent life other than ourselves, it appears that the process of starting with a star and ending with “Advanced Universe Exploring life” must be unlikely. This implies that at least one step in this process must be improbable. The following list, while surely incomplete, describes nine steps in an “evolutionary path” that results in the colonization of the observable universe (a rough back-of-the-envelope sketch of the argument follows the list):

  1. The right star system (including organics and potentially habitable planets)
  2. Reproductive molecules (e.g., RNA)
  3. Simple (prokaryotic) single-cell life
  4. Complex (eukaryotic) single-cell life
  5. Sexual reproduction
  6. Multi-cell life
  7. Tool-using animals with big brains
  8. Where we are now
  9. Colonization explosion
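
To make that “at least one step must be improbable” claim concrete, here is a minimal back-of-the-envelope sketch in Python; every per-step probability in it is a made-up placeholder, not an estimate.

```python
# Back-of-the-envelope Great Filter arithmetic. All per-step probabilities below
# are invented for illustration; the point is only that the product over the nine
# steps must be tiny to explain why none of the Milky Way's roughly 10^11 star
# systems shows a colonization explosion, so at least one factor must be tiny.

STARS_IN_MILKY_WAY = 1e11  # rough order of magnitude

step_probabilities = {        # hypothetical values, not estimates
    "1. right star system":       1e-1,
    "2. reproductive molecules":  1e-6,  # the 'hard step' in this toy example
    "3. prokaryotic life":        5e-1,
    "4. eukaryotic life":         1e-1,
    "5. sexual reproduction":     5e-1,
    "6. multi-cell life":         1e-1,
    "7. big-brained tool users":  1e-2,
    "8. our current level":       5e-1,
    "9. colonization explosion":  1e-1,
}

p_total = 1.0
for step, p in step_probabilities.items():
    p_total *= p

expected_civilizations = STARS_IN_MILKY_WAY * p_total
print(f"Chance a given star system reaches step 9: {p_total:.2e}")
print(f"Expected colonizing civilizations in the galaxy: {expected_civilizations:.2f}")
# With the hard step at one in a million the expectation stays below one, which is
# consistent with a silent sky; set every factor to 0.1 or higher and the expected
# count climbs well above one, which is the tension the Great Filter points at.
```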

The Great Filter is the idea that although there is lots of matter, we observe no “Advanced Universe Exploring life”, like space-faring intelligences. So there is some filter at which almost all matter gets stuck before becoming expanding, lasting life. One question for those interested in the future of humankind is whether we have already ‘passed’ the bulk of the filter, or whether it still lies ahead. For example, is it very unlikely that matter will form self-replicating units, but once that hurdle is cleared, becoming intelligent and spreading across the stars is highly likely? Or is reaching a humankind level of development not that unlikely, while very few of those civilizations progress to expanding across the stars? If the latter, that motivates a concern for working out what the forthcoming filter(s) are, and trying to get past them.

According to the Great Filter hypothesis, at least one of these steps (if the list were complete) must be improbable. If it’s not an early step, i.e., one in our past, then the implication is that the improbable step lies in our future, and our prospects of reaching step 9 (interstellar colonization) are bleak. If the past steps were likely, then many civilizations should already have developed to the current level of the human species.

However, none appear to have made it to step 9, or the Milky Way would be full of colonies. So perhaps step 9 is the unlikely one, and the only thing that appears likely to keep us from step 9 is some sort of catastrophe, or exhaustion of the available resources through rampant consumption (for example, running through highly constrained energy supplies). By this argument, finding multicellular life on Mars (provided it evolved independently) would be bad news, since it would imply steps 2–6 are easy, and hence only steps 1, 7, 8 or 9 (or some unknown step) could be the big problem.
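
A rough way to see the “bad news” point, sketched with invented numbers: if the overall chance of reaching step 9 has to stay tiny (the sky is silent), then learning that the early biological steps are easy pushes the missing improbability onto the steps that may still lie ahead of us. The 1e-12 cap and the step grouping below are assumptions for illustration only.

```python
# Toy illustration (invented numbers) of the 'life on Mars is bad news' argument.
# The silent sky forces the chance that a star system reaches step 9 to be very
# small; here we assume it can be at most 1e-12. That improbability has to be
# split between the early biological steps (2-6) and the later steps (7-9).

P_TOTAL_MAX = 1e-12  # assumed cap implied by seeing no colonizers

def implied_late_step_ceiling(p_early_steps):
    """If steps 2-6 jointly succeed with probability p_early_steps, then the
    late steps (7-9, the ones that could still lie ahead of us) can jointly
    succeed with probability at most P_TOTAL_MAX / p_early_steps."""
    return min(1.0, P_TOTAL_MAX / p_early_steps)

# Before Mars: perhaps steps 2-6 are jointly astronomically hard...
print(implied_late_step_ceiling(1e-12))  # 1.0 -> the steps ahead of us may be easy

# After finding independently evolved multicellular life on Mars: steps 2-6 look easy.
print(implied_late_step_ceiling(1e-1))   # 1e-11 -> the steps ahead must be very hard
```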

Although steps 1–8 have occurred on Earth, any one of them may still be unlikely in general. Because the first seven steps are necessary preconditions for an observer to exist at all, an anthropically biased observer can infer nothing about their general probabilities from its own (pre-determined) surroundings.
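
A tiny simulation of that anthropic point, again with made-up probabilities: whether the early step is common or one in a million, every observer that exists looks back on the same local record, so that record alone cannot tell it how hard the step was.

```python
import random

# Anthropic-bias toy with made-up numbers: simulate many star systems, each of
# which clears some early hard step with probability p. Observers only arise on
# systems that cleared it, so every observer looks back on the same local record
# ("the step succeeded here"), whether p was 0.5 or one in a million.

def systems_with_observers(p_early_step, n_systems=1_000_000):
    return sum(random.random() < p_early_step for _ in range(n_systems))

for p in (0.5, 1e-6):
    survivors = systems_with_observers(p)
    print(f"p = {p}: {survivors} observer-bearing systems, "
          f"each seeing an identical local history")
```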

One concern is that advancing technology gives civilizations the possibility of wiping themselves out, and that this is the main component of the Great Filter, one we are going to be approaching soon. There are several candidate technologies that could pose an existential threat (nanotechnology/‘grey goo’, nuclear holocaust, runaway climate change), but one that looms large is artificial intelligence (AI). Trying to understand and mitigate the existential threat from AI is the main role of the Singularity Institute, which considers AI the chief existential threat.

The concern with AI is something like this:

  1. AI will soon greatly surpass us in intelligence in all domains.
  2. If this happens, AI will rapidly supplant humans as the dominant force on planet Earth.
  3. Almost all AIs, even ones we create with the intent to be benevolent, will probably be unfriendly to human flourishing.

Or you can think of it this way:

… AI leads to intelligence explosion, and because we don’t know how to give an AI benevolent goals, by default an intelligence explosion will optimize the world for accidentally disastrous ends. A controlled intelligence explosion, on the other hand, could optimize the world for good.

So, the aim of the game needs to be trying to work out how to control the future intelligence explosion so the vastly smarter-than-human AIs are ‘friendly’ (FAI) and make the world better for us, rather than unfriendly AIs (UFAI) which end up optimizing the world for something that sucks.

Where is everybody?

Yes, it is possible that life’s origin, or some other early step, is so extremely difficult and rare that, other than here on Earth, all life in the universe is stuck before this early, extremely hard step. But even if you find this the most likely outcome, surely given our ignorance you must also place a non-trivial probability on other possibilities. You must see a great filter as lying between initial planets and expanding civilizations, and wonder how far along that filter we are. In particular, you must estimate a substantial chance of “disaster”, i.e., something destroying our ability or inclination to make visible use of the vast resources we see. (And this disaster can’t be an unfriendly super-AI, because that should be visible.)

This made me realize that a UFAI should also be counted as ‘expanding, lasting life’, and so the Great Filter argument deems it unlikely as well.

Alternatively, if the Great Filter still lies ahead of us, and a major component of this forthcoming filter is the threat from UFAI, we should expect to see the UFAIs of other civilizations spreading across the universe (or not see anything at all, because they would wipe us out to optimize for their unfriendly ends). That we do not observe it disconfirms this conjunction.

It also gives a stronger argument. Since a UFAI would itself be the ‘expanding life’ we do not see, the beliefs ‘the Great Filter lies ahead’ and ‘UFAI is a major existential risk’ stand opposed to one another. The higher your credence that the filter lies ahead, the lower your credence should be that UFAI is a major existential risk: the many civilizations like ours that go on to get caught in the filter evidently do not produce expanding UFAIs, so expanding UFAI cannot be the main x-risk. Conversely, if you are confident that UFAI is the main existential risk, then you should think the bulk of the filter is behind us: since we don’t see any UFAIs, there cannot be many civilizations like ours in the first place, given that a civilization like ours would be quite likely to produce an expanding UFAI.
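
Here is a toy version of that trade-off, with invented numbers; the only ingredients are the number of civilizations that reach roughly our stage and the constraint that we see zero expanding UFAIs.

```python
# Toy version (invented numbers) of the trade-off between 'the filter is ahead'
# and 'UFAI is a major existential risk'. Seeing zero expanding UFAIs caps the
# expected number of visible ones at roughly one.

EXPECTED_VISIBLE_UFAIS_MAX = 1.0  # assumed cap from the silent sky

def max_expanding_ufai_risk(n_civilizations_at_our_stage):
    """Upper bound on the chance that a civilization like ours produces an
    expanding (and hence visible) UFAI, given that none are seen among the
    civilizations that got this far."""
    return min(1.0, EXPECTED_VISIBLE_UFAIS_MAX / n_civilizations_at_our_stage)

# Filter mostly behind us: very few civilizations ever reach our stage.
print(max_expanding_ufai_risk(2))          # 0.5 -> UFAI can still be a big risk

# Filter mostly ahead: reaching our stage is easy, so many civilizations do.
print(max_expanding_ufai_risk(1_000_000))  # 1e-06 -> expanding UFAI can't be the filter
```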

This is, of course, a variant on the Fermi paradox: we don’t see clues to widespread, large-scale engineering, and consequently, we must conclude that we’re alone. But the possibly flawed assumption here is that highly visible construction projects are an inevitable outcome of intelligence. It could be that it’s the engineering of the small, rather than the large, that is inevitable. This follows from the laws of inertia (smaller machines are faster and require less energy to function) as well as the speed of light (smaller computers have faster internal communication). It may be (and this is, of course, speculation) that advanced societies build small technology and have little incentive or need to rearrange the stars in their neighbourhoods, for instance.

They may prefer to build nanobots instead of Dyson spheres. It should also be kept in mind that, as Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic”… and I would like to add “or may be unrecognizable altogether”.
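
As a quick order-of-magnitude check on the “small is fast” point, here are light-crossing times for a few illustrative sizes; the specific structures are assumptions, and only the scaling matters.

```python
# Light-crossing times for structures of a few illustrative sizes. Internal
# signals cannot beat the speed of light, so a smaller machine has a smaller
# hard floor on how quickly its parts can communicate. The sizes are assumptions
# chosen only to show the scaling.

C = 299_792_458.0  # speed of light in m/s

structures_m = {
    "1 cm chip":                   0.01,
    "10 m spacecraft":             10.0,
    "Dyson sphere (~2 AU across)": 2 * 1.496e11,
}

for name, size_m in structures_m.items():
    print(f"{name:>28}: light-crossing time ~ {size_m / C:.3e} s")
```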

So, if the Great Filter is behind us, it would do much to explain the Fermi Paradox and the absence of extraterrestrial influence on the cosmos. Should that be the case, we may very well have a bright future ahead of us. The Milky Way Galaxy is literally ours for the taking, our future completely open-ended.

But before we jump to conclusions, it’s only fair to point out that we’re not out of the woods yet. There could very well be another Great Filter in the future — one just as stingy as the filters of our past. The universe, while giving the appearance of bio-friendliness, may, in reality, be extremely hostile to intelligent life.

