Tegmark’s talk at Oxford
Max Tegmark, of the Massachusetts Institute of Technology and the Foundational Questions Institute (FQXi), presents a cosmic perspective on the future of life, covering our increasing scientific knowledge, the cosmic background radiation, the ultimate fate of the universe, and what we need to do to ensure the human race’s survival and flourishing in the short and long term. He strongly emphasizes the importance of x-risk reduction.
At 30:20 in the talk:
A very nice touch a bit later on when he says he worries about this also as a father, which reinforces the point that x-risk isn’t just something academic, but would have an actual, real impact on his actual family’s actual well-being. It’s easy to banish x-risk discussions to some academic sphere of armchair theorycrafting, and not realize that if e.g. the planet explodes, that encompasses your house as well. Even your comfy chair!
It’s from the ‘The Hitchhiker’s Guide to the Galaxy’. There, I saved you a google.
I’m a bit confused about the prior he uses to assign a uniform probability to the existence of extraterrestrial life. Although I agree that a logarithmically flat prior is a good idea for this problem, it is important to acknowledge that it is biased towards the unconstrained large scales. Since there is a minimum length scale by construction (the size of the Earth or so), it would seem fairer to impose a large-scale cutoff as well (say, at the radius of the observable Universe). This way we can no longer claim that extraterrestrial life is most likely to be found beyond the edge of our Universe, but we could possibly still rule out our own galaxy.
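To make the cutoff point concrete, here is a minimal sketch of a log-uniform prior on the distance to the nearest civilization, with both a small-scale and a large-scale cutoff imposed. The specific numbers (`D_MIN`, `D_MAX`, the galactic scale) are illustrative assumptions, not values from the talk:

```python
import math

# Assumed scale cutoffs in metres (illustrative, not from the talk):
D_MIN = 1e7    # roughly the diameter of the Earth: smallest plausible scale
D_MAX = 4.4e26 # roughly the radius of the observable Universe

def log_uniform_prob(a: float, b: float) -> float:
    """P(a <= d <= b) for distance d under a log-uniform prior on [D_MIN, D_MAX].

    With the upper cutoff, total prior mass inside the observable
    Universe is 1 by construction, so no mass sits "beyond the edge".
    """
    a, b = max(a, D_MIN), min(b, D_MAX)
    if b <= a:
        return 0.0
    return math.log(b / a) / math.log(D_MAX / D_MIN)

# How much prior mass lies within our own galaxy (~1e21 m across)?
p_within_galaxy = log_uniform_prob(D_MIN, 1e21)
```

Under these assumed cutoffs a substantial fraction of the prior mass still falls within galactic scales, which is why one can hope to rule out our own galaxy observationally rather than by prior fiat.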
Aside from that, an excellent (and entertaining) talk by Tegmark.
He is concerned that AIs might not be conscious. Interestingly, this is IIRC the exact opposite of Eliezer’s fear, who is afraid that they might be (though I may be misremembering). I think Tegmark is mainly talking about UFAIs that replace us (rather than FAIs that protect us) - so basically he’s saying he’d value a conscious clippy, but not an unconscious one.
Does he define “conscious”?
No. Elsewhere he has said “I believe that consciousness is the way information feels when being processed”, but in this talk he seems to retreat from that a little. He describes a positive singularity with p-zombie AI/robots that have perception and appear conscious, but aren’t “aware” of the world around them. He doesn’t clarify how perception differs from awareness, and doesn’t mention introspection at all.
So… basically he doesn’t know what he is talking about?
Neither does anyone who is talking about consciousness...
Indeed.