I thought the idea that machine intelligence would be developed in virtual worlds for safety reasons was pretty daft. I explained this at the time:
IMO, people want machine intelligence to help them attain their goals. Machines can't do that if they are isolated away in virtual worlds. Sure, there will be test harnesses, but it seems rather unlikely that we will keep these things under extensive restraint out of sheer paranoia; that would stop us from taking advantage of them.
However, Francis's objections to virtual worlds seem even sillier to me. I've been hearing that simulations aren't real for decades now, and I still don't really understand why people get into a muddle over this issue.
Sue’s article is here: She won’t be me.
Robin’s article is here: Meet the New Conflict, Same as the Old Conflict—see also O.B. blog post
Francis’s article is here: A brain in a vat cannot break out: why the singularity must be extended, embedded and embodied.
Marcus Hutter's article is here: Can Intelligence Explode?
The Hanson link doesn't seem to work.
It seems to be back now.