“Editing the mental states of ems” sounds ominous. We would (at some point) be dealing with conscious beings, and performing virtual brain surgery on them has ethical implications.
Moreover, it’s not clear that controlled experiments on ems, assuming we get past the ethical issues, will yield radical insight on the structure of intelligence, compared to current brain science.
It’s a little like being able to observe a program by running it under a debugger, versus examining its binary code (plus manual testing). Yes, this is a much better situation, but it’s still far more cumbersome than looking at the source code; and that in turn is vastly inferior to constructing a theory of how to write similar programs.
When you say you advocate intelligence augmentation (this really needs a more searchable acronym), do you mean only through genetic means, or also through technological “add-ons”? (By that I mean devices plugging you into Wikipedia or giving you access to advanced math skills in the same way that a calculator boosts your arithmetic.)
Hopefully volunteers could be found; but in any case, the stakes here are the end of the world, so the end justifies the means.
To whoever downvoted Roko’s comment—check out the distinction between these ideas:
One Life Against the World
Ends Don’t Justify Means (Among Humans)
I’d volunteer and I’m sure I’m not the only one here.
Heroes of the future sign up in this thread ;-)
You’re not, though I’m not sure I’d be an especially useful data source.
I’ve met at least one person who would like a synesthesia on-off switch for their brain—that would make your data useful right there.
Looks to me like that’d be one of the more complicated things to pull off, unfortunately. Too bad; I know a few people who’d like that, too.
Please expand on what “the end” means in this case. What do you expect we would gain from perfecting whole-brain emulation, I assume of humans? How does that get us out of our current mess, exactly?
I think that WBE stands a greater chance of precipitating a friendly singularity.
It doesn’t have to; working ems would be good enough to lift us out of the problematic situation we’re in at the moment.
I worry these modified ems won’t share our values to a sufficient extent.
It is a valid worry. But under the right conditions, where we take care not to let evolutionary dynamics take hold, we might be able to get a better shot at a friendly singularity than any other way.
Possibly. But I’d rather use selected human geniuses with the right ideas copied and sped up, and wait for them to crack FAI before going further (even if FAI doesn’t give a powerful intelligence explosion—then FAI is simply formalization and preservation of preference, rather than power to enact this preference).