The notion that higher IQ means that more money will be allocated to solving FAI is idealistic. Reality is complex, and the reasons money gets allocated are often political in nature and depend on whether institutions function properly. Even if individuals have a high IQ, that doesn’t mean they don’t fall into the groupthink of their institution.
Real-world feedback, however, helps people see problems regardless of their intelligence. Real-world feedback provides truth, whereas a high IQ can just mean that you are better at stacking ideas on top of each other.
Christian, FAI is hard because it doesn’t necessarily provide any feedback. There are lots of scenarios where the first failed FAI just kills us all.
That’s why I am advocating IA (intelligence augmentation) as a way to improve the odds of the human race producing FAI before uFAI.
But really, the more I think about it, the more I think that we would do better to avoid AGI altogether and build brain emulations instead. Editing the mental states of ems and watching the results will provide feedback, and will allow us to “look before we jump”.
Some sub-ideas of an FAI theory might be put to the test in an artificial intelligence that isn’t smart enough to improve itself.
“Editing the mental states of ems” sounds ominous. We would (at some point) be dealing with conscious beings, and performing virtual brain surgery on them has ethical implications.
Moreover, it’s not clear that controlled experiments on ems, assuming we get past the ethical issues, will yield radical insight on the structure of intelligence, compared to current brain science.
It’s a little like being able to observe a program by running it under a debugger, versus examining its binary code (plus manual testing). Yes, this is a much better situation, but it’s still far more cumbersome than looking at the source code; and that in turn is vastly inferior to constructing a theory of how to write similar programs.
When you say you advocate intelligence augmentation (this really needs a more searchable acronym), do you mean only through genetic means or also through technological “add-ons”? (By that I mean devices plugging you into Wikipedia or giving you access to advanced math skills in the same way that a calculator boosts your arithmetic.)
Hopefully volunteers could be found; but in any case, the stakes here are the end of the world, and the end justifies the means.
To whoever downvoted Roko’s comment—check out the distinction between these ideas:
One Life Against the World
Ends Don’t Justify Means (Among Humans)
I’d volunteer and I’m sure I’m not the only one here.
Heroes of the future sign up in this thread ;-)
You’re not, though I’m not sure I’d be an especially useful data source.
I’ve met at least one person who would like a synesthesia on-off switch for their brain—that would make your data useful right there.
Looks to me like that’d be one of the more complicated things to pull off, unfortunately. Too bad; I know a few people who’d like that, too.
Please expand on what “the end” means in this case. What do you expect we would gain from perfecting whole-brain emulation (of humans, I assume)? How does that get us out of our current mess, exactly?
I think that WBE stands a greater chance of precipitating a friendly singularity.
It doesn’t have to; working ems would be good enough to lift us out of the problematic situation we’re in at the moment.
I worry these modified ems won’t share our values to a sufficient extent.
It is a valid worry. But under the right conditions, where we take care not to let evolutionary dynamics take hold, we might get a better shot at a friendly singularity than we would any other way.
Possibly. But I’d rather use selected human geniuses with the right ideas copied and sped up, and wait for them to crack FAI before going further (even if FAI doesn’t give a powerful intelligence explosion—then FAI is simply formalization and preservation of preference, rather than power to enact this preference).