If we view this as a form of heuristic AI, it follows from your argument that we should look for ways to ensure the friendliness of WBEs. (Ignoring the ethical issues here.)
Now, maybe this is because most realistic approaches would have to consider the ethical issues, but figuring out how to modify a human brain so that it doesn't act against your interests even when it is powerful, and without hampering its intellect, looks like a big 'intractable' problem.
I suspect no one is working on it and no one is going to, even though we have working models of these intellects today. A new design might be easier to work with, but it will still be a lot harder than it will seem to be worth, as long as the AIs are doing near-human-level work.
Aim for an AI design whose safety is easy enough to work on that people actually will work on safety… and it will start to look a lot like SIAI's ideas.
However, given that WBEs will in any case be developed (and we can mention IA as well), I'd like to see more consideration of how to keep brain-based AIs as safe as possible before they enter their Intelligence Explosion, even though we understand that after an Explosion there is little you can do.
One trouble is that this essentially tacks mind enslavement onto the WBE proposition. Nobody wants that. Uploads wouldn't volunteer. Even if a customer paid enough of a premium for an employee with loyalty modifications, that only rolls us back to relying on the good intent of the customer.
This comes down to the exact same arms race between 'friendly' and 'just do it', with extra ethical and reverse-engineering hurdles. (I think we're pretty much stuck with testing and filtering based on behavior, and some modifications will only be testable after uploading is available.)
Mind you, I'm not saying no work should be done on this; I'm saying not much work will be done on it.
I think we're going to get WBEs before AGI.
Right, SI's basic idea is correct: aim for an AI design whose safety people will actually work on.
Yes, creating WBEs, or any other AIs that may have personhood, brings up a range of ethical issues on top of preventing human extinction.