I’m planning to fund FHI rather than SIAI once I have a stable income (although my real preference is for a different organisation, one that doesn’t exist).
My position is roughly this:
The nature of intelligence (and its capability for FOOMing) is poorly understood.
The correct actions to take depend upon the nature of intelligence.
As such, I would prefer to fund an institute that questions the nature of intelligence, rather than one that has made up its mind that a singularity is the way forward. And it is not just the name that makes me think SIAI has settled on this view.
And because the nature of intelligence is the largest wild card in the future of humanity, I would prefer FHI to concentrate on that rather than on longevity and the like.
What would the charity you’d like to contribute to look like?
When I read good popular science books, the researchers tend to come up with some idea and then test it to destruction, poking and prodding at it until it really can’t be anything but what they say it is.
I want to get the same feeling from the group studying intelligence as I do from that type of research. They don’t need to be running foomable AIs, but truth is entangled, so they should be able to work out the nature of intelligence from other facets of the world, including physics and biological examples.
Questions I hope they would be asking:
Is the g factor related to the ability to absorb cultural information? That is, is people’s increased ability to solve problems when they have high g due to their being able to extract more information about problem solving from cultural sources?
If it weren’t, that would be further evidence for something special in one intelligence over another, and it might make sense to call one more intelligent rather than just having a different initial skill set.
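For what it’s worth, here is a minimal sketch (entirely my own illustration, not anything from the discussion above) of how that question could be probed with ordinary regression: if the association between g and problem-solving performance shrinks a lot once a measure of absorbed cultural information is controlled for, that favours the “g works through culture” reading; if it stays large, that favours “something special” in g itself. The variable names and the synthetic data are hypothetical placeholders for real test scores.

```python
# Hypothetical sketch: does controlling for "cultural information absorbed"
# shrink the association between g and problem-solving performance?
# All data below is synthetic; the coefficients are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic scores: g partly drives cultural absorption, which partly drives
# problem solving. Tweak these coefficients to explore either hypothesis.
g = rng.normal(size=n)
cultural = 0.7 * g + rng.normal(scale=0.7, size=n)
problem_solving = 0.2 * g + 0.6 * cultural + rng.normal(scale=0.5, size=n)

def ols_coefs(y, *predictors):
    """Ordinary least squares with an intercept; returns predictor coefficients."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

(b_g_alone,) = ols_coefs(problem_solving, g)
b_g_adjusted, b_cultural = ols_coefs(problem_solving, g, cultural)

print(f"g coefficient, ignoring cultural absorption:   {b_g_alone:.2f}")
print(f"g coefficient, controlling for absorption:     {b_g_adjusted:.2f}")
print(f"cultural-absorption coefficient:               {b_cultural:.2f}")
# If the g coefficient collapses once absorption is controlled for, that is
# consistent with g working mainly through cultural information; if it stays
# large, that points to something special in g itself.
```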
If SIAI had the ethos I’d like, we’d be going over and kicking every one of the supporting arguments for the likelihood of fooming and the nature of intelligence to make sure they were sound, performing experiments where necessary. However, people seem to have forgotten those arguments and moved on to decision theory and the like.
Interesting points. Speaking only for myself, it doesn’t feel as though most of my problem-solving or idea-generating approaches were picked up from the culture, but I could be kidding myself.
For a different angle, here’s an old theory of Michael Vassar’s—I don’t know whether he still holds it. Talent consists of happening to have a reward system which happens to make doing the right thing feel good.
Definitely not just that. Knowing what the right thing is, and being able to do it before it’s too late, are also required. And talent implies a greater innate capacity for learning to do so. (I’m sure he meant in prospect, not retrospect).
It’s fair to say that some of what we identify as “talent” in people actually lies in their motivations as well as in the underlying abilities.
And then, hypothetically, if they found that fooming is not likely at all, and that dangerous fooming can be rendered nearly impossible by some easily enforced precautions/regulations, what then? If they found that the SIAI has no particular unique expertise to contribute to the development of FAI? An organization with an ethos you would like: what would it do then? To make it a bit more interesting, suppose they find themselves sitting on a substantial endowment when they reason their way to their own obsolescence?
How often in human history have organizations announced, “Mission accomplished—now we will release our employees to go out and do something else”?
It doesn’t seem likely. The paranoid can usually find something scary to worry about. If something turns out not to be really frightening, fearmongers can just move on to the next most frightening thing in line. People have been concerned about losing their jobs to machines for over a century now. Machines are a big and scary enough domain to keep generating fear for a long time.
I think that what SIAI works on is real and urgent, but if I’m wrong and what you describe here does come to pass, the world gets yet another organisation campaigning about something no-one sane should care about. It doesn’t seem like a disastrous outcome.
From a less cynical angle, building organizations is hard. If an organization has fulfilled its purpose, or that purpose turns out to be a mistake, it isn’t awful to look for something useful for the organization to do rather than dissolving it.
The American charity organization, the March of Dimes, was originally created to combat polio. Now they are involved with birth defects and other infant health issues.
Since they are the one case I know of (other than ad hoc disaster relief efforts) in which an organized charity accomplished its mission, I don’t begrudge them a few additional decades of corporate existence.
I like this concept.
Assume your theory will fail in some places, and keep pressing it until it does or until you run out of ways to test it.
FHI?
The Future of Humanity Institute.
Nick Bostrom’s personal website probably gives you the best idea of what they produce.
A little too philosophical for my liking, but still interesting.