After all, this is how evolution built general intelligence: no philosophical insight, just a bunch of specialized modules kludged together, some highly general learning algorithms, and lots of computing power.
The other thing evolution needed is a very complex and influenceable environment. I don’t know how complex an environment a GAI needs, but it’s conceivable that AIs have to develop outside a box. If that’s true, then the big organizations are going to get a lot more interested in Friendliness.
[I]t’s conceivable that AIs have to develop outside a box. If that’s true, then the big organizations are going to get a lot more interested in Friendliness.
Or they’ll just forgo boxes and Friendliness to get the project online faster.
Maybe I’m just being pessimistic, but I wouldn’t count on adequate safety precautions for a project half this complex if it’s being run by a modern bureaucracy (government, corporate, or academic). There are exceptions, but most of the organizations likely to take an interest don’t seem to care more about long-term risks than immediate gains.