Really? I’m starting to think they might be correct.
The reason I posted this is that I was interested in the reactions it would evoke. It seems that many people here think that any information whatsoever is valuable and that one should update on that evidence.
It is very interesting to see how many people here are very skeptical of those results, even though they are based on comparatively hard evidence and were signed by 180 top-notch scientists. Many of the same people give higher estimates, while demanding little or no evidence, for something that sounds as simple as faster than light phenomena when formulated in natural language, namely recursive self-improvement.
I think many people here have updated their beliefs; I did. My initial prior was very low, though, so I still retain a very low probability for FTL neutrinos.
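To make that kind of update concrete, here is a minimal sketch in Python; the prior and likelihood ratio are made-up illustrative numbers, not anyone’s actual estimates from this thread:

```python
# Minimal sketch: updating a very low prior on "neutrinos can exceed c" after a
# surprising experimental announcement. The numbers below are illustrative
# assumptions, not anyone's actual estimates.

def posterior(prior, likelihood_ratio):
    """Posterior probability via the odds form of Bayes' rule.

    likelihood_ratio = P(evidence | hypothesis) / P(evidence | not hypothesis)
    """
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

prior = 1e-6            # assumed: very low prior probability of FTL neutrinos
likelihood_ratio = 100  # assumed: the announcement is 100x likelier if FTL is real

print(posterior(prior, likelihood_ratio))  # ~1e-4: a genuine update, still very low
```

The point is simply that a large update on a very small prior can still leave the posterior small.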
...simple as faster than light phenomena when formulated in natural language, namely recursive self-improvement.
Natural language isn’t a great metric in this context. Also, recursive self-improvement doesn’t in any obvious way require changing our understanding of the laws of physics.
...recursive self-improvement doesn’t in any obvious way require changing our understanding of the laws of physics.
Some people think that complexity issues are even more fundamental than the laws of physics. On what basis do people believe that recursive self-improvement would be uncontrollably fast? It is easy to believe simply because it is a vague concept and because none of those people have studied the relevant math. The same isn’t true for FTL phenomena, because many people are aware of how unlikely that possibility is.
The same people who are very skeptical in the case of faster than light neutrinos just make up completely unfounded probability estimates about the risks associated with recursive self-improvement, because it is easy to do so when there is no evidence either way.
Some people think that complexity issues are even more fundamental than the laws of physics.
Sure. And I’m probably one of the people here who is most vocal about computational complexity issues limiting what recursive self-improvement can do. But even then, I don’t see them as necessarily in the same category. Keep in mind that {L, P, NP, co-NP, PSPACE, EXP} being all distinct is a conjectural claim. We can’t even prove that L != NP at this point. And in order for this to produce barriers to recursive self-improvement one would likely need even stronger claims.
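For context, and as standard textbook facts rather than anything claimed in the comment above, the unconditional picture is roughly

$$\mathsf{L} \subseteq \mathsf{P} \subseteq \mathsf{NP} \subseteq \mathsf{PSPACE} \subseteq \mathsf{EXP}, \qquad \mathsf{L} \neq \mathsf{PSPACE}, \qquad \mathsf{P} \neq \mathsf{EXP},$$

with $\mathsf{P} \subseteq \mathsf{coNP} \subseteq \mathsf{PSPACE}$ as well. The two inequalities follow from the space and time hierarchy theorems, so at least one inclusion between L and PSPACE, and at least one between P and EXP, must be strict; but no single adjacent pair in the chain (P vs NP, for instance) is known to be distinct.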
The same people who are very skeptical in the case of faster than light neutrinos just make up completely unfounded probability estimates about the risks associated with recursive self-improvement, because it is easy to do so when there is no evidence either way.
Well, but that’s not an unreasonable position. If I don’t have strong evidence either way on a question I should move my estimates close to 50%. That’s in contrast to the FTL issue, where we have about a hundred years’ worth of evidence all going in one direction, and that evidence includes other observations involving neutrinos.
SN 1987A shows that neutrinos travel at the speed of light almost all of the time, but it does not rule out that they might have velocities that exceed that of light very briefly at the moment they’re generated. See here for more. Note that I, like the author of the post I’ve linked, do not believe that this finding will stand up. It’s just that if it does stand up, it will be because the constant-velocity assumption is wrong.
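To give a rough sense of scale, here is a back-of-the-envelope comparison in Python; the figures are the commonly quoted approximate values, not exact numbers:

```python
# Rough comparison of the SN 1987A constraint with the OPERA claim.
# All figures are approximate, commonly quoted values.

SECONDS_PER_YEAR = 3.156e7

# SN 1987A: neutrinos and light crossed roughly 168,000 light years and arrived
# within a few hours of each other, bounding any *average* speed difference.
flight_time_sn = 168_000 * SECONDS_PER_YEAR   # seconds (distance given in light years)
arrival_window = 3 * 3600                     # ~3 hours, in seconds
print(f"SN 1987A bound: |v - c|/c <~ {arrival_window / flight_time_sn:.1e}")  # ~2e-9

# OPERA: neutrinos reported roughly 60 ns early over a roughly 730 km baseline.
c = 3.0e8                                     # m/s
flight_time_opera = 730e3 / c                 # seconds
print(f"OPERA claim:    (v - c)/c  ~ {60e-9 / flight_time_opera:.1e}")        # ~2.5e-5
```

The two figures only conflict if neutrino speed is assumed constant over the whole flight, which is exactly the assumption the linked post questions.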
If I don’t have strong evidence either way on a question I should move my estimates close to 50%...
That would be more than enough to devote a big chunk of the world’s resources to friendly AI research, given the associated utility. But you can’t just make up completely unfounded conjectures, then claim that we don’t have evidence either way but that the utility associated with a negative outcome is huge and we should therefore take it seriously, because that reasoning will ultimately make you privilege random high-utility outcomes over theories based on empirical evidence.
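The failure mode being described can be shown with a toy expected-utility calculation; every number here is invented purely for illustration:

```python
# Toy example of "no evidence either way, but huge stakes" reasoning. An unfounded
# conjecture assigned ~50% for lack of evidence, with an astronomically large
# stated downside, swamps a well-evidenced but ordinary risk. Numbers are invented.

def expected_loss(probability, loss):
    return probability * loss

unfounded_conjecture = expected_loss(0.5, 1e15)   # made-up odds, made-up stakes
well_evidenced_risk = expected_loss(0.9, 1e6)     # solid evidence, modest stakes

print(unfounded_conjecture > well_evidenced_risk)  # True: the made-up claim dominates
```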
Throwing out a theory as powerful and successful as relativity would require very strong evidence, and at this point the evidence doesn’t fall that way at all.
On the other hand, the threshold for GAI becoming a very serious problem is very low. Simply dropping the price of peak human intelligence down to the material and energy costs of a human (which breaks no laws unless one holds that the mind is immaterial) would result in massive social displacement that would require serious planning beforehand. I don’t think it is very likely that we’d see an AI that can laugh at EXPSPACE problems, but all it needs to be is too smart to be easily controlled in order to mess everything up.