You know, the idea that SI might at any moment devote itself to suppressing AI research is one that pops up from time to time, the logic pretty much being what you suggest here, and until this moment I have always treated it as a kind of tongue-in-cheek dig at SI.
I have only just now come to realize that the number of people (who are not themselves affiliated with SI) who genuinely consider suppressing AI research a reasonable course of action, given the ideas discussed on this forum, has much broader implications for the social consequences of these ideas. That is, I've only just now come to realize that what the community of readers does is just as important as, if not more so than, what SI does.
I am now becoming genuinely concerned that, by participating in a forum that encourages people to take seriously ideas that might lead them to actively suppress AI research, I might be doing more harm than good.
I’ll have to think about that a bit more.
Arepo, this is not particularly directed at you; you just happen to be the data point that caused this realization to cross an activation threshold.
I am now becoming genuinely concerned that, by participating in a forum that encourages people to take seriously ideas that might lead them to actively suppress AI research, I might be doing more harm than good.
Assuming that you think that more AI research is good, wouldn’t adding your voice to those who advocate it here be a good thing? It’s not like your exalted position and towering authority lends credence to a contrary opinion just because you mention it.
I think better AI (of the can-be-engineered-given-what-we-know-today, non-generally-superhuman sort) is good, and I suspect that more AI research is the most reliable way to get it.
I agree that my exalted position and towering authority doesn’t lend credence to contrary opinions I mention.
It’s not clear to me whether advocating AI research here would be a better thing than other options, though it might be.
People with similar backgrounds are entering the AI field because they want to reduce x-risks, so it's not obvious this is happening. If safety-guided research suppresses AI research, then so be it. Extremely rapid advance is not good per se, if the consequence is extinction.