Thanks to radical life extension, I could hope to rent an apartment on a seastead on the high seas of a terraformed Mars
You’re confusing peoples’ goals with their expectations.
The common ground between those people seems to be that they all hold weird beliefs, beliefs that someone who has not been indoctrinated...cough...educated by the Sequences has a hard time taking seriously.
Have you read “The Basic AI Drives”? I remember reading it when it got posted on boingboing.net way before I had even heard of MIRI. Like Malthus’s arguments, it just struck me as starkly true. Even if MIRI turned out to be a cynical cult, I wouldn’t take this to be evidence against the claims in that paper. Do you have some convincing counterarguments?
Have you read “The Basic AI Drives”? I remember reading it when it got posted on boingboing.net way before I had even heard of MIRI. Like Malthus’s arguments, it just struck me as starkly true.
I don’t know what you are trying to communicate here. Do you think that mere arguments, pertaining to something that not even the relevant experts understand at all, entitle someone to demonize a whole field?
The problem is that armchair theorizing can at best yield very weak decision-relevant evidence. You don’t just tell the general public that certain vaccines cause autism, that genetically modified food is dangerous, or scare them about nuclear power...you don’t do that if all you have are arguments that you personally find convincing. What you do is hard empirical science, in order to verify your hunches and eventually reach a consensus among experts that your fears are warranted.
I am aware of many of the tactics that the Sequences employ to dismiss the above paragraph: reversing the burden of proof, conjecturing arbitrary amounts of expected utility, and so on. All of these tactics are suspect.
Do you have some convincing counterarguments?
Yes, and they are convincing enough to me that I dismiss the claim that with artificial intelligence we are summoning the demon.
Mostly, the arguments made by AI risk advocates suffer from being detached from an actual grounding in reality. You can come up with arguments that make sense in the context of your hypothetical model of the world, in which all of your implicit assumptions turn out to be true, but which might be irrelevant in the real world. AI drives are an example here. If you conjecture the sudden invention of an expected utility maximizer that quickly makes huge jumps in capability, then AI drives are much more of a concern than they are in the context of a gradual development of tools that become more autonomous as they get better at understanding and doing what humans mean.
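For concreteness, here is a minimal sketch (Python, all names hypothetical) of what “expected utility maximizer” denotes in this discussion: an agent that ranks actions by probability-weighted utility. The “AI drives” argument is about what a sufficiently capable agent of this shape tends to do instrumentally, not about this toy loop itself.

```python
# Minimal sketch (hypothetical names): an expected utility maximizer picks
# the action whose probability-weighted utility is highest.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# Toy example: even in this trivial form, the action that secures more
# resources wins whenever it raises the odds of high-utility outcomes --
# the kind of instrumental pressure the "AI drives" argument generalizes.
actions = {
    "acquire_resources": [(0.9, 10.0), (0.1, 0.0)],
    "do_nothing": [(1.0, 1.0)],
}
print(best_action(actions))  # -> acquire_resources
```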
You criticize mere arguments and then respond with some of your own. Of all the non-normal LessWrong memes, the orthogonality thesis doesn’t strike me as particularly out there.
The basic arithmetic of AI risk is: [orthogonality thesis] + [agents more powerful than us seem feasible with near-future technology] + [the large space of possible goals] = [we have to be very careful building the first AIs].
These seem like conservative conclusions derived from conservative assumptions. You don’t even have to buy recursive self-improvement at all.
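One way to make that arithmetic explicit is as a propositional sketch (the symbols and glosses below are my shorthand, nothing more):

```latex
% Propositional sketch of the argument above (my shorthand):
%   O : orthogonality thesis -- intelligence and goals vary independently
%   F : agents more powerful than us seem feasible with near-future technology
%   G : the space of possible goals is large
%   C : we have to be very careful building the first AIs
\[
  O \;\land\; F \;\land\; G \;\Longrightarrow\; C
\]
```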
Ironically, I think the blog you posted was an example of rank scientism. I mean, sure, induction is great. But by his reasoning, we really shouldn’t worry about global warming until we’ve tested our models on several identical copies of Earth. He thinks if it’s not physics, then it’s tarot.
I agree with many of your criticisms of MIRI. It was (as far as I can tell) extremely poorly run for a very long time, but don’t go throwing out the apocalypse with the bathwater. Isn’t it possible that MIRI is a dishonest cult and that AI is extremely likely to kill us all?
I feel like citing Malthus as an example of something that struck you as starkly true is a poor argument, given how Malthus’s predictions turned out.