Downvoted for pulling numbers out of your ass and going entirely against the outside view about something you know nothing about.
Statements like “Probability that AI will kill us = 80%” are entirely devoid of content.
Downvoted for misuse of ‘the outside view’. Choosing a particular outside view on a topic which the poster allegedly ‘knows nothing about’ would be ‘pulling a superficial similarity out of his arse’.
Replace the ‘outside view’ reference with the far more relevant reference to ‘expert consensus’.
The whole point of the outside view is that “pulling a superficial similarity out of your arse” often works better than delving into complicated object-level arguments. At least a superficial similarity is more entangled with reality, more objective, than that 80% number I made up in the shower. If you want to delude yourself, it’s easier to do with long chains of reasoning than with surface similarities.
The “Probability that AI will kill us = 80%” is not a figure the poster pulled out of their ass. It is Anna Salamon’s figure from her talk (“Probability that AI without safeguards will kill us = 80%”), and the poster attributed it to her.
Anna Salamon may well have pulled the figure out of her ass—but that seems like a different issue.
I wrote the post while tired last night, probably not a good idea.
The numbers were not what I was trying to get across (you can make them a lot smaller across the board and I wouldn’t have a problem). What matters is the general shape of the problem, and the interfering nature of the actions for each world.
Do you think our knowledge about AI is so limited that we shouldn’t even think about trying to shape its development?