Link post
I’m curious what y’all think of the points made in this post against AI risk, written by two AI researchers at Princeton. If you have reason to think any of the points are particularly good or bad, write it in the comments below!
This was already referenced here: https://www.lesswrong.com/posts/MW6tivBkwSe9amdCw/ai-existential-risk-probabilities-are-too-unreliable-to
I think it would be better to comment there instead of here.
That post was completely ignored here: 0 comments and 0 upvotes during the first 24 hours.
I don’t know if it’s the timing or the content.
On HN, which is where I saw it, it was ranked #1 briefly, as I recall. But then it got “flagged”, apparently.
Good point!
This post was worth looking at, although its central argument is deeply flawed.
I commented on the other linkpost: https://www.lesswrong.com/posts/MW6tivBkwSe9amdCw/ai-existential-risk-probabilities-are-too-unreliable-to?commentId=fBsrSQBgCLZd4zJHj
The post isn’t even against AI doom. It is against the idea that you can communicate high confidence in AI doom to policymakers.