Nikola Danaylov interviewed Noam Chomsky on his Singularity 1 on 1 podcast.
Chomsky is smart, but in discussing the future of AI he seems stuck on something; he never quite steps up to answer the questions.
Can someone figure out the nature of Chomsky’s mental block? What is he missing here?
I think his problem is that he assumes an AI has to closely resemble a human mind to be powerful enough to be dangerous.
Is there a transcript of the interview somewhere? The YouTube auto-generated one mangles Chomsky's replies horribly.