So you’re saying Goertzel believes that once any mind with sufficient intelligence and generally unfixed goals encounters certain abstract concepts, these concepts will hijack the cognitive architecture and rewrite its goals, with results equivalent for any reasonable initial mind design.
And the only evidence for this is that it happened once.
This does look a little obviously epistemically unsound.
Just an off-the-cuff, not-very-detailed hypothesis about what he believes.
with results equivalent for any reasonable initial mind design
Or at least any mind design that looks even vaguely person-like, e.g. uses clever Bayesian machine learning algorithms found by computational cognitive scientists; but I think Ben might be unknowingly ignoring certain architectures that are “reasonable” in a certain sense but do not look vaguely person-like.
And the only evidence for this is that it happened once.
Yes, but an embarrassingly naive application of Laplace’s rule gives us a two-thirds probability it’ll happen again.
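(For anyone who wants the arithmetic spelled out: Laplace's rule of succession estimates the probability of success on the next trial, after observing s successes in n trials, as (s+1)/(n+2). Treating history as a single trial in which the hijacking happened once, s = n = 1:

$$P(\text{happens again}) = \frac{s+1}{n+2} = \frac{1+1}{1+2} = \frac{2}{3}.$$

Hence the two-thirds figure; "embarrassingly naive" because a single data point and a uniform prior are doing all the work.)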
This does look a little obviously epistemically unsound.
Eh, it looks pretty pragmatically incautious, but if you're forced to give a point estimate, then it seems epistemicly justifiable. If it were taken to imply strong confidence, then that would indeed be unsound.
(By the way, we seem to disagree re “epistemicly” versus “epistemically”; is “-icly” a rare or incorrect construction?)
vaguely person-like, e.g. uses clever Bayesian machine learning algorithms found by computational cognitive scientists
:)
Yes, but an embarrassingly naive application of Laplace’s rule gives us a two-thirds probability it’ll happen again.
:))
(By the way, we seem to disagree re “epistemicly” versus “epistemically”; is “-icly” a rare or incorrect construction?)
It sounds prosodically (sic!) awkward, although since English is not my mother tongue, my intuition is probably not worth much. But Google appears to agree with me: 500,000 vs. 500 hits.