I’d be delighted to talk about this. I am of the opinion that existing frontier models are within an order of magnitude of a human mind on existing hardware. It will be interesting to see how a sensible person arrives at a different conclusion.
I am also trained as an electrical engineer, so we’re already thinking from a common point of view.
I brought it up with my father again, and he backpedaled: he said he was mostly making educated guesses from limited information, that he knows he really doesn’t know very much about current AI, and that he isn’t interested enough to debate it with strangers online. He’s in his 70s and figures that if AI does eventually destroy the world, it probably won’t be in his own lifetime. :/
He might also argue, “Even if you can match a human brain with a billion-dollar supercomputer, it still takes a billion-dollar supercomputer to run your AI, and you can make, train, and hire an awful lot of humans for a billion dollars.”