I feel like once we basically understand how the human predictive algorithm works, it may not be possible to improve on that algorithm (without massive and time-costly experimentation) no matter what the level of intelligence of the entity trying to improve on it. (The reason I gave: The human one has been developed by trial-and-error over millions of years in the real world, a method that won’t be available to the GMAGI. So there’s no guarantee that a greater intelligence could find a way to improve this algorithm without such extended trial-and-error)...
The “I feel” opening is telling. It does seem like the only way people can maintain this confusion beyond 10 seconds of thought is by keeping to the realm of intuition. In fact, one of the first improvements that could be made to the human predictive algorithm would be to remove our tendency to let feelings and preferences get all muddled up with our abstract thought.
Given his influence, he seems to be worth the time it takes to try to explain to him how he is wrong?
It does seem like the only way people can maintain this confusion beyond 10 seconds of thought...
The only way to approach general intelligence may be to emulate the human algorithms. The view that we are capable of inventing a simple, artificial algorithm exhibiting general intelligence is not a mainstream opinion among AI and machine learning researchers. And even if one assumes that all those scientists are not nearly as smart and rational as SI folks, they seem to have a considerable head start when it comes to real-world experience with the field of AI and its difficulties.
I actually share the perception that we have no reason to suspect that we could reach a level above ours without massive and time-costly experimentation (removing our biases merely sounds easy when formulated in English).
The “I feel” opening is telling.
I think that you might be attributing too much to an expression uttered in an informal conversation.
In fact, one of the first improvements that could be made to the human predictive algorithm would be to remove our tendency to let feelings and preferences get all muddled up with our abstract thought.
What do you mean by “feelings” and “preferences”? The use of intuition seems to be universal, even within the field of mathematics. I don’t see how computationally bounded agents could get around “feelings” when making predictions about subjects that are only vaguely understood and defined. Framing the problem in technical terms like “predictive algorithms” doesn’t change the fact that making predictions about poorly understood subjects is error-prone.
Given his influence, he seems to be worth the time it takes to try to explain to him how he is wrong?
Yes. He just doesn’t seem to be someone whose opinion on artificial intelligence should be considered particularly important. He’s just a layman making the typical layman guesses and mistakes. I’m far more interested in what he has to say on warps in spacetime!