I am skeptical that AIs will do pure Bayesian updates—it’s computationally intractable.
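A toy illustration of why exact updating bites: even over a finite, explicitly enumerated hypothesis space, an exact Bayesian update must touch every hypothesis, and realistic spaces (all joint assignments to n binary variables, say) grow exponentially. Everything here is a made-up sketch, not any particular system's inference code:

```python
from itertools import product

def exact_posterior(n, likelihood):
    # Hypothesis space: all joint assignments to n binary variables.
    # Exact updating scores every one of the 2**n hypotheses.
    hypotheses = list(product([0, 1], repeat=n))
    weights = [likelihood(h) for h in hypotheses]
    total = sum(weights)
    return {h: w / total for h, w in zip(hypotheses, weights)}

# Toy likelihood: the evidence favors hypotheses with more 1s.
post = exact_posterior(10, lambda h: 2 ** sum(h))
print(len(post))  # 1024 hypotheses for n=10; n=100 would need 2**100
```

Already at n=100 the update is hopeless, which is why practical systems fall back on approximations (sampling, variational methods) rather than pure Bayesian updates.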
Isn’t this also true for expected utility-maximization? Is a definition of utility that is precise enough to be usable even possible? Honest question.
An AI is very likely to have beliefs or behaviors that are irrational...
Yes, I wonder why there is almost no talk about biases in AI systems. Ideal AIs might be perfectly rational but computationally limited, whereas real artificial systems will have completely new sets of biases. As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are none, just as humans do, but on very different occasions. Or take the answers of IBM Watson: some were wrong, but in completely new ways. That is a real danger, in my opinion.
As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are none, just as humans do, but on very different occasions.
I appreciate the example. It will serve me well. Upvoted.
Isn’t this also true for expected utility-maximization? Is a definition of utility that is precise enough to be usable even possible? Honest question.
Honest answer: Yes. For example, 1 utilon per paperclip.
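That reply can be made literal in a few lines. This is a hedged toy where `world` is just a multiset of object labels, not a claim about how a real agent would represent the world:

```python
from collections import Counter

def utility(world: Counter) -> int:
    # 1 utilon per paperclip, as stated; everything else is worth 0.
    return world["paperclip"]

world = Counter({"paperclip": 3, "human": 2})
print(utility(world))  # 3
```

The hard part, of course, is not writing this function but pinning down what counts as a "paperclip" and what a `world` state actually is, which is exactly the original question.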