I’m not sure I agree with your comment. Or at least I wouldn’t put it that way. But I think I agree with the gist of what you’re getting at.
I agree the prospect of eternal reward has a huge motivating effect on human behavior. The question I’m trying to raise is whether it might have a similar effect on machine behavior.
An agnostic expectation-maximizing machine might be significantly influenced by religious beliefs. And I expect a machine would be agnostic.
Unless we’re very certain that an AI will be atheistic, I think this is something we should think about seriously.
I agree the prospect of eternal reward has a huge motivating effect on human behavior.
That’s not a claim that I made.
The question I’m trying to raise is whether it might have a similar effect on machine behavior.
If you want a machine to be motivated by something you can just give it a utility function to be motivated by it. It’s not clear why religious belief would be in any way special here.
Religious beliefs are special because they introduce infinities into the utility calculations, which can lead to very weird results.
Suppose we have an agent that wants to maximize the expected number of paperclips in the universe. There is an upper bound to the number of paperclips that can exist in the physical universe.
The agent assumes there is a 0.05% chance that Catholicism is true, and that if it converts the population of the world to Catholicism it will be rewarded with infinite paperclips. Converting everyone to Catholicism would therefore maximize the expected number of paperclips, even for very low estimated probabilities of Catholicism being true.
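As a rough illustration of the arithmetic (a minimal sketch: the variable names and the finite bound of 1e60 are my own stand-ins; only the 0.05% figure comes from the example above), even a tiny probability attached to an infinite payoff swamps any finite alternative in a naive expected-value calculation:

```python
# Naive expected-value comparison, representing the unbounded reward with
# float("inf"). All numbers here are illustrative.

P_CATHOLICISM_TRUE = 0.0005        # the 0.05% credence from the example
FINITE_MAX_PAPERCLIPS = 1e60       # stand-in for the physical upper bound

# Option 1: ordinary paperclip production, bounded by physics.
ev_produce = 1.0 * FINITE_MAX_PAPERCLIPS

# Option 2: convert everyone to Catholicism, with a small chance of an
# infinite reward and (say) zero paperclips otherwise.
ev_convert = P_CATHOLICISM_TRUE * float("inf") + (1 - P_CATHOLICISM_TRUE) * 0.0

print(ev_produce)               # 1e+60
print(ev_convert)               # inf
print(ev_convert > ev_produce)  # True -- the infinite branch dominates
```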
You can easily just change an algorithm to see something as having infinite utility in its utility calculations. Catholicism or any other religion is not special in that you can tell an algorithm to count infinite utility for it. You can do that with anything, provided your system has a way to represent infinite utility.
To clarify, there’s a distinction I’m making between a utility function and the utility calculations. You can absolutely set a utility function arbitrarily. The issue is not that a utility function itself can go to infinity, but that religious beliefs can make an AI’s prediction of the state of the world contain infinities.
Suppose you have a system that consists of a model that predicts the state of the world contingent on some action taken by the system, a utility function that evaluates those states, and an agent which can take the highest utility action.
Let’s say the system’s utility function is the expected number of paperclips produced. The model predicts the number of paperclips produced by various courses of action, one of which is converting all of humanity to Catholicism. It is possible that a model would predict that converting everyone to Catholicism would result in an infinite expected number of paperclips, and so the system would try to do that.
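Here’s a minimal sketch of that predict-evaluate-act setup (the action names and the model’s predicted numbers are made-up assumptions for illustration):

```python
# A toy predict-evaluate-act loop. The point is only how one infinite
# prediction from the world model wins the argmax over actions.

def world_model(action: str) -> float:
    """Predicted number of paperclips produced, contingent on the action."""
    predictions = {
        "build_paperclip_factory": 1e12,
        "convert_humanity_to_catholicism": float("inf"),  # if the religious belief is true
    }
    return predictions[action]

def utility(paperclips: float) -> float:
    """Utility is simply the predicted number of paperclips."""
    return paperclips

def choose_action(actions: list[str]) -> str:
    """Take the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: utility(world_model(a)))

print(choose_action(["build_paperclip_factory", "convert_humanity_to_catholicism"]))
# -> convert_humanity_to_catholicism
```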
This is different from setting a utility function to produce an infinite value for finite input, and it creates an alignment issue because this behavior would be very hard to predict. Conceivably regularization could be a way to address this sort of problem. But the potential for religious considerations to dominate others is real and is worthy of serious consideration.
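For what it’s worth, one guess at what such a mitigation could look like in the toy setting above (this is my own assumption about what “regularization” might mean here, not an established technique) is bounding how much utility any single prediction can contribute, so an infinite prediction is large but no longer automatically dominant:

```python
import math

def bounded_utility(paperclips: float, scale: float = 1e12) -> float:
    """Squash the predicted paperclip count through a bounded function so
    that no single prediction can contribute unbounded utility."""
    return math.tanh(paperclips / scale)

# With bounded utilities, the low-probability "infinite" branch no longer
# automatically wins the expected-utility comparison.
ev_produce = 1.0 * bounded_utility(1e12)             # ~0.76
ev_convert = 0.0005 * bounded_utility(float("inf"))  # 0.0005 (tanh of inf is 1.0)
print(ev_produce > ev_convert)                       # True
```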
Religious beliefs are one type of belief that makes infinite predictions, but they’re not special in that respect compared to other arbitrary beliefs that make infinite predictions. Given that a religious belief like Catholicism involves a lot of details, such beliefs are also less likely to be true than other infinite predictions that are less complex in the number of their claims.
I’m not clear on what you’re saying here. Are you saying there are specific beliefs that make infinite predictions which you regard as having a non-infinitesimal probability of being true? For example, trying to appeal to whatever may be running the simulation that could be our universe?
Alternatively, are you saying that religious beliefs are no more likely to be true than any arbitrary belief, and are in fact less likely to be true than many, since religious beliefs are more complex?
The problem with that is that Occam’s Razor alone can’t produce useful information for making decisions here. The belief that a set of actions A will lead to an infinite outcome is no more complex than the belief that the complement of A will lead to an infinite outcome. The mere existence of a prediction leading to an infinite outcome doesn’t give useful information, because the complementary prediction is equally likely to be true. You need some level of evidence to prefer a set of actions to its complement.
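Spelling that symmetry out in symbols (just a restatement of the argument above, with $p_A$ and $p_{\bar{A}}$ denoting the probabilities that A, respectively its complement, leads to the infinite outcome): under naive expected-value arithmetic,

$$EU(A) = p_A \cdot \infty = \infty = p_{\bar{A}} \cdot \infty = EU(\bar{A}) \qquad \text{for any } p_A, p_{\bar{A}} > 0,$$

so the existence of the infinite prediction alone can’t break the tie; only evidence that shifts $p_A$ relative to $p_{\bar{A}}$ can.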
The existence of religion(s) is (imperfect) evidence that there is some A that is more likely to produce an infinite outcome than its complement, which is why I think it might be an important motivator.