“If you search for “potatoes” the engine could choose to return results for “tomatoes” instead...but will choose to return results for potatoes because it (roughly speaking) wants to maximize the usefulness of the search results.”
“If I give you a dollar you could choose to tear it to shreds, but you instead will choose to put it in your wallet because (roughly speaking) you want to xyz...”
When you flip the light switch “on” it could choose not to allow current through the system, but it will let current flow because it wants current to flow through the system when it is in the “on” position.
Except for degree of complexity, what’s the difference? “Choice” can be applied to anything modeled as an Agent.
When you flip the light switch “on” it could choose not to allow current through the system, but it will let current flow because it wants current to flow through the system when it is in the “on” position.
Sorry, I read this as nonsense. What does it mean for a light switch to “want”?
To determine the “preferences” of objects which you are modeling as agents, see what occurs, and construct a preference function that explains those occurrences.
Example: This amoeba appears to be engaging in a diverse array of activities which I do not understand at all, but they all end up resulting in the maintenance of its physical body. I will therefore model it as “preferring not to die”, and use that model to make predictions about how the amoeba will respond to various situations.
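(Not from the thread, but to make the “construct a preference function” move concrete, here is a minimal Python sketch of the idea: observe what an entity does, pick the candidate preference function that best explains those observations, and then use the fitted preference to predict its behavior. All names, observations, and candidate preferences below are made up for illustration.)

```python
from typing import Callable

def fit_preference(observations: list[str],
                   candidates: dict[str, Callable[[str], float]]) -> str:
    """Pick the candidate preference (utility) function that best explains
    what actually occurred, scored by total utility over the observations."""
    return max(candidates,
               key=lambda name: sum(candidates[name](o) for o in observations))

def predict(options: list[str], utility: Callable[[str], float]) -> str:
    """Predict the option the modeled agent will take: the one it 'prefers' most."""
    return max(options, key=utility)

# Observed amoeba behavior: a diverse array of activities, all of which
# end up maintaining its physical body (hypothetical data for illustration).
observed = ["engulf food", "retreat from toxin", "repair membrane"]
body_maintaining = set(observed)

# Candidate preference functions the modeler is willing to entertain.
candidates = {
    "prefers not to die": lambda o: 1.0 if o in body_maintaining else 0.0,
    "prefers to wander":  lambda o: 1.0 if o == "wander randomly" else 0.0,
}

best = fit_preference(observed, candidates)
print(best)                                                  # prefers not to die
print(predict(["swim toward toxin", "retreat from toxin"],   # retreat from toxin
              candidates[best]))
```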
I think the light switch example is far-fetched, but the search engine isn’t. The point is whether there exists a meaningful level of description at which framing the system’s behavior in terms of making choices to satisfy certain preferences is informative.
Don’t forget that the original context was morality.
You don’t think it is far-fetched to speak of the morality of search engines?
Yes, it is.