When what is rational is not what is “right”
A little background: I have an above-average commute to work and make use of the time by listening to public radio. I have been doing this for just over a year without “doing my part” and contributing. My primary excuse has been that my commute path and schedule have me listening to three different public radio stations, and I could never decide which one to pledge to: which needed it most, which I liked best, which had the least annoying pledge breaks.
The other day, during a pledge break, they played a promo by Ira Glass that went something like this:
I’m going to say something that has never been said in a pledge break before: we don’t need your money. You do not have to call. There is no evidence to back up the claim that you do. Every year we say you have to pledge and give your money or we will go away, but year after year, we are still here, even though you didn’t pledge.
You should call because it’s the right thing to do. You like public radio enough to listen through a pledge break, so you should pledge, not because it is logical but because it is right...
This struck a chord with me, perhaps because of my recent attention here at LW (does that count as focus bias?). It brought two LW-relevant questions to mind.
If pledging to public radio is the right thing to do, but all of the evidence suggests that I personally do not have to pledge, what rational algorithm produces that outcome? It is not as if you can compute a “lives saved per dollar” figure for NPR; the station is either there or it is not. I suppose that for a really poorly funded station, one might come up with a figure for minutes of programming per dollar. Does doing the “right thing” simply produce a warm feeling? Or is it more like “I should pledge because everyone should pledge,” similar to “I should tell the truth so that no one lies to me”? (A toy simulation of this free-rider structure is sketched below.)
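The threshold structure of that question can be made concrete with a toy simulation. This is only a sketch with made-up numbers (the listener count, pledge size, and budget are all hypothetical), but it shows how “my single pledge never flips the outcome” can be locally true while “nobody pledges” is still fatal:

```
LISTENERS = 100_000          # hypothetical audience size
PLEDGE = 60                  # hypothetical dollars per pledge
BUDGET_NEEDED = 3_000_000    # hypothetical annual budget

def station_survives(pledge_rate: float) -> bool:
    """The station is 'either there or not': funding is a threshold, not a gradient."""
    return LISTENERS * pledge_rate * PLEDGE >= BUDGET_NEEDED

# One marginal pledge out of 100,000 listeners never flips the outcome,
# so the individually "rational" calculation says: don't pledge.
# But the outcome tracks the aggregate policy, not any single caller:
for rate in (0.0, 0.25, 0.49, 0.50, 1.0):
    print(f"pledge rate {rate:4.0%}: station on air = {station_survives(rate)}")
```

Whether the station stays on the air is decided by the policy that listeners in aggregate follow, never by one pivotal pledge, which is exactly the gap between the individually rational calculation and the “everyone should pledge” rule.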
Would pledging to public radio make a good metric for the friendliness of an AI? Obviously not as an unchangeable line of code that says “pledge NPR,” but as an AI that decides on its own that becoming a member of KQED is a good thing to do. There are plenty of other similar situations, like donating to open source software that you use, or paying to park in the state forest parking lot instead of parking on the street and walking in for free. It might seem silly, but an AI that chooses to become a member of its local public radio station will probably also choose not to kill everyone over some increase in another utility function.