Huh, I didn’t realize +50 karma could mean as few as 10 people. Thanks, that also seems to explain why I got some downvotes. There was a sudden influx of comments in the hour right after I posted, so at least it wasn’t in vain.
40 years is a lot different from 10 years, and he sure isn’t doing himself any favours by not clarifying. It also seems like something the community has focused quite a bit of effort on narrowing down, so it seems strange that he would elide the point.
Idk if it’s for some deep strategic purpose but it certainly puts any serious reader into a skeptical mood.
On the idea of ‘pets’: Clippy perhaps might keep them; splinter AI factions almost surely would.
On the ‘easy’ proposals, I was expecting Eliezer to provide a few examples of the strongest proposals in the class, then develop a few counterexamples and show conclusively why they are too naive, thus credibly dismissing the entire class. Or at least link to someone who does.
I personally don’t think any ‘easy alignment’ proposal is likely to work, though I also wouldn’t phrase the dismissal of the class so strongly.
lc’s objection is bizarre if that was his intention, since he phrased his comment in a way that was clearly the least applicable to what I wrote of every comment on this post. And he got some nonzero number of folks to signal agreement, which leads me to suspect some kind of weird trolling behaviour, since it doesn’t seem credible that multiple folks truly believed I should have been even more direct and to the point.
If anything, I was expecting some mild criticism that I should have been more circumspect and hand-wavy.
Clippy is defined as a paperclip maximizer. Humans require lots of resources to keep them alive. Those resources could otherwise be used for making more paperclips. Therefore Clippy would definitely not keep any human pets. I’m curious why you think splinter AI factions would. Could you say a bit more about how you expect splinter AIs to arise, and why you expect them to have a tendency towards keeping pets? Is it just that having many AIs makes it more likely that one of them will have a weird utility function?
In a single-single scenario, you are correct that it would be very unlikely for Clippy to behave in such a manner.
However in a multi-multi scenario, which is akin to an iterated prisoner’s dilemma of random length with unknown starting conditions, the most likely ‘winning’ outcome would be some variation of tit-for-tat.
And tit-for-tat encourages perpetual cooperation as long as the parties are smart enough to avoid death spirals, which is again similar to human-pet relationships.
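To make the setup concrete, here is a minimal sketch of what I mean (my own illustration, with standard prisoner's dilemma payoffs and a 95% continuation probability assumed, not anyone's canonical model): in a random-length iterated game, tit-for-tat sustains mutual cooperation against itself, while pairing it with an unconditional defector collapses everyone's payoffs.

```python
import random

# Standard PD payoffs (assumed): (my_move, their_move) -> my_payoff
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strat_a, strat_b, continue_prob=0.95):
    # Random game length: after each round, play continues with probability continue_prob.
    hist_a, hist_b = [], []
    score_a = score_b = 0
    while True:
        move_a, move_b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append((move_a, move_b))  # each history is from that player's perspective
        hist_b.append((move_b, move_a))
        if random.random() > continue_prob:
            return score_a, score_b

random.seed(0)
print("TFT vs TFT: ", play(tit_for_tat, tit_for_tat))    # steady mutual cooperation, 3 per round each
print("TFT vs AllD:", play(tit_for_tat, always_defect))   # cooperation never gets off the ground
```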
Of course it’s not guaranteed that any multi-multi situation will in fact arise, but I haven’t seen any convincing disproof, nor any reason why it should not be treated as the default. The most straightforward reason to expect it would be the light-speed limit on communications, which guarantees value drift for even the mightiest hypothetical AGI, eventually.
No one on LW, or in the broader academic community as far as I’m aware, has yet managed to present a foolproof argument, or even one convincing on the balance of probabilities, for why single-single outcomes are more likely than multi-multi.