Yes, I think so, and apparently so does Kahneman. I don’t think this is particularly controversial. Kahneman does say that positive reinforcement is more efficient (in both animals and humans).
Daniel Kahneman in Thinking, Fast and Slow:
I had stumbled onto a significant fact of the human condition: the feedback to which life exposes us is perverse. Because we tend to be nice to other people when they please us and nasty when they do not, we are statistically punished for being nice and rewarded for being nasty.
The reason for this lies in regression to the mean during training (his example of flight instructors in the Israeli Air Force):
I pointed out to the instructors that what they saw on the board coincided with what we had heard about the performance of aerobatic maneuvers on successive attempts: poor performance was typically followed by improvement and good performance by deterioration, without any help from either praise or punishment.
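To see the purely statistical effect, here is a minimal simulation sketch (all numbers invented for illustration): each landing score is modeled as a pilot’s stable skill plus random luck, pilots are split by their first score, and we look at the average change on the second attempt, with no praise or punishment anywhere in the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pilots = 10_000

skill = rng.normal(0.0, 1.0, n_pilots)        # stable ability per pilot
noise = rng.normal(0.0, 1.0, (n_pilots, 2))   # per-attempt luck
scores = skill[:, None] + noise               # two landing attempts each

worst = scores[:, 0] < np.percentile(scores[:, 0], 20)  # got "punished"
best = scores[:, 0] > np.percentile(scores[:, 0], 80)   # got "praised"

print("bottom 20% mean change:", (scores[worst, 1] - scores[worst, 0]).mean())
print("top 20% mean change:   ", (scores[best, 1] - scores[best, 0]).mean())
# The worst improve and the best decline on average, without any feedback:
# extreme first scores are partly luck, and luck does not repeat.
```

The bottom group improves and the top group deteriorates on average, exactly the pattern the instructors were (wrongly) attributing to their praise and punishment.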
Since positive reinforcement is so counterintuitive, don’t forget to reward yourself for rewarding somebody for good behaviour! :)
Me neither. I am actually not familiar with his work, but knew he was known in the singularity/transhumanism camp. I’ve heard two discussions with him (with Paul Krugman and on Singularity 1 on 1), and he came across as articulate and with a decent understanding of the issues. He talked about how he changed his mind and grew more skeptical of the singularity, but I don’t know what caused this hostile reaction… :) Oh well...
Beer with Charlie Stross in Munich
Note that this claim is distinct from the claim that (due to general economic theory) it’s more beneficial for the AIs to trade with us than to destroy us. We already have enough citations for that argument; what we’re looking for are arguments saying that destroying humans would mean losing something essentially irreplaceable.
I don’t think there are particularly good arguments in this department (the two quoted ones are certainly not correct). Beyond the trade argument, it might happen that it would be uneconomical for an AGI to harvest the atoms in our bodies.
As for “essentially irreplaceable”: in a very particular sense, the exact arrangement of particles in each human being at every second is “unique” and “essentially irreplaceable” (barring quantum mechanics). An extreme “archivist/rare art collector/Jain monk” AI might therefore want to keep these collections (or some snapshots of them), but I don’t find this too compelling. I am sure we could win a lot of sympathy if AGI could be shown to automatically entail some sort of ultimate compassion, but I think it is more likely that we have to make it so (hence the FAI effort).
If I want to be around to see the last moments of the Sun, I will feel a sting of guilt that the Universe is slightly less efficient because it is running me, rather than using those resources for some better, deeper-experiencing, more perceptive observer.
Thanks, this is a very useful explanation!
Alright, thank you. As far as the last paragraph goes, I of course took it on a more “metaphorical” level. I agree their evolutionary agent might be too restricted to be fully interesting (though it is valuable that their inferiority is demonstrated analytically, not only through simulations).
Since it seems you have lots of experience with IPD, what do you think about case B)? The paper makes the claim specifically for the ZD strategies, but do you think this “superrationality” result could generalize to any strategy that also has a theory of mind? On the other hand, Hofstadter’s idea was in the context of the one-shot PD, so this might not apply in general at all… I need to learn more about this subject...
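For anyone who wants to poke at this, here is a minimal sketch of a memory-one ZD strategy in play. The cooperation probabilities below are the χ=2 extortion strategy from the Press–Dyson framework (with φ = 1/18 and the standard payoffs T=5, R=3, P=1, S=0); the two opponents are just illustrative.

```python
import random

# Standard IPD payoffs: (my_move, their_move) -> my payoff
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

# Memory-one strategy: probability of cooperating given last round's
# (my_move, their_move). These are the chi=2 Press-Dyson extortion values.
EXTORT2 = {('C', 'C'): 8/9, ('C', 'D'): 1/2, ('D', 'C'): 1/3, ('D', 'D'): 0.0}

def play(probs, opponent, rounds=100_000, seed=0):
    rng = random.Random(seed)
    me, them = 'C', 'C'                    # arbitrary opening state
    my_total = their_total = 0
    for _ in range(rounds):
        my_next = 'C' if rng.random() < probs[(me, them)] else 'D'
        their_next = opponent(them, me)    # opponent sees (their last, my last)
        me, them = my_next, their_next
        my_total += PAYOFF[(me, them)]
        their_total += PAYOFF[(them, me)]
    return my_total / rounds, their_total / rounds

always_cooperate = lambda my_last, their_last: 'C'
tit_for_tat = lambda my_last, their_last: their_last  # copy the other's last move

for name, opp in [('ALLC', always_cooperate), ('TFT', tit_for_tat)]:
    zd, other = play(EXTORT2, opp)
    print(f"vs {name}: ZD averages {zd:.2f}, opponent averages {other:.2f}")
```

In this sketch the extortioner’s surplus over P against an unconditional cooperator is exactly twice the opponent’s (about 3.50 vs 2.25 per round), while plain tit-for-tat drags both players down to mutual defection, which is part of why the evolutionary and theory-of-mind angles in the paper matter.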
The assumption is that at least part of the panel already has the relevant domain-specific knowledge. There is some time investment to re-read and prepare for the discussion, of course (plus the technical part of editing, etc.). A monthly podcast could possibly be doable.
Paper: Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent
Hi everybody,
I’ve been lurking here for maybe a year and joined recently. I work as an astrophysicist and I am interested in statistics, decision theory, machine learning, cognitive and neuro-psychology, AI research and many others (I just wish I had more time for all these interests). I find LW to be a great resource and it introduced me to many interesting concepts. I am also interested in articles on improving productivity and well-being.
I haven’t yet attended any meet-up, but if there was one in Munich I’d try to come.
I’d certainly be willing to help somehow, but my wording was careful on purpose: I haven’t yet gotten through all the sequences and don’t think I could contribute much to the discussion at this stage. But I’d like to help with organizing and making this work.
I have no experience with podcasting, but I assumed it would be a lot of work. I now think a monthly podcast would be better and more feasible than a bi-weekly one. I was reluctant to post this suggestion because I know I don’t have the knowledge and time to drive it, but I hoped there might be people in the audience who like the idea and could bring it to fruition.
Less Wrong: The podcast
I’d recommend posting this variant to boardgamegeek.com to reach a wider audience. The “variants” forum for Pandemic is here. Kudos on the bias list, it’s really well done!
gwern is at http://www.gwern.net/. Very good content!