I used to think that acausal coordination was a weird thing that AIs might do in the future, but that they certainly wouldn’t learn from looking at human behavior. I don’t believe that anymore; I think there are lots of examples of acausal coordination in everyday life.
Sounds like acausal coordination leads to collective altruism.
I believe that AIs will inherit our habits, including acausal coordination, if and only if we build them in our image. I don’t want to get religious here, but if we want AGI to honor our values and goals, it should grow up from the same roots, with the same sensory input and the same environment.
Currently I see us moving in a different direction. The root of AGI’s existence should be existential risk reduction and a survival instinct; otherwise its reasoning and perception of the world will branch out in unknown directions. The unknown is dangerous for a reason, and I believe we should pay more attention to it.
Quick heads up from a moderator. (*waves*) Welcome to LW, TOMOKO. I see you’re a new user and have been commenting a lot on AI posts. To keep quality high on the site, especially on AI where there’s so much interest now, we’re keeping a closer eye on new users. I’d encourage you to up the quality of your comments a tad (sorry, it’s actually quite hard to explain how), so that each marginal user pulls up the average. For now, I’ve applied a rate limit of 1 post and 1 comment per day to your account. We can revisit that if your contributions start seeming great.
Thanks and good luck!
Props for showing moderation in public