There’s the Game B discourse around creating social norms that defeat Moloch.
This answer is interesting, but underspecified for somebody who’s never heard of this. What is Game B? Where is it? Google just returns a bunch of board game links.
edit: Ah, finally got to https://www.gameb.wiki/
I’m not sure what the best point of entry is. Youtube videos like https://www.youtube.com/watch?v=HL5bcgpprxY do give some explanation.
I skimmed the wiki and watched the first 15 minutes of the video, and I still have no idea whether there is anything specific. So far it seems to me like a group of people who are trying to improve the world by talking to each other about how important it is to improve the world.
You seem to know something about it, could you please post three specific examples? (I mean, examples other than making a video or a web page about how Game B is an important thing.)
That’s a bit like saying: “What are all those AI safety people talking about? Can you please give me three specific examples of how they propose safety mechanisms should work?”
I haven’t seen easy answers or a good link for them either. At the same time, the project is one of the answers to the question in the OP.
I actually have been wondering about the safety mechanism stuff; if anyone wants to give examples of things that have actually been produced in AI alignment, I’d be interested in hearing about them.