I’m not sure whether you thought of it while reading my comment or whether it’s generally your go-to advice, but I may have accidentally given the wrong impression of how much I prioritize work over being around other people. It’s good to be actively reminded of it, though, for entropy reasons, so I appreciate it.
I admit that what I know about AI Safety comes from reading posts rather than talking with the experts about their meta-level ideas, but that doesn’t match the impression I got. CEV, for example, deals directly with the ethical mess of deciding which people’s values are worth including, and the discussion around it seemed to me to carry a strongly negative prior against anyone having the power to decide whose values are good enough. Elon’s proposal comes with its own set of problems; the two that stick out to me are coordination problems between multiple AGIs, and the fact that grid-linking doesn’t completely solve the alignment problem, since we’d still be far inferior to a good AGI.
Thanks.