I love being accused of being GPT-x on Discord by people who don’t understand scaling laws and think I own a planet of A100s
There are some hard and mean limits to explainability, and there's a real issue that a person who correctly sees how to align AGI, or who correctly perceives that an AGI design is catastrophically unsafe, may not be able to explain it. It can take super-intelligence to cogently expose stupid designs that will kill us all. What are we going to do if there's this kind of coordination failure?