“learning from evolution” even more complicated (evolution → protein → something → brain vs. evolution → protein → brain)
ah, no, this isn’t what I’m saying. Hm. Let me try again.
The following is not a handwavy analogy, it is something which actually happened:
Evolution found the human genome.
The human genome specifies the human brain.
The human brain learns most of its values and knowledge over time.
Human brains reliably learn to care about certain classes of real-world objects like dogs.
Therefore, somewhere in the “genome → brain → (learning) → values” process, there must be a process which reliably produces values over real-world objects. Shard theory aims to explain this process. The shard-theoretic explanation is actually pretty simple.
Furthermore, we don’t have to rerun evolution to access this alignment process. For the sake of engaging with my points, please forget completely about running evolution. I will never suggest rerunning evolution, because it’s unwise and irrelevant to my present points. I also currently don’t see why the genome’s alignment process requires more than crude hard-coded reward circuitry, reinforcement learning, and self-supervised predictive learning.
That does seem worth looking at, and there are probably ideas worth stealing from biology. I’m not sure you can call that a robustly aligned system that’s getting bootstrapped, though. Existing in a society of (roughly) peers, with no huge power disparity between any given person and the rest of humanity, is analogous to the AGI that can’t take over the world yet. Humans who acquire significant power do not seem aligned with respect to what a typical person would profess to, and outwardly seem to, care about.
I think your point still mostly follows despite that: even though humans can be deceptive and power-seeking, there’s an astounding amount of regularity in what we end up caring about.
Yes, this is my claim. Not that, e.g., >95% of people form values which we would want an AGI to form.