IMO a fun project (for those of us who enjoy this stuff but are clearly not smart enough to be on a Singularity development team): create an object-based environment with rule-based reproducing agents and customizable, explicit world rules (as in a computer game, not as in physics), and let them evolve. Maybe users across the world could add new magical artifacts and watch the creatures fail hilariously...
On a more related note, the post sounds ominous for any hope of general AI. There may be no clear distinction between a protein computer and mere protein, between learning and blindly acting. If we and our desires are in that position, wouldn't any AI we build be indirectly blind as well? As I understand it, Eliezer seems to think we can (or had better) bootstrap, both in intelligence/computation and in morality. For him this bootstrapping, that is, an understanding/theory of the generality of our own intelligence (step one of the bootstrapping), seems to be the Central Truth of Life (tm). Maybe he's right, but to me, with less insight into intelligence, that's not self-evident. And he never explained this crucial point clearly anywhere; he only advocated it. Come on, it's not as if many people could become seed AI programmers even armed with that Central Truth. (But who knows.)
Evolutionary biologists basically do this already, minus the interactivity: they build and run computer simulations of rule-based reproducing agents.
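The core of such a simulation can be surprisingly small. Here is a minimal sketch (all names and parameters are my own illustrative choices, not anything from the thread): agents are bitstring genomes, the explicit "world rule" is a fitness function rewarding matches to a target pattern, and reproduction is fitness-proportional selection with point mutation.

```python
import random

random.seed(0)  # reproducible toy run

GENOME_LEN = 32       # genes per agent (illustrative choice)
TARGET = [1] * GENOME_LEN  # the explicit "world rule": all-ones is ideal
MUTATION_RATE = 0.01  # per-gene flip probability on reproduction
POP_SIZE = 100

def fitness(genome):
    """World rule: fitness = number of genes matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def reproduce(parent):
    """Copy the parent's genome, flipping each gene with small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in parent]

def step(population):
    """One generation: fitness-proportional selection, then mutation."""
    weights = [fitness(g) + 1 for g in population]  # +1 avoids zero weights
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    return [reproduce(p) for p in parents]

def mean_fitness(population):
    return sum(fitness(g) for g in population) / len(population)

# Random initial population, then 50 generations of selection.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
start = mean_fitness(population)
for _ in range(50):
    population = step(population)
end = mean_fitness(population)
print(f"mean fitness: {start:.1f} -> {end:.1f}")
```

The user-added "magical artifacts" of the original idea would just be extra terms in (or replacements of) the fitness function; the interesting failures come from agents whose rules were tuned for the old world rule.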