“Search versus design” explores the basic way we build and trust systems in the world. A few notes:
My favorite part is the definition of an abstraction layer as an artifact combined with a helpful story about it. It helps me see the world as a series of abstraction layers. We’re not actually close to true reality; we are very much living within abstraction layers — the simple stories we are able to tell about the artifacts we build. A world built by AIs will be far less comprehensible than the world we live in today. (Much more like biology is, except made by something much smarter and faster than us instead of stupider and slower.)
The post takes the time to bring into the conversation a lot of other work that attempts to build simple stories about the AI artifacts we are creating, which I appreciate.
The post is, for me, simply written, and I understood all the examples and arguments.
It also attempts to (briefly) describe a novel direction for future work on the problem that systems built by selection are untrustworthy, and that’s exciting.
For looking at the alignment problem clearly and with a subtly different frame than other discussions, one that resonates with me and points to new frames for a solution, I am voting this post +9.