it’s not, in my view at least, useful as a means to dismiss any key point of ai safety—only to calibrate expectations about capabilities scaling curves.
Sure, the brain being thermodynamically efficient does constrain the capabilities scaling curves somewhat, but by far the biggest source of uncertainty is the relationship between the number of computations an agent performs and its ability to actually influence the world. I have no idea how many bit flips it takes to behave like an optimal Bayesian agent.
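(For scale, a rough back-of-envelope sketch in Python, with my own assumed numbers rather than anything from this thread: Landauer's bound caps how many irreversible bit operations per second a brain-like power budget allows, but it says nothing about how many such operations optimal-Bayesian-agent behavior actually requires, which is exactly where the uncertainty sits.)

```python
# Back-of-envelope sketch (my assumed numbers, not from the thread):
# Landauer's bound says each irreversible bit erasure costs at least
# k_B * T * ln(2) of energy. At roughly brain-like power (~20 W) and
# body temperature (~310 K), that caps irreversible bit ops per second,
# but says nothing about how many such ops an optimal Bayesian agent needs.

import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 310.0               # approximate body temperature, K (assumed)
power_watts = 20.0      # rough brain power budget, W (assumed)

energy_per_bit = k_B * T * math.log(2)            # Landauer limit, J per erasure
max_bit_ops_per_sec = power_watts / energy_per_bit

print(f"Landauer limit at {T:.0f} K: {energy_per_bit:.2e} J per bit erasure")
print(f"Upper bound on irreversible bit ops/s at {power_watts:.0f} W: {max_bit_ops_per_sec:.2e}")
```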
Hmm. Have you read all of jake’s posts, plus discovering agents, agents and devices, and the alberta plan? Or at least, if any of those are new to you, you could start with the abstracts. I don’t think an answer to your question already exists, and I find myself not quite able to just pop out a rephrasing of simulators in terms of discovering-agents agency or the like, but it feels like it’s not far out of reach to have a solid hunch about the big-O shape of possible answers. I still need to deep-read agents and devices, though; maybe that has what I need.