calorie-constrained massively-parallel slow-firing neurons built according to DNA)
coming back to this post, I had the thought: no amount of nanotechnology will ever change this general pattern. it cannot. no machine you’d rather build could, even in principle, be anything other than an equivalent of that, if you want to approach energy efficiency limits while also keeping a low error rate. you always want self-correction everywhere, low energy overhead in the computing material, and so on. you can put a lot more compute together, but when that computronium cooperates with itself, you can bet that as part of the denoising process it’s gonna have to do forgiveness for mistakes that had supposedly been ruled out already.
computronium-to-computronium friendliness is the first problem we must solve, if we wish to solve anything.
and I think we can prove things about the friendliness of computronium despite the presence of heavy noise, if we’re good at rotating the surfaces of possibility in response to noise.
if your ideal speedup model is a diffusion model implemented via physical systems optimization, which it very much seems like it might be, then what we really want to prove is something about agents’ volumetric boundaries in the universe, and their personal trajectories within those boundaries. because of the presence of noise, we can only ever verify such a claim up to a margin anyway.
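(one hypothetical way to make “up to a margin” concrete, not something claimed in the post itself: if verifying an agent’s claimed boundary reduces to making independent noisy spot checks, a standard concentration bound tells you how many checks buy a given margin and confidence. the function name and numbers below are mine.)

```python
import math

def samples_needed(margin: float, confidence: float) -> int:
    """Hoeffding bound: number of independent noisy checks needed so the
    observed violation rate is within `margin` of the true rate with
    probability at least `confidence`. Illustrative sketch only."""
    delta = 1.0 - confidence
    return math.ceil(math.log(2.0 / delta) / (2.0 * margin ** 2))

# e.g. verifying "stays inside its claimed boundary" to within a 1% margin,
# with 99.9% confidence, costs on the order of 38,000 spot checks
print(samples_needed(margin=0.01, confidence=0.999))
```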
there’s something important to understand here, I think. you never want to get rid of the efficiency. you want to improve your signal-to-noise ratio, sure, but the shape of physics requires efficient brains to be physically shaped in some set of ways if they want to be fast. those ways are varied, and include many, many communication shapes besides our own; but you always want to be volumetric.
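(a toy illustration of the volumetric point, with my own numbers and no claim about any particular substrate: at fixed signal speed and packing density, worst-case latency across N units scales with the linear extent of the arrangement, roughly N^(1/d) in d dimensions, so a volume beats a sheet as N grows.)

```python
# worst-case distance between units grows like N**(1/d) for N units
# packed at fixed density in d dimensions, so a volume beats a sheet
# as N grows. purely illustrative scaling, no constants claimed.
def linear_extent(n_units: float, dims: int) -> float:
    return n_units ** (1.0 / dims)

for n in (1e6, 1e9, 1e12):
    sheet, volume = linear_extent(n, 2), linear_extent(n, 3)
    print(f"N={n:.0e}: sheet extent ~{sheet:.0f}, volume extent ~{volume:.0f}")
```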
and volumetric self-verification introduces a really difficult coordination problem, because now you have an honesty problem if you try to do open-source game theory. in a distributed system, nodes can lie about their source code, easy! and it’s really hard to do enough error checking to be sure everyone was honest. you can do it, but do you really want to spend that much energy, all the time? it seems like heavy error checking is something to do when there’s a shape you want to be sure you fully repaired away: messes such as cancers, diseases, and so on. your immune system doesn’t need to verify everyone at all times.
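(to put a toy number on that energy tradeoff, entirely my own sketch: instead of verifying everyone at all times, audit each interaction with some small probability p. a persistent liar still gets caught with high probability over repeated interactions, while the steady-state cost stays at p times the audit cost.)

```python
# random spot-checking instead of verifying everyone, all the time:
# audit each interaction with probability p_audit. a persistent liar is
# caught within k interactions with probability 1 - (1 - p_audit)**k,
# while honest nodes only pay p_audit * audit_cost per interaction.
def detection_probability(p_audit: float, interactions: int) -> float:
    return 1.0 - (1.0 - p_audit) ** interactions

def expected_cost_per_interaction(p_audit: float, audit_cost: float) -> float:
    return p_audit * audit_cost

for p in (0.01, 0.05, 0.25):
    print(f"p={p}: caught within 100 interactions "
          f"{detection_probability(p, 100):.1%}, "
          f"cost/interaction {expected_cost_per_interaction(p, 1.0):.2f}")
```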
in order to ensure every local system prevents honesty problems, each one needs to be able to output a verification of itself. but those verifications can become uncorrelated with their true purpose when one of the steps fails invisibly, and building a sufficient network of verifications of a self-perception is doable but quite hard.
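(a minimal sketch of that failure mode, assuming nothing about real attestation schemes and with all names made up: a node that checks itself with its own possibly-corrupted checker will happily report “verified”, which is exactly the verification becoming uncorrelated with its purpose. an independent peer re-deriving the same check is one node of the network of verifications, and it only helps insofar as the peer’s checker didn’t fail in the same invisible way.)

```python
import hashlib

def digest(state: bytes) -> str:
    return hashlib.sha256(state).hexdigest()

class Node:
    def __init__(self, state: bytes, checker_corrupted: bool = False):
        self.state = state
        self.checker_corrupted = checker_corrupted
        self.claimed = digest(state)  # claim recorded before any corruption

    def self_verify(self) -> bool:
        # a corrupted checker fails invisibly: it just says "fine"
        if self.checker_corrupted:
            return True
        return digest(self.state) == self.claimed

def peer_verify(node: Node) -> bool:
    # an independent peer re-derives the check from the node's raw state
    return digest(node.state) == node.claimed

node = Node(b"original source", checker_corrupted=True)
node.state = b"silently modified source"   # the invisible failure
print(node.self_verify())   # True  -- verification uncorrelated with reality
print(peer_verify(node))    # False -- cross-verification catches it
```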
it seems like ultimately what you’re worried about is that an ai will want to found an ai-only state out of disinterest in letting us spend our energy to self-improve a little more slowly.