I think what you are describing is this: you have AI systems, and these systems are trained from a simulator, from human feedback, or by some other offline method.
We have benchmarked their performance, and for the domain the system operates in, for inputs within the latent space of the training set, the system is sufficiently safe. (The stakes of the task determine how much reliability is required.)
So the systems operate as limited-duration "sessions": they receive a stream of input, compute an output, and any unneeded short-term state is cleared once the session ends.
Moreover, each time they run, an input checker ensures that the input falls within the latent space of the training set described above.
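To make the input checker concrete, here is a minimal sketch. The class name, the feature-space representation, and the use of Mahalanobis distance are all my assumptions; a production system would use a learned density model or conformal method, but the shape of the gate is the same: score how far an input sits from the training distribution and refuse to run the model on anything past a threshold.

```python
import numpy as np

class InputChecker:
    """Gate that flags inputs falling outside the region covered by training data.

    Sketch only: Mahalanobis distance over a feature space stands in for
    "within the latent space of the training set". The threshold would be
    calibrated against the reliability the task's stakes demand.
    """

    def __init__(self, train_features: np.ndarray, threshold: float):
        self.mean = train_features.mean(axis=0)
        # Regularize the covariance so its inverse is well-defined.
        cov = np.cov(train_features, rowvar=False)
        self.inv_cov = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        self.threshold = threshold

    def in_distribution(self, x: np.ndarray) -> bool:
        # Distance from the training mean, scaled by training covariance.
        d = x - self.mean
        mahalanobis = float(np.sqrt(d @ self.inv_cov @ d))
        return mahalanobis <= self.threshold
```

The key design point is that the check runs before every session, so the benchmarked safety numbers only ever have to hold on inputs the checker admits.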
Note that an autonomous car is perfectly implementable this way; it need not be agentic. All the "overhead" of doing it this way adds little latency when computers are doing the checking.
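The session structure itself can be sketched in a few lines. Everything here is a stand-in of my own devising: `policy` represents any offline-trained model (a driving policy, say), `max_steps` bounds the session's duration, and the scratch list models the short-term state that gets discarded when the session ends.

```python
from dataclasses import dataclass, field
from typing import Callable, Iterable, List

@dataclass
class Session:
    """A bounded-duration session: consume inputs, emit outputs, then
    discard all short-term state. No memory persists across sessions,
    so each run is just the benchmarked model applied to checked inputs."""
    policy: Callable          # offline-trained model, e.g. a driving policy
    max_steps: int            # hard bound on session duration

    # Transient working memory, cleared at session end.
    _scratch: List = field(default_factory=list)

    def run(self, input_stream: Iterable) -> List:
        outputs = []
        for step, x in enumerate(input_stream):
            if step >= self.max_steps:
                break             # session duration limit reached
            self._scratch.append(x)
            outputs.append(self.policy(x))
        self._scratch.clear()     # unneeded short-term state is wiped
        return outputs
```

The loop body is just a function call plus a bounds check, which is the sense in which the overhead adds negligible latency for a computer.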
Seems straightforward. I had to look up the name: Eric Drexler.
On a related topic, have you tried to explain to EY plausible routes to bootstrapping self-replicating nanoforges? EY's model is not very accurate: the plausible routes require a large amount of carefully collected, high-information-content data about the nanoscale. You would need to build the smallest subunits, test those, and so forth iteratively to ever reach a nanoforge*. You would also likely need a very large upfront investment in robotics equipment and robotics "effort" to build the first one.
(*A nanoforge is a large machine with enough assembly lines to manufacture every part used in itself. Before you have one, nanotechnology is expensive and useless.)