What do you think about marginal superintelligences?
For example, take the task “implement an x86 CPU as gate masks”. Humans can do this task collectively, but no single human can do it alone, so the work must be subdivided inefficiently. A “CPU” with no distinct internal buses, just a blob of gates with the registers and cache buried in the middle of the logic (or with cache lines descending from dies soldered above), would probably outperform all current designs.
This hypothetical mess of a chip design is not something humans can create but it’s a checkable artifact.
Or another task: “construct a new human kidney from cells; all measurable parameters must meet or exceed a reference kidney”. A similar argument applies: humans can’t quite do this. The complexity of life support during construction is where humans would fail, or human-made designs wouldn’t quite work well enough.
But again this is a checkable artifact. You don’t need superintelligence to validate that the output satisfies (or fails to satisfy) the goal.
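The verify-without-superintelligence point can be sketched as differential testing: the checker only runs the candidate artifact and a trusted reference side by side and compares outputs. Everything below is illustrative, not a real verification harness; the toy two-instruction “ISA” and the function names (`reference_step`, `candidate_step`, `check`) are all hypothetical stand-ins for what would, in reality, be a gate-level simulator driving the submitted masks.

```python
# Sketch of checking an opaque artifact against a trusted executable spec.
# The checker needs no insight into *how* the candidate works -- it only
# fuzzes both models with random inputs and compares results.
import random

def reference_step(regs, op, dst, a, b):
    """Trusted spec: execute one instruction on a toy 4-register machine."""
    out = list(regs)
    if op == "ADD":
        out[dst] = (regs[a] + regs[b]) & 0xFFFF
    elif op == "XOR":
        out[dst] = regs[a] ^ regs[b]
    return out

def candidate_step(regs, op, dst, a, b):
    """Stand-in for the submitted blob-of-gates design (here, trivially correct)."""
    return reference_step(regs, op, dst, a, b)

def check(candidate, reference, trials=1000, seed=0):
    """Run random instructions through both models; False on any divergence."""
    rng = random.Random(seed)
    for _ in range(trials):
        regs = [rng.randrange(0x10000) for _ in range(4)]
        op = rng.choice(["ADD", "XOR"])
        dst, a, b = (rng.randrange(4) for _ in range(3))
        if candidate(regs, op, dst, a, b) != reference(regs, op, dst, a, b):
            return False
    return True

print(check(candidate_step, reference_step))
```

The asymmetry is the whole point: producing the artifact may be beyond any human, while the checker is a few lines of ordinary code plus a reference model.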
A marginal superintelligence would be one that is context-unaware and gets assigned tasks like these. It doesn’t know whether a given task is real or not.