Davidad proposes “we are careful to only provide the training process with inputs that would be just as likely in, say, an alternate universe where AI was built by octopus minds made of organosilicon where atoms obey the Bohr model.” You write “we might be able to get away with [the AI knowing about] the human economy”. These seem very contradictory to me, right? The human economy ≠ the organosilicon octopus mind economy, right? For example, the chemical industry would look rather different if atoms obeyed the Bohr model. The clothing industry would look different if we lived underwater. Etc.
Normally when people say something like “technology/STEM development” in the context of AGI, I would think of something like “come up with a way to make faster CPUs”. The AI would need to know how current fabs work in great detail, and what other technology and tools are easily available to use to attack the problem, and how compilers work, presumably with code samples and so on, and also an English-language description of what would constitute a functional CPU (ideally with English-language back-and-forth discussion to navigate the trade-space). For real AGI, I would expect it to be able to also design the new advanced CPU factories, purchase the land, negotiate with suppliers, apply for permits, etc. etc.
OK, so how much of that can be distilled into tasks which would be exactly the same in an alternate universe with different laws of physics? I think a few extremely narrow slices at best—for example maybe some narrow-AI optimization problems along the lines of this. But meanwhile an unboxed AGI can do the entire thing.
So you really can’t say “no performance penalty”, right?
Yeah, that might be a big concern with my approach, but it’s still worth trying to get empirical data on how much we can get away with when boxing an AI model.