I think the degree to which LPE is actually necessary for solving problems in any given domain, as well as the minimum time, resources, and general tractability of obtaining such LPE, is an empirical question that people frequently investigate for particular important domains.
Isn’t it sort of “god of the gaps” to presume that the ASI, simply by having lots of compute, no longer actually has to validate anything or apply the scientific method in the reality it’s attempting to exert control over?
We have machine learning algorithms in biomedicine screening for molecules of interest. This lowers the failure rate of new pharmaceuticals, but most of them still fail, most of them during rat and mouse studies.
So all available human data on chemistry, pharmacodynamics, pharmacokinetics, etc., plus the best simulation models available (AlphaGo, etc.), still won’t result in it being able to “hit” on a new drug for, say, “making humans obedient zombies” on the first try.
Even if we hand-wave and say it discovers a bunch of insights in our data that we don’t have access to, there are simply too many variables and sheer unknowns for this to work without it being able to simulate human bodies down to the molecular level.
So it can discover a nerve gas that’s deadly enough, no problem, but we already have deadly nerve gas.
It just, again, seems very hand-wavy to have all these leaps in reasoning “because ASI” when good hypotheses prove false all the time upon actual experimentation.