Here are a couple of “hard” things you can easily do with hypercompute without causing dangerous consequentialism.
Given a list of atom coordinates, run a quantum-accuracy simulation of those atoms (assuming the atoms don’t happen to form a computer running a dangerous program).
Find the smallest arrangement of atoms that forms a valid OR gate by brute-forcing over the above simulator.
Brute-forcing over large arrangements of atoms could find a design containing a computer containing an AI. But brute-forcing over arrangements of 100 atoms should be fine, and can do a lot of interesting chemistry. Note that a psychoactive that makes humans care less about AI risk won’t be preferentially selected: you aren’t simulating the simple molecule as it sits embedded in the world, which would be dangerous, you’re simulating a simple molecule in a vacuum (or a standard-temperature-and-pressure 80% N2 + 20% O2 atmosphere, or some other simple hardcoded test setup).
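As a rough sketch of what that brute-force search could look like: the hypercompute simulator and the gate-validity check are left as oracle stubs, and `simulate_atoms`, `acts_as_or_gate`, the element set, and the placement grid are all hypothetical names chosen for illustration, not anything from the original post.

```python
from itertools import combinations, product

def simulate_atoms(atoms):
    """Stand-in for the hypercompute oracle: given (element, position) pairs,
    return a quantum-accuracy simulation of that isolated system in a
    hardcoded test environment (vacuum, or a fixed simple atmosphere)."""
    raise NotImplementedError("assumed hypercompute simulator oracle")

def acts_as_or_gate(simulation) -> bool:
    """Stand-in predicate: does the simulated structure implement an OR gate
    under some fixed input/output encoding (e.g. charge on designated sites)?"""
    raise NotImplementedError("assumed gate-validity check")

ELEMENTS = ["H", "C", "N", "O"]   # small hardcoded element set, illustrative only
GRID = [(x, y, z) for x in range(3) for y in range(3) for z in range(3)]  # coarse placement grid

def smallest_or_gate(max_atoms=100):
    """Brute force, smallest arrangements first: try every choice of n distinct
    grid sites and every assignment of elements to them, and return the first
    arrangement the simulator says is a valid OR gate. Exponential cost is the
    point; with hypercompute you just pay it."""
    for n in range(1, max_atoms + 1):
        for sites in combinations(GRID, n):          # distinct positions only
            for species in product(ELEMENTS, repeat=n):
                atoms = list(zip(species, sites))
                if acts_as_or_gate(simulate_atoms(atoms)):
                    return atoms
    return None
```

The key safety property is in the stubs, not the loop: the simulator only ever sees the candidate atoms in a fixed, simple test setup, so the search has no channel through which to select for effects on the outside world.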