The ability of the simulated society to probe for an outside reality goes up with simulated time, not with actual time passed. The phrasing of the text made it seem like the comparison was to human outside intervenors.
Still, an inability to realise what you are doing seems rather dangerous. It is like saying "everything is going to be fine if you just don't look down". Being reflective about what you are doing is often considered a virtue, not a vice.
So far, all I’ve done is post a question on lesswrong :)
More seriously, I do regret it if I appeared unaware of the potential danger. I am of course aware of the possibility that experiments with AI might destroy humanity. Think of my post above as suggesting a possible approach to investigate: perhaps one with some kinks as written (that is why I'm asking a question here), but (I think) one with the possibility of one day having rigorous safety guarantees.
I don't mean the writing of this post, but the general principle of trying to gain utility by minimising self-awareness.
Usually you don't make processes as opaque as possible in order to increase their chances of going right. Quite the opposite: transparency, at least for social and political processes, is seen as pretty important.
If we are going to create mini-life just to calculate 42, seeing it get calculated should not be an overwhelming temptation. Preventing the "interrupt/tamper" decision by limiting the available options is a rather backwards way of achieving that; it would be better to argue why that option should not be chosen even when it is available.