I do not want to put forth an idea that could possibly have a detrimental future consequence, i.e., a basilisk.
I would suggest you find somebody who’s not susceptible to basilisks, or at least not susceptible to basilisks of your particular kind, and bounce it off of them.
For example, I don’t believe there’s a significant chance that any AIs operating in our physics will ever run, or even be able to run, any really meaningful number of simulations containing conscious beings with experiences closely resembling the real world. And I think that acausal trade is silly nonsense. And not only do I not want to fill the whole future light cone with the maximum possible number of humans or human analogs, but I actively dislike the idea. I’ve had a lot of time to think about those issues, and have read many “arguments for”. I haven’t bought any of it and I don’t ever expect to buy any of it.
So I can reasonably be treated as immune to any basilisks that rely on those ideas.
Of course, if your idea is along those lines, I’m also likely to tell you it’s silly even though others might not see it that way. But I could probably make at least an informed guess as to what such people might buy into.
Note, by the way, that the famous Roko’s basilisk didn’t actually cause much of a stir, and the claims that it was a big issue seem to have come from somebody with an axe to grind.
I am afraid to a certain extent that thinking of the theory was already enough and it’s too late. Perhaps an AI already exists and it already knows my thoughts in real time.
To know your thoughts in real time, it would have to be smart enough to (a) correctly guess your thoughts based on limited information, or (b) secretly build and deploy some kind of apparatus that would let it actually read your thoughts.
(a) is probably completely impossible, period. Even if it is possible, it definitely requires an essentially godlike level of intelligence. (b) still requires the AI to be very smart. And they both imply a lot of knowledge about how humans think.
I submit that any AI that could do either (a) or (b) would long ago have come up with your idea on its own, and could probably come up with any number of similar ideas any time it wanted to.
It doesn’t make sense to worry that you could have leaked anything to some kind of godlike entity just by thinking about it.