Well, I hope to continue the sequence… I ended this article with a question, or puzzle, or homework problem, though. Any thoughts about it?
IMO the correct response is to run like hell from the box. In Thingspace, most things are very unfriendly, in much the same way that most of Mindspace contains unfriendly AIs.
Technically, almost all things in Thingspace are high-energy plasma.
Edit: actually most of them are probably some kind of exotic (anti-, strange-, dark- etc.) matter that’ll blow up the planet.
The high-energy exotic plasma not from this universe does not love or hate you. Your universe is simply a false vacuum with respect to its home universe’s vacuum, which it accidentally collapses.
So… you think I am probably evil, then? :-)
I gave you the box (in the thought experiment). I may not have selected it from Thingspace at random!
In fact, there’s strong evidence in the text of the OP that I didn’t...
I am pattern-matching from fiction on “black box with evil-looking inscriptions on it”. Those do not tend to end well for anyone. Also, what do you mean by strong evidence? That the box is less harmful than a given random object from Thingspace? I can /barely sort of/ see “not a random object from Thingspace”; I cannot see “EV(U(spoopy creppy black box)) > EV(U(object from Thingspace))”.
EBWOP: On further reflection I find that since most of Thingspace instantaneously destroys the universe,
EV(U(spoopy creppy black box)) >>>
EV(U(object from Thingspace)).
However, what I was trying to get at was that
EV(U(spoopy creppy black box)) <=
EV(U(representative object from-class: chance-based deal boxes with “normal” outcomes)) <=
EV(U(representative object from-class: chance-based deal boxes with Thingspace-like outcomes)) <=
EV(U(representative object from-class: chance-based deal boxes with terrifyingly creatively imaginable outcomes))
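The ordering above can be illustrated with a toy calculation. All of the utilities and probabilities below are made up purely for illustration (a minimal sketch, not an estimate of anything real): the point is just that a box with even a modest chance of a Thingspace-grade outcome beats a random Thingspace object by many orders of magnitude, yet still loses badly to an ordinary chance-based deal box.

```python
def ev(outcomes):
    """Expected utility of a dict mapping utility -> probability."""
    assert abs(sum(outcomes.values()) - 1.0) < 1e-9
    return sum(u * p for u, p in outcomes.items())

# A random object from Thingspace: almost surely destroys the universe.
random_thing = {-1e9: 0.999999, 0: 0.000001}

# A chance-based deal box with "normal" outcomes: bounded gains and losses.
normal_deal_box = {+10: 0.5, -20: 0.5}

# The spoopy creppy black box: some chance it is a Thingspace-grade trap.
black_box = {+10: 0.4, -20: 0.4, -1e9: 0.2}

print(ev(random_thing))     # ≈ -1e9
print(ev(normal_deal_box))  # -5.0
print(ev(black_box))        # ≈ -2e8: far better than random_thing,
                            # far worse than normal_deal_box
```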
The evidence that I didn’t select it at random was my saying “I find this one particularly interesting.”
I also claimed that “I’m probably not that evil.” Of course, I might be lying about that! Still, that’s a fact that ought to go into your Bayesian evaluation, no?
“Interesting” tends to mean “whatever it would be, it does that more” in the context of possibly pseudo-Faustian bargains and signals of probable deceit. From what I know, I do not start with any reason to trust you, and the evidence in the OP suggests that I should raise to “much higher” my probability that you are concealing information which, if I learned it, would lead me not to use the black box.
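That update can be sketched with Bayes’ rule. All numbers here are made-up placeholders (the prior, and how much likelier the “particularly interesting” remark is under deceit than under honesty); the sketch only shows that even modest likelihood ratios push the posterior “much higher”.

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(H | E) from P(H), P(E | H), and P(E | ~H)."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

prior = 0.10                    # assumed prior on "concealing something bad"
p = posterior(prior, 0.8, 0.2)  # assume the remark is 4x likelier under deceit
print(round(p, 3))              # ≈ 0.308, roughly triple the prior
```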
Oh, goodness, interesting, you do think I’m evil!
I’m not sure whether to be flattered or upset or what. It’s kinda cool, anyway!
I think that avatar-of-you-in-this-presented-scenario does not remotely have avatar-of-me-in-this-scenario’s best interests at heart, yes.
I hope you continue the sequence as well. :V