Val, you mentioned Sandboxing several times, but only linked to the Computer Science definition. Can you go into more details about how to sandbox as a human?
I’d be happy to.
…though after reflecting on it and starting a few drafts of a comment here, I’m starting to wonder if I should instead spell it out in more detail in its own post.
The gist of it is that every framework thinks every other framework is seriously missing the point in some way. If you can nail down X’s critique of Y and Y’s critique of X, and both critiques are made of Gears, you can use those critiques to emphasize a boundary between them and to intentionally switch between them.
In practice, we usually want to switch between a kind of science-based frame and a new hypothetical one we want to test out. When the science frame and the new to-be-sandboxed frame each have an allergic reaction to the other, they're never going to mix, and there's no risk of the "Aha, consciousness collapses quantum probability waves!" type error. You can then leverage each frame's critique of the other to switch between them, or to verify which one you're in.
After that you can set up some TAPs (trigger-action plans) to create mental warning bells whenever you enter either frame, or to remind yourself to verify which one you're in before doing a given kind of reasoning or making a given kind of decision.
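Since the original reference is to sandboxing in the computer-science sense, here's a toy sketch of the structure in code. This is my own analogy, not something from the post, and all the names and "critiques" are invented for illustration: two frames that each hold a gears-level objection to the other, a boundary you cross deliberately (rehearsing the critique as you go), and a check that plays the role of the TAP before a given kind of reasoning.

```python
# Toy analogy only: frames as objects, the sandbox boundary as an
# explicit switch, and the TAP as a check before reasoning.

class Frame:
    def __init__(self, name, critiques):
        self.name = name
        # Gears-level objections to other frames, keyed by frame name.
        self.critiques = critiques

class Sandbox:
    def __init__(self, home, guest):
        self.home, self.guest = home, guest
        self.active = home  # start in your home ontology

    def switch(self):
        # Rehearse the entering frame's critique of the one you're
        # leaving, so the boundary stays sharp and the frames don't mix.
        entering = self.guest if self.active is self.home else self.home
        print(f"{entering.name}'s critique of {self.active.name}: "
              f"{entering.critiques[self.active.name]}")
        self.active = entering

    def in_frame(self, frame):
        # The TAP: verify which frame you're in before a given kind
        # of reasoning or decision.
        return self.active is frame

# Illustrative critiques, not claims about either framework:
science = Frame("science", {
    "consciousness-first": "offers no mechanism and no testable predictions"})
cf = Frame("consciousness-first", {
    "science": "presupposes an observer-independent reality"})

box = Sandbox(home=science, guest=cf)
assert box.in_frame(science)      # safe to do physics reasoning here
box.switch()                      # deliberately enter the sandboxed frame
assert not box.in_frame(science)  # warning bell: don't do physics now
box.switch()                      # return to the home ontology
assert box.in_frame(science)
```

The point of the `switch` printing the critique is that crossing the boundary is an explicit act, not something that happens by accident mid-thought.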
In practice I find this makes each mode clearer and more internally consistent, in part by exposing and removing internal inconsistencies. E.g., in the "consciousness collapses quantum probability waves" example, you can actually find the logical point where "consciousness first" and quantum mechanics slam into one another, at which point you need to separate them more fully. Then it becomes more obvious that the "consciousness first" paradigm doesn't allow us to start with the frame of there being an objective reality that there is subjective experience of. This lets you keep your sanity in quantum mechanics even when sometimes trying on the "consciousness first" paradigm, because the two basically can't coexist in the same effort to explain a given phenomenon.
The only thing I know of that breaks these sandboxes is if you find a Gears-based link between the two. But if you actually find a Gears-based link between the science frame and a new frame, then what you have is a scientific hypothesis. At that point you can test it empirically.
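Continuing the toy analogy (again, my invention, with illustrative strings that aren't from the post): the sandbox dissolves only when a gears-level link appears, and at that point the guest frame's claim stops being "fake" and becomes an ordinary hypothesis for the home frame to test.

```python
def try_break_sandbox(guest_claim, gears_link):
    """Toy sketch: a sandbox only dissolves when a gears-level link
    connects the guest frame's claim to something testable."""
    if gears_link is None:
        # No mechanism connecting the frames: keep them separated.
        return "sandbox holds: treat the guest frame as 'fake' for now"
    # A mechanism linking the two frames is just a scientific
    # hypothesis, so hand it to the home frame for empirical testing.
    return f"test empirically: {guest_claim} via {gears_link}"

print(try_break_sandbox("consciousness collapses wavefunctions", None))
print(try_break_sandbox("consciousness collapses wavefunctions",
                        "a proposed observer-correlated deviation "
                        "from standard quantum statistics"))
```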
Unless and until you find such a Gears-based link, though, the science frame will find it correct to view those other frames as possibly or definitely wrong or misguided in some way. Hence the preemptive naming of such frameworks as "fake": it acts as a reminder to come back to your home ontology and to keep it from being corrupted by the other ones you're playing with.
This, please.