Very nice and clear writing, thank you! This is exactly the kind of stuff I'd love to see more of on LW:
Suppose I can create either this-galaxy-Joe's favorite world, or a world of happy puppies frolicking in the grass. The puppies, from my perspective, are a pretty safe bet: I myself can see the appeal.
Though I think some parts could use more work (shorter words, clearer images):
Second (though maybe minor/surmountable): even if your actual attitudes yield determinate verdicts about the authoritative form of idealization, it seems like we’re now giving your procedural/meta evaluative attitudes an unjustified amount of authority relative to your more object-level evaluative attitudes.
But most of the post is good.
R. Scott Bakker made a related point in Crash Space:
The reliability of our heuristic cues utterly depends on the stability of the systems involved. Anyone who has witnessed psychotic episodes has firsthand experience of the consequences of finding themselves with no reliable connection to the hidden systems involved. Any time our heuristic systems are miscued, we very quickly find ourselves in 'crash space,' a problem-solving domain where our tools seem to fit the description, but cannot seem to get the job done.
And now we’re set to begin engineering our brains in earnest. Engineering environments has the effect of transforming the ancestral context of our cognitive capacities, changing the structure of the problems to be solved such that we gradually accumulate local crash spaces, domains where our intuitions have become maladaptive. Everything from irrational fears to the ‘modern malaise’ comes to mind here. Engineering ourselves, on the other hand, has the effect of transforming our relationship to all contexts, in ways large or small, simultaneously. It very well could be the case that something as apparently innocuous as the mass ability to wipe painful memories will precipitate our destruction. Who knows? The only thing we can say in advance is that it will be globally disruptive somehow, as will every other ‘improvement’ that finds its way to market.
Human cognition is about to be tested by an unparalleled age of ‘habitat destruction.’ The more we change ourselves, the more we change the nature of the job, the less reliable our ancestral tools become, the deeper we wade into crash space.
In other words, yeah, I can imagine an alter ego who sees more and thinks better than me. As long as it stays within human evolutionary bounds, I’m even okay with trusting it more than myself. But once it steps outside these bounds, it seems like veering into “crash space” is the expected outcome.
Glad you liked it, and thanks for sharing the Bakker piece—I found it evocative.