Despite being a long-time reader of LessWrong and Scott Alexander (dating back to the Overcoming Bias era), I’ve historically been a “fellow traveler” rather than a self-identified rationalist, though my thinking has been deeply shaped by rationalist frameworks and epistemology.
I explore the intersection of human capability enhancement and technology, especially intelligence augmentation through AI and meditation (specifically jhana practices).
My software interests lean towards high-performance native applications and CLIs. It’s very unfortunate that current AIs seem so biased towards web technologies — let me know if you want a rant.
I practice systematic self-improvement across multiple domains:
Physical: strength training, running, and rock climbing
Technical: woodworking, machining, and digital fabrication
Mental: meditation with emphasis on emotional intelligence (influenced by Joe Hudson’s work)
My intellectual framework draws from classical liberal economics, progress studies, and state capacity libertarianism. I’m particularly interested in how these paradigms can inform both individual and collective rationality.
I engage with effective altruism and evidence-based policy, especially The Center for New Liberalism’s approach to market-oriented solutions for social problems. My focus areas include urban development (YIMBY) and institutional design, exploring how liberal democratic frameworks can be strengthened and adapted to address 21st-century challenges.
Long-time reader, first-time poster. My bio covers my background, but I have a few questions about AI risk viewed through an economic lens:
Has anyone deeply engaged with Hayek’s “The Use of Knowledge in Society” in relation to AI alignment? I’m particularly interested in how his insights about distributed knowledge and the limitations of central planning might inform our thinking about AI governance and control structures.
More broadly, I’m curious about historical parallels between approaches to distributed vs. centralized power and knowledge systems. Are there instructive analogies between how 20th-century intellectuals thought about economic planning and how we currently think about AI development and control?
I’m also curious how distributed AI development changes the risk landscape compared to earlier singleton-focused scenarios. What writings or discussions have best tackled this shift?
Before developing these ideas further, I’d appreciate pointers to any existing work in these directions!