Are Dharma traditions that posit ‘innate moral perfection of everyone by default’ reasoning from the just world fallacy?
sayan
Can we have a market with qualitatively different (un-interconvertible) forms of money?
How would signalling/countersignalling work in a post-scarcity economy?
What are some effective ways to reset the hedonic baseline?
As far as I understand, this post decomposes ‘impact’ into value impact (VI) and objective impact (OI). VI depends on a particular agent’s ability to reach its arbitrary value-driven goals, while OI depends on any agent’s ability to reach goals in general.
I’m not sure there exists a robust distinction between the two; the post doesn’t discuss any general demarcation tool.
Maybe I’m wrong, but I think the most important point to note here is that the ‘objectiveness’ of an impact is defined not in terms of the ‘objective state of the world’, but rather in terms of how general the impact is across all agents.
I think this post is broadly making two claims:
- Impactful things fundamentally feel different.
- A good impact measure should be designed so that it strongly safeguards against almost any imperfect objective.
It is also (maybe implicitly) claiming that the three properties mentioned completely specify a good impact measure.
I am looking forward to reading the rest of the sequence with arguments supporting these claims.
What gadgets have improved your productivity?
For example, I started using a stylus a few days ago and realized it can be a great tool for a lot of things!
I have been thinking about these questions a lot without actually getting anywhere.
What is the nature of non-dual epistemology? What does it mean to ‘reason’ from the [Intentional Stance](https://en.wikipedia.org/wiki/Intentional_stance), from inside of an agent?
Seems like this has been done already.
Okay, natural catastrophes might not be a good example. (Edited)
If there is no self, what are we going to upload to the cloud?
It is so difficult to perceive the differences in, and to articulate the pronunciation of, an accent that is not one’s native one, because of the brain’s predictive processing. Our brains are constantly assimilating incoming signals to the closely related ones they already know.
Is there a good bijection between specification gaming and wireheading on one hand, and the different types of Goodhart’s law on the other?
Extremely low-probability events are great as intuition pumps, but terrible as inputs to real-world decision-making.
Speculation: people never use pro-con lists to actually make decisions; rather, they use them to rationalize decisions after the fact and to convince others.
The internet might be lacking multiple kinds of curation and organization tools. How can we improve?
Pathological examples of math are analogous to adversarial examples in ML. Or are they?
What are the possible failure modes of AI-aligned humans? What are the possible misalignment scenarios? I can think of malevolent uses of AI tech to enforce hegemony, and so on. What else?
What’s a good way to force oneself outside one’s comfort zone, into situations where most expectations and intuitions routinely fail?
This might be useful for building antifragility in expectation management.
Quick example—living without money in a foreign nation.
Is it possible to design a personal or group retreat for this?
Enjoyed reading this. Looking forward to the next posts in the sequence.