RomanHauksson
I would be interested in this!
Related: an organization called Sage maintains a variety of calibration training tools.
How long does the Elta MD sunscreen last?
Having kids does mean less time to help AI go well, so maybe it’s not such a good idea if you’re one of the people doing alignment work.
I love how it has proven essentially impossible, even with nearly unlimited power, to rig a vote in a non-obvious way. I am not saying it never happens deniably, and you may not like it, but this is what peak rigged election somehow always seems to actually look like.
(Maybe I misunderstood, but isn’t this only weak evidence that non-obviously rigging an election is essentially impossible, since you wouldn’t notice the non-obvious examples?)
Are there any organizations or research groups that are specifically working on improving the effectiveness of the alignment research community? E.g.
Reviewing the literature on intellectual progress, metascience, and social epistemology and applying the resulting insights to this community
Funding the development of experimental “epistemology software”, like Arbital or Mathopedia
I’ll end with this thought: I think you can probably use these ideas of moral weights and moral mountains to quantify how altruistic someone is.
Maybe “altruistic” isn’t the right word. Someone who spends every weekend volunteering at the local homeless shelter out of a duty to help the needy in their community but doesn’t feel any specific obligation towards the poor in other areas is certainly very altruistic. The amount that one does to help those in their circle of consideration seems to be a better fit for most uses of the word altruism.
How about “morally inclusive”?
I would find this deeply frustrating. Glad they fixed it!
One year later, what do you think about the field now?
I’m a huge fan of agree/disagree voting. I think it’s an excellent example of a social media feature that nudges users towards truth, and I’d be excited to see more features like it.
(low confidence, low context, just an intuition)
I feel as though the LessWrong team should experiment with even more new features, treating the project of maintaining a platform for collective truth-seeking like a tech startup. The design space for such a platform is huge (especially as LLMs get better).
From my understanding, the strategy startups use to navigate huge design spaces is “iterate on features quickly and observe objective measures of feedback”, which I suspect LessWrong should lean into more. That said, I imagine building better truth-seeking infrastructure doesn’t have as clean a feedback signal as “acquire more paying users” or “get another round of VC funding”.
This is really exciting. I’m surprised you’re the first person to spearhead a platform like this. Thank you!
I wonder if you could use a dominant assurance contract to raise money for retroactive public goods funding.
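For anyone unfamiliar with the mechanism, here’s a minimal sketch of the payoff logic in Python. The class, names, and numbers are all hypothetical illustrations, not any real platform’s API: if the funding goal is met, pledges go to the project; if not, pledgers are refunded with a bonus on top, which is what makes pledging a (weakly) dominant strategy.

```python
# Minimal sketch of a dominant assurance contract's payoff logic.
# The class, names, and numbers are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class DominantAssuranceContract:
    goal: float        # funding threshold that triggers the project
    bonus_rate: float  # fraction of each pledge paid as a bonus on failure
    pledges: dict[str, float] = field(default_factory=dict)

    def pledge(self, contributor: str, amount: float) -> None:
        self.pledges[contributor] = self.pledges.get(contributor, 0.0) + amount

    def settle(self) -> dict[str, float]:
        """Return each contributor's net cash flow.

        If the goal is met, pledges are collected and the project is funded.
        If not, the entrepreneur refunds every pledge plus a bonus, so
        pledging weakly dominates abstaining.
        """
        total = sum(self.pledges.values())
        if total >= self.goal:
            # Pledges are spent on the project.
            return {c: -amt for c, amt in self.pledges.items()}
        # Refund plus bonus: each pledger nets a small positive payoff.
        return {c: amt * self.bonus_rate for c, amt in self.pledges.items()}


# Example: the goal isn't met, so each pledger nets a 5% bonus.
dac = DominantAssuranceContract(goal=10_000.0, bonus_rate=0.05)
dac.pledge("alice", 2_000.0)
dac.pledge("bob", 1_500.0)
print(dac.settle())  # {'alice': 100.0, 'bob': 75.0}
```

Combining this with retroactive funding would presumably just change where the collected pledges go; the refund-plus-bonus structure that addresses the free-rider problem stays the same.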
Is it any of the results from this Metaphor search?
A research team’s ability to design a robust corporate structure doesn’t necessarily predict their ability to solve a hard technical problem. Maybe there’s some overlap, but machine learning and philosophy are different fields from business. Also, I suspect that the people doing the AI alignment research at OpenAI are not the same people who designed the corporate structure (but this might be wrong).
Welcome to LessWrong! Sorry for the harsh greeting. Standards of discourse are higher than other places on the internet, so quips usually aren’t well-tolerated (even if they have some element of truth).
I mean, is the implication that this would instead be good if phenomenological consciousness did come with intelligence?
This was just an arbitrary example to demonstrate the more general idea that it’s possible we could make the wrong assumption about what makes humans valuable. Even if we discover that consciousness comes with intelligence, maybe there’s something else entirely that we’re missing which is necessary for a being to be morally valuable.
I don’t want “humanism” to be taken too strictly, but I honestly think that anything that is worth passing the torch to wouldn’t require us passing any torch at all and could just coexist with us…
I agree with this sentiment! Even though I’m open to the possibility of non-humans populating the universe instead of humans, I think it’s a better strategy for both practical and moral uncertainty reasons to make the transition peacefully and voluntarily.
I think the risk of human society being superseded by an AI society which is less valuable in some way shouldn’t be guarded against by a blind preference for humans. Instead, we should maintain a high level of uncertainty about what it is that we value about humanity and slowly and cautiously transition to a posthuman society.
“Preferring humans just because they’re humans” or “letting us be selfish” does prevent the risk of prematurely declaring that we’ve figured out what makes a being morally valuable and handing over society’s steering wheel to AI agents that, upon further reflection, aren’t actually morally valuable.
For example, say some AGI researcher believes that intelligence is the property which determines the worth of a being and blindly unleashes a superintelligent AI into the world because they believe that whatever it does with society is definitionally good, simply because the AI system is more intelligent than us. But then maybe it turns out that phenomenological consciousness doesn’t necessarily come with intelligence, and they’ve accidentally wiped out all value from this world and replaced it with inanimate automatons that, while intelligent, don’t actually experience the world they’ve created.
Having an ideological allegiance to humanism and a strict rejection of non-humans running the world even if we think they might deserve to would prevent this catastrophe. But I think that a posthuman utopia is ultimately something we should strive for. Eventually, we should pass the torch to beings which exemplify the human traits we like (consciousness, love, intelligence, art) and exclude those we don’t (selfishness, suffering, irrationality).
So instead of blind humanism, we should be biologically conservative until we know more about ethics, consciousness, intelligence, et cetera and can pass the torch in confidence. We can afford millions of years to get this right. Humanism is arbitrary in principle and isn’t the best way to prevent a valueless posthuman society.
Others have provided sound general advice that I agree with, but I’ll also throw in the suggestion of piracetam as a nootropic with lasting effects.
7 months later, from Business Insider: Silicon Valley elites are pushing a controversial new philosophy.
I’ve also been thinking a lot about this recently and haven’t seen any explicit discussion of it. It’s the reason I recently began going through BlueDot Impact’s AI Governance course.
A couple of questions, if you happen to know:
Is there anywhere else I can find discussion about what the transition to a post-superhuman-level-AI society might look like, on an object level? I also saw the FLI Worldbuilding Contest.
What are the implications of this for career choice for an early-career EA trying to make this transition go well?
I think the personal responsibility mindset is healthy for individuals, but not useful for policy design.
If we’re trying to figure out how to prevent obesity from a society-level view, then I agree with your overall point – it’s not tractable to increase everyone’s temperance. But as I understand it, one finding from positive psychology research is that taking responsibility for your decisions and actions does genuinely improve your mental health. And you don’t need to be rich or have a PhD in nutritional biochemistry to eat mostly grains, beans, frozen vegetables and fruit, and nuts.
Similarly, I think there’s only so much that marketing can do to influence a society’s culture. We can and should taboo unhealthy foods, through simple decisions like what restaurant to go to with friends or what food to buy for a party.
When you’re doing effective altruism or policymaking, I agree that corporations pushing processed food is the relevant factor to focus on. And the public should not be misled about this. But in your personal life, I think personal responsibility becomes the relevant way to think about it.