Are you just referring to the profit incentive conflicting with the need for safety, or something else?
I’m struggling to see how we get aligned AI without “inside game at labs” in some way, shape, or form.
My sense is that evaporative cooling is the biggest thing that went wrong at OpenAI. So I feel OK about e.g. Anthropic as long as it isn't showing signs of evaporative cooling.