That’s the second filter, because “optimizing” really involves two things: having a goal and maximising (or minimising) it.
First, one has to acknowledge that solving alignment is a goal. Many people do not recognize that it’s a problem at all, because they assume smart robots will learn what love means and won’t hurt us.
What you talked about in your post comes after this. When someone is walking towards the goalpost of alignment, they should realize that there may be multiple routes there, and they should choose the quickest one, because only winning matters.