Our discussion looks like this:
Me: we can do X, which means doing X1, X2, and X3.
You: we can fail at X2 in way Y.
Do you mean “we should think about Y before carrying out plan X” or “plan X will definitely fail because of Y”?
A question to better understand your position: if the whole alignment community put the effort it currently spends on aligning AI directly into realizing the Political Plan instead, what do you think the probability of successful alignment would be?
Basically, you are saying “we can do X and I hope it will do A, B, and C” without any regard for the real-world consequences.
The probability will likely go down, as engaging in politics is mind-killing and it’s important to think clearly to achieve AI alignment.