I agree that there are pitfalls, and it will take several attempts for the laws to start working.
If the US government allocates a significant amount of money for (good) AI alignment research in combination with the ban, then our chances will increase from 0% to 25% in a scenario without black swans.
The problem is not whether a law works but whether it does what's needed. If you look at the laws that exist in our society, they usually do something, but they rarely solve the problem completely.
Politicians are quite quick to pass a law to “do something,” but that does not mean the problem is solved effectively. The more political the debate is, the less likely it often is that the law actually does what it is intended to do.
To summarize our discussion:
There may be a way to get the right government action and greatly improve our chances of alignment. But it requires a number of actions, some of which our society may never have done before. They may be impossible.
These actions include:
1. Learning how to effectively change people’s minds with videos (maybe something bordering on dark epistemology).
2. Convincing tens of percent of the population of the right memes about alignment via social media (primarily YouTube).
3. Changing the minds of interlocutors in political debates (perhaps by laying out epistemological principles in the introduction to the debate?).
4. Using broad public support to lobby for laws that help alignment.
So we need to allocate a few people to think through this option and see whether we can accomplish each step. If we can, we should communicate the plan to as many rationalists as possible, so that as many talented video makers as possible can try to implement it.
It’s not at all clear that if you convince someone on a superficial level that they should care about AI alignment, that will result in the right actions. On the other hand, thinking on that level can be quite corrosive for your own understanding. The soldier mindset is not useful for thinking about efficient mechanisms.
Our discussion looks like this:
Me: we can do X, which means doing X1, X2, and X3.
You: we can fail at X2 because of Y.
Do you mean “we should think about Y before attempting plan X,” or “plan X will definitely fail because of Y”?
A question to better understand your opinion: if the whole alignment community put the effort it currently spends on aligning AI directly into realizing the Political Plan instead, what do you think the probability of successful alignment would be?
Basically, you are saying “we can do X and I hope it will do A, B, and C” without any regard for the real-world consequences.
It will likely go down, as engaging in politics is mind-killing, and it’s important to think clearly to achieve AI alignment.