An Ignorant View on Ineffectiveness of AI Safety
I am ignorant of AI and AI Safety to a very large degree. It is very likely that a lot of what I talk about here will seem silly to an actual expert, or is something that was thought of long ago, tried, and taken as far as anyone knew how. But I think that if there is anything here that could help, or could be turned into something that could help, I would rather have spent time writing this than not.
AI Safety and alignment, as far as this ignorant one knows, have been very ineffective. Current progress is reckless and very fast, with little to no restraint. The main approach in industry seems to be to patch things up as they come along, and only really if they affect the profit margin. And the experts who oppose this seem to have amounted to little more than educated picketers.
Something I don’t understand is this. Fields like climate change and urban planning, although very different from AI Safety, have a slightly similar power dynamic: experts with little personal power advocating for changes that are better for people generally, against larger, more powerful organizations who profit hugely from the way things are currently done. But people in AI Safety seem to be doing little else than advocating. What they especially don’t seem to be doing, which experts in those other fields do, is offering profitable alternatives, or even alternative courses of action, perhaps to another powerful group. As a small-scale example, if I offer solar panels to people in my village, I can appeal to how they will save money and gain self-governance. I can offer the quietness compared to a generator, sympathizing that the climate might not be as high a priority for them as it is for me, because they have other problems. An urban planning advocate might appeal to how good drivers would have to deal with far fewer poor drivers on the road if more of them were in buses, if buses didn’t get stuck in traffic, or how there might be more free parking for drivers if there were better bike lanes and bike parking. Essentially, putting forth an idea so that it appeals not just to themselves but to others as well, and is obviously profitable to others.
I don’t see this in AI Safety. The attitude often seems to be a ‘woe is me’ kind, with an underlying idea that there are people foolishly doing things that are going to doom everyone, and that if they don’t listen to AI Safety, they are dumb and going to do bad things. Even if that is true, it does not seem effective or helpful to me. Of course, maybe I am completely wrong about this.
Perhaps one of the main flaws of AI Safety is the very poor alternatives offered. The main alternative offered seems to be to do the same thing more slowly and more carefully. That will obviously be less profitable, but the huge companies are supposed to accept that, because it is the sensible and right thing to do. In my sincere opinion, if anyone is actually trying to take this approach, and it is not just an arrogant and ignorant misunderstanding of mine, they are very, very foolish and self-centered.
A lot of what AI currently does that is visible to the general public seems like it could be replicated without AI, e.g. producing boilerplate code that can be quickly edited. There are already huge libraries of boilerplate code. Surely a system connecting most of them, which lets the user search one up, make and upload their own, and quickly make changes to boilerplate code, is not something extraordinary to build. But it does seem like it would be useful. The same could be said for art. How useful would it be for an artist to select the outline of a person or thing from a dropdown or search bar, stretch it as much as they like, and then have it filled in with a specific gradient? And, as with code, to create and upload their own boilerplates. A simple way to make these things profitable might be to keep older boilerplates free while ones from, say, the last month are only available to subscribed users. Yet, as far as I am aware, this does not yet exist and is not being worked on. It would not even have to be a separate application; in fact, it might be better as extensions for already widely used IDEs or drawing apps.
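To make the boilerplate-registry idea a little more concrete, here is a minimal sketch in Python of what its core might look like: a searchable index of uploaded snippets with a freshness paywall. Every name in it (Snippet, Registry, the 30-day window) is a hypothetical illustration of mine, not an existing tool or API.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Snippet:
    title: str
    tags: set[str]
    body: str    # the boilerplate text itself
    added: date  # upload date, used for the free/subscriber split


class Registry:
    """A hypothetical in-memory index of user-uploaded boilerplate snippets."""

    PAYWALL_WINDOW = timedelta(days=30)  # newest uploads are subscriber-only

    def __init__(self) -> None:
        self._snippets: list[Snippet] = []

    def upload(self, snippet: Snippet) -> None:
        self._snippets.append(snippet)

    def search(self, query: str) -> list[Snippet]:
        """Return snippets whose title or tags mention the query."""
        q = query.lower()
        return [
            s for s in self._snippets
            if q in s.title.lower() or any(q in t.lower() for t in s.tags)
        ]

    def fetch(self, snippet: Snippet, subscribed: bool, today: date) -> str:
        """Apply the freshness paywall: recent uploads require a subscription."""
        if today - snippet.added < self.PAYWALL_WINDOW and not subscribed:
            raise PermissionError("Boilerplate from the last month is subscriber-only.")
        return snippet.body


registry = Registry()
registry.upload(Snippet(
    title="Flask REST skeleton",
    tags={"python", "web"},
    body="from flask import Flask\napp = Flask(__name__)",
    added=date(2024, 1, 5),
))
for hit in registry.search("web"):
    # An old enough snippet is free even without a subscription.
    print(hit.title)
    print(registry.fetch(hit, subscribed=False, today=date(2024, 6, 1)))
```

An IDE extension would then only need to wire the search to an editor command and paste the fetched body into the current file; the paywall check is the one piece that would have to live server-side.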
Something else I don’t see being done is creating incentives for AI engineers and other workers not to go work for companies such as OpenAI, DeepMind, etc., or anything made to compete for the resources that such companies need.
I also see no mention of targeting the companies that fund them, or the systems that allow such monopolies to grow so fat that they easily have the money to do so, or the systems that make unions so feeble that they talk about working with AI instead of fighting against it.
The main reason, it seems to me, that such things are not being worked on is that AI Safety people actually do want AI. They actually want the research to be done. They actually want an AGI to be made that will replace the human workforce. They just want it done more slowly and more carefully. I find this silly. How inspiring would the man blocking the tanks from Tiananmen Square have been if, instead of standing in front of them, he had walked alongside them, asking them to slow down and maybe think about not using their guns? It would be laughable.