In short, I would like to offer a concrete example to help flesh out my argument. What follows is that example, together with a rough outline of how I model the issues I have with the idea of an AI society, and of possible paths forward.
What is “AI”?
In the context of this post, an "AI" is a system embedded among humans who have only limited control over it. Value learning might be the most instrumentally useful approach to such a system: it is the approach in which humans are most likely to be involved in the AI's emerging system of values, and it is also the most likely path to human control. The fundamental idea of value learning is to train the AI to be as useful as possible to the user by learning to predict what the user actually values. The AI can then act to accomplish what the user intends a service to accomplish, and the same feedback can improve the value learning itself. This also reduces the instability risks of value learning: rather than relying purely on inverse reinforcement learning, we could learn directly from humans how effective their control of the AI is. But the main goal of AI systems is not to be "safe". Many AI systems have internal reward structures and goals based on the values of specific functions, rather than on some abstract metric that must be learned and implemented, such as "the values of an AI system" or "the values of the user" (I will discuss the latter in my first article).
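To make the value-learning idea concrete, here is a minimal sketch. Everything in it (the Boltzmann-rational choice model, the feature vectors, the gradient-ascent fit) is an illustrative assumption of mine, not a method from this post: the system observes a user's choices and infers the value weights that best explain them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: each option has a feature vector; the user's hidden values
# are a weight vector w, and the user picks options with probability
# proportional to exp(w . features) (a Boltzmann-rational choice model).
features = rng.normal(size=(5, 3))      # 5 options, 3 features each
true_w = np.array([1.0, -0.5, 2.0])     # hidden user values

def choice_probs(w):
    scores = features @ w
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

# Simulate observed user choices under the hidden values.
observed = rng.choice(len(features), size=200, p=choice_probs(true_w))

# Infer w by maximizing the log-likelihood of the observed choices
# with plain gradient ascent.
w = np.zeros(3)
for _ in range(2000):
    probs = choice_probs(w)
    # Gradient of the log-likelihood: observed feature mean minus the
    # model's expected feature mean.
    grad = features[observed].mean(axis=0) - probs @ features
    w += 0.1 * grad

print("true values:    ", true_w)
print("inferred values:", np.round(w, 2))
```

The point is only that "learning what the user values" can be cast as ordinary statistical inference over observed behavior; real inverse reinforcement learning does the same over sequential behavior rather than one-shot choices.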
What is “machine learning”?
In brief, machine learning agents learn largely by distilling different tasks through various approximate methods; the formal concepts of the field are defined in its own terms. Such systems learn by interpreting inputs and transitions between states. This is particularly true of reinforcement learning systems, which have no explicit understanding of what they are doing. We cannot assume that their behavior is based on explicit models of the world, and often even the humans training them are unaware of what the systems are actually doing.
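A minimal sketch may make this concrete. The following toy Q-learning loop (the corridor environment, reward, and hyperparameters are all illustrative assumptions of mine, not anything from this post) shows an agent that learns purely from observed inputs and transitions, with no explicit model or "understanding" of the world it acts in.

```python
import random

N_STATES = 5            # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]      # step left or step right

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # illustrative hyperparameters

for _ in range(500):                          # training episodes
    state = random.randrange(N_STATES - 1)    # random non-terminal start
    for _t in range(100):                     # cap episode length
        # Epsilon-greedy choice from the learned action values.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: move the value toward reward plus the
        # discounted best next value. No world model is ever stored.
        best_next = 0.0 if done else max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if done:
            break

# The learned greedy policy should step right (+1) from every
# non-terminal state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```

The agent only ever updates a table of action values from (state, action, reward, next state) tuples; the "knowledge" it acquires is entirely implicit in those numbers.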
Why are so many AI researchers working on AI safety?
I can think of several reasons:
In the domain of machine learning, learning is a mixture of procedural and algorithmic knowledge. When humans have lots of procedural knowledge, it shouldn’t be important to
How much could there be? (I have no idea how much would be enough.) I expect most people will follow these criteria as far as they can, but it depends on what the criteria are. The average person has a great degree of willpower, but if you have trouble getting anything out of it, the work takes much more time than it otherwise would.
Hmm. So we have people pretending to be AI, and now maybe a person pretending to be a specific kind of machine learning tool.
I create thee the Gnirut Test: can the person you are talking to persuasively mimic a bot?
Already a thing: https://en.wikipedia.org/wiki/Reverse_Turing_test.
On the one hand, huzzah! On the other, I like my name better.
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options “Notify me of new top level comments on this article” and “Make this post available under...” before submitting.