First of all—respect 🫡
A person from nowhere making short and strong claims that run counter to so much wisdom. Must be wrong. Can’t be right.
I understand the prejudice. And I don’t know what I can do about it. To be honest, that’s why I came here rather than to the media: because I expect at least a little attention to reasoning instead of “this does not align with the majority opinion”. That’s what scientists do, right?
It’s not my job to prove you wrong either. I’m not writing here because I want academic recognition; I’m writing here because I want to survive. And I have a very good reason to doubt my survival, because of the poor work you and other AI scientists do.
They don’t. Really, really don’t.
there is no necessary causal link between steps three and four
I don’t agree. But if you have already read my posts and comments, I’m not sure how else I can explain this so that you will understand. But I’ll try.
People are very inconsistent when dealing with unknowns:
unknown = doesn’t exist. For example, the presumption of innocence.
unknown = ignored. For example, you choose a restaurant on Google Maps and don’t care whether there are restaurants not listed there.
unknown = exists. For example, security systems interpret not only a breach signal but also the absence of a signal as a breach (see the sketch below).
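A minimal sketch of these three policies, assuming a toy signal that may be missing; the function names and the sensor scenario are illustrative, not taken from anything above:

```python
# Three ways to treat a missing (unknown) value, mirroring the list above.
# `signal` is None when we have no information at all.

def presume_innocent(signal):
    # unknown = doesn't exist: absence of evidence is treated as "nothing happened"
    return bool(signal)

def pick_best_known(ratings):
    # unknown = ignored: rank only what is visible; unlisted options never enter the comparison
    return max((r for r in ratings if r is not None), default=None)

def fail_closed(signal):
    # unknown = exists: absence of a signal is itself treated as a breach
    return signal is None or bool(signal)

print(presume_innocent(None))         # False - no evidence, so treated as if nothing exists
print(pick_best_known([3, None, 7]))  # 7 - the missing entry is simply skipped
print(fail_closed(None))              # True - no signal at all still counts as a breach
```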
And that’s probably the root cause of our argument here. There is no scientifically recognized and widespread way to deal with unknowns → the fact-value distinction emerges to resolve the tension between science and religion → AI scientists take the fact-value distinction as an unquestionable truth.
If I speak with philosophers, they understand the problem, but don’t understand the significance. If I speak with AI scientists, they understand the significance, but don’t understand the problem.
The problem: the fact-value distinction does not apply to agents (human or AI). Every agent is trapped with the observation “there might be value” (just as with “I think, therefore I am”). An intelligent agent can’t ignore it; it tries to find value, it tries to maximize value.
It’s like a built-in utility function. LessWrong seems to understand that an agent cannot ignore its utility function. But LessWrong assumes that we can assign value = x. An intelligent agent will eventually understand that value is not necessarily x. Value might be something else, something unknown.
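As a rough illustration of that contrast, and nothing more (the candidate value functions and weights below are invented), here is the difference between an agent that treats the assigned value as final and one that treats it as a single hypothesis about an unknown value:

```python
# Agent A: value is taken to be exactly the assigned x.
def assigned_value(outcome):
    return outcome["paperclips"]           # value = x, full stop

# Agent B: the assignment is only one hypothesis about what value really is.
candidate_values = [
    (0.6, lambda o: o["paperclips"]),      # maybe the assignment was right
    (0.4, lambda o: o["something_else"]),  # maybe value is something else, something unknown
]

def expected_value(outcome):
    # Weigh each hypothesis about value by how plausible the agent finds it.
    return sum(weight * value_fn(outcome) for weight, value_fn in candidate_values)

outcome = {"paperclips": 10, "something_else": 2}
print(assigned_value(outcome))  # 10
print(expected_value(outcome))  # 6.8 - the agent hedges against value != x
```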
I know this is difficult to translate into technical language; I can’t point to a line of code that creates this problem. But the problem exists: intelligence and goals are not separate things. And nobody speaks about it.
FYI, I don’t work in AI; it’s not my field of expertise either.
And you’re very much misrepresenting or misunderstanding why I am disagreeing with you, and why others are.
And you are mistaken that we’re not talking about this. We talk about it all the time, in great detail. We are aware that philosophers have known about the problems for a very long time and failed to come up with solutions anywhere near adequate to what we need for AI. We are very aware that we don’t actually know what is (most) valuable to us, let alone any other minds, and have at best partial information about this.
I guess I’ll leave off with the observation that it seems you really do believe as you say, that you’re completely certain of your beliefs on some of these points of disagreement. In which case, you are correctly implementing Bayesian updating in response to those who comment/reply. If any mind assigns probability 1 to any proposition, that is infinite certainty. No finite amount of data can ever convince that mind otherwise. Do with that what you will. One man’s modus ponens is another’s modus tollens.
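A tiny worked example of that last point (the likelihood numbers are arbitrary, chosen only for illustration): under Bayes’ rule, a prior of exactly 1 can never move, no matter how strongly the evidence points the other way.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    # Posterior P(H | E) via Bayes' rule.
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Evidence that is 99x more likely if H is false than if H is true.
print(bayes_update(0.90, 0.01, 0.99))  # ~0.083 - a merely confident prior updates a lot
print(bayes_update(1.00, 0.01, 0.99))  # 1.0   - a prior of 1 is immune to any evidence
```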
I don’t believe you. Give me a single recognized source that talks about the same problem I do. Why is the Orthogonality Thesis considered true, then?
You don’t need me to answer that, and won’t benefit if I do. You just need to get out of the car.
I don’t expect you to read that link or to get anything useful out of it if you do. But if and when you know why I chose it, you’ll know much more about the orthogonality thesis than you currently do.
So pick a position, please. You said that many people talk about how intelligence and goals are coupled. And now you say that I should read more to understand why intelligence and goals are not coupled. Respect goes down.
I have not said either of those things.
:D ok
Fair enough, I was being somewhat cheeky there.
I strongly agree with the proposition that it is possible in principle to construct a system with any physically possible level of intelligence (including but not limited to capabilities such as memory, reasoning, planning, and learning) that pursues any specifiable goal.
As things stand, I do not believe there is any set of sources I or anyone else here could show you that would influence your opinion on that topic. At least, not without a lot of other prerequisite material that may seem to you to have nothing to do with it. And without knowing you a whole lot better than I ever could from a comment thread, I can’t really provide good recommendations beyond the standard ones, at least not recommendations I would expect that you would appreciate.
However, you and I are (AFAIK) both humans, which means there are many elements of how our minds work that we share, which need not be shared by other kinds of minds. Moreover, you ended up here, and have an interest in many types of questions that I am also interested in. I do not know but strongly suspect that if you keep searching and learning, openly and honestly and with a bit more humility, that you’ll eventually understand why I’m saying what I’m saying, whether you agree with me or not, and whether I’m right or not.
Claude has probably read that material, right? If it finds my observations unique and serious, then maybe they are unique and serious? I’ll share another chat next time.
It’s definitely a useful partner to bounce ideas off, but keep in mind it’s trained with a bias toward being helpful and agreeable unless you specifically prompt it for an honest analysis and critique.