FYI, I don’t work in AI; it’s not my field of expertise either.
And you’re very much misrepresenting or misunderstanding why I am disagreeing with you, and why others are.
And you are mistaken that we’re not talking about this. We talk about it all the time, in great detail. We are aware that philosophers have known about these problems for a very long time and have failed to come up with solutions anywhere near adequate to what we need for AI. We are very aware that we don’t actually know what is (most) valuable to us, let alone to any other minds, and have at best partial information about this.
I guess I’ll leave off with the observation that it seems you really do believe what you say: that you’re completely certain of your beliefs on some of these points of disagreement. In which case, you are correctly implementing Bayesian updating in response to those who comment/reply: if any mind assigns probability 1 to a proposition, that is infinite certainty, and no finite amount of data can ever convince that mind otherwise. Do with that what you will. One man’s modus ponens is another’s modus tollens.
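To spell out the math behind that claim (standard Bayes’ rule; the notation is mine, not anything from this thread): a prior of 1 is a fixed point under conditioning on any possible evidence. For a hypothesis $H$ with $P(H) = 1$ and any evidence $E$ with $P(E) > 0$,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} = \frac{P(E \mid H) \cdot 1}{P(E \mid H) \cdot 1 + P(E \mid \neg H) \cdot 0} = 1.$$

The posterior equals the prior no matter what $E$ is, which is the “no finite amount of data” point in symbols.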
I don’t believe you. Give me a single recognized source that talks about the same problem I do. Why is the Orthogonality Thesis considered true, then?
You don’t need me to answer that, and won’t benefit if I do. You just need to get out of the car.
I don’t expect you to read that link or to get anything useful out of it if you do. But if and when you know why I chose it, you’ll know much more about the orthogonality thesis than you currently do.
So pick a position, please. You said that many people say that intelligence and goals are coupled. And now you say that I should read more to understand why intelligence and goals are not coupled. Respect goes down.
I have not said either of those things.
:D ok
Fair enough, I was being somewhat cheeky there.
I strongly agree with the proposition that it is possible in principle to construct a system that pursues any specifiable goal that has any physically possible level of intelligence, including but not limited to capabilities such as memory, reasoning, planning, and learning.
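As a toy sketch of that decoupling (my own illustration, not code from anyone’s paper, and the names in it are hypothetical): one general-purpose planner that takes the goal as a parameter. The search machinery is identical no matter which objective gets plugged in; only the goal test changes.

```python
# Toy illustration of goal/capability decoupling: a single general-purpose
# planner (BFS) whose "goal" is just a parameter. The names here
# (plan, grid_neighbors) are mine, purely for illustration.
from collections import deque

def plan(start, neighbors, is_goal):
    """Breadth-first search: shortest path from `start` to any state
    satisfying `is_goal`, or None if no such state is reachable."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# A 5x5 grid world; legal moves are one step up/down/left/right.
def grid_neighbors(pos):
    x, y = pos
    steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in steps if 0 <= a < 5 and 0 <= b < 5]

# The identical planner, pointed at two unrelated goals:
print(plan((0, 0), grid_neighbors, lambda p: p == (4, 4)))  # reach the far corner
print(plan((0, 0), grid_neighbors, lambda p: sum(p) == 3))  # coordinates sum to 3
```

Nothing about a more capable planner (A*, tree search, a learned policy) has to change this picture: capability and objective enter as separate arguments. The thesis is the claim that this separation can hold in principle at any level of intelligence, not a claim about any particular architecture.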
As things stand, I do not believe there is any set of sources I or anyone else here could show you that would influence your opinion on that topic. At least, not without a lot of other prerequisite material that may seem to you to have nothing to do with it. And without knowing you a whole lot better than I ever could from a comment thread, I can’t really provide good recommendations beyond the standard ones, at least not recommendations I would expect you to appreciate.
However, you and I are (AFAIK) both humans, which means there are many elements of how our minds work that we share, which need not be shared by other kinds of minds. Moreover, you ended up here, and have an interest in many types of questions that I am also interested in. I do not know, but strongly suspect, that if you keep searching and learning, openly and honestly and with a bit more humility, you’ll eventually understand why I’m saying what I’m saying, whether you agree with me or not, and whether I’m right or not.
Claude probably read that material, right? If it finds my observations unique and serious, then maybe they are unique and serious? I’ll share the other chat next time.
It’s definitely a useful partner to bounce ideas off, but keep in mind that it’s trained with a bias toward being helpful and agreeable unless you specifically prompt it for an honest analysis and critique.