You may have noticed that a lot of people on here are concerned about AI going rogue and doing things like converting everything into paperclips. If you have no effective way of assuring good behavior, but you keep adding capability to each new version of your system, you may find yourself paperclipped. That’s generally incompatible with life.
This isn’t some kind of game where the worst that can happen is that somebody’s feelings get hurt.
This is only believed by a small portion of the population.
Why do you think the aforementioned decision makers share such beliefs?
I doubt they do. And using the unqualified word “believe” implies a level of certainty that nobody probably has. I also doubt that their “beliefs” are directly and decisively responsible for their decisions. They are responding to their daily environments and incentives.
Anyway, regardless of what they believe or of what their decision-making processes are, the bottom line is that they’re not doing anything effective to assure good behavior in the things they’re building. That’s the central point here. Their motivations are mostly an irrelevant side issue, and would only really matter if understanding them provided a path to getting them to modify their actions… which is unlikely.
When I say “literal fear of actual death”, what I’m really getting at is that, for whatever reasons, these people ARE ACTING AS IF THAT RISK DID NOT EXIST WHEN IT IN FACT DOES EXIST. I’m not saying they do feel that fear. I’m not even saying they do not feel that fear. I’m saying they ought to feel that fear.
They are also ignoring a bunch of other risks, including many that a lot of them publicly claim they do believe are real. But they’re doing this stuff anyway. I don’t care whether that’s caused by what they believe, by their just running on autopilot, or by their being captive to Moloch. The important part is what they are actually doing.
… and, by the way, if they’re going to keep doing that, it might be appropriate to remove their ability to act as “decision makers”.
If this is your view, then what does your previous comment,
Literal fear of actual death?
have to do with decisions made at Microsoft/OpenAI?
Their ‘daily environments and incentives’ would almost certainly not include such a fear.