I fully share the views expressed in your article. Indeed, the ideal solution would be to delete many of the existing materials and to reformat the remaining ones into a format understandable to every novice programmer, transhumanist, or even an average person.
As a poker player and a lawyer assisting consumers who have suffered from the consequences of artificial intelligence, as well as someone interested in cryptocurrencies and existential risks, I first invested in Eliezer Yudkowsky’s ideas many years ago. At that time, I saw how generative-predictive models easily outplayed poker players, and I wondered whether it was possible to counteract this. Since then, I have not seen a single serious security study conducted by anyone other than the players themselves; no independent body has taken up the question, or even examined the players’ own data.
And in the realm of cryptocurrencies, money continues to be stolen with the help of AI, with no help or refund in sight.
My prediction is that we have already lost the battle against AGI, but over the next 12 years we have a chance to make the situation a bit better: to create game conditions in which this player, or its precursors (the AI users), carry more aligned (lawful good) elements.
It seems that the very intelligent are also very stubborn: they see no doubts in their position, and such high IQs are very dangerous. They think they are right about everything, that they have understood it all, but we are just a few perspectives in a vast, incomprehensible world where we understand almost nothing. We are all wrong about something.
Yes, you’re probably a couple of sigmas smarter than the median person, but it is exactly that person, the median, or even someone a couple of IQ sigmas lower, whom you need to convince not to launch anything. It’s not just OpenAI developing AGI;
others are doing research and making decisions too, and they might not even know who Eliezer Yudkowsky is or what the LessWrong website is. They might visit a mirror copy of the site, see that it’s clear we shouldn’t let AGI emerge, think about graphics cards, and, since many graphics cards sit in decentralized mining, decide to take control of them.
If we’re lucky, their subordinates will simply steal the cards and use them for mining, and then everything will be fine.
But research like changing the sign of an objective function and creating something dangerous is better removed.
Another strange thing is the super-ethical laws for Europe and the US. There are a lot of jurisdictions. Even the Convention on Cybercrime is not universal, and among cybercrimes subject to universal jurisdiction there are no crimes concerning existential risks. So many international laws are just declarations, without real procedures and without any real power.
Many laws aren’t adhered to in practice. There are different kinds of people; for some, the criminal code is like a menu, and if they never have to pay for what they order from that menu, it is doubly bad.
There are individualists, and among transhumanists I’m sure there are many who would choose their own life, and the lives of the million people closest to them, over the rest of humanity. That’s not good; it’s unfair. The system should be for all billions of people.
But there are also those in the world who, if presented with a “shut down server” button, will eventually press it. There are many such buttons in various fields worldwide. If we take predictions over a hundred years, unless something radically changes, the likelihood of a “server shutdown” approaches 1.
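As a rough sketch, assuming a small but constant, independent per-year chance $p$ that any one of these buttons is pressed:

$$P(\text{pressed at least once in } n \text{ years}) = 1 - (1 - p)^n, \qquad \text{e.g. } p = 0.03,\ n = 100 \;\Rightarrow\; 1 - 0.97^{100} \approx 0.95.$$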
So it’s interesting whether, through open-source AGI or any other framework or model, we could create some universal platform with a rule system that on the one hand performs universal monitoring of all existential problems, and on the other provides clear, beneficial instructions for the median voter, as well as for the median worker and their masters.
Culture is created by the spoon. Give us a normal, unified system that encourages correct behavior with respect to existential risks, since you’ve won the genetic and circumstantial lottery of intelligence and were born with a high IQ and social skills.
Usually, the median person is interested in: jobs, a full fridge, rituals, culture, the spread of their opinion leader’s information, dopamine, political and other random and inherited values, life, continuation of life, and the like.
Provide a universal way of obtaining this and just monitor it calmly. And this touches on the whole set of existential risks: ecology, physics, pandemics, volcanic activity, space, nanobots, the atom.
The Doomsday Clock stands at 23:55 not only because of AGI risk; what selfishness to think so.
Sometimes it seems that Yudkowsky is the Girolamo Savonarola of our day. And the system of procedures that the Future of Life Institute and Eliezer have already devised: what matters is their execution!
Sadly, in humanity today it is profitable to act first and ask forgiveness later. So many businesses are built this way, like today’s Binance, without responsibility (‘don’t FUD, just build’), and all the powerful AI startups and others work the same way. Much experimental research is not 100% sure it is safe for the planet. In the 20th and 21st centuries this became normal. But it shouldn’t be.
These are the real conditions of the problem, the real pattern of life. And in crypto there are many graphics cards, collected into decentralized networks, gathering into large decentralized nodes and clusters that cannot be switched off. Are they a danger?
We need systems of cheap protection, brakes, and incentives to use them! And, as with seat belts, teach this from childhood. Something even simpler than Khan Academy. HPMOR was great, but do we have anything for the next generations, the ones who never saw or never liked Harry Potter? What would it be? Something to explain the problem.
Laws and rules that exist just for show, unenforceable, are only harmful. Since ancient times it has been known that any legal rule consists of three parts: hypothesis, disposition, and sanction. Without strong procedural law, all these substantive legal norms are worthless; more precisely, they are a boon for the malefactor. If we don’t procedurally protect people from wrongful AI, introducing soothing, non-working ethical rules will only increase volatility and the likelihood of wrongful AI and its advantage, even if we are lucky enough to get its alignment right in principle.
I apologize if there were any offensive remarks in the text, or if it reads like an unstructured rant expressing incorrect thoughts; that is how my brain works. I hope I am wrong; please point out where. Thank you for any comments and for your attention!
A bit of a rant, yes, but some good thoughts here.
I agree that unenforceable regulation can be a bad thing. On the other hand, it can also work in some limited ways. For example, the international agreements against heritable human genetic engineering seem to have held up fairly well. But I think that requires supporting facts about the world to be true: it needs to not be obviously highly profitable to defectors; it needs to be relatively inaccessible to most people (requiring specialized tech and knowledge); and it needs to fit with our collective intuitions (bio-engineering humans seems kinda icky to a lot of people).
The trouble is, all of these things fail to help us with the problem of dangerous AI! As you point out, many bitcoin miners have plenty of GPUs, enough to be dangerous if we get even a couple more orders-of-magnitude of algorithmic efficiency improvements. So it’s accessible. AI and AGI offer many tempting ways to acquire power and money in society. So it’s immediately and incrementally profitable. People aren’t as widely or instinctively outraged by AI experiments as by bio-engineering experiments. So it’s not intuitively repulsive.
So yes, this seems to me to be very much a situation in which we should not place any trust in unenforceable regulation.
I also agree that we probably do need some sort of organization which enforces the necessary protections (detection and destruction) against rogue AI.
And it does seem like a lot of human satisfaction could potentially be bought in the near future by focusing on making sure everyone in the world gets a reasonable minimum amount of satisfaction from their physical and social environments, as you describe here:
Usually, the median person is interested in: jobs, a full fridge, rituals, culture, the spread of their opinion leader’s information, dopamine, political and other random and inherited values, life, continuation of life, and the like. Provide a universal way of obtaining this and just monitor it calmly.
As Connor Leahy has said, we should be able to build sufficiently powerful tool-AI to not need to build AGI! Stop while we still have control! Use the wealth to buy off those who would try anyway. Also, build an enforcement agency to stop runaway AI or AI misuse.
I don’t know how we get there from here though.
Also, the offense-dominant weapons development landscape is looking really grim, and I don’t see how to easily patch that.
On the other hand, I don’t think we buy ourselves any chance of victory by trying to gag ourselves for fear of speeding up AGI development. It’s coming soon regardless of what we do! The race is short now; we need to act fast!
I don’t buy the arguments that our discussions here will make a significant impact in the timing of the arrival of AGI. That seems like hubris to me, to imagine we have such substantial effects, just from our discussions.
Code? Yes, code can be dangerous and shouldn’t be published if so.
Sufficiently detailed technical descriptions of potential advancements? Yeah, I can see that being dangerous.
Unsubstantiated commentary about a published paper being interesting and potentially having both capabilities and alignment value? I am unconvinced that such discussions meaningfully impact the experiments being undertaken in AI labs.