Both ARE possible, to the best of my knowledge, so it wouldn’t be wise to be too sure in either direction.
According to the technically correct, but completely useless, lesswrong-style rationality, you are right that it is not wise to say that it is “almost certainly bullshit”. What I meant to say is that, given what I know, it is unlikely enough to be true that it can be ignored, and that any attempt at calculating the expected utility of being wrong would be a waste of time, or even result in spectacular failure.
I currently feel that the whole business of assigning numerical probability estimates and calculating expected utilities is incredibly naive in most situations, and at best gives your beliefs a veneer of respectability that is completely unjustified. If you think something is almost certainly bullshit, then say so and don’t try to make up some number, because the number won’t resemble the reflective equilibrium of evidence, preferences and intuition that is compressed into calling something almost certainly bullshit.
What I meant to say is that, given what I know, it is unlikely enough to be true
Well, given what you think you know. It is always the case, with everyone, that they estimate from the premises of what they think they know. It just can’t be any different.
Somewhere in the chain of logical conclusions there might be an error. Or there might not be. And there might be an error in the premises. Or there might not be.
Saying “oh, I know you are wrong, based on everything I stand for” is not good enough. You should explain to us why a breakthrough in self-optimizing AI is as unlikely as you claim, just as the next guy, who thinks it is quite likely, should explain his view. And they do so.
P.S. I don’t consider myself a “lesswronger” at all. I disagree too often and have no “site patriotism”.
You should explain to us why a breakthrough in self-optimizing AI is as unlikely as you claim, just as the next guy, who thinks it is quite likely, should explain his view. And they do so.
My comment was specifically aimed at the kind of optimism that people like Jürgen Schmidhuber and Ben Goertzel seem to be displaying. I have asked other AI researchers about their work, including some who have worked with them, and they disagree.
There are mainly two possibilities here: that it takes a single breakthrough, or that it takes a number of breakthroughs, i.e. that it is a somewhat gradual development that can be extrapolated.
In the case that the development of self-improving AIs is stepwise, I doubt that their optimism is justified, simply because they are unable to show any achievements. All achievements in AI so far are either the result of an increase in computational resources or, as with e.g. IBM Watson or the Netflix algorithm, the result of throwing everything we have at a problem to brute-force a solution. None of those achievements are based on a single principle like an approximation of AIXI. So if people like Schmidhuber and Goertzel have made stepwise progress and are extrapolating it to conclude that more of the same will amount to general intelligence, then where are the results? They should be able to market even partial achievements.
In the case that the development of self-improving AIs demands a single breakthrough or new mathematical insights, I simply doubt their optimism, because such predictions amount to pure guesswork: nobody knows when such a breakthrough will be achieved, or at what point new mathematical insights will be discovered.
And regarding the proponents of a technological Singularity: 99% of their arguments consist of handwaving and of claims that physical possibility implies feasibility. In other words, bullshit.
Everybody on all sides of this discussion is a suspected bullshit trader or bullshit producer.
That includes me, you, Vinge, Kurzweil, Jürgen S., Ben Goertzel—everybody is a suspect. Including the investigators on any side.
Now, I’ll clarify my position. The whole AI business is an Edisonian project, not an Einsteinian one. I don’t see a need for enormous scientific breakthroughs before it can be done. No, to me it looks like this: we have had Maxwell’s equations for some time now; can we build an electric lamp?
Edison is just one among many who claim it is almost done in their lab. It is not certain what the real situation in Menlo Park is. The fact that an apprentice who left Edison says there is no hope for a light bulb is not very informative. Nor is the fact that another apprentice, still working there, is euphoric. It doesn’t even matter what the Royal Society back in old England has to say. Or a simple peasant.
You just can’t meta-judge very productively.
But you can judge whether it is possible to have such an object as an electrically driven lamp. Or whether you can build a nuclear fusion reactor. Or whether you can build an intelligent program.
If it is possible, how hard is it to actually build one of those? Even if it is possible, it may take a long time. Or it may take a short time.
The only real question is: can it be done, and if yes, how? If no, that is also fine. It just can’t be done.
But you have to stay on the topic, not the meta-topic, I think.
No, to me it looks like this: we have had Maxwell’s equations for some time now; can we build an electric lamp?
To me it looks like AGI researchers are simply rubbing amber with fur while claiming that they are on the verge of building a full-scale electricity-producing fusion power plant.
But you can judge whether it is possible to have such an object as an electrically driven lamp.
It is possible to create a Matrix-style virtual reality. It is possible to create antimatter weapons. That doesn’t mean that either is feasible. It also says nothing about timeframes.
The only real question is: can it be done, and if yes, how?
The real question is whether we should bother to worry about possibilities that could just as well be 500, 5,000 or 5 million years in the future, or might never come about the way we think.
To me it looks like AGI researchers are simply rubbing amber with fur while claiming that they are on the verge of building a full-scale electricity-producing fusion power plant.
It has been done, in 2500 years (provided that the fusion is still outsourced to the Sun). What guarantees that in this case we will CERTAINLY NOT be 100 times faster, i.e. done in about 25 years?
It is possible to create a Matrix-style virtual reality. It is possible to create antimatter weapons. That doesn’t mean that either is feasible. It also says nothing about timeframes.
It does not automatically mean that it is either unfeasible or far, far in the future.
The real question is whether we should bother to worry about possibilities that could just as well be 500, 5,000 or 5 million years in the future, or might never come about the way we think.
Even if it were certain that it is far, far away (and it isn’t that certain at all), it would still be a very important topic.
Even if it were certain that it is far, far away (and it isn’t that certain at all), it would still be a very important topic.
I am aware of that line of reasoning and reject it. Each person has about a 1 in 12,000 chance of having an unruptured aneurysm in the brain that could be detected and then treated after a virtually risk-free magnetic resonance angiography. Given the utility you likely assign to your own life, it would be rational to undergo such a screening. At least it would make much more sense than signing up for cryonics. Yet you don’t do it, do you?
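A back-of-the-envelope sketch of that comparison might look as follows; only the 1-in-12,000 figure comes from the claim above, while every other number (detection benefit, value placed on one’s life, cost of the scan) is an invented placeholder:

```python
# Rough expected-utility comparison for the aneurysm-screening example.
# Only the 1-in-12,000 prevalence figure comes from the claim above;
# all other numbers are arbitrary placeholders for illustration.

p_aneurysm = 1 / 12_000      # claimed chance of having a detectable, treatable aneurysm
p_death_averted = 0.5        # assumed: chance that detection plus treatment actually saves your life
value_of_life = 5_000_000    # assumed: dollar value you place on your own life
scan_cost = 500              # assumed: out-of-pocket cost of the MRA
scan_harm = 0                # assumed: the scan itself is treated as risk-free, as claimed

expected_benefit = p_aneurysm * p_death_averted * value_of_life
expected_cost = scan_cost + scan_harm

print(f"expected benefit: ${expected_benefit:,.2f}")   # about $208 with these placeholders
print(f"expected cost:    ${expected_cost:,.2f}")      # $500
print("screening 'worth it'?", expected_benefit > expected_cost)
```

With these particular placeholders the screening comes out not worth it; nudge any of the assumed inputs and the verdict flips, which is exactly the fragility the rest of this comment complains about.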
There are literally thousands of activities that are rational given their associated utilities. But that line of reasoning, although technically correct, is completely useless, because 1) you can’t really calculate shit, 2) it’s impossible to do for any agent that isn’t computationally unbounded, and 3) you’ll just end up sprinkling enough mathematics and logic over your fantasies to give them a veneer of respectability.
Expected utility maximization in combination with consequentialism is the ultimate recipe for extreme and absurd decisions and actions. People on lesswrong are fooling themselves by using formalized methods to evaluate informal evidence, thereby just pushing the use of intuition down to a lower level.
The right thing to do is to use the absurdity heuristic and discount crazy ideas that are merely possible but can’t be evaluated due to a lack of data.
Each person has about a 1 in 12,000 chance of having an unruptured aneurysm in the brain that could be detected and then treated after a virtually risk-free magnetic resonance angiography. Given the utility you likely assign to your own life, it would be rational to undergo such a screening. At least it would make much more sense than signing up for cryonics. Yet you don’t do it, do you?
Does this make sense? How much does the scan cost? How long does it take? What are the costs and risks of the treatment? Essentially, are the facts as you state them?
Expected utility maximization in combination with consequentialism is the ultimate recipe for extreme and absurd decisions and actions.
I don’t think so. Are you thinking of utilitarianism? If so, expected utility maximization != utilitarianism.
OK, what’s the difference here? By “utilitarianism” do you mean the old straw-man version of utilitarianism, with a bad utility function and no ethical injunctions?
I usually take utilitarianism to be consequentialism + max(E(U)) + sane human-value metaethics. Am I confused?
The term “utilitarianism” refers to maximising the combined happiness of all people. The page says:
Utilitarianism is an ethical theory holding that the proper course of action is the one that maximizes the overall “happiness”.
So: that’s a particular class of utility functions.
“Expected utility maximization” is a more general framework from decision theory. You can use any utility function with it—and you can use it to model practically any agent.
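A minimal sketch of that distinction, with invented actions, outcomes and numbers: the expected-utility maximiser below is parameterised by whichever utility function you hand it, and a utilitarian “total happiness of all people” function is just one possible choice.

```python
# Sketch: expected-utility maximisation takes an arbitrary utility function;
# utilitarianism corresponds to one particular choice of that function
# (sum everyone's happiness).  All actions, outcomes and numbers are made up.

def best_action(actions, utility):
    """Pick the action maximising the sum over outcomes of p(outcome) * utility(outcome)."""
    def expected_utility(action):
        return sum(p * utility(outcome) for p, outcome in action["outcomes"])
    return max(actions, key=expected_utility)

# Each outcome records the happiness of two people, "me" and "you".
actions = [
    {"name": "share", "outcomes": [(1.0, {"me": 5, "you": 5})]},
    {"name": "hoard", "outcomes": [(0.8, {"me": 9, "you": 0}),
                                   (0.2, {"me": 2, "you": 0})]},
]

utilitarian = lambda outcome: sum(outcome.values())  # total happiness of all people
selfish = lambda outcome: outcome["me"]              # a completely different utility function

print(best_action(actions, utilitarian)["name"])  # "share": 10 beats 0.8*9 + 0.2*2 = 7.6
print(best_action(actions, selfish)["name"])      # "hoard": 7.6 beats 5
```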
Utilitarianism is a pretty nutty personal moral philosophy, IMO. It is certainly very unnatural—due partly to its selflessness and lack of nepotism. It may have some merits as a political philosophy (but even then...).
Thanks.
Is there a name for expected utility maximisation over a consequentialist utility function built from human value? Does “consequentialism” usually imply normal human value, or is it usually a general term?
See http://en.wikipedia.org/wiki/Consequentialism for your last question (it’s a general term).
The answer to your “Is there a name...” question is “no”—AFAIK.
I get the impression that most people around here approach morality from that perspective; it seems like something that ought to have a name.
Each person has about a 1 in 12,000 chance of having an unruptured aneurysm in the brain that could be detected and then treated after a virtually risk-free magnetic resonance angiography. Given the utility you likely assign to your own life, it would be rational to undergo such a screening. At least it would make much more sense than signing up for cryonics. Yet you don’t do it, do you?
My understanding from long-past reading about elective whole-body MRIs was that they were basically the perfect example of iatrogenics & how knowing about something can harm you / the danger of testing. What makes your example different?
(Note there is no such possible danger from cryonics: you’re already ‘dead’.)
To me it looks like AGI researchers are simply rubbing amber with fur while claiming that they are on the verge of building a full-scale electricity-producing fusion power plant.
Really? Some have been known to exaggerate to stimulate funding. However, many people (including some non-engineers) don’t put machine intelligence that far off. Do you have your own estimates yet, perhaps?