So basically you’re saying that when Leo Szilard wanted to hide the true neutron cross section of purified graphite and Enrico Fermi wanted to publish it, you’d have published it.
I think rwallace is saying both men were right to continue their research.
Would you have hidden it?
You cannot hide the truth forever. Nuclear weapons were an inevitable technology. Likewise, whether or not Eurisko was genuine, someone will eventually cobble together an AGI. Especially if Eurisko was genuine, and the task really is that easy. The fact that you seem persuaded of the possibility of Lenat having danced on the edge of creating hard takeoff gives me more interest than ever before in a re-implementation.
Reading “Value Is Fragile” almost had me persuaded that blindly pursuing AGI is wrong, but shortly after, “Safety Is Not Safe” reverted me back to my usual position: stagnation is as real and immediate a threat as ever there was, vastly dwarfing any hypothetical existential risks from rogue AI.
For instance, bloat and out-of-control accidental complexity have essentially halted all basic progress in computer software. I believe that the lack of quality programming systems will lead (and may already have led) directly to stagnation in other fields, such as computational biology. The near-term future appears to resemble Windows Vista rather than HAL. Engelbart’s Intelligence Amplification dream has been lost in the noise. I thus expect civilization to succumb to Natural Stupidity in the near-term future, unless a drastic reversal in these trends takes place.
I hope so. It was the right decision in hindsight, since the Nazi nuclear weapons program shut down when the Allies, at the cost of some civilian lives, destroyed their source of deuterium. If they’d known they could’ve used purified graphite… well, they probably still wouldn’t have gotten nuclear weapons in this Everett branch but they might have somewhere else.
Before 2001 I would probably have been on Fermi’s side, but that’s when I still believed deep down that no true harm could come to someone who was only faithfully trying to do science. (I.e. supervised universe thinking.)
How is blindly looking for AGI in a vast search space better than stagnation?
No amount of aimless blundering beats deliberate caution and moderation (see 15th century China example) for maintaining technological stagnation.
How does working on FAI qualify as “stagnation”?
It is a distraction from doing things which are actually useful in the creation of our successors.
You are trying to invent the circuit breaker before discovering electricity; the airbag before the horseless carriage. I firmly believe that all of the effort currently put into “Friendly AI” is wasted. The bored teenager who finally puts together an AGI in his parents’ basement will not have read any of these deep philosophical tracts.
AGI is a really hard problem. If it ever gets accomplished, it’s going to be by a team of geniuses who have been working on the project for years. Will they be so immersed in the math that they won’t have read the deep philosophical tracts? Maybe. But your bored teenager scenario makes no sense.
It has successfully resisted solution thus far, but I suspect that it will seem laughably easy in retrospect when it finally falls.
This is not how truly fundamental breakthroughs are made.
Here is where I agree with you—anyone both qualified and motivated to work on AGI will have no time or inclination to pontificate regarding some nebulous Friendliness.
Why do you assume that AGI lies beyond the capabilities of any single intelligent person armed with a modern computer and a sufficiently unorthodox idea?
Hmm—now that you mention it, I realize my domain knowledge here is weak. How are truly fundamental breakthroughs made? I would guess that it depends on the kind of breakthrough—that there are some things that can be solved by a relatively small number of core insights (think Albert Einstein in the patent office) and some things that are big collective endeavors (think Human Genome Project). I would guess furthermore that in many ways AGI is more like the latter than the former; see below.
Only about two percent of the Linux kernel was personally written by Linus Torvalds. Building a mind seems like it ought to be more difficult than building an operating system. In either case, it takes more than an unorthodox idea.
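For what it’s worth, that kind of authorship share is easy to estimate roughly from a local checkout. A minimal sketch, counting non-merge commits per author—which is only a crude proxy for lines of code actually written—and the repository path in the usage line is hypothetical:

```python
# Rough estimate of one author's share of non-merge commits in a git checkout.
# Commit counts are a crude proxy for "percent of the code personally written";
# a line-level figure would need something like aggregated `git blame`.
import subprocess
from collections import Counter

def commit_share(repo_path: str, author: str) -> float:
    names = subprocess.run(
        ["git", "-C", repo_path, "log", "--no-merges", "--pretty=%an"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    counts = Counter(names)
    return counts[author] / max(len(names), 1)

# Hypothetical usage on a Linux kernel checkout:
# print(commit_share("linux", "Linus Torvalds"))
```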
There is no law of Nature that says the consequences must be commensurate with their cause. We live in an unsupervised universe where the movement of a butterfly’s wings can determine the future of nations. You can’t conclude, simply because the effect is expected to be vast, that the cause must be correspondingly prominent. This knowledge may only be found by a more mechanistic route.
You’re right in the sense that I shouldn’t have used the words “ought to be”, but I think the example is still good. If other software engineering projects take more than one person, then it seems likely that AGI will too. Even if you suppose the AI does a lot of the work up to the foom, you still have to get the AI up to the point where it can recursively self-improve.
Usually by accident, by one or a few people. This is a fine example.
I personally suspect that the creation of the first artificial mind will be more akin to a mathematician’s “aha!” moment than to a vast pyramid-building campaign. This is simply my educated guess, however, and my sole justification for it is that a number of pyramid-style AGI projects of heroic proportions have been attempted and all failed miserably. I disagree with Lenat’s dictum that “intelligence is ten million rules.” I suspect that the legendary missing “key” to AGI is something which could ultimately fit on a t-shirt.
“Reversed Stupidity is Not Intelligence.” If AGI takes both deep insight and a pyramid, then we would still expect the pyramid-only projects to fail.
Fair enough. It may very well take both.
That truly would be a sad day.
Are you seriously suggesting hypothetical AGIs built by bored teenagers in basements are “things which are actually useful in the creation of our successors”?
Is that your plan against intelligence stagnation?
I’ll bet on the bored teenager over a sclerotic NASA-like bureaucracy any day. Especially if a computer is all that’s required to play.
This is an answer to a different question. A plan is something implemented to achieve a goal, not just whatever outcome happens to be more likely (especially when that outcome works against you).
I view the teenager’s success as simultaneously more probable and more desirable than that of a centralized bureaucracy. I should have made that more clear. And my “goal” in this case is simply the creation of superintelligence. I believe the entire notion of pre-AGI-discovery Friendliness research to be absurd, as I already explained in other comments.
You are using the wrong terminology here. If the consequences of whatever AGI gets developed are seen as positive, if you are not dead as a result, then it is already almost FAI; that is how FAI is defined: the effect is positive. Deeper questions turn on what it means for the effect to be positive, and how one can be wrong in considering a certain effect positive even though it’s not, but let’s leave that aside for the moment.
If the teenager implemented something that has a good effect, it’s FAI. The argument is not that whatever ad-hoc tinkering leads to falls outside some strange concept of “Friendly AI”, but that ad-hoc tinkering is expected to lead to disaster, whatever you call it.
I am profoundly skeptical of the link between Hard Takeoff and “everybody dies instantly.”
This is the assumption which I question. I also question the other major assumption of Friendly AI advocates: that all of their philosophizing and (thankfully half-hearted and ineffective) campaign to prevent the “premature” development of AGI will lead to a future containing Friendly AI, rather than no AI plus an earthbound human race dead from natural causes.
Ad-hoc tinkering has given us the seed of essentially every other technology. The major disasters usually wait until large-scale application of the technology begins, with hordes of people following received rules rather than an ab initio understanding of how it works.
To discuss it, you need to address it explicitly. You might want to start from here, here and here.
That’s the wrong way to see it: the argument is simply that the lack of a disaster is better than a disaster (note that the scope of this category is separate from the first issue you raised; if it is shown that ad-hoc AGI is not disastrous, then by all means go ahead and do it). Suicide is worse than pending death from “natural” causes. That’s all. Whether it’s likely that a better way out will be found, or even possible, is almost irrelevant to this position. But we ought to try to find it, even if it seems impossible, even if it is quite improbable.
True, but if you expect a failure to kill civilization, the trial-and-error methodology must be avoided, even if it’s otherwise convenient and almost indispensable, and has proven itself over the centuries.
You consider the creation of an unFriendly superintelligence a step on the road to understanding Friendliness?
Earlier:
In other words, Friendly AI is an ineffective effort even compared to something entirely hypothetical.
What do you mean by this?
I am convinced that resource depletion is likely to lead to social collapse—possibly within our lifetimes. Barring that, biological doomsday-weapon technology is becoming cheaper and will eventually be accessible to individuals. Unaugmented humans have proven themselves to be catastrophically stupid as a mass and continue in behaviors which logically lead to extinction. In the latter I include not only ecological mismanagement, but, for example, our continued failure to solve the protein folding problem, to create countermeasures to nuclear weapons, and to create a universal weapon against viruses. Not to mention our failure of the ultimate planetary IQ test—space colonization.
What convinced you and how convinced are you?
Dmitry Orlov, and very.
Oh. It might be too late, but as a Russian I feel obliged to warn you: when reading texts written by Russians, try to ignore the charm of darkness and depression. We are experts at this.
So you, like me, are a “Risk transhumanist”—someone who thinks that existential risk motivates the enhancement of the intelligence of those humans who do the substantial information processing in our society (i.e., politicians, economists, scientists, etc.).
I completely agree with this position.
However, creating an uFAI doesn’t make things any better.
How about thinking about ways to enhance human intelligence?
I agree entirely. It is just that I am quite pessimistic about the possibilities in that area. Pharmaceutical neurohacking appears to be capable of at best incremental improvements, often at substantial cost. Our best bet was probably computer-aided intelligence amplification, and it may be a lost dream.
If AGI even borders on being possible with known technology, I would like to build our successor race. Starting from scratch appeals to me greatly.
Dying doesn’t appeal to me, hence the desire to build an FAI.
Dying is the default.
I maintain that there will be no FAI without a cobbled-together-ASAP (before petrocollapse) AGI.
but when do you think the petrocollapse is?
Personally, I don’t think that the end of oil will be so bad; we have nuclear, wind, and solar power, as well as other fossil fuels.
Also, look at the incentives: each country is individually incentivized to develop alternative energy sources.
Petrocollapse is about more than simply energy. Much of modern industry relies on petrochemical feedstock. This includes the production and recycling of the storage batteries which wind/solar enthusiasts rely on. On top of that, do not forget modern agriculture’s non-negotiable dependence on synthetic fertilizers.
Personally I think that the bulk of the coming civilization-demolishing chaos will stem from the inevitable cataclysmic warfare over the last remaining drops of oil, rather than from direct effects of the shortage itself.
You can synthesize petrol from water and CO2 given a large energy input. One way to do this is to first turn the water into hydrogen, then heat the hydrogen with the CO2 to make alkenes, etc. Chemists, please feel free to correct.
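For concreteness, a sketch of what is presumably the intended route (electrolysis, then the reverse water-gas shift, then Fischer-Tropsch synthesis; catalysts, conditions, and efficiencies omitted):

$$2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}$$
$$\mathrm{CO_2} + \mathrm{H_2} \rightarrow \mathrm{CO} + \mathrm{H_2O}$$
$$n\,\mathrm{CO} + (2n+1)\,\mathrm{H_2} \rightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O}$$

The first two steps yield syngas and the last chains it into alkanes (with some alkenes as by-products); every step consumes energy, which is where a large non-fossil power source has to come in.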
But I repeat: when do you think the petrocalypse is? How soon? When you say ASAP for AGI, we need numbers.
Yes, the US military is extensively researching how to convert nuclear energy + atmospheric CO2 + water (none of which are in short supply) into traditional fuel. New York Times article about it. The only thing holding it back from use is that it costs more than making the fuel from ordinary fossil fuels; but given the existing fuel taxes in most countries, if this method weren’t taxed while those taxes remained in place, “nuclear octane” would be cost-competitive.
Well, one way to convert nuclear energy into hydrocarbons is fairly common, if rather inefficient.
Well, one way to exploit the properties of air to fly is fairly common, if rather inefficient ;-)
Indeed. It’s a hard resource to exploit, that one, but it has been done. ;)
It’s harder to hitch a ride on a bird than it is to turn plants into car fuel, though. On a less silly note, the fact that so much fertilizer comes from petrochemicals and other non-renewable sources seriously limits the long-term potential of biofuels.
But I repeat: when do you think the petrocalypse is? How soon? When you say ASAP for AGI, we need numbers.
I’m not asciilifeform and am not suggesting there will be a petrocalypse.
You make a lot of big claims in this thread. I’m interested in reading your detailed thoughts on these. Could you please point to some writings?
The intro section of my site (Part 1, Part 2) outlines some of my thoughts regarding Engelbartian intelligence amplification. For what I regard as persuasive arguments in favor of the imminence of petrocollapse, I recommend Dmitry Orlov’s blog and dead-tree book.
As for my thoughts regarding AGI/FAI, I have not spoken publicly on the issue until yesterday, so there is little to read. My current view is that Friendly AI enthusiasts are doing the equivalent of inventing the circuit breaker before discovering electricity. Yudkowsky stresses the importance of “not letting go of the steering wheel” lest humanity veer off into the maw of a paperclip optimizer or similar calamity. My position is that Friendly AI enthusiasts have invented the steering wheel, playing with it—“vroom, vroom”—without having invented the car.
The history of technology provides no examples of a safety system being developed entirely prior to the deployment of “unsafe” versions of the technology it was designed to work with. The entire idea seems arrogant and somewhat absurd to me.
I have been reading Yudkowsky since he first appeared on the Net in the ’90s, and remain especially intrigued by his pre-2001 writings—the ones he has disavowed, which detail his theories regarding how one might actually construct an AGI. It saddens me that he is now a proponent of institutionalized caution regarding AI. I believe that the man’s formidable talents are now going to waste. Caution and moderation lead us straight down the road of 15th century China. They give us OSHA and the modern-day FDA. We are currently aboard a rocket carrying us to pitiful oblivion rather than a glorious SF future. I, for one, want off.
You seem to think an FAI researcher is someone who does not engage in any AGI research. That would certainly be a rather foolish researcher.
Perhaps you are being fooled by the fact that a decent FAI researcher would tend not to publicly announce any advancements in AGI research.
Science as priestcraft: a historic dead end, the Pythagoreans and the few genuine finds of the alchemists notwithstanding. I am astounded by the arrogance of people who consider themselves worthy of membership in such a secret club, believing themselves more qualified than “the rabble” to decide the fate of all mankind.
This argument mixes up the question of the factual utilitarian efficiency of science, the claim of overconfidence in science’s efficiency, and a moral judgment about breaking the egalitarian attitude based on said confidence in efficiency. Also, the argument is for some reason about science in general, and not just about the controversial claim concerning hypothetical FAI researchers.
Name three.
Not being rhetorical, genuinely curious here.
I.e., you think we can use AGI without a Friendly goal system as a safe tool? If you found Value Is Fragile persuasive, as you say, I take it you then don’t believe hard takeoff occurs easily?
That doesn’t make guaranteed destruction any better. It just makes FAI harder, because the time limit is closer.
Also, excellent example with the “planetary IQ test” thing.
As a Usenet discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.