I agree entirely. It is just that I am quite pessimistic about the possibilities in that area. Pharmaceutical neurohacking appears to be capable of at best incremental improvements, often at substantial cost. Our best bet was probably computer-aided intelligence amplification, and it may be a lost dream.
If AGI even borders on being possible with known technology, I would like to build our successor race. Starting from scratch appeals to me greatly.
Dying doesn’t appeal to me, hence the desire to build an FAI.
Dying is the default.
I maintain that there will be no FAI without a cobbled-together-ASAP (before petrocollapse) AGI.
but when do you think the petrocollapse is?
Personally, I don’t think that the end of oil will be so bad; we have nuclear, wind, solar and other fossil fuels.
Also, look at the incentives: each country is individually incentivized to develop alternative energy sources.
Petrocollapse is about more than simply energy. Much of modern industry relies on petrochemical feedstock. This includes the production and recycling of the storage batteries which wind/solar enthusiasts rely on. On top of that, do not forget modern agriculture’s non-negotiable dependence on synthetic fertilizers.
Personally I think that the bulk of the coming civilization-demolishing chaos will stem from the inevitable cataclysmic warfare over the last remaining drops of oil, rather than from direct effects of the shortage itself.
You can synthesize petrol from water and CO2 given a large enough energy input. One way to do this is to first turn the water into hydrogen, then react the hydrogen with CO2 at high temperature to make alkanes, alkenes, etc. Chemists, please feel free to correct.
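As a rough sketch of the reaction chain that phrase could point to (my own guess at the chemistry, not a claim about any particular industrial process):

$$2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2} \qquad \text{(electrolysis, driven by the external energy input)}$$
$$\mathrm{CO_2} + \mathrm{H_2} \rightarrow \mathrm{CO} + \mathrm{H_2O} \qquad \text{(reverse water-gas shift)}$$
$$n\,\mathrm{CO} + (2n+1)\,\mathrm{H_2} \rightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O} \qquad \text{(Fischer–Tropsch synthesis; alkenes appear as by-products)}$$

The overall conversion is just combustion run backwards, so it is energetically uphill whichever route you take; the large energy input is doing all the work.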
But I repeat: when do you think the petrocalypse is? How soon? When you say ASAP for AGI, we need numbers.
Yes, the US military is extensively researching how to convert nuclear energy + atmospheric CO2 + water (none of which is in short supply) into traditional fuel; there is a New York Times article about it. The only thing holding it back from use is that it costs more than making fuel from ordinary fossil fuels, but when you account for the fuel taxes that exist in most countries, if this method were left untaxed while those taxes remained in place, “nuclear octane” would be cost-competitive.
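To illustrate that tax point with purely hypothetical numbers (mine, not the article's): suppose conventional petrol costs $0.50/L to produce and carries a $0.60/L fuel tax, while the synthetic fuel costs $1.00/L to produce but is exempt from the tax.

$$\underbrace{0.50}_{\text{production}} + \underbrace{0.60}_{\text{fuel tax}} = 1.10\ \$/\mathrm{L} \;>\; \underbrace{1.00}_{\text{untaxed synthetic}}\ \$/\mathrm{L}$$

A fuel that costs twice as much to make can still be cheaper at the pump, which is all “cost-competitive” needs to mean here.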
Well, one way to convert nuclear energy into hydrocarbons is fairly common, if rather inefficient.
Well, one way to exploit the properties of air to fly is fairly common, if rather inefficient ;-)
Indeed. It’s a hard resource to exploit, that one, but it has been done. ;)
It’s harder to hitch a ride on a bird than it is to turn plants into car fuel, though. On a less silly note, the fact that so much fertilizer comes from petrochemicals and other non-renewable sources seriously limits the long-term potential of biofuels.
But I repeat: when do you think the petrocalypse is? How soon? When you say ASAP for AGI, we need numbers.
I’m not asciilifeform and am not suggesting there will be a petrocalypse.
You make a lot of big claims in this thread. I’m interested in reading your detailed thoughts on these. Could you please point to some writings?
The intro section of my site (Part 1, Part 2) outlines some of my thoughts regarding Engelbartian intelligence amplification. For what I regard as persuasive arguments in favor of the imminence of petrocollapse, I recommend Dmitry Orlov’s blog and dead-tree book.
As for my thoughts regarding AGI/FAI, I had not spoken publicly on the issue until yesterday, so there is little to read. My current view is that Friendly AI enthusiasts are doing the equivalent of inventing the circuit breaker before discovering electricity. Yudkowsky stresses the importance of “not letting go of the steering wheel” lest humanity veer off into the maw of a paperclip optimizer or similar calamity. My position is that Friendly AI enthusiasts have invented the steering wheel and are playing with it—“vroom, vroom”—without having invented the car.
The history of technology provides no examples of a safety system being developed entirely prior to the deployment of “unsafe” versions of the technology it was designed to work with. The entire idea seems arrogant and somewhat absurd to me.
I have been reading Yudkowsky since he first appeared on the Net in the ’90s, and remain especially intrigued by his pre-2001 writings—the ones he has disavowed, which detail his theories regarding how one might actually construct an AGI. It saddens me that he is now a proponent of institutionalized caution regarding AI. I believe that the man’s formidable talents are now going to waste. Caution and moderation lead us straight down the road of 15th-century China. They give us OSHA and the modern-day FDA. We are currently aboard a rocket carrying us to pitiful oblivion rather than a glorious SF future. I, for one, want off.
You seem to think an FAI researcher is someone who does not engage in any AGI research. That would certainly be a rather foolish researcher.
Perhaps you are being fooled by the fact that a decent FAI researcher would tend not to publicly announce any advancements in AGI research.
Science as priestcraft: a historic dead end, the Pythagoreans and the few genuine finds of the alchemists notwithstanding. I am astounded by the arrogance of people who consider themselves worthy of membership in such a secret club, believing themselves more qualified than “the rabble” to decide the fate of all mankind.
This argument conflates three things: the factual question of science’s utilitarian efficiency, the claim that people are overconfident in that efficiency, and the moral judgment about abandoning the egalitarian attitude on the strength of that confidence. Also, the argument is for some reason about science in general, rather than just the controversial claim about hypothetical FAI researchers.
Name three.
Not being rhetorical, genuinely curious here.
I.e., you think we can use an AGI without a Friendly goal system as a safe tool? If you found “Value Is Fragile” persuasive, as you say, I take it, then, that you don’t believe hard takeoff occurs easily?