The only part I agree with is that parallel software is fundamentally more difficult
What about the linked paper on dark silicon?
we are more likely than not to experience a hard-takeoff scenario in the 2020s
Wow. I really need to figure out a practical way to make apocalypse bets. Maybe I can get longbets.org to add that feature.
Just look at the OpenCog Prime documents, look at the current state of OpenCog implementation, look at the current level of funding and available developers, and do normal, everyday project planning analysis.
OpenCog looks pretty unpromising to me. What’s the #1 document I should read that has the best chance of changing my mind, even if it’s only a tiny chance?
Would you [be] doing things differently if you expected a hard-takeoff in ~2025? How would it be different?
For example if we had believed this 1.5 years ago, I suppose we would have (1) scuttled longer-term investments like CFAR and “strategic” research (e.g. AI forecasting, intelligence explosion microeconomics), (2) tried our best to build up a persuasive case that AI was very near, so we could present it to all the wealthy individuals we know, (3) used that case and whatever money we could quickly raise to hire all the best mathematicians willing to do FAI work, and (4) done almost nothing else but in-house FAI research. (Maybe Eliezer and others have different ideas about what we’d do if we thought AI was coming in ~2025; I don’t know.)
As far as I can tell, it completely ignores 3D chip design, or multi-chip solutions in the interim. If there are power limits on the number of transistors in a single chip, then expect to have more and more, smaller and smaller chips, or a completely different chip design which invalidates the assumptions underlying the dark silicon paper (3D chips, for example).
Generalizing, this is a very common category of paper. It identifies some hard wall that prevents continuation of Moore’s law. The abstract and conclusion contain much doom and gloom. In practice, it merely spurs creative thinking, resulting in a modification of the manufacturing or design process which invalidates the assumptions that led to the scaling limit. (In this case, the assumption that to get higher application performance, you need more transistors or smaller feature sizes, or that chips consist of a 2D grid of silicon transistors.)
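To make the power-budget point concrete, here’s a rough back-of-the-envelope sketch in Python. The scaling factors are illustrative assumptions of mine (roughly: transistor count doubles per node while per-transistor switching power only falls ~0.7x once voltage scaling stops), not numbers taken from the paper:

```python
# Rough back-of-the-envelope sketch of the "dark silicon" argument, with
# illustrative (not measured) scaling factors. Post-Dennard assumption:
# each process node doubles transistor count, but switching power per
# transistor only falls ~0.7x because supply voltage no longer scales.

TRANSISTOR_GROWTH_PER_NODE = 2.0   # Moore's law: 2x transistors per node
POWER_SCALING_PER_NODE = 0.7       # assumed per-transistor power reduction

def active_fraction(generations):
    """Fraction of the die that can switch at once under a fixed power budget."""
    power_if_fully_lit = (TRANSISTOR_GROWTH_PER_NODE * POWER_SCALING_PER_NODE) ** generations
    return min(1.0, 1.0 / power_if_fully_lit)

for gen in range(6):
    print(f"node +{gen}: ~{active_fraction(gen):.0%} of transistors usable simultaneously")
```

Which is exactly the kind of fixed assumption (one 2D die, one per-package power budget) that the design changes mentioned above are meant to route around.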
OpenCog looks pretty unpromising to me. What’s the #1 document I should read that has the best chance of changing my mind, even if it’s only a tiny chance?
I read a preprint of Ben Goertzel’s upcoming “Building Better Minds” book, most of which is also available as part of the OpenCog wiki. When I said the “OpenCog Prime documents,” this is what I was referring to. But it’s not structured as an argument for the approach, and as a 1,000-page document it’s hard to recommend as an introduction if you’re uncertain about its value. My own confidence in OpenCog Prime comes from doing my own analysis of its capabilities and weaknesses as I read this document (being unconvinced beforehand), and from my own back-of-the-envelope calculations of where it could go with the proper hardware and software optimizations. There is a high-level overview of CogPrime on the OpenCog wiki. It’s a little outdated and incomplete, but the overall structure is still the same.
Also, you might be interested in Ben’s write-up of the rise and fall of WebMind (which became Novamente, which became OpenCog), as it provides good context for why OpenCog is structured the way it is: what problems they tried to solve, what difficulties they encountered, why they ended up where they are in the design space of AGI minds, and why they are confident in that design. It’s an interesting people-story anyhow: “Waking up from the economy of dreams”
OpenCog gets a lot of flak for being an “everything and the kitchen sink” approach. I believe this criticism is unfair and undeserved. Rather, I would say that the CogPrime architecture recognizes that human cognition is complex, and that while it is reducible, an accurate model would nevertheless still contain a lot of complexity. For example, there are many different kinds of memory (perceptual, declarative, episodic, procedural, etc.), and therefore it makes sense to have a different mechanism for handling each of these memory systems. Maybe in the future some of them can be theoretically unified, but that doesn’t mean it wouldn’t still be beneficial to implement them separately—the model is not the territory.
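To illustrate what I mean, here is a toy sketch of my own (not actual OpenCog code): distinct memory types naturally want distinct storage and retrieval mechanisms, even if they could in principle sit under one unified theory:

```python
# Toy illustration (not actual OpenCog code) of the architectural point:
# distinct memory types get distinct mechanisms with distinct operations.

class DeclarativeMemory:
    """Facts and relations; supports query by pattern."""
    def __init__(self):
        self.facts = set()
    def store(self, fact):
        self.facts.add(fact)
    def query(self, predicate):
        return {f for f in self.facts if predicate(f)}

class EpisodicMemory:
    """Time-ordered experiences; supports recall by recency."""
    def __init__(self):
        self.episodes = []
    def store(self, episode):
        self.episodes.append(episode)
    def recall_recent(self, n=5):
        return self.episodes[-n:]

class ProceduralMemory:
    """Named skills; supports execution rather than recall."""
    def __init__(self):
        self.skills = {}
    def store(self, name, procedure):
        self.skills[name] = procedure
    def run(self, name, *args):
        return self.skills[name](*args)
```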
However, what CogPrime does provide, and what doesn’t get emphasized enough, is a base representation format (hypergraphs) capable of encoding the entire architecture in the same homoiconic medium. A good description of why this is important is the blog post “Why hypergraphs?”
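Roughly, the idea is something like the following toy sketch (my own simplification, not the real AtomSpace API): everything is an atom, and links can point at nodes or at other links, so statements about statements live in the same medium:

```python
# Minimal toy sketch of the hypergraph idea (not the real AtomSpace API):
# everything is an "atom"; a link's outgoing set may contain nodes or
# other links, so knowledge about knowledge uses the same representation.

class Atom:
    def __init__(self, atom_type, name=None, outgoing=()):
        self.atom_type = atom_type      # e.g. "ConceptNode", "InheritanceLink"
        self.name = name                # set for nodes, None for links
        self.outgoing = tuple(outgoing) # atoms this link points to

    def __repr__(self):
        if self.name is not None:
            return f"{self.atom_type}({self.name!r})"
        return f"{self.atom_type}{self.outgoing}"

# Plain declarative knowledge: "cat inherits from animal"
cat = Atom("ConceptNode", "cat")
animal = Atom("ConceptNode", "animal")
inherits = Atom("InheritanceLink", outgoing=(cat, animal))

# Knowledge *about* that knowledge is just another link pointing at a link:
confident = Atom("EvaluationLink",
                 outgoing=(Atom("PredicateNode", "high-confidence"), inherits))
print(confident)
```

That homoiconicity is what lets the system represent and reason about its own structures without needing a separate meta-language.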
For example if we had believed this 1.5 years ago, I suppose we would have (1) scuttled longer-term investments like CFAR and “strategic” research (e.g. AI forecasting, intelligence explosion microeconomics),
I hope y’all wouldn’t have scrapped the most excellent HPMoR ;)
(2) tried our best to build up a persuasive case that AI was very near, so we could present it to all the wealthy individuals we know, (3) used that case and whatever money we could quickly raise to hire all the best mathematicians willing to do FAI work, and (4) done almost nothing else but in-house FAI research. (Maybe Eliezer and others have different ideas about what we’d do if we thought AI was coming in ~2025; I don’t know.)
I hope that you (again, addressing both Luke and MIRI) take a look at existing, active AGI projects and evaluate, for each of them, (1) whether it could possibly lead to a hard-takeoff scenario, (2) what the timeframe for a hard-takeoff would be^1, and (3) what FAI risk factors it presents and what mitigation strategies are possible. And of course, publish the results of this study.
^1: Both your own analysis and the implementers’ estimates—but be sure to ask them when the necessary features X, Y, and Z will be implemented, rather than about the hard-takeoff specifically, so as not to bias the data.
Thanks for your detailed thoughts!
Thanks for the reading links.