You keep making a rookie mistake: trying to invent solutions without learning the subject matter first. Consider this: people just as smart as you (and me) have put in 100 times more effort trying to solve this issue professionally. What are the odds that you have found a solution they missed after gaining only a cursory familiarity with the topic?
If you still think that you can meaningfully contribute to the FAI research without learning the basics, note that smarter people have tried and failed. Those truly interested in making their contribution went on to learn the state of the art, the open problems and the common pitfalls.
If you want to contribute, start by studying (not just reading) the relevant papers on the MIRI web site and their summaries posted by So8res in Main earlier this year. And for Omega’s sake, go read Bostrom’s Superintelligence.
I just realised you’re the guy from my first post. Your first sentence now makes a lot more sense. I think the problem is not so much that I’m massively overconfident (though that may also be the case); it’s just that when I’m writing here I come across as too bold. I’ll definitely try to reduce that, though I thought I had done fairly well on this post. Looking back, I guess I could’ve been clearer. I was thinking of putting a disclaimer at the beginning saying ‘Warning! This post is not to be seen as representing the poster’s views. It is meant to be dismantled and thoroughly destroyed so the poster can learn about his AI misconceptions.’ But it was late, and I couldn’t put it into words properly.
Anyway, thanks for being patient with me. I must have sounded like a bit of a twat, and you’ve been pleasantly polite. It really is appreciated.
Not to put too fine a point on it, but many of the posters here have never sat through a single class on AI of any kind, nor read any books on actually programming AIs, nor touched any of the common tools in the current state of the art, nor learned about any existing or historical AI designs. Yet as long as they go along with the flow and stick to the right Applause Lights, no one demands that they go off and read papers on it.
As a result, conversations on here can occasionally be frustratingly similar to theological discussions, with posters who’ve read pop-comp-sci material like Superintelligence simply assuming that an AI will instantly gain any capability, up to and including things which would require more energy than exists in the universe or more computational power than would be available from turning every atom in the universe into computronium.
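As an aside, that computronium ceiling can be given a rough number. Here is a back-of-the-envelope sketch; the constants are standard published order-of-magnitude estimates (e.g. Lloyd’s 2000 “ultimate laptop” bound), not anything taken from this thread:

```python
# Rough upper bound on "every atom as computronium".
# All constants are published order-of-magnitude estimates, not thread data.
OPS_PER_SEC_PER_KG = 5.4e50     # Lloyd's quantum limit for 1 kg of matter
ORDINARY_MATTER_KG = 1.5e53     # approx. ordinary matter, observable universe
AGE_OF_UNIVERSE_S = 4.35e17     # ~13.8 billion years in seconds

ops_per_second = OPS_PER_SEC_PER_KG * ORDINARY_MATTER_KG
total_ops = ops_per_second * AGE_OF_UNIVERSE_S
print(f"~{ops_per_second:.0e} ops/s, ~{total_ops:.0e} ops over cosmic history")
```

Any claimed capability whose computational cost exceeds roughly 10^120 operations really is in “more than the universe can provide” territory; the complaint is about treating arbitrary capabilities as if they were below that line.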
Beware of isolated demands for rigor.
http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/
Superintelligence is a broad overview of the topic without any aspirations for rigor, as far as I can tell, and it is pretty clear about that.
who simply assume that an AI will instantly gain any capability up to and including things which would require more energy than there exists in the universe or more computational power than would be available from turning every atom in the universe into computronium.
This seems uncharitable. The outside view certainly backs up something like
every jump in intelligence opens up new previously unexpected and unimaginable sources of energy.
Examples: fire, fossil fuels, nuclear. Same applies to computational power.
There is no clear reason for this correlation to disappear. Thus what we would currently deem
more energy than there exists in the universe
or
more computational power than would be available from turning every atom in the universe into computronium
might reflect our limited understanding of the Universe, rather than any kind of genuine limits.
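To put rough numbers on those jumps, here is a quick sketch; the specific energies are standard textbook figures, not values from the thread:

```python
# Approximate specific energies in J/kg (standard textbook figures).
fuels = {
    "wood":            1.6e7,   # ~16 MJ/kg, chemical
    "coal":            2.4e7,   # ~24 MJ/kg, chemical
    "gasoline":        4.6e7,   # ~46 MJ/kg, chemical
    "uranium fission": 8.0e13,  # ~80 TJ/kg, nuclear (U-235)
}
wood = fuels["wood"]
for name, j_per_kg in fuels.items():
    print(f"{name:16s} {j_per_kg:8.1e} J/kg  ({j_per_kg / wood:,.0f}x wood)")
```

The fire-to-fossil-fuel step is a modest factor, but the chemical-to-nuclear step is a factor of a few million, which is the shape of jump the argument above is pointing at.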
The current state of the art doesn’t get anywhere close to the kind of general-purpose intelligence that (hypothetically but plausibly) might make AI either an existential threat or a solution to a lot of the human race’s problems.
So while I enthusiastically endorse the idea of anyone interested in AI finding out more about actually-existing AI research, either (1) the relevance of that research to the scenarios FAI people are worried about is rather small, or (2) those scenarios are going to turn out never to arise. And, so far as I can tell, we have very little ability to tell which.
(Perhaps the very fact that the state of the art isn’t close to general-purpose intelligence is evidence that those scenarios will never arise. But it doesn’t seem like very good evidence for that. We know that rather-general-purpose human-like intelligence is possible, because we have it, so our inability to make computers that have it is a limitation of our current technology and understanding rather than anything fundamental to the universe, and I know of no grounds for assuming that we won’t overcome those limitations. And the extent of variations within the human species seems like good reason to think that actually-existing physical things can have mental capacities well beyond the human average.)
It’s still extremely relevant since they have to grapple with watered-down versions of many of the exact same problems.
You might be concerned that a non-FAI will optimize for some scoring function and do things you don’t want; meanwhile, they’re dealing with the actual nuts and bolts of making modern AIs, where they want to make sure the system doesn’t optimize for some scoring function and do things they don’t want (on a more mundane level). That kind of problem appears in the first few pages of many AI textbooks, yet the applause lights here hold that almost all AI researchers are blind to such possibilities.
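The textbook version of that concern is easy to show in code. Here is a minimal sketch (entirely illustrative; the scoring function and candidates are invented for this example) of an optimizer satisfying a proxy score in a way its designer did not intend:

```python
# Proxy-score misspecification in miniature (all of this is invented for
# illustration; it is not anyone's actual system).
# Intended goal: summaries should be short AND informative.
# Encoded proxy: only brevity is scored.

def proxy_score(summary: str) -> float:
    return 1.0 / (1.0 + len(summary))  # shorter is strictly better

candidates = [
    "The reactor overheated because a coolant valve stuck shut.",
    "Reactor overheated; coolant valve stuck.",
    "",  # degenerate but proxy-optimal
]
best = max(candidates, key=proxy_score)
print(repr(best))  # '' -- the optimizer happily discards all information
```

Nothing exotic is happening here; it is the same “optimize the score, not the intent” failure at a mundane scale, which is exactly why it shows up in the first pages of introductory textbooks.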
There’s no need to convince me that general AI is possible in principle. We can use the same method to prove that nanobots and self-replicating von Neumann machines are perfectly possible, but we’re still a long way from actually building them.
It’s just frustrating: like watching someone trying to explain why proving code correct is important in the control software of a nuclear reactor (extremely true) who has no idea how code is proven, has never written even a hello-world program, and every now and then talks as if they believe exception handling is unknown to programmers. They’re making a reasonable point, but they mix their language with references to magic and occasional absurdities.
Yeah, I understand the frustration.
Still, if someone is capable of grasping the argument “Any kind of software failure in the control systems of a nuclear reactor could have disastrous consequences; the total amount of software required isn’t too enormous; therefore it is worth going to great lengths, including formal correctness proofs, to ensure that the software is correct” then they’re right to make that argument even if their grasp of what kind of software is used for controlling a nuclear reactor is extremely tenuous. And if they say ”… because otherwise the reactor could explode and turn everyone in the British Isles into a hideous mutant with weird superpowers” then of course they’re hilariously wrong, but their wrongness is about the details of the catastrophic disaster rather than the (more important) fact that a catastrophic disaster could happen and needs preventing.
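To make the reactor analogy concrete, here is a toy sketch (every name and number is hypothetical) of the kind of safety invariant the formal-proof argument is about. A test run like this only checks the invariant on the states it happens to visit; a formal proof would establish it for all reachable states:

```python
# Toy control loop with a safety invariant (all values hypothetical).
SAFE_MAX_TEMP_C = 350.0  # hypothetical safe envelope

def plant_step(temp_c: float, rod_insertion: float) -> float:
    """Fake one-step reactor model: constant heat input minus rod cooling."""
    return temp_c + 5.0 - 10.0 * rod_insertion

def controller(temp_c: float) -> float:
    """Proportional response, clamped to the actuator range [0, 1]."""
    return min(1.0, max(0.0, (temp_c - 300.0) / 50.0))

temp_c = 320.0
for _ in range(100):
    temp_c = plant_step(temp_c, controller(temp_c))
    # A test only exercises these particular states; a proof would cover
    # every reachable state of the real system.
    assert temp_c <= SAFE_MAX_TEMP_C, "safety envelope violated"
print(f"settles near {temp_c:.1f} C")  # converges to ~325 C
```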
That’s absolutely true, but it leads to two problems.
First: the obvious lack of experience with, and understanding of, the nuts and bolts makes people from outside less likely to take the realistic parts of the warning seriously, and may even lead to it being viewed as a subject of mockery, which works against you.
Second: the failure modes that people suggest, due to their lack of understanding, can also be hilariously wrong, like “a breach may be caused by the radiation giving part of the shielding magical superpowers, which will then cause it to gain life and open a portal to R’lyeh”, and they may even spend many paragraphs talking about how serious that failure mode is while others who also don’t actually understand politely applaud. This has some of the same unfortunate side effects: it makes people who are totally unfamiliar with the subject less likely to take the realistic parts of the warning seriously.
Does one of these papers answer this? http://lesswrong.com/r/discussion/lw/m26/could_you_tell_me_whats_wrong_with_this/c9em I suspect these papers would be difficult and time-consuming to understand, so I would prefer to not read all of them just to figure this one out.
I think you are being a little too exacting here. True, most advances in well-studied fields are likely to be made by experts. That doesn’t mean that non-experts should be barred from discussing the issue, for educational and entertainment purposes if nothing else.
That is not to say that there isn’t a minimum level of subject-matter literacy required for an acceptable post, especially when the poster in question posts frequently. I imagine your point may be that Algon has not cleared that threshold (or is close to the line) - but your post seems to imply a MUCH higher threshold for posting.
for educational and entertainment purposes if nothing else.
Indeed. And a more appropriate tone for this would be “How is this addressed in current AI research?” and “Where can I find more information about it?”, not “I cannot find anything wrong with this idea”. To be fair, the OP was edited to sound less arrogant, though the author’s reluctance to do some reading even after being pointed to it is not encouraging. Hopefully this is changing.
The main purpose of this post is not to actually propose a solution; I do not think I have some golden idea here. And if I gave that impression, I really am sorry (not sarcastic). What this was meant to be was a learning opportunity, because I really couldn’t see many things wrong with this avenue. So I posted it here to see what was wrong with it, and so far I’ve had a few people reply and give me decent reasons as to why it is wrong. I’ve quibbled with a few of them, but that’ll probably be cleared up in time.
Though I guess my post was probably too confident in tone. I’ll try to correct that in the future and make my intentions better known. I hope I’ve cleared up any misunderstandings, and thanks for taking the time to reply. I’ll certainly check those recommendations out.
Also, go do a couple years of study of theology before you can talk about whether God exists or not.
Doesn’t theology assume the divine as a premise?
By the way, didn’t you say you had a PhD in physics? Because I’ve been trying to find some good public domain resources for physics, and I’ve found a few, but it’s not enough. Do you have any recommendations? I know about Professor ’t Hooft’s ‘How to become a good theoretical physicist’ and I’m using that, but I’d also like to get some other resources.
That is my go-to link for self-study. You can also get most non-free textbooks cheap second-hand, by asking to borrow from a university prof (they tend to have a collection of older versions), or downloading them off libgen and clones. Plus the usual MIT OCW, edX and such.
Cheers.
Might I ask where that’s coming from? I have a fair bit of knowledge on the whole theology thing and I know that there are way too many problems to just come out with one post saying ‘God exists’ and some argument backing it up.
Yes, you’re taking it out of context.
shminux made a post which was basically, “don’t bother arguing with me unless you’ve read and understood this book and these papers” and I pointed out that religious believers and other believers in anti-rationalist ideas use the very same tactics to try to shut down discussion of their ideas. I was not actually stating that one should study theology before talking about whether God exists. That was sarcasm.
Oh right. Well, I think the problem was due to a misunderstanding, so it’s all good. And hey! I got some reading recommendations out of it, which is never a bad thing.