Occasionally, I feel like grabbing or creating some sort of general proto-AI (like a neural net, or something) and trying to teach it as much as I can, the goal being for it to end up as intelligent as possible, and possibly even Friendly. I plan to undertake this effort entirely alone, if at all.
May I?
I second Kevin: the nearest analogy that occurs to me is playing “kick the landmine” when the landmine is almost surely a dud.
Of course, the advantage of “kick the landmine” is that you don’t take the rest of the world out in case it wasn’t a dud.
I think Eliezer would say no (see http://lesswrong.com/lw/10g/lets_reimplement_eurisko/) but I think you’re so astronomically unlikely to succeed that it doesn’t matter.
What on Earth? When you say “may I” you presumably mean “is this a good idea” since obviously we’re not in a position to stop you. But you’re already aware of the arguments why it isn’t a good idea and you don’t address them here, so it’s not clear that you have a good purpose for this comment in mind.
I interpreted it as akin to a call to a suicide hotline.
‘This is sounding like a good idea...’
(Can you help / talk me out of it?)
If that's the case, we can probably offer support. I certainly understand how strong the pull of curiosity can be; Warrigal may already be rationalizing that he probably won't make any progress anyway, and we can give advice that balances that. But then, is it really true that Warrigal should be afraid of knowledge?
I don’t think it’s fear of knowledge that leads me to suggest you don’t try to build a catapult to twang yourself into a tree.
Do you mean playing around with backprop, or making your own algorithms?
Either.
If this is your state of knowledge then… how can I put this: it seems extremely likely that you’ll start playing around with very simple tools, find out just how little they can do, and, if you’re lucky, start reading up and rediscovering the world of AI.
Backprop is likely to be safe. Lots of AI students play around with it, and it is well-behaved mathematically. If it were going to kill us, it would have done so already. More advanced stuff has to be evaluated individually.
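For concreteness, here is roughly the kind of thing "playing around with backprop" means in practice: a minimal sketch in Python with NumPy of a tiny two-layer sigmoid network trained by hand-written backpropagation. The network size, learning rate, step count, and the XOR task are all illustrative choices of mine, not anything from this thread.

```python
import numpy as np

# Tiny 2-4-1 sigmoid network trained on XOR with plain gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations, shape (4, 4)
    out = sigmoid(h @ W2 + b2)    # network outputs, shape (4, 1)

    # Backward pass: gradients of mean squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)    # (4, 1)
    d_h = (d_out @ W2.T) * h * (1 - h)     # (4, 4)

    # Gradient descent update.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Outputs should be close to 0, 1, 1, 0 (exact values depend on the
# random initialization).
print(np.round(out, 2))
```

Nothing in this goes beyond fitting a fixed function to four data points, which is the sense in which it's described above as well-behaved.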
No.
What made you think you might get any other answer?
Well, I did get other answers. Ask Kevin and thomblake why they answered that way, if you like.
Sounds fun. Though so far we don’t have anything that you can “teach” in a general way.