I’m not willing to engage in a discussion where I defend my guesses and attack your prediction. I don’t have sufficient knowledge, nor the desire to do so.
My purpose was to ask for any stable basis for AI development predictions and to point out one possible bias.
I’ll use this post to address some of your claims, but don’t treat that as an argument about when AI will be created:
How are Ray Kurzweil’s extrapolations empirical data?
If I’m not wrong, all he takes into account is computational power. Why would that be enough to allow for the creation of AI? By 1900 the world had enough resources to build computers, and yet it wasn’t possible, because the technology wasn’t known. By 2029 we may have the proper resources (computational power) but still lack the knowledge of how to use them (what programs to run on those supercomputers).
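For what it’s worth, the kind of trend extrapolation Kurzweil makes can be sketched in a few lines. Every number below (current compute, brain-equivalent FLOPS, doubling time) is an illustrative assumption, not a settled fact:

```python
import math

# Illustrative assumptions only -- all three numbers are heavily disputed:
flops_now = 1e18       # rough order of a top supercomputer today (assumption)
brain_flops = 1e16     # one common, contested estimate for the human brain
doubling_years = 1.5   # assumed doubling time for available compute

def years_until(target_flops, current=flops_now, doubling=doubling_years):
    """Years until trend-extrapolated compute reaches target_flops."""
    if current >= target_flops:
        return 0.0
    return doubling * math.log2(target_flops / current)

# Under these assumptions, one brain-equivalent is already exceeded,
# and a thousand brain-equivalents is only a matter of years of waiting.
print(years_until(brain_flops))
print(years_until(1000 * brain_flops))
```

Note that nothing in this arithmetic touches the software side, which is exactly the gap pointed out above: it predicts when the hardware exists, not when anyone knows what to run on it.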
I’m not sure what you’re saying here. That we can assume AI won’t arrive next month because it didn’t arrive last month, or the month before last, etc.? That seems like shaky logic.
I’m saying that, I would guess, everybody agrees that AI will not arrive within a month. What I’m interested in is the basis on which we make that claim.
I’m not trying to make an argument about when AI will arrive; I’m genuinely asking.
You’re right about the comforting factor of AI coming soon; I hadn’t thought of that.
But still, development of AI in the near future would probably mean that its creators haven’t solved the friendliness problem. Current methods are very black-box.
More than that, I’m a bit concerned about current morality and government control. I’m a bit scared of what the people of today might do with such power. You don’t like gay marriage? AI can probably “solve” that for you. Or maybe you want financial equality for humanity? Same story.
I would agree, though, that it’s hard to tell where our preferences would point.
If you assume the worst case, that we will be unable to build AGI any faster than by direct neural simulation of the human brain, that becomes feasible in the 2030s on technological pathways that can be foreseen today.
Are you taking into account that to this day we don’t truly understand the biological mechanisms of memory formation and the development of neuronal connections?
Can you point me to any predictions made by brain researchers about when we may expect technology allowing a full scan of the human connectome, and about how close we are to understanding brain dynamics (the creation of new synapses, control of their strength, etc.)?
Once you are able to simulate the brain of a computational neuroscientist and give it access to its own source code, that is certainly enough for a FOOM.
I’m tempted to call that bollocks.
Would you expect a FOOM if you gave said scientist a machine telling him which neurons are connected and allowing him to manipulate them?
Humans can’t even fully understand a nematode’s neural network of some 300 neurons. You expect them to understand a whole human brain of 100 billion?
Sorry for the above; it would need a much longer discussion, but I really don’t have the strength for that.
No, but a sufficiently morally depraved research program could certainly achieve a hard take-off based on direct simulations and best-guess butchery alone. Once you have a brain running in code, you can do experimental neurosurgery with a reset button, without the constraints of physicality, biology, or viability stopping you. A thousand simulated man-years of virtual people dying horrifying deaths later… This isn’t a very desirable future, but it is a possible one.
I’m tempted to call that bollocks. Would you expect a FOOM if you gave said scientist a machine telling him which neurons are connected and allowing him to manipulate them?
Don’t underestimate the rapid progress that can be achieved with very short feedback loops. (In this case, probably rapid progress into a wireheading attractor, but still.)
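As a toy illustration of how much loop length matters (all the durations below are made up for illustration, not estimates of any real research program):

```python
# Toy comparison: how many experiment iterations fit in a decade
# at different feedback-loop lengths. Durations are invented examples.
SECONDS_PER_DECADE = 10 * 365 * 24 * 3600

loop_lengths = {
    "wet-lab study, ~3 months": 90 * 24 * 3600,
    "simulated experiment, ~1 hour": 3600,
    "simulated experiment, ~1 second": 1,
}

for name, seconds in loop_lengths.items():
    print(f"{name}: {SECONDS_PER_DECADE // seconds:,} runs per decade")
```

Shortening the loop from months to seconds buys several orders of magnitude more iterations; whether you iterate toward insight or straight into a wireheading attractor is another question.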
I hope this is helpful in some way.