Yes, that is close to what I am proposing.
No, I am not aware of any facts about progress in decision theory that would give any guarantees of the future behavior of AI. I still think that we need to be far more concerned with people’s behaviors in the future than with AI. People are improving systems as well.
As far as the Komodo dragon goes, you missed the point of my post, and the Komodo dragon just kinda puts the period on that:
“Gorging upon the stew of...”
Please take a look here: http://wiki.lesswrong.com/wiki/Decision_theory
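The core idea on that page is choosing actions by their expected utility. Here is a minimal sketch of that idea, with toy action names and numbers of my own (nothing from the linked wiki):

```python
# Minimal sketch of expected-utility maximization, the textbook core of
# decision theory. Action names, outcomes, and numbers are toy placeholders.
def expected_utility(action, outcome_probs, utility):
    """Sum of P(outcome | action) * U(outcome) over the action's outcomes."""
    return sum(p * utility[outcome] for outcome, p in outcome_probs[action].items())

# A two-action toy problem: probability of each outcome given each action.
outcome_probs = {
    "act_a": {"good": 0.7, "bad": 0.3},
    "act_b": {"good": 0.4, "bad": 0.6},
}
utility = {"good": 10.0, "bad": -5.0}

scores = {a: expected_utility(a, outcome_probs, utility) for a in outcome_probs}
best_action = max(scores, key=scores.get)
print(scores)       # {'act_a': 5.5, 'act_b': 1.0}
print(best_action)  # act_a
```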
As far as the dragon goes, I was just pointing out that some minds are not trainable, period. And even if training works well for some intelligent species like tigers, it's quite likely that it will not be transferable (eating the trainer: not OK; eating a baby: OK).
Yes, I have read many of the various Less Wrong Wiki entries on the problems surrounding Friendly AI.
Unfortunately, I am in the process of getting an education in Computational Modeling and Neuroscience, and that largely precludes focusing on the issues that tend to come up among many people on Less Wrong (particularly those from the SIAI, who I feel are myopically focused upon FAI to the detriment of other things). I was supposed to have started at UC Berkeley this fall, but budget cuts in the Community Colleges of CA resulted in the loss of two classes necessary for transfer, so I will have to wait until next fall to start. I am now thinking of going to UCSD instead, where they have the Institute of Computational Neuroscience (or something like that; it's where Terry Sejnowski teaches), among other things that make it an excellent choice for what I wish to study.
While I would eventually like to see whether it is even possible to build some of these Komodo-dragon-like superintelligences, I will probably wait until such a time as our native intelligence is a good deal greater than it is now.
This touches upon an issue I first learned about from Ben. The SIAI seems to be putting forth the opinion that AI is going to spring fully formed from someplace, in the same fashion that Athena sprang fully formed (and clothed) from the head of Zeus.
I just don’t see that happening. I don’t see any Constructed Intelligence as being something that will spontaneously emerge outside of any possible human control.
I am much more in line with people like Henry Markram, Dharmendra Modha, and Jeff Hawkins, who believe that the kinds of minds we will be working towards (models of the mammalian brain) will trend toward Constructed Intelligences (CI, as opposed to AI) that naturally prefer our company, even if we are a bit "dull-witted" in comparison.
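Since "models of the mammalian brain" can sound abstract: the kind of building block that large-scale brain simulations rest on is something like a leaky integrate-and-fire neuron. A minimal sketch, with illustrative parameters of my own choosing (nothing specific to Markram's, Modha's, or Hawkins' actual systems):

```python
# Minimal leaky integrate-and-fire neuron: a standard building block of
# large-scale mammalian-brain simulations (illustrative parameters only).
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_threshold=-0.050, resistance=1e7):
    """Return the membrane-voltage trace and spike times for an input current trace (in amps)."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: dV/dt = (-(V - V_rest) + R * I) / tau
        v += dt * (-(v - v_rest) + resistance * i_in) / tau
        if v >= v_threshold:          # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset               # reset the membrane after the spike
        voltages.append(v)
    return np.array(voltages), spikes

# Constant 2 nA input for 200 ms produces a regular spike train.
current = np.full(2000, 2e-9)
trace, spike_times = simulate_lif(current)
print(f"{len(spike_times)} spikes in 0.2 s")
```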
I don’t so much buy the “Ant/Amoeba to Human” comparison, simply because almost all mammals tend to have some qualities that ants and amoebas don’t: they tend to be cute and fuzzy, and to like other cute/fuzzy things. A CI modeled after a mammalian intelligence will probably share that trait. That isn’t necessarily so, but it does seem more likely than not.
And, considering it will be my job to design computational systems that model cognitive architectures, I would prefer to work toward that end until it is shown that ANY work toward that end is dangerous enough not to do.