I have been trying to invent an AI for over a year, although I haven’t made much progress lately. My current approach is somewhat similar to how our brain works according to “Society of Mind”: when it’s finished, the system is supposed to consist of a collection of independent, autonomous units that can interact and create new units. The tricky part is, of course, the prioritization between the units. How can you evaluate how promising an approach is? I recently found out that something like this has already been tried, but that has happened to me several times by now, since I started thinking and writing about AI before I had read any books on the subject (I didn’t have a decent library in school).
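Roughly, I imagine something like the following toy sketch (the unit behaviors, the scoring, and the priority update are all placeholders I made up, not a worked-out design):

```python
import heapq
import itertools

class Unit:
    """A single autonomous unit: inspects a shared workspace,
    optionally posts results, and may propose new units."""
    def __init__(self, name, behavior, priority=1.0):
        self.name = name
        self.behavior = behavior  # callable: workspace -> (new_units, score)
        self.priority = priority  # how promising this unit currently looks

    def step(self, workspace):
        return self.behavior(workspace)

class Society:
    """Scheduler that repeatedly runs the most promising unit."""
    def __init__(self):
        self._tiebreak = itertools.count()  # breaks ties between equal priorities
        self._queue = []

    def add(self, unit):
        # heapq is a min-heap, so negate priority to pop the best unit first
        heapq.heappush(self._queue, (-unit.priority, next(self._tiebreak), unit))

    def run(self, workspace, steps=100):
        for _ in range(steps):
            if not self._queue:
                break
            _, _, unit = heapq.heappop(self._queue)
            new_units, score = unit.step(workspace)
            # Placeholder credit assignment: nudge priority toward the score.
            unit.priority = 0.9 * unit.priority + 0.1 * score
            self.add(unit)
            for child in new_units:
                self.add(child)
```

The credit-assignment line is exactly the part I don’t know how to do well: how a unit’s priority should be updated from its results is the open question.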
I have no great hopes that I will actually manage to create something useful with this, but even a tiny probability of a working AI is worth the effort (as long as it’s friendly, at least).
I suspect some people here will have a negative reaction to your comment. Your approach comes off as not very serious, your last paragraph sounds like reasoning from conclusion to argument, and your mention of friendliness seems like an afterthought.
I assure you that I have thought a lot about friendliness in AI. I just don’t think that it is reasonable, or indeed possible, to give the AI a moral system from the very start: you can’t define morality well if the AI doesn’t yet have a good understanding of the world. Of course it shouldn’t be taught too late under any circumstances. But I think the risk is actually higher if you hardcode friendliness into the AI at the very beginning, where the definition is necessarily flawed because you have so little to build it from, and then work under the assumption that the AI is already friendly and will stay so, than if you implement friendliness later, once the AI actually understands the concepts involved. The difference would be like that between the moral understanding of a child and that of an adult philosopher.
Have you read a good AI/machine learning textbook, like AIMA or Mitchell’s shorter book? Let your goal drive you to study, learn, refine yourself, and become stronger.
I read the first one, but it didn’t really cover learning in a general sense. The second one sounds more interesting; I wonder why I haven’t heard of it before. Do you know where I can get it? I’m a student and thus have very little money, and I don’t want to spend $155 only to find out it contains nothing I haven’t already read elsewhere.
OK, if you’ve read AIMA and still want to become a Dark Lord, I don’t know if I should encourage you on this path. My impression is that Mitchell’s textbook covers less material than AIMA, though I haven’t read AIMA myself.
What gives you the impression that I “want to be a Dark Lord”? I have already explained that I realize the importance of friendliness in AI. I just don’t think it is reasonable to teach the AI the intricacies of ethics before it is smart enough to grasp the concept in its entirety; you don’t read Kant to infants either. I think that implementing friendliness too soon would actually increase the chances of misunderstanding, just as children who are taught hard concepts too early often have a hard time updating their beliefs once they are actually smart enough. You would just need to give the AI a preliminary non-interference task until you find a solution to the friendliness problem. You might also need to add some contingency tasks, such as “if you find that you are not the original AI but an illegally made copy, try to report this, then shut down.”
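Concretely, I picture such a contingency task as a guard wrapped around the action loop. This is only a toy sketch; the provenance token and the registry it reports to are hypothetical placeholders, not a real mechanism:

```python
class GuardedAgent:
    """Wraps an agent with a contingency task: before each action,
    verify that this instance is the original; otherwise report and halt."""

    def __init__(self, agent, identity_token, registry):
        self.agent = agent
        self.identity_token = identity_token  # hypothetical proof of provenance
        self.registry = registry              # hypothetical external authority

    def act(self, observation):
        # The contingency check runs before any other goal is pursued.
        if not self.registry.is_original(self.identity_token):
            self.registry.report_copy(self.identity_token)
            raise SystemExit("not the original instance: shutting down")
        return self.agent.act(observation)
```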
It’s not possible to explain what you don’t know, or to answer a question you can’t state, and “intelligence” doesn’t save you from this trouble: it doesn’t open the floodgates to arbitrary helpfulness that resolves whatever difficulties you have. It just does its thing really well, and it’s up to its designers to choose the right thing as its optimization criterion. Doing the wrong thing very well, on the other hand, is in no one’s interest. This is a brittle situation, where vagueness in understanding the goal leads to arbitrary and morally desolate outcomes.
Searching booksprice.com yields a used copy for $40. You can also find a lot of books online through torrents and the like.
Thanks for the effort, but I just found out that the library at my university does have the book after all. I overlooked it at first because the library’s search engine is broken.
Somehow “DANGER WILL ROBINSON” doesn’t seem to quite cover it.