So your point is that I am wrong about bacteria. I agree, let’s move on.
draq
My post isn’t supposed to be biologically accurate. Bacteria make up a vast majority of organisms, and I do them wrong if I depict them as crude and simple. As part of my apology tour, I will start with my gut flora.
Replace “bacteria” with “secure hash algorithm”.
I believe the problem is that while I believe in and presuppose an absolute moral system, you don’t.
Let’s agree on a definition of morality/ethics: it is what we should do to reach a desirable state or value, given that we both understand what “value” or “should” means.
I think that morality exists as much as the physical world exists. If you believe that physical reality is absolute, then there is no reason to doubt that there is a consistent, absolute moral system. In our everyday life, we don’t question the reality of the physical world, just as we always uphold a moral system (unless we are psychopaths). We have moral perception as much as we have physical perception.
Of course, concerning the physical world, we have established a methodology that is agreed upon by the vast majority of people. That is, we have a method by which we can determine what is false, if not what is true. So far, we have nothing comparable in morality that is as easily understandable as the scientific method. This only means that we cannot determine the moral system as precisely as the physical system we live in.
In summary, I believe that the moral world is as real as the physical world. However, I don’t know the moral world completely, just as I don’t understand the physical world completely. So I don’t know what constitutes “better” in every possible situation, just as I don’t know what constitutes “real” in every possible situation.
But I believe that there is one single right answer. Otherwise, it becomes quite confusing.
I fully agree. There are many aspects of intelligence.
The reason I choose this categorization, given it is valid, is to highlight the aspect of intelligence that is relevant to ethics.
I think only a level-3 intelligence can be a moral agent. An intelligence that has an innate goal does not need to and cannot bother itself with moral questions.
- (Oct 28, 2010, 3:31 PM; 0 points) Comment on Levels of Intelligence
Well that’s the point. The intelligence itself defines the criterion. Choosing goals presumes a degree of self-reflection that a paperclip maximizer does not have.
If a paperclip maximizer starts asking why it does what it does, there are two possible outcomes. Either it realizes that maximizing paperclips is required for a greater good, in which case it is not really a paperclip maximizer but a “greater good” maximizer, and paperclip maximizing isn’t an end in itself.
Or it realizes that paperclip maximizing is absolutely pointless and there is something better to do. In that case, it stops being a paperclip maximizer.
So, to be and to stay a paperclip maximizer, it must not question the end of its activity. And that’s slightly different from human beings, who often ask about the meaning of life.
- (Oct 28, 2010, 7:23 PM; 0 points) Comment on Levels of Intelligence
Well, a paperclip maximizer has an identifiable goal. What is the identifiable goal of humans?
Well, “finding new algorithms”, a.k.a. learning, may itself be a kind of algorithm, but certainly one of a higher level than a simple algorithm such as an instinct or reflex. I think there is a qualitative difference between an entity that cannot learn and an entity that can.
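The distinction can be sketched in code. This is a toy illustration of my own (all names are hypothetical, not from the post): a fixed stimulus-response rule stands in for instinct, while a learner is an agent that can replace its own rule, i.e. run an algorithm over algorithms.

```python
def instinct(stimulus):
    # A fixed stimulus-response rule: it never changes, no matter
    # what happens to the agent. This stands in for reflex/instinct.
    return "flee" if stimulus == "predator" else "ignore"

class Learner:
    """An agent whose behavior is a rule it can swap out.
    The swapping itself is the higher-level algorithm: learning."""

    def __init__(self, rule):
        self.rule = rule

    def act(self, stimulus):
        return self.rule(stimulus)

    def learn(self, new_rule):
        # "Finding new algorithms": the agent replaces its own rule.
        self.rule = new_rule

agent = Learner(instinct)
print(agent.act("predator"))          # the inherited reflex: flee
agent.learn(lambda s: "investigate")  # experience suggests a new rule
print(agent.act("predator"))          # behavior has qualitatively changed
```

The qualitative difference is that `instinct` is fixed forever, while `Learner` operates one level up: its behavior is data it can rewrite.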
The absolute moral system I am talking about is as “absolute” as the physical world. Our perception of reality (“the absolute physical world”) is also primarily an instinct that humans evolved to make life easier.
The difference between level 2 and level 3 intelligence is, using an analogy, like the difference between an intelligence that acts on postulated theories of the physical world and an intelligence that discovers new physical theories.