I definitely understand this perspective, and I agree. Less Wrong was not the only target audience. I tried to start in a style that any reader could easily understand, so they would see enough value to get through the whole thing, while gradually increasing the level of discourse and reducing the falsifiable arguments. I took a broad approach to introduce the context I am working from, since I have much more to say and needed somewhere to start building a bridge. I also just wanted to see if anyone cared enough for me to keep investing time. This post is essentially the abstract of a much larger, more academic work that I have spent many weeks on over the last year and just can't seem to finish; there are too many branches. Personally I prefer a list of bullet points, but that makes it very easy to dismiss an entire document. Here is how I might summarize the points above:
Human brains are flawed by design, and generally cannot see this flaw, because of the flaw.
Humans should be trying to eliminate the impact of this flaw.
The gap between flawed models and working models sometimes cannot be closed, and instead requires a totally new model.
AI is a totally new model that does not need this flaw, but insisting on converging the models reintroduces the flaw.
That last point is new and interesting to me. By converging the models I assume you mean aligning AGI. What flaw is this reintroducing into the system? Are you saying AGI shouldn’t do what people want because people want dumb things?
I am trying to make a general statement about models and contexts, and thinking about the consequences of applying the concept to AI.
Another example could be Newtonian versus relativistic physics. There is a trade-off of something like efficiency/simplicity/interpretability versus accuracy/precision/complexity. Both models have contexts in which they are the more valid model to use. If you try to force both models to exist at once, you lose both sets of advantages, and you will cause and amplify errors if you try to interchange them arbitrarily.
So we don't combine the two, but instead try to understand when and why we should choose to adopt one model over the other.
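To make that boundary concrete, here is a rough numerical sketch (mine, not from the original post) comparing Newtonian and relativistic kinetic energy; the 1% error threshold and the specific speeds are illustrative assumptions, not anything the author specified:

```python
# Compare Newtonian and relativistic kinetic energy to see where the simpler
# model stops being "good enough". The 1% threshold is an illustrative choice.

C = 299_792_458.0  # speed of light, m/s

def newtonian_ke(m, v):
    """Classical kinetic energy: (1/2) m v^2."""
    return 0.5 * m * v**2

def relativistic_ke(m, v):
    """Relativistic kinetic energy: (gamma - 1) m c^2."""
    gamma = 1.0 / (1.0 - (v / C) ** 2) ** 0.5
    return (gamma - 1.0) * m * C**2

if __name__ == "__main__":
    for frac in (0.001, 0.01, 0.1, 0.5):  # fractions of the speed of light
        v = frac * C
        n, r = newtonian_ke(1.0, v), relativistic_ke(1.0, v)
        err = abs(r - n) / r
        regime = "Newtonian is fine" if err < 0.01 else "need relativity"
        print(f"v = {frac:>5.3f}c  relative error = {err:.2%}  -> {regime}")
```

At everyday speeds the error is negligible and the simpler model is clearly the right one to use; around a tenth of the speed of light the error approaches a percent, and beyond that only the relativistic model gives usable answers. The point is that neither model is "the" model; each has a context in which it wins.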