A fact that is only relevant if those properties can capture the desired feature. You’ll recall that defining the desired feature is a major goal of MIRI.
No, that presumes what is being checked against is the friendly goal system. What I’m talking about is checking that e.g. all actions being taken by the AI are in search of solutions to a compact goal description, also extracted from the machine in the form of a Bayesian concept net. Then both the goal set and stochastic samplings of representative mental processes are checked by humans for anomalous behavior (and a much larger subset frequency-mined to determine what’s representative).
You’re not testing that the machine obeys some not-yet-figured-out friendly goal set, but that the extracted goals and computational traces are representative, and then manually inspecting those.
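To make the frequency-mining and review step concrete, here’s a toy sketch of my own (the string “trace signatures”, the 1% threshold, and everything else about the extraction interface are made-up stand-ins, not part of any real tooling):

```python
# Toy sketch of "frequency-mine the traces, flag the rest for human review".
# The traces here are just strings standing in for whatever serialized
# computational traces would actually look like; all names are hypothetical.
import random
from collections import Counter

random.seed(0)

# Pretend we drew ~10,000 trace signatures from the running system.
common_patterns = ["plan:A", "plan:B", "plan:C"]
traces = [random.choice(common_patterns) for _ in range(9990)]
traces += ["plan:weird-%d" % i for i in range(10)]  # rare, possibly anomalous

# Frequency mining: anything that shows up often enough counts as "representative".
counts = Counter(traces)
threshold = 0.01 * len(traces)
representative = {p for p, c in counts.items() if c >= threshold}

# Everything outside the representative set gets queued for manual inspection,
# alongside the extracted goal description itself.
for_human_review = [t for t in traces if t not in representative]
print(f"{len(for_human_review)} anomalous traces flagged for human review")
```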
Giving the AI zero power to affect our behavior, in the strict sense, would mean not running it (or not letting it produce even one bit of output and not expecting any).
That’s a legalistic definition which belongs only in philosophy debates.
Utility maximization today seems like the best-formalized part of human general intelligence
I disagree. Much of human behavior is not utility-maximizing. Much of it is about fulfilling needs, which is often about eliminating conditions. You have hunger? You eliminate this condition by eating a reasonable amount of food. You do not maximize your lack of hunger by turning the whole planet into a food-generating system and force-feeding the products down your own throat.
Anyway, in my own understanding, general intelligence has to do with concept formation and System 1/System 2 learned behavior. There’s not much about utility maximization there.
It doesn’t seem like you even want to focus on uploading.
Do you count intelligence augmentation as uploading? Because that’s my path through the singularity.
despite being mathematically equivalent to some utility function
Gah, no no no. Not every program is equal to a utility maximizer. Not if utility and utility maximization are to have any meaning at all. Sure, you can take any program and call it a utility maximizer by finding some super contrived function which is maximized by the program. But if that goal system is more complex than the program that supposedly maximizes it, then all you’ve done is demonstrate the principle of overfitting a curve.
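To spell out the kind of contrived construction I mean (a toy example of my own; the function names are invented for illustration), the “utility function” only works because it contains the very program it supposedly explains:

```python
# You can always manufacture a "utility function" that any given program
# maximizes, but the construction smuggles in the program's own behavior,
# so the goal system is at least as complex as the program it "explains".

def some_program(x):
    # Any arbitrary program at all.
    return (x * 3 + 1) % 7

def contrived_utility(x, output):
    # "Utility" is 1 exactly when the output matches what some_program
    # would have produced, 0 otherwise. Note it calls some_program itself.
    return 1 if output == some_program(x) else 0

# Trivially, some_program "maximizes" this utility on every input.
assert all(contrived_utility(x, some_program(x)) == 1 for x in range(100))
```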