Is it possible to obtain the slides from EY’s presentation?
Not what you asked, but… I did upload his list of open problems here.
Seriously Luke, slides—the video was kind of blurry. Use the Force (if you have to)!
I think there is such a thing as professionalism, and it’s not always bad. Posting slides for your talks is common practice. In EY’s case we can chalk it up to absentminded genius, but this is why we have well-organized people like you at SingInst. I say this as a supporter.
Just got permission from Eliezer to post his Singularity Summit 2011 slides. Here you go.
Great!
Thanks a lot Luke.
extension of Solomonoff induction to anthropic reasoning and higher-order logic – why ideal rational agents still seem to need anthropic assumptions.

I would say it lacks a rationale. AFAIK, intelligent agents just maximise some measure of utility. Anthropic issues are dealt with automatically as part of this process.
Much the same is true of this one:
Theory of logical uncertainty in temporal bounded agents.

Again, this is a sub-problem of solving the maximisation problem.
Breaking a problem down into sub-problems is valuable—of course. On the other hand you don’t want to mistake one problem for three problems—or state a simple problem in a complicated way.
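For what it’s worth, here is a minimal sketch of the "just maximise utility" framing being appealed to above; the hypotheses and payoff numbers are entirely invented for illustration. The point it shows is that anthropic assumptions and logical uncertainty both enter as weights in the same expectation, which is the sense in which the two problems above can be read as facets of the one maximisation problem.

```python
# Minimal expected-utility maximiser. The hypotheses and payoffs are
# invented for illustration; each hypothesis is (prior weight,
# {action: utility}). Anthropic assumptions and logical uncertainty
# would both have to live in how these weights are assigned.
hypotheses = [
    (0.6, {"act_a": 1.0, "act_b": 0.2}),
    (0.3, {"act_a": 0.1, "act_b": 0.9}),
    (0.1, {"act_a": 0.5, "act_b": 0.5}),
]

def expected_utility(action):
    # Sum over world-models, weighted by the agent's credence in each.
    return sum(weight * payoffs[action] for weight, payoffs in hypotheses)

best = max(["act_a", "act_b"], key=expected_utility)
print(best, expected_utility(best))  # -> act_a 0.68
```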
How do you construe a utility function from a psychologically realistic detailed model of a human’s decision process?

It may be an obvious thing to say—but there is an existing research area that deals with this problem: revealed preference theory.
I would say obtaining some kind of utility function from observations is rather trivial—the key problem is compressing the results. However, general-purpose compression is part of the whole project of building machine intelligence anyway. If we can’t compress, we get nowhere, and if we can compress, then we can (probably) compress utility functions.
Right. Also, choice modeling in economics and preference extraction in AI / decision support systems.
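To make the "extraction is trivial, compression is the hard part" claim concrete, here is a toy revealed-preference sketch; the observation log and the win-rate scoring are invented for illustration, not a serious estimator. A lookup table that rationalises the observed choices always exists, which is the easy half; replacing it with a short program that generalises to new choices is the hard half.

```python
from collections import defaultdict

# Toy revealed-preference extraction. Each record in the (invented)
# observation log is (options_offered, option_chosen).
observations = [
    ({"tea", "coffee"}, "coffee"),
    ({"tea", "coffee", "water"}, "coffee"),
    ({"tea", "water"}, "tea"),
    ({"coffee", "water"}, "coffee"),
]

chosen_count = defaultdict(int)
offered_count = defaultdict(int)
for options, choice in observations:
    for option in options:
        offered_count[option] += 1
    chosen_count[choice] += 1

# Lookup-table "utility": each option's empirical win rate. Such a
# table always fits the data; compressing it into a short predictive
# rule is where the real problem lives.
utility = {o: chosen_count[o] / offered_count[o] for o in offered_count}
print(sorted(utility.items(), key=lambda item: -item[1]))
```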
Better formalize hybrid of causal and mathematical inference.

I’m not convinced that there is much to be done there. Inductive inference is quite general, while causal inference involves its application to systems that change over time in a lawful manner. Are we talking about optimising inductive inference systems to preferentially deal with causal patterns?
That is similar to the “reference machine” problem—in that eventually you can expose the machine to some real-world data and then let it design its own reference machine. Hand-coding a reference machine might help with getting off the ground initially, however.
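A toy version of the "causal inference is inductive inference applied to lawful temporal systems" reading: weight candidate transition rules by a 2^-length simplicity prior, keep the ones consistent with an observed trajectory, and predict. Everything here (the rules, their bit-lengths, the data) is invented; a real Solomonoff-style setup would enumerate programs under some reference machine, which is exactly where the reference-machine issue above bites.

```python
# Toy inductive inference over temporal laws. Candidates are (invented)
# pairs of (description length in bits, transition function).
candidates = [
    (3, lambda x: x + 1),        # simple counter
    (5, lambda x: x + 2),        # skip-counter
    (8, lambda x: (x * 3) % 7),  # a more complex law
]

observed = [1, 2, 3, 4, 5]

def weight(bits, rule):
    # 2^-length prior, zeroed if the rule contradicts the trajectory --
    # a crude stand-in for Solomonoff-style weighting.
    fits = all(rule(a) == b for a, b in zip(observed, observed[1:]))
    return 2.0 ** -bits if fits else 0.0

scored = [(weight(bits, rule), rule) for bits, rule in candidates]
total = sum(w for w, _ in scored)
prediction = sum(w / total * rule(observed[-1]) for w, rule in scored if w)
print(prediction)  # posterior-weighted next element -> 6.0
```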
Does anyone understand this one better—or have a link?
This one seems to be a pretty insignificant problem, IMHO. Real icing-on-the-cake stuff that isn’t worth spending time on at this stage.