Any AGI will have all the dimensions required to make a human-level or greater intelligence. If it is indeed smarter, then it will be able to figure the theory out itself, provided the theory is obviously correct, or find a way to get it in a more efficient manner.
I’m trying to be Friendly, but I’m having serious problems with my goals and preferences.
So is this an AGI or not? If it is, then it’s smarter than Mr. Yudkowsky and can resolve its own problems.
[P]resent only one idea at a time.
Most posts do present one idea at a time. However, it may not seem that way, because most of the ideas presented are additive—that is, you have to have a fairly good background on topics that have been presented previously in order to understand the current topic. OB and LW are hard to get into for the uninitiated.
To provide more background and context, with the necessarily larger numbers of ideas being presented, while still getting useful feedback from readers.
That is what the sequences were designed to do—give the background needed.
it just takes the understanding that five lives are, all things being equal, more important than four lives.
Your examples rely too heavily on what is “intuitively right” and on ceteris paribus conditioning. It is not always the case that five lives are more important than four, and the mere idea has been debunked several times.
if people agree to judge actions by how well they turn out general human preference
What is the method you use to determine how things will turn out?
similarity can probably make them agree on the best action even without complete agreement on a rigorous definition of “well”
Does consensus make decisions correct?
The economist’s utility function is not the same as the ethicist’s utility function
According to whom? Are we just redefining terms now?
As far as I can tell, your definition is the same as Bentham’s, only implying rules that bind the practitioner more weakly.
I think someone started (incorrectly) using the term and it has taken hold. Now a bunch of cognitive dissonance is fancied up to make it seem unique because people don’t know where the term originated.
This is a problem for both those who’d want to critique the concept, and for those who are more open-minded and would want to learn more about it.
Anyone who is sufficiently technically minded undoubtedly finds it frustrating to read books which give broad-brush counterfactuals for decision making and explanation without delving into the details of their processes. I am thinking of books like Freakonomics, Paradox of Choice, Outliers, Nudge, etc.
These books are very accessible but lack the in-depth analysis that is expected to be thoroughly critiqued and understood. Writings like Global Catastrophic Risks and the other written deconstructions of the necessary steps of a technological singularity lack those spell-it-out-for-us-all sections that Gladwell et al. make their living from. Reasonably so: the issue of the singularity is so much more complex and involved that it does not do the field justice to give slogans and banner phrases. Indeed, it is arguably detrimental and can backfire by simplifying too much.
I think, however, what is needed is a clear, short, and easily understood consensus on why this crazy AI thing is the inevitable result of reason, why it is necessary to think about, how it will help humanity, and how it could reasonably hurt humanity.
The SIAI tried to do this:
http://www.singinst.org/overview/whatisthesingularity
http://www.singinst.org/overview/whyworktowardthesingularity
Neither of these is compelling in my view. They both go into some detail and leave the unknowledgeable reader behind. Most importantly, neither has what people want: a clear vision of exactly what we are working for. The problem is that there isn’t a clear vision; there is no consensus on how to start. Which is why, in my view, the SIAI is more focused on “Global risks” rather than just stating “We want to build an AI”; frankly, people get scared by the latter.
So is this paper going to resolve the dichotomy between the simplified and complex approach, or will we simply be replicating what the SIAI has already done?
Thus if we want to avoid being arbitraged, we should cleave to expected utility.
Sticking with expected utility works in theory if you have a discrete number of variables (options), can discern between all of them such that they can be judged on equal footing, and the cost (in time or whatever else) is not greater than the marginal gain from the process. Here is an example I like: go to the supermarket and optimize your expected utility for breakfast cereal.
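A toy sketch of that point in Python, with entirely made-up numbers: once evaluating each option carries a small time cost, the shopper who computes the full ranking of fifty cereals can come out behind one who grabs the first acceptable box.

```python
import random

# Hypothetical illustration (made-up numbers): comparing a careful
# expected-utility maximizer against a shopper who takes the first
# acceptable cereal, once deliberation itself carries a cost.

random.seed(0)
cereal_utils = [random.uniform(0.0, 1.0) for _ in range(50)]  # true utility of each box
COST_PER_OPTION = 0.03  # assumed time/attention cost of evaluating one option

# Maximizer: evaluates every option, paying the evaluation cost for each.
maximizer_payoff = max(cereal_utils) - COST_PER_OPTION * len(cereal_utils)

# Satisficer: takes the first option above a rough threshold.
THRESHOLD = 0.7
satisficer_payoff = None
for i, u in enumerate(cereal_utils, start=1):
    if u >= THRESHOLD:
        satisficer_payoff = u - COST_PER_OPTION * i
        break
if satisficer_payoff is None:  # nothing cleared the threshold; settle for the last box
    satisficer_payoff = cereal_utils[-1] - COST_PER_OPTION * len(cereal_utils)

print(f"maximizer net payoff:  {maximizer_payoff:.2f}")
print(f"satisficer net payoff: {satisficer_payoff:.2f}")
```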
The money pump only works if your “utility function” is static, or more accurately, if your preferences update more slowly than the pumper can exploit the trade imbalance, e.g. arbitrage doesn’t work if the person being outsourced to can also outsource.
I can take advantage of your adherence to the vN-M axioms if I have any information about one of your preferences which you do not have (and this need not be obtained illegally); as a result, sticking to them would get you money-pumped regardless.
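To make the money-pump point above concrete, here is a toy Python sketch (all values invented): an agent with static cyclic preferences pays a fee on every trade indefinitely, while one that revises its preferences after a few trades cuts the pumper off.

```python
# Toy money pump (hypothetical numbers). The agent's preferences are cyclic:
# it prefers A to B, B to C, and C to A, and will pay a small fee to swap the
# item it holds for one it prefers. With static preferences the pumper can
# extract a fee on every trade; if the agent updates its preferences after
# noticing the losses, the pump stops.

FEE = 1.0
PREFERRED_SWAP = {"A": "C", "B": "A", "C": "B"}  # item the agent will pay to trade into

def run_pump(wealth, trades, revises_after=None):
    """Simulate repeated trades; revises_after is the trade count at which the
    agent drops its cyclic preferences and refuses further swaps."""
    holding = "B"
    for t in range(trades):
        if revises_after is not None and t >= revises_after:
            break  # preferences made transitive; no more exploitable trades
        wealth -= FEE
        holding = PREFERRED_SWAP[holding]
    return wealth

print("static preferences, 30 trades:", run_pump(100.0, trades=30))
print("revises after 3 trades:       ", run_pump(100.0, trades=30, revises_after=3))
```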
This might have something to do with how public commitment may be counterproductive: once you’ve effectively signaled your intentions, the pressure to actually implement them fades away.
I was thinking about this today in the context of Kurzweil’s future predictions, and I wonder if it is possible that there is some overlap. Obviously Kurzweil is not designing the systems he is predicting, but likely the people who are designing them will read his predictions.
I wonder, if they see the timelines that he predicts, whether they will think: “oh, well, [this or that technology] will be designed by 2019, so I can put it off for a little while longer, or maybe someone else will take the project instead.”
It might not be the case, and in fact they might use the predicted timeline as a motivator to beat. Regardless, I think it would be good for developers to keep things like that in mind.
As I replied to Tarleton, the Not for the Sake of Happiness (Alone) post does not address how he came to his conclusions based on any specific decision-theoretic optimization. He gives very loose, subjective terms for his conclusions:
The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.
which is why I worded my question as I did the first time. I don’t think he has done the same amount of thinking on his epistemology as he has on his TDT.
Yes, I remember reading both and scratching my head, because both seemed to beat around the bush and not address the issues explicitly. Both lean too much on addressing the subjective aspect of non-utility-based calculations, which in my mind is a red herring.
Admittedly I should have referenced it and perhaps the issue has been addressed as well as it will be. I would rather see this become a discussion as in my mind it is more important than any of the topics dealt with daily here—however that may not be appropriate for this particular thread.
Thanks, I followed up below.
You’ll have to forgive me because I am an economist by training, and mentions of utility have very specific references to Jeremy Bentham.
Your definition of what the term “maximizing utility” means and Bentham’s definition (he was the originator) are significantly different. If you don’t know his, I will describe it (if you do, sorry for the redundancy).
Jeremy Bentham devised the felicific calculus, which is a hedonistic philosophy and seeks, as its defining purpose, to maximize happiness. He was of the opinion that it was possible in theory to create a literal formula which gives optimized preferences such that happiness is maximized for the individual. This is the foundation for all utilitarian ethics, as each variant seeks to essentially itemize all preferences.
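Roughly, and in my own notation rather than anything Bentham wrote down, the calculus amounts to something like the following, keeping only his intensity, duration, and certainty dimensions (extent is the sum over persons; his remaining dimensions are omitted for brevity):

```latex
% Schematic rendering of the felicific calculus (my notation, not Bentham's).
% The value of an act is the signed sum, over every affected person p and every
% pleasure or pain episode e the act produces for them, of the episode's
% intensity I weighted by its duration D and probability (certainty) C.
\[
  V(\text{act}) \;=\; \sum_{p \in \text{persons}} \;\sum_{e \in \text{episodes}(p)} \pm\, I_e \, D_e \, C_e
\]
```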
Virtue ethics, for those who do not know, is the Aristotelian philosophy that posits: each sufficiently differentiated organism or object is naturally optimized for at least one specific purpose above all other purposes. Optimized decision making for a virtue theorist would be doing the things which best express or develop that specific purpose—similar to how specialty tools are best used for their specialty. Happiness is said to spring forth from this as a consequence, not as its goal.
I just want to know, if it is the case that he came to follow the former (Bentham’s) philosophy, how he came to that decision (theoretically it is possible to combine the two).
So in this case, while the term may give an approximation of the optimal decision, used in that manner it is not explicitly clear how it determines the basis for the decision in the first place; that is, unless, as some have done, it is specified that maximizing happiness is the goal (which I had just assumed people were asserting implicitly anyhow).
Ha, fair enough.
I often see references to maximizing utility and individual utility functions in your writing, and it would seem to me (unless I am misinterpreting your use) that you are implying that hedonic (felicific) calculation is the optimal way to determine what is correct when applying counterfactual outcomes to decision making.
I am asking how you determined (if that is the case) that the best way to judge the optimality of decision making is through utilitarianism as opposed to, say, ethical egoism or virtue ethics (not to equivocate). Or perhaps your reference is purely abstract and does not invoke the felicific calculation.
Since you and most around here seem to be utilitarian consequentialists, how much thought have you put into developing your personal epistemological philosophy?
Worded differently, how have you come to the conclusion that “maximizing utility” is the goal to optimize for, as opposed to, say, virtue seeking?
We don’t have to understand the universe completely to be very confident that it contains no contradictions.
Where is the proof of concept for this?
I have several resources which point to extreme inconsistencies in the current and past behaviors of particle physics and astrophysics. Beyond the natural sciences, there are inconsistencies in the way that political systems are organized and interacted with, even at a local level—yet most find them acceptable enough to continue to work with.
You argue that inconsistency alone is enough to reject a theory. The point I am making is that a process working differently under different circumstances is not necessarily inconsistent and does not “guarantee” that the theory is wrong. That is the point behind chaotic modeling.
There can still be valuable achievements that come from better understanding how the seemingly inconsistent theories work, and I argue that inconsistency would not be wholly acceptable as a sole reason for rejection, as you seem to advocate.
I still am not convinced that all systems must be consistent to exist—however that is a much different discussion.
Inconsistency is a general, powerful case of having reason to reject something. Inconsistency brings with it the guarantee of being wrong in at least one place.
I would agree if the laws of the universe or of the system, political or material, were also consistent and understood completely. I think history shows us clearly that there are few laws which, under enough scrutiny, remain consistent in their known form—hence exogenous variables and stochastic processes.
I looked into that, but it lacks the database support that would be desired for this project. With LW owning the XML or PHP database, closest-match algorithms can be built which optimize meeting locations for particular members.
That said, if the current LW developer wants to implement this I think it would at least be a start.
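For what it’s worth, the “closest match” idea is nothing exotic. A rough Python sketch under assumed data (the member names and coordinates are invented; in the real project the rows would come out of the LW-owned database):

```python
from math import radians, sin, cos, asin, sqrt

# Rough sketch of the "closest match" idea (hypothetical data): given member
# locations as latitude/longitude, propose a meetup point at the group's
# geographic midpoint and report each member's distance to it.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

members = {  # made-up sample rows: name -> (lat, lon)
    "alice": (37.77, -122.42),   # San Francisco
    "bob":   (37.87, -122.27),   # Berkeley
    "carol": (37.34, -121.89),   # San Jose
}

# Crude meeting point: the centroid of member coordinates (fine for a cluster
# of nearby members; a real implementation would need something smarter for
# widely scattered groups).
center_lat = sum(lat for lat, _ in members.values()) / len(members)
center_lon = sum(lon for _, lon in members.values()) / len(members)

for name, (lat, lon) in sorted(members.items(),
                               key=lambda kv: haversine_km(*kv[1], center_lat, center_lon)):
    print(f"{name}: {haversine_km(lat, lon, center_lat, center_lon):.1f} km from proposed meetup point")
```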
I thought so too—however, not in the implementation that I think is most user-friendly.
I am currently working on a Google Maps API application which will allow LW/OB readers to add their location, hopefully encouraging those around them to form their own meetups. That might also make determining the next Singularity Summit location easier.
If there are any PHP/MySQL programmers who want to help, I could definitely use some.
As I asked in response to your other argument: Who has given utility this new definition?
I think perhaps there is a disconnect between the origins of utilitarianism and how people who are not economists (and even some economists) understand it.
You, as well as black belt bayesian, are making the point that utilitarianism as used in an economic sense is somehow non-ethics-based, which could not be more incorrect: utilitarianism was explicitly developed with goal-seeking behavior in mind—stated by Bentham as the greatest hedonic happiness. It was not derived as a simple calculator, and is rarely used as such in serious academic work, because it is so insanely sloppy, subjective, and arguably useless as a metric.
True, some economists do use it in that sense, and it is introduced in academic economic theory as a mathematical principle, but I have yet to see an authoritative study which uses expected utility as a variable, nor was it introduced in my undergraduate economics training as a reliable measure—again, which is why you do not see it in authoritative works.
You both imply that the economics version of utility is non-normative. Again, as I said before, it was created specifically to guide economic decision making—how homo economicus should act. Does the fact that it can be used both normatively and objectively in economic decision making change the definition? No, because, as you said, they use the same math. People forget that political economics was and still is normative, whether economists want it to be or not.
Which leads me to what I think is the root of this problem: understanding what economics is. At its heart, economics is descriptive, prescriptive, and normative. Current trends in economics seek to turn the discipline into a physics-esque field which merely describes economic patterns. Yet even in these camps one must hold the natural rate of employment as good, trade as enhancing, public goods as multiplicatively good, etc. Lest we forget that Keynesianism was hailed as the next great coming and would revolutionize the way that humans interact. Economics without normative conclusions is just statistics.
I realize it is a semantic point; however, if we want to use a term, then let’s use it correctly. I know Mr. Yudkowsky has posted before about the uselessness of debating definitions; however, we are talking about the same thing here.
All of this discussion about redefining utility smacks of cognitive dissonance to me, because it seems to be looking for some authority on using the term “utility” in the way that people around here want to use it. If you want to use normative utilitarianism, then you’ll have great fun with Bentham’s utilitarianism, as it is and has always been normative. The beef seems to lie between expected and average utility—which are both still normative anyway, so it is really a moot point.
I have thought of making a separate post on utilitarianism, its history, and its errors, mostly because it is the aspect I have been most interested in for the past decade. However, I doubt it would give any more information than what already exists on the web and in texts for any interested parties.
edit: Here is a perfect example of my point about the silliness of expected-utility calculation as an empirical metric. The author uses VNM expected utility based on assumed results, expressed in terms of summed monetary and psychic income. There are no units, and there is no actual calculation. There are, however, nice pretty formulas which do nothing for us but restate that a terrorist must gain more from his terrorism than from other activities.
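For reference, the formula being dressed up there is just the standard expected-utility criterion (my own rendering, not the paper’s notation); without independently measured units for u, it cannot do more than restate that the chosen act was the preferred one:

```latex
% Standard expected-utility criterion (my rendering): choose the act a that
% maximizes the probability-weighted sum of the utilities of its outcomes x_i.
\[
  EU(a) \;=\; \sum_{i} p_i(a)\, u(x_i), \qquad a^{*} \;=\; \arg\max_{a} EU(a)
\]
```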