Would it be inappropriate to put this list somewhere on the Less Wrong Wiki?
I think that would be great if we had a good repository of mind games
I think a lot of it has to do with your experience with computer-based games and web applications.
This is why I say it would have to be a controlled study: those with significant computer and gaming experience have a distinct edge over those who do not. For example, many gamers would automatically go to the WASD control pattern (which some first-person shooter games use) on the “alternate control” level.
5:57:18 with 15 deaths here
A few months ago I stumbled upon a game wherein the goal is to guide an elephant from one side of the screen to a pipe; perhaps you have seen it:
Here’s the rub: The rules change on every level. In order to do well you have to be quick to change your view of how the new virtual world works. That takes a flexible mind and accurate interpretation of the cues that the game gives you.
I sent this to some of my colleagues and have concluded anecdotally that their mental flexibility is in rough correlation with their results from the game. I think that experimental games are great and would, if done in a controlled setting, be an interesting way to evaluate mental acuity.
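For concreteness, here is a minimal sketch, in Python and with invented numbers, of how one might quantify that kind of relationship in a controlled study, e.g. by rank-correlating a flexibility score against completion time:

```python
# Toy sketch: rank-correlate hypothetical mental-flexibility scores against
# game completion times. Every number here is an invented placeholder.

def ranks(values):
    """Rank each value, 1 = smallest (assumes no ties, as in this toy data)."""
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

def spearman(x, y):
    """Spearman rank correlation: 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

flexibility_score   = [7, 4, 9, 5, 6]        # hypothetical test scores
completion_time_min = [42, 75, 30, 55, 68]   # hypothetical times, same people

# Faster completion alongside higher flexibility shows up as a negative
# rank correlation; these placeholder numbers give about -0.9.
print(spearman(flexibility_score, completion_time_min))
```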
I probably came off as more “anticapitalist” or “collectivist” than I really am, but the point is important: betraying your partners has long-term consequences which aren’t apparent when you only look at the narrow version of this game.
This is actually the real meaning of “selfishness.” It is in my own best interest to do things for the community.
Collectivists and anti-capitalists seem to either not realize or ignore the fact that greedy people aren’t really acting in their own best interest if they are making enemies in the process.
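To make that concrete, here is a minimal sketch of an iterated Prisoner’s Dilemma in Python, using the standard textbook payoffs rather than anything from the original discussion: betrayal wins the narrow one-shot game but loses over repeated play against a partner who retaliates.

```python
# Toy iterated Prisoner's Dilemma with conventional textbook payoffs.

PAYOFF = {  # (my move, partner's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def tit_for_tat(opponent_history):
    return opponent_history[-1] if opponent_history else "C"

def total_payoff(my_strategy, partner_strategy, rounds=100):
    """My total payoff over repeated rounds; each side sees the other's history."""
    my_hist, partner_hist, total = [], [], 0
    for _ in range(rounds):
        me, them = my_strategy(partner_hist), partner_strategy(my_hist)
        total += PAYOFF[(me, them)]
        my_hist.append(me)
        partner_hist.append(them)
    return total

# Narrow version (one round): betraying pays, 5 vs. 3.
print(total_payoff(always_defect, tit_for_tat, rounds=1),
      total_payoff(always_cooperate, tit_for_tat, rounds=1))
# Repeated version (100 rounds): the betrayer is punished ever after, 104 vs. 300.
print(total_payoff(always_defect, tit_for_tat),
      total_payoff(always_cooperate, tit_for_tat))
```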
With mechanical respiration, survival with ALS can be indefinitely extended.
What a great opportunity to start your transhuman journey (that is, if you are indeed a transhumanist). Admittedly these are not the circumstances you or anyone would have chosen, but here we are nonetheless.
If you decide to document your process then I look forward to watching your progression out of organic humanity. I think it is people like you who have both the impetus and the knowledge to really show how transhuman technology can bolster our society.
Cheers!
Having read that link (which I imagine is now fairly outdated?), I think his theory falls apart under the weight of its coercive nature, as the questioner points out.
It is understood that an AI used for decision making will have an impact on all of humanity regardless of its implementation. As a result, consequentialist utilitarianism still holds a majority-rule position, as the link discusses, which implies that the decisions the AI would make would favor a “utility” calculation (spare me the argument about utilons; as an economist I have previously been neck-deep in Bentham).
The discussion at the same time dismisses and reinforces the importance of the debate itself, which seems contradictory. I personally think this is a much more important topic than is generally acknowledged, and I have yet to see a compelling argument otherwise.
From the people (researchers) I have talked to about this specifically, the responses I have gotten are: “I’m not interested in that, I want to know how intelligence works” or “I just want to make it work, I’m interested in the science behind it.” I think this attitude is pervasive, and it amounts to ignoring the subject.
“Utilons” are a stand-in for “whatever it is you actually value”
Of course—which makes them useless as a metric.
we tend to support decision making based on consequentialist utilitarianism
Since you seem to speak for everyone in this category—how did you come to the conclusion that this is the optimal philosophy?
Thanks for the link.
Maybe I’m just dense, but I have been around a while and searched, yet I haven’t stumbled upon a top-level post or anything of the like here, at the FHI or SIAI (other than ramblings about what AI could theoretically give us), on OB, or elsewhere which either breaks it down or gives a general consensus.
Can you point me to what you are talking about?
I never see discussion on what the goals of the AI should be. To me this is far more important than any of the things discussed on a day to day basis.
If there is not a competent theory on what the goals of an intelligent system will be, then how can we expect to build it correctly?
Ostensibly, the goal is to make the correct decision. Yet there is nearly no discussion of what constitutes a correct decision. I see lots of contributors talking about calculating utilons, which suggests that most contributors are hedonistic consequentialist utilitarians.
Am I correct then to assume that the implicit goal of the AI for the majority in the community is to aid in the maximization of human happiness?
If so, I think serious problems would be encountered and the goal of maximizing happiness would not be accomplished.
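As a caricature of the decision rule I am attributing here (the actions and utilon numbers below are invented purely for illustration): score each candidate action by summed utilons and pick the maximum.

```python
# Caricature of "calculate utilons, then maximize": sum hypothetical utilons
# across people for each candidate action and pick the largest total.

candidate_actions = {
    # action:        [person A, person B, person C]  (invented utilon estimates)
    "build parks":   [2, 3, 1],
    "fund research": [4, 0, 3],
    "do nothing":    [1, 1, 1],
}

best_action = max(candidate_actions, key=lambda a: sum(candidate_actions[a]))
print(best_action)  # "fund research", total 7, even though person B gets nothing
```

Even in this toy version, the “best” action leaves person B with nothing, which is the kind of serious problem I have in mind.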
Current body types diverge rather dramatically from body types of a hundred or even fifty years ago
Dramatically? I have to disagree with that. I wasn’t around 100 years ago, but I’ve seen pictures, and people then didn’t look much different from us at all. From the medical texts I am familiar with, our insides work pretty much the same as well.
I agree with Michael below that nutrition has made a difference in how, say, bodybuilders change their bodies, but the average person, from what I can tell, is about the same in type, if perhaps marginally taller.
You seem to be implying that you can do better.
Quite the contrary: I am saying it is currently impossible, and thus in my view useless for the accurate predictive models that should be applied to decision making. I think this is all great philosophy and science, but once we start talking about individual “utility functions” we are talking fantasy. This is where I diverge from the consequentialist camp (granted, I used to be a consequentialist).
let’s just remember that it is an approximation.
In my work with modeling and simulation, you only approximate when you can do no better. In the case of calculating “utility,” ostensibly toward some decision-based reasoning, that isn’t good enough, at least for me. There are too many exogenous variables currently.
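As a toy illustration of that exogenous-variable worry (the variables and coefficients below are entirely invented): fit a “utility function” while leaving out a factor it actually depends on, and the estimate comes out systematically wrong, not merely approximate.

```python
# Omitted-variable sketch: "utility" depends on income and an unobserved
# health factor; regressing on income alone gives a biased coefficient.

import random

random.seed(0)
n = 10_000

income = [random.gauss(0, 1) for _ in range(n)]
# An exogenous factor the modeller never observes, partly tied to income.
health = [0.8 * x + random.gauss(0, 1) for x in income]
# "True" utility depends on both.
utility = [2.0 * x + 3.0 * h + random.gauss(0, 1) for x, h in zip(income, health)]

def ols_slope(x, y):
    """Slope of the least-squares fit y ~ x: cov(x, y) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Regressing utility on income alone attributes health's effect to income:
# the estimate lands near 2 + 3 * 0.8 = 4.4 instead of the true 2.
print(ols_slope(income, utility))
```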
Ya know, it would be really nice if there were some way for a negative vote to be accompanied by an explanation of what the voter didn’t like.
That assumes the people downvoting are doing so with some well-thought-out intention.
A certain philosophy being the most sustainable and positive isn’t automatically the same as being the one people tend to adopt
I think there is more than ample evidence to suggest that those are significantly less likely to be adopted. However, wouldn’t a group of people who know that and can correct for it be the best test case for implementing an optimized strategy?
Also, it sounds like you’re still talking about a situation where people don’t actually have ultimate power.
I hold the view that it is unnecessary to hold ultimate power over FAI. I certainly wouldn’t bind it to what has worked for humans thus far. Don’t fear the AI, find a way to assimilate.
I disagree. While there are some spectacular examples of what you describe, and they are indeed ugly, by and large there is a wide distribution of hierarchical disparity even in daily life, and it is more often than not mutually beneficial.
As an emperor I optimize my empire by ensuring that my subjects are philosophically and physically satisfied, do I not? I think there is plenty of evidence to support this philosophy as the most sustainable (and positive) of hierarchical models; after all, some of the most successful businesses are laterally organized.
You have to instill the right preference, and just having a working improved brain doesn’t give this capability.
Of course—we have to BUILD IT RIGHT. I couldn’t agree more. The cognitive-modeling approach does not suggest a mere carbon copy of some particular brain chosen at random; as you know, it is not so limited in focus. The fantastic part about the method is that, by the point at which it can be done correctly (not merely as an apparent approximation), the tools to correct a large portion of what we identify as fatal cognitive errors will likely be available, having been developed in the process of structuring it. Any errors that are missed would, it stands to reason, also be missed given the same amount of time with any other developmental approach.
I am familiar with the global risk paper you linked, with AI: A Modern Approach (which addresses the issue of cognitive modeling), and with Drescher’s Good and Real, as well as with the problems associated with an FAI.
The same existential risks and potential for human disasters are inherent in all AI systems, regardless of their structure, by virtue of their “power.” I think one of the draws to this type of development is the fantastic responsibility that comes with it: recognizing and accounting for the catastrophic results that are possible.
That said, I have yet to read a decision-theoretic explication of which structure is an optimized method of development, weighing all known limiting factors. I think AI: A Modern Approach comes closest to doing this but falls short in that it specifically narrows its focus without a thorough comparison of methods. So again, I ask: by what construct has it been determined that a logical symbolic programming approach is optimized?
Whole brain emulation doesn’t require understanding of brain architecture, only the dynamics of its lowest-level components.
I fear the same philosophical reasoning may be applied to model neural architecture as is currently being used for econometric forecasting. Even the most complex economic models cannot account for significant exogenous variables.
For the record I think we can get to WBE, however I think a premature launch would be terrible. Based on the lack of research into developmental AI (much work notably done by a friend—Dr. Frank Guerin at Aberdeen college) I think there is a long way to go.
Granting that a brain model, or WBE, would be as accurate as the biological version, why then would that not be the most efficient method? The problems with testing and implementation are the same as for any other AI, if not easier because of familiarity; however, it is grounded in specific biological benchmarks which at that point would be immediately identifiable.
I could go on with my particular thoughts as to why biological simulation is, in my estimation, a better approach; however, I am more interested in why the organization (people who have been thinking about this longer and harder than I have) decided otherwise. It would seem that their collective reasoning would give a sufficiently clear and precise answer that there would be no ambiguity.
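For what it is worth, here is a minimal sketch (all parameters invented) of what the quoted claim about “only the dynamics of its lowest-level components” amounts to in practice: the network is stepped forward using each component’s local update rule plus a connectivity matrix, with no model of what the circuit as a whole computes.

```python
# Leaky integrate-and-fire neurons as stand-ins for "lowest-level components":
# only local dynamics and connectivity are used, never a circuit-level model.

import random

random.seed(1)
N = 20
THRESHOLD, LEAK, DRIVE = 1.0, 0.9, 0.3   # invented parameters

# weights[i][j] is the influence of neuron j on neuron i.
weights = [[random.uniform(0, 0.2) if i != j else 0.0 for j in range(N)]
           for i in range(N)]

potential = [random.uniform(0, 1) for _ in range(N)]
spiked = [False] * N

for step in range(50):
    incoming = [sum(w for w, s in zip(weights[i], spiked) if s) for i in range(N)]
    new_potential, new_spiked = [], []
    for i in range(N):
        v = LEAK * potential[i] + incoming[i] + DRIVE * random.random()
        if v >= THRESHOLD:            # fire and reset
            new_spiked.append(True)
            new_potential.append(0.0)
        else:
            new_spiked.append(False)
            new_potential.append(v)
    potential, spiked = new_potential, new_spiked
    if step % 10 == 0:
        print(step, sum(spiked))      # number of neurons firing at this step
```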
This would be true of any AI. Thus the AI box problem.
It is unclear, however, how a formal logic approach overcomes this problem while a replication approach does not. Both will need testing, but as you said, the methodology should be reconsidered. The easiest way to change testing methodology for logic would be to improve on current logic methodology, which has yielded arguably fantastic results, all done by faulty human brains.
In order to set on limits on the kinds of things an AI will do, you need to understand how it works.
How is that impossible with a replicated brain architecture? We can’t make one if we don’t know how it works.
This is easier when you’ve got a clearly defined structure to the AI, and know how the parts interact, and why.
Of course. However, how you plan to structure the AI is what I am asking about. There are many theories about how to structure an AI, so why did the SIAI choose to focus only on a theoretical, mathematical-logic-based approach rather than taking the most advanced, if still flawed, logic device known to man and replicating and improving it?
Perhaps this could be expanded into a Q&A with the people readers agree could comparably elucidate all matters of rationality/AGI, such as Wei Dai and Nesov, rather than a single person.
To me this gives a broader perspective and has the added benefit of eliminating any semblance of cultishness, despite Mr. Yudkowsky’s protests against such a following.