Comprehensible Improvements: Things You Could Do.
Edit 2: Reactions to the edit made me reconsider, partially. I might get around to making more posts here.
EDIT: Because this and all my comments on it are getting downvoted already, I won’t bother finishing this, and I wish I’d never posted anything on it. Should I delete this thread or leave it as a monument to my own pathetic failure?
The topic of what you’d do if you found yourself as an upload and set out to self-improve is dangerous to think about, for many reasons. It’s unlikely to happen before the singularity, and if it happens afterwards you’ll have knowledge and a community that render current speculation moot. As a human you almost certainly can’t reach superintelligence without becoming Unfriendly. You can’t think about any improvements to intelligence beyond the first iteration, because that’d be trying to predict something smarter than you. Etc.
However, even if you can only think about the very start of it, and the actual predictions or plans you generate neither will nor should have any reason to happen, there can be less direct benefits. The dominant one is that it’s damn fun; thinking about things you could do to your mind is way more interesting than what you could do with that hot guy/gal sitting in front of you on the bus, or what you’d do with a billion dollars. More importantly though, it serves to provide a LOWER BOUND, helping against failures of imagination and providing more salient, near-mode motivation for a Friendly singularity, by establishing that life after it will be at least this good, and that the only reason you won’t do these awesome things is that you’ll be offered even better alternatives. Lastly, the chance is infinitesimal, but maybe you really will at some point have to bootstrap the singularity from only your own upload, and then a repository of the least unsafe upgrades LW could think of might come in handy. Just don’t fool yourself that the first one isn’t the real reason for doing this, though. :p
Now, it so happens that all three of these goals share the same most important heuristic: keep it comprehensible to a vanilla human. There is a limited amount of fun to be gained from thinking of a change without your brain being able to respond with what it’d feel like afterwards. Likewise, in the second goal, the abstract “something really good, but I don’t know how good or in what exact way” is exactly what we’re trying to get away from. And for the last one, doing only changes you can comprehend is just common sense; “know what you’re doing”, taken literally.
So, for the format of this thread: post discrete improvement suggestions, one per comment, each with a witty title in bold. To keep it from degenerating into buzzwords and the obvious, here are a few guidelines that improvements should follow (all of these are very loose suggestions):
The exact situational assumptions for each example may vary, but in general: you’re yourself, uploaded to a machine with enough power to simulate you at 10 to 10^12 times human speed, with 10 to 10^12 times the required memory, containing only you and software not much more advanced than what we have today, and using an architecture that provides no additional obstacles to anything (for example, all the computing power can be used serially and latency can be considered negligible). You have no reason to be interested in the outside world, are under no obligation to personally cause the singularity, and are just enjoying yourself, while making sure you do not foom and cause a bad one. These just establish a default; you’re free to make other assumptions, but you have to write them out.
It should be highly predictable and EASILY comprehensible. I won’t bother defining this other than by heuristic: you should be able to predict what you’d do and feel after the change as well as you can predict what you’d do without it. By this definition, reading a book you haven’t read before is an example of a non-comprehensible change, but being wireheaded is a comprehensible one. This is indeed excessively narrow, but I’m confident it still gives a large enough search space, and there is no need to go further into unpredictability than necessary.
Keep it low-level. The point of this is things you can vividly imagine, and it’s very easy to get carried away into far mode and abstraction. Talk neurons and algorithms, not ideas and functionality. Or rather, talk about the low-level changes first and then the results they give on higher levels. Describe not what end result it’d be cool to have, but what procedure it’d be fun to carry out!
Have a witty title. It should be in bold.
Keep it fun. This is intended to be a fair bit less serious than most LW discussions.
Keep it something more than fun, and on topic for LW.
Look at the examples I make.
EDIT: Damn, it’s really late and I was a lot wordier than I thought. I don’t have time to write the actual examples; I’ll do that tomorrow, hopefully. Sorry. :(
Can we please refrain from sexist phrasings on LW? Other easy phrasings would be just “gal/guy”, “guy/gal”, or simply “person”.
(Also, I disagree somewhat on the billion-dollar front. That’s a very fun thing to think about, and it reveals interesting things about preferences and cognitive biases such as scope insensitivity.)
Upvoted for pointing out something I’d never have noticed myself.
I like this post, but Armok: Please spell-check. The mistakes (lack of apostrophes, 10 to 10^12 instead of 10*10^12, etc.) are distracting.
With that out of the way:
Stop the Info Loss
I want an eidetic memory, and the ability to delete or prevent the deletion of specific memories at will. That means remembering every instant of my life, except the boring ones I’d throw out to save hard drive space. I’d do this by making a backup of my experiences of the last day/week every day/week, storing them with descriptions of what I did and downloading info from the backups whenever I wanted to access it.
This may of course be due to a lack of imagination on my part, but I don’t think this would lead to a dangerous FOOM as long as the backups were inactive. Even if one “got away,” it’d be a copy of me and no more likely to do anything I’d regret than I would. The effective result (a photographic memory) would be within human mind design space, so I doubt there’d be problems there.
Ok. This is good enough to serve as an example for others of what to do now.
After visualizing doing this, there is one thing making it inconvenient to use… While I now HAVE the information, the only way to access it is to fire up the old backup temporarily and communicate with it. The fast and intuitively accessible part of the memories keeps degrading and has to be laboriously updated from old backups. Likewise, deleting a memory will also delete all memories that happened after that point. It’s still a clear improvement on the old human way, and other future changes enabling merging minds or transferring memories between them would remove these inconveniences.
A better way to achieve this, or more likely a complement to it, might be to use techniques similar to those that have been used with implanted electrodes to get low-level images from animal brains: record your inner eye and voice to video, use speech recognition to make a transcript of the voice part, and make that transcript searchable, so you can find the right point in the video feed if you remember a phrase you thought at approximately the same time.
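To make the retrieval step a bit more concrete, here’s a toy Python sketch of the phrase-to-timestamp lookup; the class name and data layout are just things I made up for illustration, not a claim about how actual brain-recording software would work.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryIndex:
    """Toy index mapping transcribed inner-voice phrases to timestamps in the video feed."""
    entries: list = field(default_factory=list)  # (timestamp_in_seconds, phrase) pairs

    def record(self, timestamp: float, phrase: str) -> None:
        # Each chunk of speech-recognized inner monologue is stored alongside
        # the time it occurred in the continuously streaming recording.
        self.entries.append((timestamp, phrase.lower()))

    def search(self, fragment: str) -> list:
        # Return the timestamp of every recorded phrase containing the fragment,
        # so the matching stretch of video can be replayed.
        fragment = fragment.lower()
        return [t for t, phrase in self.entries if fragment in phrase]

# Usage: remember roughly what you were thinking, get back a place to look.
index = MemoryIndex()
index.record(3600.0, "I should reread that post about fast minds and slow computers")
index.record(7260.5, "note to self: make a backup before trying the unsee button")
print(index.search("fast minds"))  # -> [3600.0]
```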
The way I was visualizing it, it would only delete that day.
Good suggestion. I didn’t think this would be possible without “decompiling” myself first, but it might.
They did that? That’s really cool, can I have a link?
Um, how could it just delete that day? The only way to delete memories with this system is to delete the entire copy of the brain containing them, and all brains descended from the one that experienced the thing you want to forget would contain it.
It depends on what you mean by “decompiling”, I guess, but it’s a very, very limited decompile that has already been done IRL on animals using much cruder means.
And really, that’s about as much info as I can remember on that; you can google it as well as I can. Most likely someone else reading this will know about it, and hopefully they’ll link it.
I’m not talking about things I particularly want to forget, just about deleting brains that don’t have any novel interesting info. But keeping an archive by making backups is an unwieldy and annoying way of doing it, and the way you suggested a few posts up is better. I was actually visualizing something like that before you said I had to have a plausible way of doing it; I only came up with the backup thing because I didn’t think I’d be able to just archive my inner eye/voice.
OK, sounds like we pretty much agree then.
Still, there ARE good reasons to keep entire brain backups too, for things that aren’t just audio and video. I’m pretty sure you couldn’t do anything equivalent for smell, for example. Likewise, skills can rust from lack of use, and that too is worth preserving. My approach would probably be to have the audio and video backup streaming constantly, have archival backups every few subjective months, and have subjective-hourly backups that you can revert to if the last hour was completely boring AND non-educational AND you didn’t practice any skill you need to get better at. Or for when you encounter a basilisk or a wireheading risk or something else like that you’d like gone from your mind. I’d label the revert button “CAN HAS UNSEE”. :p
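If you wanted to pin that revert policy down, it might look something like this toy sketch; the fields and the exact test are placeholders for whatever criteria you’d actually settle on.

```python
from dataclasses import dataclass

@dataclass
class HourLog:
    """Toy summary of one subjective hour, used to decide whether to keep it."""
    was_interesting: bool
    was_educational: bool
    practiced_skill: bool
    hit_memetic_hazard: bool  # basilisk, wireheading risk, etc.

def should_revert(hour: HourLog) -> bool:
    # Revert ("CAN HAS UNSEE") if the hour hit a memetic hazard, or if it was
    # boring AND non-educational AND involved no skill practice.
    if hour.hit_memetic_hazard:
        return True
    return not (hour.was_interesting or hour.was_educational or hour.practiced_skill)

# The archival full-brain backups every few subjective months sit alongside this;
# the hourly snapshots exist only to make reverting cheap.
print(should_revert(HourLog(False, False, False, False)))  # -> True
print(should_revert(HourLog(True, False, False, False)))   # -> False
```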
Interesting potential akrasia-blocker: set this up to automatically revert me if I spend a whole hour doing one of a list of activities tagged “procrastinating”. This could have annoying effects and wouldn’t stop me from spending 50 minutes of the hour goofing off, but it might be useful.
Um, if you can do this stuff there’s a much easier way to do this: just have a copy of you (probably running at a lower speed) supervising the other copies. Then, if a copy is wasting time, SUSPEND that simulation. After all, it’s something you want to do sometime; akrasia is only a problem if time is limited or the activity has other negative effects.
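The difference from the auto-revert idea is that nothing gets deleted, only deferred. A toy sketch of the supervisor loop, with all the names invented for illustration:

```python
import enum

class CopyState(enum.Enum):
    RUNNING = "running"
    SUSPENDED = "suspended"

class UploadCopy:
    """Toy stand-in for one running copy of yourself; the real semantics are left open."""
    def __init__(self, name: str, current_activity: str):
        self.name = name
        self.current_activity = current_activity
        self.state = CopyState.RUNNING

def supervise(copies, procrastination_tags):
    # The slow supervisor copy suspends (rather than reverts or deletes) any copy
    # currently doing something tagged as procrastination; with effectively
    # unlimited time it can simply be resumed later, when goofing off is the point.
    for copy in copies:
        if copy.current_activity in procrastination_tags:
            copy.state = CopyState.SUSPENDED

copies = [UploadCopy("worker-1", "studying physics"),
          UploadCopy("worker-2", "reading tvtropes")]
supervise(copies, {"reading tvtropes"})
print([(c.name, c.state.value) for c in copies])
# -> [('worker-1', 'running'), ('worker-2', 'suspended')]
```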
Personally, I probably wouldn’t even need to do this; I can’t think of anything that’d actually be akrasia in a situation like that, with near-unlimited time and the ability to bring back an ancient copy from storage if too many unhealthy changes accumulate. I’d just READ all of TVTropes and Wikipedia and YouTube, and then I’d be DONE. And the slow humans couldn’t make new content fast enough for it to take a significant portion of my time.
I’m really intrigued by your second claim. I often “waste time” creatively; I suspect I would do a lot more of that as a high-speed upload. The relative lack of human-generated content is neither here nor there.
Of course, as with all akrasia discussions, a lot depends on what qualifies as a waste of time. If it is worthwhile to do what I feel like doing when I feel like doing it, then I do worthwhile things constantly, and I don’t have akrasia… though I might have a problem with judging myself inappropriately, if I believe what I’m doing isn’t worthwhile.
I sort of like the idea of multiple copies of me competing for timeshare, but I’d expect it to not work out very well, as some of me would sooner or later start trying to game my own evaluation metric in order to get more time than the rest of me, which over time is an unfortunate trend.
Maybe the confusion is this: it is rare for time to be completely wasted. Reading TVTropes has many benefits, and anything creative is obviously a good thing. “Wasted” time is really just time that could have been spent even better. An upload like this would value its time a LOT less; the opportunity cost goes down immensely, and suddenly what was waste before is now proper training and study.
Uploading is sounding more and more appealing.
On a more serious note, if I split into copies for that sort of reason, I’d prefer to merge and re-split occasionally or share memories across copies. Then all the mes would have all the fun, instead of each me having some.
Depending on how much computing power I had, just letting some designated TVTropes/Wikipedia/YouTube personalities run at a slower rate, and merging them back in when I’m done (or rotating copies between fun and work so someone’s always working) could be an option.
This is an interesting point. As an upload, I wouldn’t have many obligations to others that I couldn’t do quickly and easily. That would leave me essentially doing whatever I felt like, with little or none of the urgency of current life. If I wanted to read TVTropes, I could with no consequences. If I wanted to read the literature in particle physics, I could do that with no consequences.
Yea.
Only problem is: how do you “merge” two copies of your brain so that the result has the memories of both? I’d guess it’s somewhat implementation-dependent and pretty hard. Hard enough that, for the purposes of a thread like this, we can’t assume it can be done without explaining how.
Meta-comment: Your stance on this, that technologies should be known to be workable before their effects are discussed, is different from the LW zeitgeist (citation when I can find it). I say this without stating a preference for either. I just wonder what it says about how you think about the future or things in general.
I don’t take that stance in general. However, this thread is just random idle sci-fi discussion, and the details of how it is done in this case seem relevant to the end result.
Again, you forgot a title in bold. And you’re supposed to describe a safe route to archiving stuff, not just what the goal of the change is.
(As for the spelling stuff, I’ve never been able to do it under any known circumstance. Anything the automated spell checker misses I’ve simply given up on by now.)
Thanks for reminding me. I fixed the title and added more details; I hope the “safe route” description is what you were after.
Project 1 is to make a brain decompiler, so that I run as a program rather than as a simulation within a program. Brains are complicated, so it’s likely that we will figure out how to approximate one as a physical object before we figure out how to actually understand one as a computational process. But I’ll have plenty of time, and a virtual sandbox that present researchers don’t have.
Depending on my options, I might weaken some of my basic urges before I finish the project. Current psychology literature should be enough for a reversible procedure to do crude things like that.
No title, far too broad-strokes and high a level of abstraction, and that last part is blatantly values-destroying. Won’t downvote, because I hadn’t yet provided examples, so misunderstandings are understandable.
Please leave it open. It’s a fun topic. Also, if I may stick my nose in and give some advice: instead of trying to control how people comment (title in bold, no values-destroying, provide specific details of how to do it) just let people say whatever.
OK. I’ll leave it open, and maybe make some posts the way I intended them, that people can use as examples. Thanks for the advice and encouraging words; having my first top-level post (even if it was just in the discussion section) completely downvoted was really hurting my confidence.
Ziphead Copies
Just a placeholder so I don’t lose a tangentially relevant link: http://lesswrong.com/lw/44l/fast_minds_and_slow_computers/3ha1
Kudos to those who get the book reference.
Fast Minds and Slow Computers
http://lesswrong.com/lw/44l/fast_minds_and_slow_computers/ Basically: you’ll have lots and lots of time, and even as an unaided human you can do a lot with that. This is a very short one because it’s all in the post linked above.
(This is what inspired me to think this kind of subject was OK actually.)
Here, have a written downvote, because I think I’m running low on potential downvotes and don’t want to use one on you. Firstly, your bolded title doesn’t really provide much of a description; you’re really saying that you want to be a fast mind on a slow computer. And secondly… doesn’t that scenario make it clear that you would be terribly lonely, if the way you perceive things remains the same? I’d much rather coexist with equally fast minds, which in this scenario would be quite likely.
Seeing this, I realized that one of my fairly high priorities if I were running at a millionfold speedup would be to convince the people I care about to do it as well, both for their own benefit and so that we’d be able to interact.
I suspect that it would take them more than a day to do so, and I also suspect that long before then I would have lost interest in them. A millennium is a long time.
Sad, but true. Perhaps I’d create simulated people similar to them instead. Designing people like that has its own pitfalls, though. A situation where I start as the only upload sounds very not-fun from a social perspective… maybe I’d increase my tendency to introversion to the point where I could be psychologically stable on my own, at least until I had a simulated world to be in.
That sounds very dangerous for your values. For company, just branching yourself and drastically limiting (but not zeroing) the amount of contact between the branches, while having them read different books, do different tasks, and maybe run with small differences in settings, should be enough to make your brain register each branch as a different person and fulfil the need for social contact.
Yeah, this is more or less what I would do. I don’t think explicit modifications would even be necessary… the simple existence of others would, I think, be enough to drive me in multiple different directions.
Well, I was intending to write more in that post. But I guess I wasn’t clear about that.
Anyway, the “modification” in this case is running yourself faster than realtime and using a simple simulation instead of, say, a robotic body. But that’s not really the point.
I guess the reason I thought this an appropriate post is the insight that just having lots of time, with no other changes, is enough to enable you to do superhuman things. Yeah, it’s all ridiculously obvious to anyone who’s spent five minutes on LW, but I included it for completeness, I guess, and because later posts might build on it.