When does technological enhancement feel natural and acceptable?
Technology can be used and perceived in different ways. Future technology may change our lives beyond imagination. How can friendly AI technology enrich the human experience positively? Technology can feel like it controls us, or—if it goes well—it can feel like a natural enhancement of mind and body.
I’m interested in ways future technology could or couldn’t do this. I will explore some avenues and state my opinion on them. Make up your own mind. I’d like to hear your opinion.
Body Enhancement
The first things we’d like to get rid of are impediments and diseases. Some might like to be immortal. But this is not enhancement but rather maintenance.
People apparently like to enhance their bodies. This starts with cosmetics and doesn’t end with doping. Strictly speaking, clothing could also count here. We know quite well what we want in this area. I’d bet that people would accept body enhancements easily—esp. if they are reliable, safe, and/or reversible. Fictional evidence here is the positive reception of suitably enhanced heroes. Wouldn’t you like to have super strength or look like a supermodel? Such enhancement for everybody—which is otherwise zero-sum—could also balance against the effect that ideals in the media diminish our self-image, compared to the ancestral environment, where we were just one among 150 average guys.
As long as people don’t change their native preferences, this should make everybody happier with themselves. If preferences are changed, bets are off again.
Mind Enhancement
Drugs can have not only pleasurable but also performance-increasing effects. Nootropics for everybody could be acceptable—if free and safe. I think that increasing brain power (speed and capacity) would feel the most natural—if it could be done.
The trouble with this is the inevitable return on cognitive investment: it seems that either exponential or chaotic changes (from interacting minds) result. One tricky part here seems to be how to avoid boredom once stability has been reached.
Body Schema Extension
Body perception is flexible. It is known that the body schema (our body self-image) can expand to encompass tools. Tools thus become part of the body (schema) and are wielded and handled—and thus felt—like one’s own body. I got the impression that this might extend to vehicles: driving a car, or probably also flying a plane, can feel like one’s own movement. One knows where the car ends. I’d guess that technology that has immediate feedback and can be mapped to a (distorted/extended) body schema will likely feel natural (after some time of adjustment).
Sensory Enhancement
Apparently, our senses are quite flexible. Almost any input (visual, auditory, tactile, even smell) can be mapped to a 3D environment model by long training. This is apparently also possible for non-native senses, which is called sensory substitution or sensory augmentation. There are already some projects which build actual working devices. Once this mapping has settled into the subconscious, it feels natural. I wonder whether augmented reality systems can achieve this. Virtual reality systems are the dual of this—data mapped to the senses instead of senses mapped to data.
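To make the mapping idea concrete, here is a toy sketch of the core of a belt-style direction-sense device (like the compass belts mentioned in the comments): a compass heading is mapped onto a ring of vibration motors worn around the waist, so that the motor pointing north always vibrates. The motor count and function names are my assumptions for illustration, not from any actual device.

```python
# Sketch of a sensory-substitution mapping: compass heading -> vibration motor.
# NUM_MOTORS and the layout are illustrative assumptions.

NUM_MOTORS = 8  # motors spaced evenly around the belt; index 0 faces forward


def motor_for_heading(heading_degrees: float) -> int:
    """Return the index of the motor closest to the given compass heading.

    The wearer turns; the heading changes; a different motor vibrates.
    Over time the brain maps this steady tactile signal to direction.
    """
    heading = heading_degrees % 360.0          # normalize, incl. negative values
    sector = 360.0 / NUM_MOTORS                # degrees covered per motor
    return int((heading + sector / 2) // sector) % NUM_MOTORS
```

For example, with 8 motors, facing north (0°) activates motor 0 and facing east (90°) activates motor 2. The point of the sketch is that the mapping itself is trivial; the interesting part is that constant, immediate feedback lets the mapping settle into the subconscious.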
Devices and Gadgets
Devices that require conscious interaction and translation into some UI often feel clumsy—no, are clumsy. They break the flow. They require conscious effort. I think the main attraction of “having an app for that” is the feeling of control at a distance we gain. We can do something by invoking some magical ritual to achieve an effect other people can’t achieve (or rather can achieve only via mundane manual action). This is good and fine, but even better if you can achieve the effect without the interaction.
There was a recent post somewhere about the best smartphone UI being just a blank screen where you could type (or dictate) what you want, and the ‘UI’ would figure out the context and intention. While googling unsuccessfully for that, I found this link about natural UIs:
“The real problem with the interface is that it is an interface. Interfaces get in the way. I don’t want to focus my energies on an interface. I want to focus on the job … I don’t want to think of myself as using a computer, I want to think of myself as doing my job.”—Donald Norman in 1990
Services
Commerce and esp. the internet provide lots of services that we use to reap the benefits of a digital society: Amazon, Netflix, online booking… But we are at the mercy of the service providers, the power of the interface provided, and the cost required. Independent of how well-integrated this is, every interaction with a service means either a transaction (cost per use), a freemium choice (will I regret this later?), or ad suffering (paying with attention). This is bondage and discipline. I’d rather minimize this as a means of future technological enrichment.
Language Control
Communication is natural—via speech and via text—and not only with people. Most programmers value the power of the command line, because it allows them to combine commands in new ways in a—to an experienced user—natural linguistic way. Why not use language to control the technology of the future? Just utter your wishes. A taste of this could be the service offered by the Magic startup.
Social Interaction
We are social animals. Could we deal with digital assistants who understand us and support us? Probably—if they are beyond the Uncanny Valley. But would we trust them? Only if they behave consistently with equal or lower status. Otherwise, we’d justifiably feel dominated. Can this be achieved if the artificial agent is much smarter than us and unavoidably controls us thereby? Would we feel manipulated?
Slow Processes
Societal processes affecting us in ways we (feel we) have only limited control over often feel oppressive—even if they are by some objective standard intended for our (collective) best. Examples are health care bureaucracy, compulsory education, traffic rules, and above all, parliamentary democracy. These processes are slow in the sense of affecting and changing things over longer times than conscious effort can easily work on. Often there is no immediate or clearly attributable feedback. Such a process often feels like a force of nature, and humans have adapted quite well to forces of nature—but just because we accept it doesn’t mean that we feel liberated by it. I think that any slow process that changes things in complex ways we cannot follow will cause negative feelings. And many conceptions I have seen of how FAI could help us involve masterminds setting things up. People might feel manipulated. Either this is balanced by other means, or we need a correspondingly slow consciousness or deep understanding to follow it.
My Transhuman Wish-List
I want to look better, be more robust—even if everybody else would look better too. I want backups and autonomous real and virtual clones of myself.
I’d like to think faster, have a perfect memory, or even access information from the web in a way that feels like recall. I’d like to be able to push conscious thought processes into the subconscious—call it deliberate, efficient reversible habit formation.
I’d like to be able to move into machines (vehicles, robots, even buildings) and feel the machines as my extended self. I’d like to perceive more kinds of sensor input as naturally as my current senses.
I don’t want to interface with devices but to command them linguistically, by thought, or completely subconsciously.
I want a consciousness that can deal with slow processes and possibly a way to think slower in parallel with normal consciousness.
Open Ends
There are more areas where this reasoning can be applied, and I’d like to state some general patterns behind these areas—but my time for this post has run out.
Just two examples:
Incremental changes are preferable to abrupt changes. People oppose changes whose consequences they cannot foresee. But compared to slow external processes, slow internal processes may be the best option.
Enhancements that can be used subconsciously are better than those that need conscious attention (and context switches).
I’d like to give fictional evidence for each point. But here I’ll just point you to the Optimalverse, where some of these are played out, and to The Culture, which describes some of the effects.
EDITED: Spelling, typos.
Have you ever wondered why, in an age of cell phones and hand grenades, telepaths and fireball-throwing wizards in fantasy books sound cool? Somehow it seems we like to do things with our mind or body alone, not relying on tools.
That is where primitive cyberpunk novels fail. I am pretty sure I don’t want to replace my eyeballs with mechanical eyes. However, I am wondering whether LASIK surgery could be good. So the idea is not so much to implant machines in our body or to use them externally, but to use technology to make our own bodies high-quality and powerful. I like this idea.
Not even if you could adjust them to become telescopes or microscopes if need be? If you could switch to seeing in infrared or ultraviolet? Add amplification to clearly see on moonless nights with nothing but starlight?
If this can be done inside the eyes, it can be done outside the eyes, as a removable eyeglasses like thing.
This is why cyberpunk never really made sense to me. Why remove the choice of putting something on or taking it off? Okay, there is the advantage of never forgetting it at home—still. Gibson’s razor blades implanted under the fingernails sound cool until you realize you just gave up, for example, the option of ever being allowed on an airplane.
Oh, and sometimes I hear horror stories: when people wear diamond rings for decades until they become unremovable from their fingers, and some criminal mugs them, the mugger just cuts off the finger. Extrapolate from here...
People shooting other people with blaster rifles and flying spaceships sounds cool too.
I’m not sure what your point is?
You might not, but that doesn’t mean that nobody does. I think I have met three people face to face with implants that let them perceive magnetic fields.
Need not be implants. The NorthPaw http://sensebridge.net/projects/northpaw/ or the feelspace belt http://feelspace.cogsci.uni-osnabrueck.de/ are cool despite being devices—precisely because they quickly fade into the subconscious.
Yes, there are non-implant solutions, but that doesn’t change the fact that there are people willing to use implants.
If you have vision problems, as I do, those “mechanical eyes” sound interesting.
Fantasy appeals strongly to the adolescent mind because our bodies at that age start to change and develop new powers, so to speak—just not necessarily the kinds of powers we might want; or else our new powers still don’t meet the needs of our new desires. Notice especially how fantasy appeals strongly to the sorts of boys who get pushed aside from access to girls until their 20′s, or even indefinitely in more cases than we would care to admit.
But that is self-hating escapism mostly.
Required reading:
Man Into Superman (1972), by Robert Ettinger:
http://www.cryonics.org/images/uploads/misc/ManIntoSuperman.pdf
I read it in 1974. Ettinger anticipated a lot of things that today’s transhumanists think they just discovered.
Which enhancements would you like? “Yes” doesn’t mean “always” but “as needed”. Choose “Other” if unsure, if you see other choices you want to comment on or if you just want to see the answers.
[pollid:908]
[pollid:909]
Virtual clones [pollid:910]
Real clones [pollid:911]
Independent clones [pollid:912]
Think faster [pollid:913]
Think slower [pollid:914]
Perfect memory [pollid:915]
Conscious access to numeric computing resources (arithmetic, statistics) [pollid:916]
Conscious access to symbolic computing resources (logic) [pollid:917]
Conscious access to Turing-complete computing resources [pollid:918]
Recall of information from the web like own memory [pollid:919]
Conscious control over habit formation [pollid:920]
Affect emotional states in a controlled way (happiness, attention, fear...) [pollid:921]
Alter my mind in deeper ways [pollid:922]
Move my mind into or expand my mind to vehicles or other bodies [pollid:923]
Perceive radiation natively [pollid:924]
Perceive material properties natively [pollid:925]
Act via tactile control of audiovisual devices [pollid:926]
Act via tactile control with feedback via augmented senses [pollid:927]
Act via linguistic control of audiovisual devices [pollid:928]
Act via linguistic control with feedback via augmented senses [pollid:929]
Act via conscious thought control of audiovisual devices [pollid:930]
Act via conscious thought with feedback via augmented senses [pollid:931]
Interact with artificial beings [pollid:932]
Interact with artificial beings that are smarter than I [pollid:933]
Interact with artificial beings that are less smart than I [pollid:934]
Interact with artificial beings that are more powerful than I [pollid:935]
Have (parallel) consciousness which runs at time-scales of societal change [pollid:936]
Conscious access to slow processes [pollid:937]
Be consciously aware of cost-benefit trade-offs any application or usage of the above enhancements brings [pollid:938]
You may add other polls as sub comments.
I want to be able to reverse aging.
What would the use be of thinking slower? Maybe for boring times?
I don’t just want conscious recall of information from the web like my own memory; I want to be able to communicate (both receive and transmit) directly in hypertext—I don’t know what it would be like, but it’s frustrating that I can’t.
If I could alter my mind in deeper ways, I’d like really good version control. I’d also like to be able to toggle between sensory extension and old-style sensory systems—there’s a lot of art which is optimized for currently standard senses.
And I’d like self-modules, so that if I wanted to experience something as though it were new to me, or as if I were at an earlier age, I could. Daniel Pinkwater (a notable author of children’s books) has mentioned that he has access to what it’s like to be various ages.
No, though that might be useful for things like long space travel too.
I’m more thinking about the ability to perceive and act on longer timescales effectively. What Robin Hanson calls the Long View. We are not very good at noticing and consciously dealing with processes that are much slower than our attention span. We have to piece these together from episodic memory.
Sorry for the laaaaate reply. Curious whether you are still here.
I think people are SEVERELY overestimating the utility of perfect memory (74% yes, 10% no) and underestimating the value of traumatic and unpleasant experiences fading over time. Some people currently have perfect memory; it is not a good experience.
A better selective memory is a good thing. Electing to remember where you placed your keys or the name of your mailman is a good idea. Having a perfect memory of all the idiotic things you said or did during your first breakup or that fight with your mom—or, more importantly, that time you were molested or almost died in combat—is a recipe for emotional disaster and severe PTSD. It’s very hard to control where your mind dwells and how memories are triggered, but slow fade and nostalgic filters protect us from the worst emotional damage of long-term rumination over negative events.
http://www.spiegel.de/international/world/the-science-of-memory-an-infinite-loop-in-the-brain-a-591972.html
Insightful. But that really ‘only’ means that these transhumanists just want conscious access to the availability of the memory too.
Summary of results: we want everything.
What’s the difference between “die when I want” and “immortality”? I would expect “die when I want” would mean that I keep living until I decide to die, and “immortality” would mean that I keep living, but I could totally change my mind if I want to. I’m fine with clones if we can recombine, but if we can’t it would be disconcerting.
Lots of people have voted “other”—but not always with “show results”—so I wonder: what other options are hidden?
I think that it’s acceptable when it works.
What I mean is, a lot of the transhumanist stuff is predicated on these things working properly. But we know how badly wrong computers can sometimes go, and that’s in everyone’s experience, so much so that “switch it off and switch it on again” is part of common, everyday lore now.
Imagine being so intimately connected with a computerized thingummybob that part of your conscious processing, what makes you you, is tied up with it—and it’s prone to crashing. Or hacking, or any of the other ills that can befall computery things. Potential horrorshow.
Similarly for bio enhancements, etc. For example, physical enhancements like steroids, but safer and easier to use, are still a long way off, and until they come, people are just not going to go for it. We really only have a very sketchy understanding of how the body and brain work at the moment. It’s developing, but it’s still early days.
So ultimately, I think for the foreseeable future, people are still going to go for things that are separable, that the natural organic body can use as tools that can be put away, that the natural organic body can easily separate itself from, at will, if they go wrong.
They’re not going to go for any more intimate connections until such things work much, much better than anything we’ve got now.
And I think it’s actually debatable whether that’s ever going to happen. It may be the case that there are limits on complexity, and that the “messy” quality of organics is actually the best way of having extremely complex thinking, moving objects—or that there’s a trade-off between having stupid things that do massive processing well, and clever things that do simple processing well, and you can’t have both in one physical (information processing) entity (but the latter can use the former as tools).
Another angle would be to look at the rickety nature of high IQ and/or genius—it’s six of one, half a dozen of the other whether a hyper-intelligent being is going to be of any use at all, or just go off the rails as soon as it’s booted up. It’s probably the same for “AI”.
I don’t think any of this is insurmountable, but I think people are massively underestimating the time it’s going to take to get there; and we’ll already have naturally evolved into quite different beings by that time (maybe as different as early hominids from us), so by that time, this particular question is moot (as there will have been co-evolution with the developing tech anyway, only it will have been very gradual).
I’m no expert in the field, but I’d like to bring up neuroplasticity. Our brains are constantly rewiring themselves as they process input, and they gradually adjust to change. My point is that I believe any enhancement could come to feel natural (although some would certainly have a higher learning curve).
Other thoughts:
Ever read Uglies, Pretties, and Specials by Scott Westerfeld? It’s set in a utopia/dystopia where massive plastic surgery is the norm—at 16, everyone chooses what they will look like (going from “Ugly” to “Pretty”), with similar changes at middle age, and so on. One of the points made is that there will always be something to envy—if it stops being looks, it’ll become something else.
I’d take some kind of physical enhancement that removes most bodily needs—sleeping, bathroom, eating, etc. - although this is a symptom of the more general “anything that gives me more free time is good” heuristic.
I can imagine some kind of gene sequencing becoming a regular medical practice—stripping people of bad genes, or enhancing good ones.
When does technological enhancement feel natural and acceptable? When it relieves us of a perceived burden. When trend leaders present it as natural and acceptable. When other choices are taken away. In short, not necessarily for rational or self-interested reasons.
Good results can come if you turn the question on its head: when does technological enhancement not feel natural and acceptable? Or, as an egoist: how can I get what I want regardless of how others feel about it?
You can have a lot of fun imagining how HEPs (highly enhanced persons) would interact with MOSHes (Mostly Original Substrate Humans), especially if the MOSHes didn’t understand the nature of the interaction. Olaf Stapledon wrote an excellent short novel on that theme back in the 1930′s:
Odd John:
http://gutenberg.net.au/ebooks06/0601111h.html
The HEP character as a boy, John Wainwright, for example makes a MOSH boy fall in love with him in a homosexual way, just as an experiment.
John is a feral child who needs to figure out what he’s doing by himself.
In the real world, there would be at least efforts to have some rules for how HEPs and MOSHes interact, even if those rules can’t be enforced reliably.
We don’t exactly have any HEPs around now, that I know of. The first ones, however, may live in something analogous to a state of nature until new social norms emerge to regulate their behavior.
That’s the Borg. If remembering feels the same as information that other people put on the internet, it changes a lot.
Ah, the idea is that I can recall the information that way but am aware of the source—not that it feels genuinely like my own memory. But without that distinction, yes, the Borg fits.
Related article by Zoltan Istvan:
The Culture of Transhumanism Is About Self-Improvement
http://www.huffingtonpost.com/zoltan-istvan/the-culture-of-transhuman_b_7022406.html
So my instinct is to write this guy off as a nut because it’s super sketchy to try to run for president for a party that (to my very cursory knowledge) he made up to increase his book sales. Does anyone else find some value in paying attention to him or taking time to read his stuff? I’m wondering if this is a correct judgement on my part or an instinctual misfire.
I follow his career with a perverse kind of fascination just to see how aggressive self-promotion works. Two years ago I had never heard of this guy, though he does have a media trail on the internet. Now he has figured out how to get invited to all kinds of H+ related conferences so that he can plug his novel, argue for the imminence of all kinds of radical transformations in the human condition due to allegedly accelerating technology, and make the case for a transhumanist political party in the U.S. with himself as the presidential candidate.
Case in point: He will speak at a conference in Palm Springs next month, along with several other individuals whose names you might recognize, hosted by something called the Brink Institute:
http://brinkinstitute.org/
I can say that whenever he speaks about biology his claims are an order of magnitude more inane than the usual ones I see made by others.
Is there a standard metric for inanity of biological claims?
Hmmm… we could define one.
We might need multiple axes, though: one for thermodynamic implausibility, one for dammit-that’s-not-how-it-works-at-all / misapplication of programming concepts to chemistry, one for do-you-realize-how-complicated-what-you-just-suggested-is.
I recently registered to vote and did not see his party listed as an option, even though I have never heard of the “Americans Elect Party” and it is an option. I mostly pay attention when other people mention him. Also, I kind of wish the Transhumanist Party would issue some statements about ballot issues besides “vote for Istvan”.