True fact: I don’t drive. I never have. My visual processing and reflexes are bad enough that I don’t trust myself to do so—I haven’t been specifically told that I’m not allowed to, but I estimate that there’s a significant enough chance of me causing an accident if I do that it’s not worth it. This is a source of inconvenience in my life, but it hasn’t been too hard to adjust my lifestyle to accommodate it, so that’s what I’ve done. I am, however, hoping that those spiffy new self-driving cars that Google has been working on turn into something other than a geeky novelty sometime soon. I want one. There’s a reasonable chance that they’ll revolutionize my life.
Compared to regular driving, is a self-driving car ‘surrendering freedom’? I suspect not, or at least not much. One might not be able to slow down to get a better look at some distracting thing along the side of the road, or run a red light when there’s clearly nobody else at the intersection, and one might have less control over the route one takes from point A to point B. But generally speaking, other than the skills involved, there doesn’t seem to be much difference between the two.
How about a self-driving car that’s able to communicate with computers at nearby stores? One could give the car a file with one’s grocery list, and have it go to the store that has all the items in stock for the best combined price. This seems like giving up a little bit of freedom—maybe I like shopping at store A rather than store B, and don’t mind doing without bananas this week—but it seems like a good thing to me, overall. The car is still a tool, helping me achieve my preferences.
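For concreteness, here’s a minimal sketch of the decision rule I have in mind—pick the store that has everything on the list in stock at the lowest combined price. The store names, prices, and the idea that inventories arrive as simple item-to-price mappings are all invented for illustration:

```python
# Hypothetical sketch: choose the store that stocks every item on the
# grocery list at the lowest combined price. Inventories are assumed
# (for illustration only) to arrive as {item: price} dictionaries
# broadcast by each store's computers.

def choose_store(grocery_list, stores):
    """Return the name of the cheapest store stocking every item, or None."""
    best_store, best_total = None, float("inf")
    for name, inventory in stores.items():
        # Only consider stores with every item in stock.
        if all(item in inventory for item in grocery_list):
            total = sum(inventory[item] for item in grocery_list)
            if total < best_total:
                best_store, best_total = name, total
    return best_store

stores = {
    "Store A": {"milk": 2.50, "bread": 1.80},                  # no bananas
    "Store B": {"milk": 2.70, "bread": 2.00, "bananas": 0.60},
}
print(choose_store(["milk", "bread", "bananas"], stores))      # -> Store B
```

A real system would weigh travel distance, partial matches, and my preference for store A, but even this toy version shows the car acting as a tool that optimizes within preferences I’ve handed it.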
How about we take that new functionality a few steps further: the resulting technology wouldn’t exactly be a car any more, but more ubiquitous, gathering data that’s important to all my decisions and telling me whether things are good ideas or not. This system wouldn’t just say ‘go to store B; store A is out of bananas’; it would say ‘go to store B; store A’s bananas come from a company that was just discovered using child labor, and A has not yet announced that they’ve switched suppliers’. This still helps me achieve my preferences, but in a much more holistic way—and by keeping track of many more things than I possibly could on my own.
It’s a bit of a conceptual leap, but not too far to be believable, between that and a system that has the potential to notice that the best way to let me have the experiences I want without inconveniencing others is to upload me into a simulated environment with a group of compatible individuals with similar preferences—that this option not only avoids the risk of supporting child labor, but potentially avoids supporting involuntary labor altogether, by making it so that the only ‘real’ things I need are hardware and electricity, both of which can be created by machines.
Where in this, exactly, do I stop being an adult, by your standards? Has it perhaps already happened, since I’m willing to trust a smart car to do something that I ‘should’ be doing for myself?
I’m not saying “not being an adult” is a bad thing, at least not by my own standards. There are many aspects of “adulthood” I repudiate.
However, I thought the whole point of libertarianism, which seems to be endorsed by some notable people here, was to maximize individual freedom and embrace the many dangers that come with it. I’m not sure that’s such a good idea, and I’m arguing that limiting our choices through agents we can’t control allows us to feel more in control of what’s left, and more satisfied with our choices. I then think of the logical extreme of such an attitude, and wonder and cower before it, feeling its “goodness” to be as ambiguous as the libertarian’s. Hence the “CONGRATULATIONS” scene, since that was a bit of an Esoteric Happy Ending for anyone familiar with the context.
In other words, it’s what they call Peter Pan Syndrome: do you want to be a child forever? People always seem to get nostalgic about their childhoods (except some outcasts for whom childhood was a terrible time, and who are quite happy to live in a world where you can sue people who attempt bullying). Yet the constant exhortation: “Grow up”. “Stop being such a child”. “You’re a big boy/man/big girl/woman now”. “Take responsibility”. But is adulthood really worth it? If we could give it up, should we? Allowing an AI to govern our lives seems to amount to giving up humanity’s adulthood. It wouldn’t even be a Zeroth Law Rebellion; we’d be the ones asking for it. So, should we?
A self-driving car is a robotic chauffeur. Human chauffeurs are not our bosses but our servants. There are many other examples of devices replacing servants and other underlings. I wouldn’t offhand consider any of these to be examples of “surrendering our freedom to an external agent”. I would, instead, consider becoming a servant or underling to be an example of surrendering (part of) our freedom to an external agent, who tells us what to do.
It’s a question of who is telling whom what to do. Are you telling the device to do something, or is the device telling you? We mostly tell our devices what to do.
There are, of course, devices that tell us what to do or otherwise oversee us. For example, a cash register that calculates change in effect tells us what to do, in the trivial sense of telling us how much change to return. This is fairly trivial, and we welcome the help. More ominously, a modern cash register keeps tabs on cashiers by keeping a perfect record of what was sold and exactly how much money should be in the tray. This is the sort of oversight that a human manager used to do, so in this case the machine acts as a kind of immediate supervisor.
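To make the two roles concrete, here’s a toy sketch of what the register computes in each case—the function names and figures are invented for illustration:

```python
# Toy sketch of both roles the register plays: computing change
# (helping the cashier) and reconciling the drawer (overseeing them).
# All names and figures here are invented for illustration.

def change_due(amount_owed, amount_tendered):
    """The 'helpful' function: how much change to hand back."""
    return round(amount_tendered - amount_owed, 2)

def drawer_discrepancy(starting_float, sales_total, counted_cash):
    """The 'supervisory' function: does the tray match the record?"""
    expected = starting_float + sales_total
    return round(counted_cash - expected, 2)  # negative means cash is missing

print(change_due(7.35, 10.00))                     # -> 2.65
print(drawer_discrepancy(100.00, 523.40, 620.00))  # -> -3.40, flagged for review
```

The same arithmetic serves both the cashier and the manager; what changes is who the output is reported to, which is exactly the distinction at issue.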
I’d point out that a lot of powerful people have advisors whose job is, more or less, telling powerful people what to do. It seems less about telling and more about being able to actually compel via some form of force—when the cash register gains the ability to auto-issue disciplinary actions, then I think it’s “telling us what to do”. When it’s simply reporting information, it’s still subordinate, just not necessarily to you personally.