This book reminds me of a discussion I had with someone recently regarding open-sourcing anonymized medical data. His position, after a few exchanged replies, crystallized into something along the lines of: “The possible downsides of scientific advances based on medical data outweigh the benefits, because entities like banks and governments will use it to better determine credit ratings and refuse to lend to at-risk people”… my reply was along the lines of “Basically every new technological advance that helps us compute more data or understand the human body, psychology, or human societies better will help banks discriminate between creditors on more criteria those creditors have no control over, so why not generalize your position into being against all scientific advancement?”… I still haven’t gotten a reply.
I think this is a widespread phenomenon: people say they are afraid of <insert technology> when what they really mean is that they are afraid of other humans.
Take, for example: surveillance, drones, deepfakes, algorithmic bias, job loss to automation, social media algorithms… none of these are AI problems; all of these are human problems.
Surveillance is an issue of policy and democracy. There are active politicians at all levels who would do all they can to ban mass surveillance, and to make it transparent where it’s not banned. Surveillance happens because people want it to happen.
Drones are an issue because people killing other people is an issue. People have been killing other people without drones, and runaway killers (e.g. disease, nuclear waste, mines, poisoned areas) have resulted from people killing other people for a long time.
Algorithmic bias reflects the fact that people are biased (and arguably people can’t be made un-biased, since, as Scott mentions, it’s impossible to agree on what “un-biased” even means).
Deepfakes are an issue because people can use deepfakes to incriminate other people. People have been using photo alteration to incriminate others or change the narrative against them since at least the 1930s (see Soviet-era photo alteration to remove or incriminate “enemies of the state”).
Job loss to automation is an issue in the same way jobs moving to cheaper countries or jobs disappearing to non-AI automation are: it’s an issue in the sense that we don’t have a society-wide mechanism for taking care of people who aren’t useful in a market economy or in an otherwise market-adapted social circle. This has been an issue since forever, or at least for as long as we’ve kept historical records of societies, most of which include the idea of beggars and thieves doing it to survive.
Social media algorithms are an issue because people voluntarily, with knowledge of the algorithm and full knowledge of the results, expose themselves to them. There are hundreds of alternative websites and thousands of alternative systems that don’t use algorithms designed to stimulate people while providing no useful information and, on the whole, making their lives shittier by promoting fear and anger. How long this has been an issue is arguable: some would say since the written word, some would say only since modern fear-mongering journalism came about around the 19th century.
But all of these are human problems, not AI problems. AI can be used to empower humans, thus making human-generated problems worse than before, but the same can be said about literally any tool we’ve built since the dawn of time.
Grg’nar has discovered fire and sharp stone. Fire and sharp stone allow Grg’nar to better hunt and prepare animal meat, making him able to care for and father more descendants.
Fire and sharp stone allow Grg’nar to attract more mates and friends, since they like the warmth of Grg’nar’s fire and appreciate the protection of Grg’nar’s sharp stone.
This gives Grg’nar an unfair advantage over his competition in the not-dying-and-reproducing market, and allows Grg’nar to unfairly discriminate against people he dislikes by harming them with sharp stone and scaring them with fire.
Thus, I propose that fire and sharp stone be tightly controlled by a regulatory council, and, if need be, that fire and sharp stone be outlawed or severely limited, and that no further developments towards bigger fire, sharper stone, and the wheel be undertaken for now, until we can limit the harmful effects of fire and stone.
You can apply the technology-conservative argument to anything. I’m not saying the technology-conservative argument is bad; I can see it making sense in certain scenarios, though I would say it’s hard to apply (see industrial-era Japan and China). But masking technology-conservative opinions behind the veil of AI is just silly.