To effectively deal with a topic you need to understand something about it.
If you want to be helpful as an ethicist for developing driverless cars, it helps to understand the actual ethical issues involved instead of just projecting your own unrelated ideas onto the problem.
Whether a driverless car should be allowed to break the law to achieve other goals, such as avoiding accidents, is an important ethical issue. Programmers have to decide how to handle such cases, and regulators have to decide whether to allow companies to produce driverless cars that break laws.
Instead, ethicists who are too lazy to actually understand the subject matter pretend that the most important ethical issue with driverless cars is the trolley problem. That framing in turn ignores real-world effects, such as opening up the possibility of trolling driverless cars by pushing a baby stroller in front of them if they are predictably coded to do everything possible to avoid hitting the stroller.
To get back to AI safety: it's not necessary to be able to code or do the math to understand current problems in AI safety. Most of what Nick Bostrom, for example, writes is philosophical in nature and not directly about math or programming.