Haha meh... I don’t think you’re thinking big enough. There will always be ethicists and philosophers surrounding any great human endeavor who are not themselves technically proficient... certainly they should keep educating themselves lifelong, but if you’re not good at coding or math, you’re just never going to understand certain technical issues. So saying their effectiveness is nil without that understanding is just not understanding how humanity progresses on big issues. It’s always a balance of abstract and concrete thinkers... they must work together. The ones who dismiss the other side are most definitely going to be the losers, because that means dismissing about half of what you actually need to succeed. We need to respect those who think differently from us; we must literally feel desperately in need of them.
To deal effectively with a topic, you need to understand something about it.
If you want to be helpful as an ethicist working on driverless cars, it helps to understand the actual ethical issues involved instead of projecting your own unrelated ideas onto the problem.
Whether a driverless car should be allowed to violate traffic laws in pursuit of other goals, such as avoiding accidents, is an important ethical issue. Programmers have to decide how their cars behave, and regulators have to decide whether to allow companies to produce driverless cars that break the law.
Instead, ethicists who are too lazy to actually understand the subject matter pretend that the most important ethical issue with driverless cars is the trolley problem. That in turn ignores real-world effects, such as opening up the possibility of trolling driverless cars by pushing a baby stroller in front of them if they are predictably coded to do everything possible to avoid hitting the stroller.
To get back to AI safety: it’s not necessary to be able to code or do the math to understand current problems in AI safety. Most of what Nick Bostrom, for example, writes is philosophical in nature and not directly about math or programming.