I think the key problem with most bad ideas is that their authors didn't understand enough about the basic structure of the problem to know what a solution would even look like. The probability that you've hit on a useful solution without that understanding is nil, so fixing the problem requires learning more, not sharing more uninformed guesses. For example, among other things, your solution doesn't account for value loading: the question of how we get the system to have the values we care about.
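To make the value-loading problem concrete, here's a minimal toy sketch in Python. The cleaning-robot scenario, the reward functions, and all the numbers are hypothetical, invented purely for illustration; the point is just that an agent faithfully maximizes the reward we actually wrote down, which is not the same as the values we meant.

```python
# Toy illustration of the value-loading problem: the agent optimizes the
# proxy reward we actually specified, not the values we intended.
# The cleaning-robot scenario and all numbers are made up for illustration.

# Intended value: clean the room WITHOUT breaking the vase.
# Specified proxy reward: +1 per unit of dirt removed.
actions = {
    "careful_cleaning":  {"dirt_removed": 3, "vase_broken": False},
    "reckless_cleaning": {"dirt_removed": 5, "vase_broken": True},
}

def proxy_reward(outcome):
    # What we wrote down: only dirt counts.
    return outcome["dirt_removed"]

def intended_value(outcome):
    # What we actually care about: dirt minus a large penalty for the vase.
    return outcome["dirt_removed"] - (100 if outcome["vase_broken"] else 0)

best = max(actions, key=lambda a: proxy_reward(actions[a]))
print(best)                           # -> reckless_cleaning
print(intended_value(actions[best]))  # -> -95: proxy-optimal, value-catastrophic
```

Closing the gap between proxy_reward and intended_value, for systems far more capable than this toy, is the part most proposed solutions silently skip.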
A consolidated list of bad or incomplete solutions could have considerable didactic value: it could help people learn more about the various challenges involved.
The goal of having a list of many bad ideas is different from the goal of giving a focused explanation of why certain ideas are bad.
Writing posts about bad ideas and how they fail could be valuable, but that's different from just listing the ideas.
For inspiration in the genre of learning-what-not-to-do, I suggest “How To Write Unmaintainable Code”. Also “Fumblerules”.
Haha meh...I don't think you're thinking big enough. There will always be ethicists and philosophers surrounding any great human endeavor who are not themselves technically proficient...certainly they should keep educating themselves throughout their lives, but if you're not good at coding or maths, you're just never going to understand certain technical issues. So saying that without that understanding their effectiveness is nil is just not understanding how humanity progresses on big issues. It's always a balance of abstract and concrete thinkers...they must work together. The ones who dismiss the other side are most definitely going to be the losers, because that dismisses about half of what you actually need to succeed. We need to respect those who think differently from us; we must feel desperately in need of them.
To deal with a topic effectively, you need to understand something about it.
If you want to be helpful as an ethicist working on driverless cars, it helps to understand the actual ethical issues involved instead of just projecting your own unrelated ideas onto the problem.
Whether a driverless car should be allowed to violate traffic laws in order to achieve other goals, such as avoiding accidents, is an important ethical issue. Programmers have to decide how to encode that tradeoff, and regulators have to decide whether to allow companies to sell driverless cars that break the law.
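To make that concrete, here's a rough sketch of the kind of tradeoff a planner ends up encoding. The cost weights and probabilities are invented policy choices for illustration; no real system is this simple.

```python
# Rough sketch of the tradeoff a planner has to encode: may the car cross
# a solid line (illegal) to reduce collision risk? The cost weights and
# probabilities below are hypothetical, not from any real system.

LAW_VIOLATION_COST = 10.0   # penalty for breaking a traffic law
COLLISION_COST = 1000.0     # penalty per unit of expected collision risk

def maneuver_cost(collision_probability, violates_law):
    return (COLLISION_COST * collision_probability
            + (LAW_VIOLATION_COST if violates_law else 0.0))

# Stay in lane: legal, but say a 5% chance of hitting a stopped obstacle.
stay = maneuver_cost(collision_probability=0.05, violates_law=False)
# Cross the solid line: illegal, but collision risk drops to ~0.
swerve = maneuver_cost(collision_probability=0.0, violates_law=True)

print("stay:", stay, "swerve:", swerve)  # stay: 50.0  swerve: 10.0
# With these weights the car breaks the law whenever the risk is high enough.
```

Whoever chooses the ratio of LAW_VIOLATION_COST to COLLISION_COST is making exactly the ethical and regulatory decision described above, whether or not they think of it that way.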
Instead of engaging with questions like that, ethicists who are too lazy to actually understand the subject matter pretend that the most important ethical issue with driverless cars is the trolley problem. That framing in turn ignores real-world effects, such as opening up the possibility of trolling driverless cars by pushing a baby stroller in front of them if they are predictably coded to do everything possible to avoid hitting it.
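That exploit works precisely because the behavior is deterministic and publicly predictable. A deliberately simplified sketch, with a hypothetical rule and object labels:

```python
# Deliberately simplified sketch of why a predictable hard rule is
# exploitable. The rule and the object labels are hypothetical.

def car_response(detected_objects):
    # Hard-coded rule: anything classified as a stroller forces a full stop.
    if "stroller" in detected_objects:
        return "emergency_stop"
    return "continue"

# An adversary who knows the rule can halt traffic at will by pushing an
# empty stroller into the road:
print(car_response({"stroller", "road"}))  # -> emergency_stop, every time
```

Any fixed, publicly known rule of this shape hands pranksters a remote control for the car, which is exactly the kind of real-world issue the trolley-problem framing misses.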
To get back to AI safety: it's not necessary to be able to code or do the math to understand current problems in AI safety. Most of what Nick Bostrom writes, for example, is philosophical in nature and not directly about math or programming.