For one thing, the two risks are interrelated. Money is the lifeblood of any organization (indeed, Eliezer regularly emphasizes the importance of funding to SIAI’s mission), so a major financial blow like this is a serious impediment to SIAI’s ability to do its job. If you seriously believe that SIAI is significantly reducing existential risk to mankind, then this theft represents an appreciable increase in existential risk to mankind.
Second, SIAI’s leadership is selling us an approach to thinking that is supposed to be of general application. That is, it’s not just for preventing AI-related disasters; it’s also supposed to make you better at achieving more mundane goals in life. I can’t say for sure without knowing the details, but if a small not-for-profit has $100k+ stolen from it, that very likely represents a failure of thinking on the part of the organization.
There is an area of thought without which we can’t get anywhere, but which is hard to teach, and that is generating good hypotheses. Once we have a hypothesis in hand, we can evaluate it. We can compare it to alternatives and use Bayes’ formula to update the probabilities that we assign to the rival hypotheses. We can evaluate arguments for and against the hypothesis by, for example, identifying biases and other fallacious reasoning. But none of this will get us where we want to go unless we happen to have selected the true hypothesis (or one “true enough” in a restricted domain, as Newton’s law of gravity is true enough in a restricted domain). If all you have to evaluate is a bunch of false hypotheses, then Bayesian updating is just going to move you from one false hypothesis to another. You need to have the true hypothesis among your candidate hypotheses if Bayesian updating is ever going to take you to it. Of course, this is a tall order given that, by assumption, you don’t know ahead of time what the truth is. However, the record of discoveries so far suggests that the human race doesn’t completely suck at it.
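The point about updating over a candidate set that omits the truth can be made concrete with a toy simulation. This is just an illustrative sketch with made-up numbers: a coin is actually biased 0.9 toward heads, but the only hypotheses on the table are “fair” and “tails-biased,” so Bayes’ rule can only redistribute belief among false hypotheses:

```python
# Toy illustration (hypothetical numbers): Bayesian updating over a
# candidate set that does NOT contain the true hypothesis.
import random

random.seed(0)

# P(heads | hypothesis) for each candidate. The TRUE bias, 0.9, is absent.
candidates = {"fair": 0.5, "tails-biased": 0.2}
posterior = {"fair": 0.5, "tails-biased": 0.5}  # uniform prior

# Generate flips from the true process (90% heads).
flips = [random.random() < 0.9 for _ in range(100)]

for heads in flips:
    # Bayes' rule: multiply each hypothesis by its likelihood, renormalize.
    for h, p_heads in candidates.items():
        posterior[h] *= p_heads if heads else 1 - p_heads
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}

# Belief piles up on the least-wrong candidate ("fair"), but updating can
# never reach the truth, because 0.9 was never among the candidates.
print(posterior)
```

Updating does its job perfectly here, and it still ends up certain of a falsehood; the failure happened earlier, at the hypothesis-generation step.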
Generating the true hypothesis (or indeed any hypothesis) involves, I am inclined to say, an act of imagination. For this reason, “use your imagination” is seriously good advice. People who are blindsided by reality sometimes say that they never imagined something like this could happen. I’ve been blindsided by reality precisely because I never considered certain possibilities, which consequently took me by surprise.
I agree. It reminds me of a fictional dialogue from a movie about the Apollo 1 disaster:
Clinton Anderson: [at the senate inquiry following the Apollo 1 fire] Colonel, what caused the fire? I’m not talking about wires and oxygen. It seems that some people think that NASA pressured North American to meet unrealistic and arbitrary deadlines and that in turn North American allowed safety to be compromised.
Frank Borman: I won’t deny there’s been pressure to meet deadlines, but safety has never been intentionally compromised.
Clinton Anderson: Then what caused the fire?
Frank Borman: A failure of imagination. We’ve always known there was the possibility of fire in a spacecraft. But the fear was that it would happen in space, when you’re 180 miles from terra firma and the nearest fire station. That was the worry. No one ever imagined it could happen on the ground. If anyone had thought of it, the test would’ve been classified as hazardous. But it wasn’t. We just didn’t think of it. Now whose fault is that? Well, it’s North American’s fault. It’s NASA’s fault. It’s the fault of every person who ever worked on Apollo. It’s my fault. I didn’t think the test was hazardous. No one did. I wish to God we had.