I doubt you’re “virtually the only true critic of the SIAI.”
But if you think I’m not much of a critic of SIAI/Yudkowsky, you’re right. Many of my posts have included minor criticisms, but that’s only because repeating the thousands of things on which I agree with Eliezer isn’t much valued here.
I actually messaged him to say that he can edit/delete any of my submissions he considers harmful without having to expect any protest from me. Does that look like I particularly disagree with him, or like I assign a high probability to him being Dr. Evil? I don’t, but it is a possibility, and it is widely ignored. To get provably friendly AI you’ll need provably friendly humans. If that isn’t possible, you’ll need oversight and transparency.
Smart people can be wrong.
Smart people can be evil.
People can appear smarter than they are.
That’s why I demand...
Third-party peer review of Yudkowsky’s work.
Oversight and transparency.
Progress reports, roadmaps and confirmable success.
> Not actually true.
Technically it isn’t, of course. But I don’t trust unfriendly humans not to show me a friendly AI design while actually implementing something else. What I meant is that you’ll need friendly humans so you don’t end up with some trickster who takes your money, and thirty years later you notice that all he has done is code a chatbot. There are a lot of reasons why the trustworthiness of the humans involved is important. Of course, provably friendly AI is provably friendly no matter who coded it.