I am not sure I understand your question (sorry, I do not know what Yudkowsky’s DMs are).
I basically disclosed, to everyone, that the way we all think we think does work.
What kind of responsibility could that bear?
Sorry, I was being a bit flip/​insider-y. Probably inappropriately so.
I’m curious how much you’ve engaged with the AI Safety literature/​arguments?
“Yudkowsky’s DM” --> Eliezer Yudkowsky’s [Twitter] Direct Messages.
I think I have expressed my views on the matter of responsibility quite clearly in the conclusion.
I just looked Yudkowsky up on Google. He founded this website, so that’s good.
This is not the place to argue my views on super-intelligence, but I clearly side with Russell and Norvig. Life is just too complex; luckily.
As for safety, the title of Jessica Taylor’s article is:
“Quantilizers: A Safer Alternative to Maximizers for Limited Optimization”.
I will just be glad to have proved that alternative to be effective.