I find the discussion of corporations as superintelligences somewhat persuasive. I understand why Eliezer and others do not consider them superintelligences, but it seems to me a question of degree; they could become self-improving in more and more respects and at no point would I expect a singularity or a world-takeover.
Ramez Naam discusses it here: http://rameznaam.com/2015/05/12/the-singularity-is-further-than-it-appears/
I also think the argument from diminishing returns is pretty reasonable: http://www.sphere-engineering.com/blog/the-singularity-is-not-coming.html
On the same note, but probably already widely known, Scott Aaronson on “The Singularity Is Far” (2008): http://www.scottaaronson.com/blog/?p=346
Here is another article arguing why we are nowhere near the singularity:
https://timdettmers.wordpress.com/2015/07/27/brain-vs-deep-learning-singularity/
And here is the corresponding thread on /r/machinelearning:
https://www.reddit.com/r/MachineLearning/comments/3eriyg/the_brain_vs_deep_learning_part_i_computational/
Now, that’s what I was looking for.