I’m new to LW. Why was this post downvoted? How can I make better posts in the future? https://www.lesswrong.com/posts/n7Fa63ZgHDH8zcw9d/we-can-survive
I can’t speak for the community, but after glancing through your entire post I can’t be sure just what it is about. The closest you come to explaining it is near the end, where you promise to present a “high-level theory on the functional realities” that seems to relate to everything from increased military spending, to someone accidentally creating a virus in a lab that wipes out humanity, to combating cognitive bias. But what is your theory?
Your post also makes a number of generalized assumptions about the reader and human nature, and invokes the pronoun “we” far too many times. I’m a hypocrite for pointing that out, because I tend to do it as well. But the problem is that unless you have a very narrow audience in mind, especially a community that you are native to and know intimately, you run the risk of making assumptions or statements that readers will at best be confused by, and at worst will get defensive about being included in.
Most of your assumptions aren’t backed up by specific examples or citations to research. For example, in your first sentence you say that we subconsciously optimize for there being no major societal changes precipitated by technology. You don’t back this up. The very existence of gold bugs, I would argue, proves there is a huge contingent of people who invest real money based precisely on the fact that they can’t anticipate what major economic changes future technologies might bring. There are currently billions of dollars being spent by firms like Apple, Google, even JP Morgan Chase on A.I. assistants, in anticipation of a major change.
I could go through all these general assumptions one by one, but there are too many for it to be worth my while. Not only that, most of the footnotes you use don’t refer to any concepts or observations which are particularly new or alien. The Pareto principle, the Compound Effect, Rumsfeld’s epistemology… I would expect your average LessWrong reader to be very familiar with these; they present no new insights.
The post is targeted towards the subset of the EA/LW community that is concerned about AI extinction.
Ultimately, I think I misunderstood the audience that would end up reading my post, and I’m still largely ignorant of the psychological nuances of the average LW reader.
Like you implied, I did have a narrow audience in mind, and I assumed that LW’s algorithm would function more like popular social media algorithms and only show the post to the subset of the population I was aiming to speak to. I also assumed that the implications of my post would motivate readers to fill the information gaps between us by reading the links and footnotes, which seems to have been wrong.
For my first sentence, where I claim that we subconsciously optimize for there being no major societal changes, I make a few assumptions:
1. That the reader’s definition of “subconscious” is the same as mine, which is a large leap considering my definition was formed through a lot of introspection and self-examination of personal experiences.
2. That if we do share the same definition of “subconscious”, my claim is self-explanatory.
Ex 1. When a cancer patient is told they have 5 years left to live, they are heartbroken and their entire life changes.
Ex 2. When an AI researcher predicts 5 years left until the advent of AGI/ASI and the subsequent extinction of the human race, no one takes this seriously.
CstineSublime, if I were to redo this post, do you think I should explain this claim using an example like this instead of assuming that readers will automatically make this connection in their minds?
P.S. The footnotes were meant to be definitions, similar to what you see when you hover over a complex word on Wikipedia. Let me know if that wasn’t clear.
Edit: [On reflection, I think perhaps as a newcomer what you should do is acquaint yourself with the other intelligent and perceptive posts that have been made over the last decade on LessWrong on issues around A.I. extinction before you try to write a high-level theory of your own. Maybe even create an Excel spreadsheet of all the key ideas, as a postgraduate researcher does when preparing for their lit review.]
I am still not sure what your post is intended to be about. What is it about “A.I. extinction” that you have new insight into? I stress “new”.
As for your re-do of the opening sentence, those two examples are not comparable. Getting a prognosis directly from an oncologist, who has studied oncology, who presumably you’ve been referred to because they have experience with other cases of similar types of cancer, and who has seen multiple examples develop over a number of years, is vastly different from the speculative statement of an unnamed A.I. researcher. The A.I. researcher doesn’t even have the benefit of analogy, because there has never been, throughout the entire Holocene, anything close to a mass extinction event of human life perpetrated by a super-intelligence. An oncologist has a huge body of scientific knowledge, case studies, and professional experience to draw upon which is directly comparable, together with intimate and direct access to the patient.
Who specifically is the researcher you have in mind who said that humanity has only 5 years?
If I were to redo your post, I would summarize whatever new and specific insight you have in one sentence and make that the lead sentence. Then spend the rest of the post backing that up with credible sources and examples.
Personally, I feel like every two weeks or so we see this kind of post written in a style similar to Eliezer’s, so it feels repetitive. But I may be wrong; this is just my reaction after seeing that post.