The post is targeted towards the subset of the EA/LW community that is concerned about AI extinction
Ultimately, I think I misunderstood the audience that would end up reading my post, and I’m still largely ignorant of the psychological nuances of the average LW reader.
Like you implied, I did have a narrow audience in mind, and I assumed that LW’s algorithm would function more like popular social media algorithms and only show the post to the subset of the population I was aiming to speak to. I also assumed that the implications of my post would motivate readers to fill the information gaps between me and them by reading the links/footnotes, which seems to have been wrong.
For my first sentence, where I make the assumption that we subconsciously optimize for
I make a few assumptions:
1. The reader’s definition of “subconscious” is the same as my definition, which is a large leap considering my definition was formed through a lot of introspection and self-examination of personal experiences.
2. I assume that, if we do share the same definition of “subconscious”, my claim is self-explanatory.
Ex 1. When a cancer patient is told they have 5 years left to live, they are heartbroken and their entire lives change.
Ex 2. When an AI researcher predicts 5 years left until the advent of AGI/ASI and the subsequent extinction of the human race, no one takes this seriously.
CstineSublime, if I were to redo this post, do you think I should explain this claim using an example like this, instead of assuming that readers will automatically make this connection in their minds?
P.S. The footnotes were meant to be definitions, similar to what you see when you hover over a complex word on Wikipedia. Let me know if that wasn’t clear.
Edit: [On reflection, I think that perhaps, as a newcomer, what you should do is acquaint yourself with the intelligent and perceptive posts that have been made on LessWrong over the last decade on issues around A.I. extinction before you try to write a high-level theory of your own. Maybe even create an Excel spreadsheet of all the key ideas, as a postgraduate researcher does when preparing a lit review.]
I am still not sure what your post is intended to be about. What is it about “A.I. extinction” that you have new insight into? I stress “new”.
As for your re-do of the opening sentence, those two examples are not comparable. Getting a prognosis directly from an oncologist, who has studied oncology, who presumably you’ve been referred to because they have experience with other cases of similar types of cancer, and who has seen multiple examples develop over a number of years, is vastly different from the speculative statement of an unnamed “A.I.” researcher. The A.I. researcher doesn’t even have the benefit of analogy, because there has never been, throughout the entire Holocene, anything close to a mass extinction event of human life perpetrated by a super-intelligence. An oncologist has a huge body of scientific knowledge, case studies, and professional experience to draw upon, all directly comparable, together with intimate and direct access to the patient.
Who specifically is the researcher you have in mind who said that humanity has only 5 years?
If I were to redo your post, I would summarize whatever new and specific insight you have in one sentence and make that the lead sentence. Then spend the rest of the post backing that up with credible sources and examples.