Thanks for commenting.
I didn’t include the contents of the linked doc because I thought it would make the post too long, and since it had a different main idea, I figured it would be better as two separate posts. I can’t do that because of the automatic rate-restriction, but maybe it would’ve been a better post if I had included the contents of the linked doc in the post itself.
I’m realizing that I packed an unusually large amount of information into a single post, only attempted to fill information gaps with links & footnotes that take a significant amount of time to read, and made little effort to give readers the motivation to read them.
In my next post, I’ll try to give readers a better reason to read, and I’ll be more thorough in clarifying my positions & claims.
I also re-read the comment you’re referring to as if someone else had written it, and I see what you mean. I edited it to “Currently in early phases, so forgive me for linking to a series of incomplete thoughts”. Hopefully that sets expectations low without appearing arrogant or condescending.
The post is targeted towards the subset of the EA/LW community that is concerned about extinction from AI.
Ultimately, I think I had a misunderstanding of the audience that would end up reading my post, and I’m still largely ignorant of the psychological nuances of the average LW reader.
As you implied, I did have a narrow audience in mind, and I assumed that LW’s algorithm would function more like popular social media algorithms and only show the post to the subset of the population I was aiming to speak to. I also assumed that the implications of my post would motivate readers to fill the information gaps between us by reading the links/footnotes, which seems to have been wrong.
For my first sentence, where I make the assumption that we subconsciously optimize for …, I make a few assumptions:
1. The reader’s definition of “subconscious” is the same as mine, which is a large leap considering my definition was formed through a lot of introspection & self-examination of personal experiences.
2. If we do share the same definition of “subconscious”, my claim is self-explanatory.
Ex 1. When a cancer patient is told they have 5 years left to live, they are heartbroken and their entire life changes.
Ex 2. When an AI researcher predicts 5 years left until the advent of AGI/ASI and the subsequent extinction of the human race, no one takes it seriously.
CstineSublime, if I were to redo this post, do you think I should explain this claim using an example like this instead of assuming that readers will automatically make the connection on their own?
P.S. The footnotes were meant to be definitions, similar to what you see when you hover over a complex word on Wikipedia. Let me know if that wasn’t clear.