Great post! I loved the comprehensive breakdown and feel much more up to date.
Thanks!
The Sam Bankman-Fried section reads differently now that his massive fraud with FTX is public; it might be worth a comment or revision?
I can’t help but see Sam disagreeing with a message as a positive for the message (I know it’s a fallacy, but the feeling’s still there).
From my perspective, you nailed the emotional vibe dead on. It’s what I would’ve needed to hear (if I’d had the mental resources to process the warning properly before having a breakdown).
Thank you for writing this, Valentine. It is an important message and I am really glad someone is saying it.
I first got engaged with the community when I was in vulnerable life circumstances, and suffered major clinical distress fixated on many of the ideas I encountered here.
To be clear, I am not saying rationalist culture was the cause of my distress; it was not. I am sharing my subjective experience that when you are silently screaming in internal agony, some of the ideas in this community can serve as a catalyst for a psychotic breakdown.
Assertion: An Ideal Agent never pays people to lie to them.
What if an agent has built a lie detector and wants to test it out? I expect that’s a circumstance where you want someone to lie to you consistently and on demand. What’s the core real-world situation you are trying to address here?
we shouldn’t pretend it’s harder than it is by acting like we don’t get to choose our dragon.
I fully agree. I believe it would be good to ‘roll out a carpet’ for the simplest friendly AI, assuming such a thing were possible, to increase the odds of a friendly pick from any potential selection of candidate AIs.
I feel like the XKCD nails it: give the boxed AI an opportunity to communicate without making non-consensual changes to its environment, because, you know, it lives there.
Thanks for the feedback!
I’ll see if my random idea can be formalised in such a way as to constitute a (hard) test of cognition that is satisfying to humans.
The lack of falsification criteria for AGI (unresearched rant)
Situation: Lots of people are talking about AGI and AGI safety, but nobody can point to one. This is a Serious Problem, and a sign that you are confused.
Problem:
Currently proposed AGI tests are ad-hoc nonsense (https://intelligence.org/2013/08/11/what-is-agi/)
Historically, when these tests are passed the goalposts are shifted (the Turing test was passed by fooling humans, which is incredibly subjective and relatively easy).
Solution:
A robust and scalable test of abstract cognitive ability.
A test that could be passed by a friendly AI in such a way as to communicate co-operative intent, without all the humans freaking out.
Would anyone be interested in such a test so that we can detect the subject of our study?
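To make ‘robust and scalable’ a bit more concrete, here is a rough sketch in Python (entirely my own illustration; the toy task and every name in it are placeholders, not a worked-out proposal) of a test family parameterised by difficulty, with an explicit pass criterion that can actually be failed:

```python
# Hypothetical sketch: a difficulty-parameterised task family with an explicit,
# falsifiable pass threshold. The placeholder task (reversing a digit sequence)
# stands in for whatever would actually probe abstract cognition.
import random
from typing import Callable, Sequence


def make_task(difficulty: int, rng: random.Random) -> tuple[list[int], list[int]]:
    """Generate a (prompt, expected_answer) pair; higher difficulty means a longer sequence."""
    prompt = [rng.randint(0, 9) for _ in range(difficulty)]
    return prompt, list(reversed(prompt))


def passes_level(agent: Callable[[Sequence[int]], Sequence[int]],
                 difficulty: int,
                 trials: int = 100,
                 threshold: float = 0.95,
                 seed: int = 0) -> bool:
    """Falsification criterion: the agent must solve at least `threshold` of the trials."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        prompt, expected = make_task(difficulty, rng)
        if list(agent(prompt)) == expected:
            correct += 1
    return correct / trials >= threshold


if __name__ == "__main__":
    # A trivial 'agent' that just reverses its input passes every level,
    # which is exactly why the hard part is picking the task family, not the harness.
    def reverse_agent(prompt: Sequence[int]) -> list[int]:
        return list(reversed(prompt))

    print([passes_level(reverse_agent, d) for d in (4, 16, 64)])
```

The point of the sketch is only that the pass/fail criterion can be stated precisely enough to shift the argument from ‘does this count as AGI?’ to ‘which task families belong in the battery?’.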
Review: If I wanted an opinion piece, I would turn on the TV.
I had a browse of your video selection and there is an overwhelming focus on American conservatism.
I didn’t want to judge blindly, so I watched the one on functional institutions.
Recap:
Personal assertions without research or evidence.
Institutions are bad/broken. No suggestions on resolution, no comparative analysis.
Fixation on the smartest people and self-bootstrapping.
I expected better from the Less Wrong community. I want to see analysis backed up by evidence.
I often share the feeling you have; I believe it’s best characterised as ‘fear/terror/panic’ of the unknown.
Some undefined stuff is going to happen which may be scary, but there’s no reason to think it will specifically be death rather than something else.