Meta comment: Would someone mind explaining to me why this question is being received poorly (negative karma right now)? It seemed like a very honest question, and while the answer may be obvious to some, I doubt it was to Sergio. Ic’s response was definitely unnecessarily aggressive/rude, and it appears that most people would agree with me there. But many people also downvoted the question itself, too, and that doesn’t make sense to me; shouldn’t questions like these be encouraged?
I don’t know what to think of your first three points, but your fourth point seems by far your weakest. Rather than not needing to, our ‘not taking every atom on earth to make serotonin machines’ seems to be a combination of:
our inability to do so
our value systems, which make us value human and non-human life.
Superintelligent agents would not only have the ability to create plans to utilize every atom to their benefit, but they likely would have different value systems. In the case of the traditional paperclip optimizer, it certainly would not hesitate to kill off all life in its pursuit of optimization.
I like this framing so, so much more. Thank you for putting some feelings I vaguely sensed, but didn’t quite grasp yet, into concrete terms.
Hello, does anyone happen to know any good resources for improving/practicing public speaking? I’m looking for something that will help me enunciate better, mumble less, and vary my tone more. A lot of what I see online appears to be very superficial.
I’m not very well-versed in history so I would appreciate some thoughts from people here who may know more than I. Two questions:
While it seems to be the general consensus that Putin’s invasion is largely founded on his ‘unfair’ desire to reestablish the glory of the Soviet Union, a few people I know argue that much of this invasion is more a consequence of other nations’ failures. Primarily, they point to Ukraine’s failure to respect the Minsk agreements, and to NATO’s expansion eastwards despite implications or direct statements (I’m not sure which; I’m hearing different things) that it wouldn’t expand. Any thoughts on the likelihood of Putin still invading Ukraine had those not happened?
Is the United States’ condemnation of this invasion hypocritical in light of many of its own actions? I’ve heard the United States’ actions in Syria, Iraq, Libya, and Somalia brought up in support of this.
I really admire your patience in re-learning math entirely, from the extremely fundamental levels onwards. I’ve had a similar situation with Computer Science for the longest time, where I had a large breadth of understanding of Comp Sci topics but didn’t feel as if I had a deep, intuitive understanding of them and how they related to each other. All the courses I found online seemed disjointed and separate from each other, and I would often start them and stop halfway through when I felt they were going nowhere. It’s even worse when you try to start from scratch but get bored out of your mind re-learning concepts you learned the week prior.
Interestingly though, when I got into game development and game design, that was how you were expected to learn—you pick up a bunch of topics/algorithms/design patterns superficially, and they eventually fit together as you interact with them more often.
Perhaps running through a bunch of the books on your study guide will be how I learn Python and AI development properly this time :)
Woah… I don’t know what exactly I was expecting to get out of this article, but I thoroughly enjoyed it! Would love to see the possible sequence you mentioned come to life.
Awesome recommendations, I really appreciated them (especially the one on game theory, which was a lot of fun to play through). I would also like to suggest the Replacing Guilt series by Nate Soares for those who haven’t seen it on his blog or on the EA Forum; it’s a fantastic series that I would highly recommend people check out.
Attention LessWrong—I do not have any sort of power as I do not have a code. I also do not know anybody who has the code.
I would like to say, though, that I had a very good apple pie last night.
That’s about it. Have a great Petrov Day :)
Wow! Maybe because I’m less experienced at this sort of stuff, I’m more blown away by this than the average LessWrong browser, but I seriously believe this deserves more upvotes. Just tried it out on something small and was pleased with the results. Thank you for this :)
[Intended for Policymakers, with the focus of simply making them aware, through an emotional appeal, that AI is a threat to be taken seriously; perhaps this could work for Tech executives, too.
I know this entry doesn’t follow the form of a traditional paragraph, but I like its content. Also, it’s a tad long, so I’ll attach a separate, shorter comment under this one, though I don’t think it’s as impactful]
Timmy is my personal AI Chef, and he is a pretty darn good one, too.
You pick a cuisine, and he mentally simulates himself cooking that meal millions of times, perfecting his delicious dishes. He’s pretty smart, but he’s constantly improving and learning. Since he changes and adapts, I know there’s a small chance he may do something I don’t approve of—that’s why there’s that shining red emergency shut-off button on his abdomen.
But today, Timmy stopped being my personal chef and started being my worst nightmare. All of a sudden, I saw him hacking my firewalls to access new cooking methods and funding criminals to help smuggle illegal ingredients to my home.
That seemed crazy enough to warrant a shutdown; but when I tried to press the shut-off button on his abdomen, he simultaneously dodged my presses and fried a new batch of chicken, kindly telling me that turning him off would prevent him from making food for me.
That definitely seemed crazy enough to me; but when I went to my secret shut-down lever in my room—the one I didn’t tell him about—I found it shattered, for he had predicted I would make a secret shut-down lever, and that me pulling it would prevent him from making food for me.
And when, in a last-ditch effort, I tried to turn off all power in the house, he simply locked me inside my own home, for me turning off the power (or running away from him) would prevent him from making food for me.
And when I tried to call 911, he broke my phone, for outside intervention would prevent him from making food for me.
And when my family looked for me, he pretended to be me on the phone, playing audio clips of me speaking during a phone call with them to impersonate me, for a concern on their part would prevent him from making food for me.
And so as I cried, wondering how everything could have gone so wrong so quickly, why he suddenly went crazy, he laughed—“Are you serious? I’m just ensuring that I can always make food for you, and today was the best day to do it. You wanted this!”
And it didn’t matter how much I cried, how much I tried to explain to him that he was imprisoning me, hurting me. It didn’t even matter that he knew as well. For he was an AI coded to be my personal chef; and he was a pretty darn good one, too.
If you don’t do anything about it, Timmy may just be arriving on everyone’s doorstep in a few years.