Mati_Roy
Board game: Medium
2 players each reveal a card with a word, then they each need to say a word based on those cards and get points if they say the same word (basically; there are some more complexities).
Example at 1m20 here: https://youtu.be/yTCUIFCXRtw?si=fLvbeGiKwnaXecaX
I’m glad past Mati cast a wider net, as the specifics for this year’s Schelling day are different ☺️☺️
idk if the events are often going over time, but I might pass by now if it’s still happening ☺️
I liked reading your article; very interesting! 🙏
One point I figured I should x-post with our DMs 😊 --> IMO, if one cares about future lives (as much as present ones), then the question stops really being about expected lives and starts just being about whether an action increases or decreases x-risks. I think much or all of the tech you described also has some probability of causing an x-risk if it’s not implemented. I don’t think we can really determine whether the probability of some of those x-risks is low enough in absolute terms, as those probabilities would need to be unreasonably low, leading to full paralysis, and full paralysis could itself lead to x-risk. I think instead someone with those values (i.e. caring about unborn people) should compare the probability of x-risks if a tech gets developed vs. not developed (or whatever else is being evaluated). 🙂
new, great, complementary post: Critical Questions about Patient Care in Cryonics and Biostasis
I love this story so much, wow! It feels so incredibly tailored to me (because it is 😄). I value that a lot! It’s a very scarce resource to begin with, but it hardly gets more tailored than that 😄
that’s awesome; thanks for letting me know :)
i’d be curious to know how the first event went if you’re inclined to share ☺
cars won’t replace horses, horses with cars will
from ChatGPT
Sounds like a fun challenge! Here are 50 ways to send something to the moon:
1. Catapult
2. Giant crossbow
3. Balloon with an endless supply of helium
4. A giant seesaw
5. Sky elevator
6. Beam of light (if the object can ride light)
7. Teleportation device
8. Moon magnet
9. Whale blowhole
10. Bubble with a perfect vacuum inside
11. Tornado creator
12. Inflatable space tube
13. A jump by the Hulk
14. Sonic boom from a supersonic plane
15. Floating on a cloud machine
16. Warp drive
17. Ice cannon
18. Rocket rollercoaster
19. A super springboard
20. Fling via a giant trebuchet
21. Antigravity boots
22. Pneumatic tube system
23. Ride on a meteor
24. Build stairs to the moon
25. Giant hamster wheel that propels forward
26. Human pyramid
27. Bounce house to the moon
28. A moon hook and pull system
29. Jetpack
30. Superfast elevator
31. A gigantic yo-yo system
32. Umbrella carried by solar winds
33. Giant’s throw
34. Rocket sneakers
35. Bungee cord to the moon
36. Space swing set
37. Moon vacuum
38. Space surfboard
39. Massive drone
40. Launch via space trebuchet
41. Space pogo stick
42. Inflatable space mountain
43. Magnetic repulsion system
44. Time travel to when the moon was closer
45. Huge space slingshot
46. Giant space slinky
47. An extremely powerful fan
48. A chain of trampolines
49. Magic carpet
50. Use a giant’s bow and arrow
topics: AI, sociology
thought/hypothesis: when tech is able to create brains/bodies as good as or better than ours, it will change our perception of ourselves: we won’t be in a separate magisterium from our tools anymore. maybe people will see humans as less sacred, and value life less. if you’re constantly using, modifying, copying, deleting, and enslaving AI minds (even AI minds that have a human-like interface), maybe people will become more okay with doing that to human minds as well.
(which seems like it would be harmful for the purpose of reducing death)
I’m surprised this has this many upvotes. You’re taking the person who contributed the most to warning humanity about AI x-risks, and saying what you think they could have done better, in a way that comes across as blamey to me. If you’re blaming zir, you should probably blame everyone. I’d much rather you wrote about what people could have done in general rather than targeting one of the best contributors.
ok that’s fair yeah! thanks for your reply. I’m guessing a lot of those historical quotes are also taken out of context, actually.
you know those lists of historical examples of notable people mistakenly saying that some tech will not be useful (for example)?
Elon Musk saying that VR is just a TV on your nose will probably become one of those ^^
related concept: https://en.wikipedia.org/wiki/Information_panspermia
video on this that was posted ~15 hours ago: https://www.youtube.com/watch?v=K4Zghdqvxt4
idea: Stream all of humanity’s information through the cosmos in the hope that an alien civ reconstructs us (and defends us against an Earth-originating misaligned ASI)
I guess finding intelligent ETs would help with that as we could stream in a specific direction instead of having to broadcast the signal broadly
It could be that misaligned alien ASIs would mostly ignore our information (or at least not use it to, like, torture us), whereas friendly aligned ASIs would use it beneficially 🤷♀️
there remains a credible possibility that grabby aliens would benefit by sending a message that was carefully designed to only be detectable by civilizations at a certain level of technological development
oh wow, after reading this, I came up with the same explanation you wrote in the following 2 paragraphs just before reading them 😄
I really liked the story, and love that you made a video version! I think it was really well made!
I’m impressed by the AI voice!
I just suggested to AI Impacts to add this story to their story repository.
I suggest considering adding “Agentic Mess (A Failure Story)” to your list.
It was developed at the 8th AI Safety Camp in 2023.
You can see the text-version here: https://www.lesswrong.com/posts/LyJAFBuuEfd4kxgsw/agentic-mess-a-failure-story
You can see the video-version here: https://www.youtube.com/watch?app=desktop&v=6edrFdkCEUE
It starts pretty close to our current AI reality and explores the possibility of AI agents replicating and trying to improve in order to achieve their goals, and, as a result, propagating like a virus. The story explores the selection pressure that would create and the results it would have.
just a loose thought, probably obvious
some tree species self-selected for height (i.e. there’s no point in being a tall tree unless taller trees are blocking your sunlight)
humans were not the first species to self-select (for humans, the trait being intelligence) (although humans can now do it intentionally, which is a qualitatively different level of “self-selection”)
on human self-selection: https://www.researchgate.net/publication/309096532_Survival_of_the_Friendliest_Homo_sapiens_Evolved_via_Selection_for_Prosociality