Test
Perhaps
I think this post would benefit from an abstract / summary / general conclusion that summarizes the main points and makes it easier to interact with. Usually I read a summary to get an idea of a post, then browse the main points and see if I’m interested enough to read on. Here, it’s hard to engage, because the writing is long and the questions it seems to deal with are nebulous.
How did you find LessWrong?
Do you still have any Mormon friends? Do you want to help them break away, do you think it's something they should do on their own, or is it immaterial to you whether they remain Mormon?
Do you think being a Mormon simply wasn't suited to you, or do you think it doesn't work as a way of life in general? How do you think your answer would differ 50 years ago versus today?
Did you have contact/ongoing relationships with other Mormon communities while you were there? What is the variation between people/communities? How devout/lax are different people and different communities?
How much access to the internet and the wider world did you have growing up? Were local/state/international events routinely brought up in small talk?
Well, someone was working on a similar-ish project recently, @Bruce Lewis with HowTruthful. Maybe you two can combine your ideas or settle on an amalgamation together.
If possible, please let us know how it goes a couple months from now!
So this is Sam Altman raising the 5-7 trillion, not OpenAI as an entity, right?
Could some kind of caustic gas, or the equivalent of a sandstorm be used to make drones not useful? I feel like large scale pellet spreads wouldn’t be too useful if the drones are armoured, but I don’t know too much about armour or how much piercing power you could get. I wonder if some kind of electric netting could be fired to mass electrocute a swarm, or maybe just regular netting that interferes with their blades. Spiderwebs from the sky?
Interesting post, although I feel like it would benefit from inline references. For most of the post it feels like you’re pulling your assertions out of nowhere, and only at the end do we get some links to some of the things you said. I understand time/effort constraints though.
I derive a lot of enjoyment from these posts, just walking through tidbits of materials science is very interesting. Please keep making them.
I think at its most interesting it looks like encrypting your actions and thought processes so that they look like noise or chaos to outside observers.
I would say value preservation and alignment of the human population. I think these are the hardest problems the human race faces, and the ones that would make the biggest difference if solved. You’re right, humanity is great at developing technology, but we’re very unaligned with respect to each other and are constantly losing value in some way or another.
If we could solve this problem without AGI, we wouldn’t need AGI. We could just develop whatever we want. But so far it seems like AGI is the only path for reliable alignment and avoiding Molochian issues.
I think what those other things do is help you reach that state more easily and reliably. It's like a ritual you do before the actual task to get yourself into the right frame of mind and form a better connection, similar to athletes having pre-game rituals.
Also yeah, I think it makes the boredom easier to manage and helps you slowly get into it, rather than being pushed into it without reference.
There are probably a lot of other hidden benefits too: most meditation practices have been optimized over hundreds of years, and the ones that survived are better than the alternatives for a reason.
I feel like it’s not very clear here what type of coordination is needed.
How strong does coordination need to become before we can start reaching take off levels? And how material does that coordination need to be?
Strong coordination, as I’m defining here, is about how powerfully the coordination constrains certain actions.
Material coordination, as I’m defining here, is about what level the coordination “software” is running on. Is it running on your self (i.e. it’s some kind of information coded into the algorithm that runs on your brain, examples being the trained beliefs in nihilism you refer to, or decision theories)? Is it running on your brain (i.e. Neuralink, some kind of BCI)? Is it running on your body, or on an official/digital identity? Is it running on a decentralized crypto protocol, or as contracts witnessed by a governing body?
The difficult part of coordination is taking action; deciding what to do is mostly solved through prediction markets, research, and good voting theory.
Rather than this Feeling Good app for patients, I’d be more interested in an app that lets people practice applying CBT techniques to patient case studies (or maybe even LLMs with specified traits), in order to improve their empathy and help them better understand people. If this could actually develop good therapists with great track records, then that would prove the claims made in this article and help produce better people.
I’m not sure it only applies to memory. I imagine that ancient philosophers had to do most of their thinking in their heads, without being able to clean it up by writing it out and rethinking it. They might have been better able to edit their thoughts in real time, and might have had stronger control over whether unreasonable or illogical thoughts and thought processes took over. In that sense, being illiterate might lend a mental stability and strength that people who rely on writing things out may lack.
Still, I think that the benefits of writing are too enormous to ignore, and it’s already entrenched into our systems. Reversing the change won’t give a competitive edge.
[Question] What is the minimum amount of time travel and resources needed to secure the future?
If compute is limited in the universe, we can expect that civilizations or agents with access to it will only run simulations strategically, unless running simulations is part of their value function. Simulations run because of a value function would probably be more prevalent, and would probably feature Spider-Man or other extreme phenomena.
However, we can’t discount being in one of those information-gathering simulations. If for some reason you needed to gather information from a universe, you’d want to keep everything as simple as possible, and only tune up the things you care about. That does seem very similar to our universe, with simple physical laws, no real evidence of extraterrestrial life, and dynamics that emerge from simple rules.
Also keep in mind that it’s possible that simulations are extremely expensive in some universes: when you think of the actually expensive simulations that humans run, it’s all physics and earth models on supercomputers.
Mostly though I think that using games as your reference class for the types of simulations a developed civilization would run is reductive and the truth is probably more complex.
It’s possible that with the dialogue written, a well prompted LLM could distill the rest. Especially if each section that was distilled could be linked back to the section in the dialogue it was distilled from.
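Something like the sketch below is what I'm imagining (Python; `call_llm` and the section-keyed dict are hypothetical stand-ins for whatever LLM client and dialogue format you actually use, not any particular API):

```python
# Sketch: distill each section of a dialogue with an LLM, keeping a pointer
# back to the source section so every distilled claim stays traceable.
from dataclasses import dataclass

@dataclass
class DistilledSection:
    source_id: str  # anchor/heading of the original dialogue section
    summary: str    # the LLM's distillation of that section

def call_llm(prompt: str) -> str:
    """Placeholder for a real completion call; swap in your own client."""
    raise NotImplementedError

def distill_dialogue(sections: dict[str, str]) -> list[DistilledSection]:
    distilled = []
    for source_id, text in sections.items():
        prompt = (
            "Distill the following dialogue section into its key claims, "
            "without adding anything not present in the text:\n\n" + text
        )
        distilled.append(DistilledSection(source_id, call_llm(prompt)))
    return distilled
```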
I like the idea, but as a form of social media it doesn’t seem very engaging, and as a single source of truth it seems strictly worse than, say, a wiki. Maybe look at Arbital, they seem to have been doing something similar. I also feel that dealing with complex sentences with lots of implications would be tough; there are many different premises that can lead to the same statement.
Personally I’d find it more interesting if each statement was decomposed into the premises and facts that make it up. This would allow tracing an opinion back to find the crux between your beliefs and someone else’s. I feel like that’s a use case that could live alongside conventional wikis, maybe even as an extension powered by LLMs that works on any highlighted text.
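To make that concrete, here's a toy sketch (Python, purely illustrative; the `Claim` structure and the assumption that both people decompose a statement into the same premises in the same order are my simplifications, not anything the project has proposed):

```python
# Each claim carries the premises it rests on, so two people's belief trees
# can be walked in parallel to find where they first diverge (the crux).
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    believed: bool
    premises: list["Claim"] = field(default_factory=list)

def find_crux(mine: Claim, theirs: Claim) -> tuple[Claim, Claim] | None:
    """Return the deepest premise pair where the two belief trees disagree."""
    for p_mine, p_theirs in zip(mine.premises, theirs.premises):
        deeper = find_crux(p_mine, p_theirs)
        if deeper is not None:
            return deeper
    if mine.believed != theirs.believed:
        return (mine, theirs)
    return None

# Example: the disagreement traces back to the shared premise, not the conclusion.
mine = Claim("We should ban X", True, [Claim("X causes harm", True)])
theirs = Claim("We should ban X", False, [Claim("X causes harm", False)])
crux = find_crux(mine, theirs)  # -> the "X causes harm" pair
```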
Love to see more work into truth-seeking though, good luck on the project!
I guess while we’re anthropomorphizing the universe, I’ll ask some crux-y questions I’ve reached.
If humanity builds a self-perpetuating hell, does the blame lie with humanity or the universe?
If humanity builds a perfect utopia, does the credit lie with humanity or the universe?
Frankly it seems to me like what’s fundamentally wrong with the universe is that it has conscious observers, when it needn’t have bothered with any to begin with.
If there’s something wrong with the universe, it’s probably humans who keep demanding so much of it.
Most universes are hostile to life, and at most would develop something like prokaryotes. That our universe enabled the creation of humans is a pretty great thing. Not only that, but we seem to be pretty early in the universal timespan, which means that we get a great view of the night sky and fewer chances of alien invasion. That’s not something we did ourselves, that’s something the universe we live in enabled. None of the systemic problems faced by humans today are caused by the universe, except maybe in the sense that the universe did not gift-wrap us solutions to NP problems, freedom from entropy, or baked-in moral values. Your example of genes points out that even our behavioral adaptations are things that we can thank the universe for.
If the problem is separation of the human from the universe, then I think a fair separation is “whatever the human can influence”. That’s a pretty big category though. Just right now, that includes things like geoengineering, space travel, gene therapy, society wide coordination mechanisms, extensive resource extraction. If we’re murdering each other, then I think that’s something eminently changeable by us.
The universe has done a pretty great job, and I think it’s time humans took a stab at it.
I think that most of the people who would take notes on LW posts are the same people who would benefit from, and may use, a general note taking system. A system like Obsidian or Notion or whatever would be used for a bunch of stuff, LW posts included. In that sense, I think it’s unlikely that they’d want a special way to note-take just for LW, when it’d probably be easier and more standardized to use their existing note taking system.
If you do end up going for it, an “Export Notes” feature would be nice, in an easily importable format.
The karma buttons are too small for actions that, in my experience, are performed far more often than clicking to listen to a post. It’s pretty easy to misclick.
Additionally, it’s unclear what the tags are, as they’re no longer right beside the post to indicate their relevance.