Retracting the comment because I have seen a couple of counterexamples, including myself!
Sune
The Alcor page has not been updated since 15th December 2022, when a person who died in August 2022 (along with later deaths) was added, so if he was signed up there, we should not expect it to be mentioned yet. For CI, the latest update was for a patient who died 29th February 2024, but I can’t see any indication of when that post was made.
My point is that potential parents often care about non-existent people: their potential kids. And once they bring those potential kids into existence, the kids might start caring about the next generation. Similarly, some people/minds will want to expand because that is what their company does, or because they would like the experience of exploring a new planet/solar system/galaxy, or the status of being the first to settle there.
Which non-existing person are you referring to?
Beyond a certain point, I doubt that the content of the additional minds will be interestingly novel.
Somehow people keep finding meaning in falling in love and starting a family, even though billions of people have already done that before. We also find meaning in pursuing careers very similar to what millions of people have done before, or in traveling to destinations that have been visited by millions of tourists. The more similar an activity is to something our ancestors did, the more meaningful it seems.
From the outside, all this looks grabby, but from the inside it feels meaningful.
There has been enough discussion about timelines that it doesn’t make sense to provide evidence about them in a post like this. Most people on this site have already formed views about timelines, and for many, these are much shorter than 30 years. Hopefully, readers of this site are ready to change their views if strong evidence in either direction appears, but I don’t think it is fair to expect a post like this to also include evidence about timelines.
There is a huge amount of computation going on in this story and, as far as I can tell, not even a single experiment. The ending hints that there might be some learning from the protagonist’s experience; at least, its story is told many times. But I would expect a lot more experimenting, for example with different probe designs and with how much posthumans like different possible negotiated results.
I can see that within the story it makes sense not to experiment with posthumans’ reactions to scenarios, since it might take a long time to send them to the frontier and since it might be possible to simulate them well (it’s not clear to me whether the posthumans are biological). I just wonder if this extreme focus on computation over experiments is a deliberate choice by the author, or a blind spot.
An alternative reason for building telescopes would be to receive updates and more efficient expansion strategies discovered after the probe was sent out.
How did this happen?! I guess not by rationalists directly trying to influence the pope? But I’m curious to know the process leading up to this.
What does respect mean in this case? That is a word I don’t really understand, and it seems to be a combination of many different concepts mixed together.
This is also just another way of saying “willing to be vulnerable” (from my answer below), or maybe “a decision to be vulnerable”. Many of these answers are just saying the same thing in different words.
My favourite definition of trust is “willingness to be vulnerable”, and I think this answers most of the questions in the post. For example, it explains why trust is a decision that can exist independently of your beliefs: if you think someone is genuinely on your side with probability 95%, you can choose to trust them, by doing something that benefits you in 95% of cases and hurts you in the 5% of cases, or you can decide not to, by taking actions that are better in the 5% of cases. Similarly for trusting a statement about the world.
I think this definition comes from psychology, but I have also found it useful when talking about trusted third parties in cryptography. In that case too, we don’t care about the probability that the third party is malicious; what matters is that you are vulnerable if and only if they are malicious.
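The decision in my comment above can be sketched as a comparison of expected values. The payoff numbers below are made up purely for illustration:

```python
# Illustrative sketch of trust as "willingness to be vulnerable".
# All payoff values are hypothetical, not from any particular situation.

p_ally = 0.95  # probability the other party is genuinely on your side

# Payoffs (arbitrary units) for each action under each possibility.
payoff = {
    "trust":    {"ally": 10, "adversary": -50},  # vulnerable if they defect
    "distrust": {"ally": 2,  "adversary": 0},    # safe, but forgoes the upside
}

def expected_value(action: str) -> float:
    """Expected payoff of an action given the probability the other is an ally."""
    return (p_ally * payoff[action]["ally"]
            + (1 - p_ally) * payoff[action]["adversary"])

for action in payoff:
    print(action, expected_value(action))
```

With these numbers, trusting has the higher expected value even though it exposes you to a loss in the 5% of cases; the decision to trust is exactly the decision to accept that vulnerability.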
whilst the Jews (usually) bought their land fair and square, the owners of the land were very rarely the ones who lived and worked on it.
I have heard this before but never understood what it meant. Did the people who worked the land respect the ownership of the previous owners, for example by paying rent or by being employed by them, but just not respect the sale? Or did the people who worked the land consider themselves the owners, or not have the same concept of ownership as we do today?
If someone accidentally uses “he” when they meant “she” or vice versa, when talking about a person whose gender they know, it is likely because the speaker’s first language does not distinguish between he and she. This includes Finnish, Estonian, Hungarian, some Turkic languages, and probably others. I haven’t actually tested this, but I noticed it with a Finnish speaker.
The heading of this question is misleading, but I assume I should answer the question and ignore the heading.
P(Global catastrophic risk) What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity?
You don’t really need the producers to be “idle”; you just have to ensure that if something important shows up, they are ready to work on it. Instead of having idle producers, you can have them work on lower-priority tasks. Has this also been modelled in queueing theory?
I have a question that tricks GPT-4, but if I post it I’m afraid it’s going to end up in the training data for GPT-5. I might post it once there is a GPT-n that solves it.
You can use ChatGPT 3.5 for free with chat history turned off. This way your chats should not be used as training data.
The corporate structure of OpenAI was set up as an answer to concerns (about AGI and control over AGIs) raised by rationalists. But I don’t think rationalists believed that this structure was a sufficient solution to the problem, any more than non-rationalists believed it. The rationalists I have spoken to were generally sceptical about OpenAI.
Why does the plot start at 3x3 instead of 2x2? Of course, it is not common to have games with only one choice, but for Chicken that is what you end up with when removing one option. You could even start the investigation at 2x1 options.