Yes, it seems I read too fast.
It seems to be of French origin. The name is French, and French cuisine adopted it first. The main hypothesis for its invention is that Richelieu’s cook improvised it for lack of alternative ingredients while occupying the city of Mahon in Spain. Source: same as yours.
‘Chef’ just means ‘chief’ in French (like the military rank, or the man in charge) and comes from the brigade system (https://en.wikipedia.org/wiki/Brigade_de_cuisine)
In addition, in the context of cooking, chef means “cook”, and it’s common to call the cook “chef”, even if it’s your friend who’s making a barbecue. It has positive connotations, implying that the cook is skilled.
That could also explain why French bakeries, with their staple and iconic baguette and croissant, seem to be faring better in my experience.
I can’t help but notice that if for you “nothing else could have happened than what happened”, then your definition of “could have happened” is so narrow as to become trivial.
Rather, I think that by “X could have happened in situation Y”, laypeople mean something like: “even with the knowledge of hindsight, in a situation that looks identical to situation Y on the parameters that matter, I could not rule out X happening”.
I was just curious and wanted to give you the occasion to expand your viewpoint. I didn’t downvote your comment btw.
In what ways?
My initial reaction to their arrival was “now this is dumb”. It just felt too different from the rest, and too unlikely to be taken seriously. But in hindsight, the suddenness and unlikelihood of their arrival work well with the final twist. It’s a nice dark comedic ending, and it puts the story in a larger perspective.
I think the bigger difference between humans and chimps is humans’ high prosociality. This is what allowed humans to evolve complex cultures that now carry a large part of our knowledge and intuitions. And the lack of that prosociality is the biggest obstacle to teaching chimps math.
I think I already replied to this when I wrote:
I think all the methods that aim at forcing the Gatekeeper to disconnect are against the spirit of the experiment.
I just don’t see how, in a real-life situation, disconnecting would equate to freeing the AI. The rule is artificially added to prevent cheap strategies from the Gatekeeper. In return, there’s nothing wrong with adding rules to prevent cheap strategies from the AI.
But economic growth does not necessarily mean better lives on average if there are also more humans to feed and shelter. In the current context, if you want more ideas, you’d get a better ROI by investing in education.
Unless humanity destroys itself first, something like Horizon Worlds will inevitably become a massive success. A digital world is better than the physical world because it lets us override the laws of physics. In a digital world, we can duplicate items at will, cover massive distances instantaneously, make crime literally impossible, and much, much more. A digital world is to the real world as Microsoft Word is to a sheet of paper. The digital version has too many advantages to count.
Either there will be limitations or there won’t. No limitations means that you can never be sure that someone in front of you is paying attention to you; your appearance indicates nothing but your whim of the moment; you cannot be useful to others by providing something they can’t get by themselves (art? AIs can make art). My first impression is that it will be very hard to build trust and intimacy in this environment. I expect loneliness and depression to rise as this technology is adopted.
But there will probably be limitations. Except that while in our world the limitations are arbitrary, in the Metaverse they will be decided by a private company and will probably enforce a plutocratic class system.
I see a flaw in the Tuxedage ruleset. The Gatekeeper has to stay engaged throughout the experiment, but the AI doesn’t. So the AI can bore the Gatekeeper to death by replying at random intervals. If I had to stare at a blank screen for 30 minutes waiting for a reply, I would concede.
Alternatively, the AI could just drown the Gatekeeper under a flurry of insults, graphic descriptions of violent/sexual nature, vacuous gossip, or a mix of these for the whole duration of the experiment. I think all the methods that aim at forcing the Gatekeeper to disconnect are against the spirit of the experiment.
I also see that the “AI player” provides all elements of the background. But the AI is also allowed to lie. There should be a way to separate the words of the AI player, when they’re establishing true facts about the setting, from the words of the AI, which is allowed to lie.
I’m interested, conditional on these issues being solved.
It comes with a cultural relativism claim: that another culture’s morality isn’t wrong, just in conflict with your morals. And this is also probably right.
How can this work? Cultures change. So which is morally right, the culture before the change, or the culture after the change?
I guess a reply could be “Before the change, the culture before the change is right. After the change, the culture after the change is right.” But in this view, “being morally right” carries no information. We cannot assess whether a culture deserves to be changed based on this view.
Thanks everyone :)
Initially, I was expecting a “no”, but being denied a reply is arguably a stronger rejection experience.
Finally, Willy finished his makeshift guide rope and lowered it to the rescuers.
Finally, Toni finished his makeshift guide rope and lowered it to the rescuers.
Great post!
...
So, Evie Cotrell, could you help me practice being rejected?
The AI only needs to escape. Once it’s out, it has leisure to design virtually infinite social experiments to refine its “human manipulation” skill: sending phishing emails, trying romantic interactions on dating apps, trying to create a popular cat-video YouTube channel without anyone guessing that it’s all deepfakes, and many more. Failing any of these would barely have any negative consequences.
Yes, but I don’t know if he really did it. I see multiple problems with that implementation. First, the interest rate should be adjusted for inflation; otherwise the bet is about a much larger class of events than “end of the world”.
Next, there’s a high risk that the “doom” bettor will have spent all their money by the time the bet expires. And the “survivor” bettor will never see their money anyway.
Finally, I don’t think it’s interesting to win if the world ends. I think what’s more interesting is rallying doubters before it’s too late, in order to marginally raise our chances of survival.
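To make the inflation point concrete, here’s a minimal sketch using the Fisher equation to convert a nominal interest rate into a real one. All the figures (stake, rates, horizon) are made-up assumptions for illustration, not terms of any actual bet.

```python
# Illustration: why an end-of-the-world bet should use real (inflation-adjusted)
# returns. All numbers below are hypothetical assumptions.

def real_rate(nominal: float, inflation: float) -> float:
    """Fisher equation: convert a nominal rate into a real rate."""
    return (1 + nominal) / (1 + inflation) - 1

def future_value(principal: float, rate: float, years: int) -> float:
    """Compound a principal at a fixed annual rate."""
    return principal * (1 + rate) ** years

stake = 1000.0      # assumed stake paid out today
nominal = 0.05      # assumed nominal interest rate
inflation = 0.03    # assumed inflation rate
years = 20          # assumed bet horizon

nominal_payoff = future_value(stake, nominal, years)
real_payoff = future_value(stake, real_rate(nominal, inflation), years)

print(f"nominal payoff after {years} years: {nominal_payoff:.2f}")
print(f"inflation-adjusted payoff:         {real_payoff:.2f}")
```

With these assumed numbers, the nominal payoff looks much larger than what it would actually buy; settling the bet in nominal terms partly bets on inflation rather than on doom.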
I’m not an expert, but assuming that by revolution you mean something close to “an attempt to change government through non-legal means”, then I agree with your points, but I’ll also note that revolt and revolution only partially overlap. Revolts are typically less organised and with more modest goals than a government overthrow. They are also mostly initiated and fueled by the resentment and desperation of a lower class.
My tentative model is “Starving peasants revolt. Kings don’t like revolts.” Not “Starving peasants lead successful revolutions.”
To take a modern-day example that I have experience with, the yellow vest movement in France was a revolt of the working poor outside big cities, because the rise in gas prices made their lives impossible in a context where they needed cars to work and purchase essential goods. They were leaderless and actually opposed attempts at vertical organisation. In their early stages, they would have been content with gas prices returning to their previous levels. Nonetheless, they were a thorn in the side of the government, and even a threat to it at some point.