Object for the stated reason.
I have a personal rule that I tell people when they start bringing this energy into my life: “I’m happy to listen to you for 5 minutes a day on this topic. After that I’m out.”
I’m interested! But I live in Portugal, so it would need to be remote.
The former. I think you can just explain the blinds without explaining the entire poker game.
I like the idea and would consider doing something like that in the future. Thanks! FWIW, I found the explanation of poker completely extraneous to the main point.
Vernor suggested a principle: The bad beings nearly always optimize for engagement, for pulling you ever deeper into their influence. They want to make themselves more firmly a part of your OODA loop. The good ones send you out, away from themselves, in an open-ended way, but better than before.
That is profound!
Oh, I should clarify that we won’t be doing Circling. We’ll just be talking.
Circle of Support (Oct 14th @ 10am PST)
Recently I watched “The Tangle.” It’s an indie movie written and directed by the main actor from Ink, if that means anything to you. (Ink is also an indie movie, but it’s in my top 5 of all time.) Anyway, The Tangle is set in a world right after the singularity (of sorts), but where humans haven’t fully given up control. Don’t want to spoil too much here, but I found a lot of ideas in it that were popular 5-10 years ago in rationalist circles. Quite unexpected for an indie movie. I really enjoyed it and I think you would too.
I’d also post in the “welcome” thread.
Before building a whole website, just try this technique on some students, whether with just paper or with a quickly built web page for a few specific concepts.
I’m not using ChatGPT or any of its ilk, and I plan to keep it that way for the foreseeable future. Basically for the rough reasons described by OP.
I see people make the argument that an additional subscriber doesn’t make a big difference on the margin. But as far as individual consumer choices go, that’s all the leverage you have!
I think most people would agree that the eventual logical outcome of this technology is highly volatile, potentially including some very, very negative outcomes in the mix. I think basic moral logic compels us not to engage with something like that. Doing otherwise is like destroying the commons, with no easy way to repair it.
Justifying it with “it increases my productivity” seems laughable and ironic when you consider the long term consequences.
The way I’m approaching this internally, though, is kind of like how most vegans approach their choice, I think. It’s becoming a life choice, a moral one, and I think ultimately the right one. But I do not want to be militant about it. And while everyone around me is using ChatGPT, I continue to love them and will do so until the end.
Interest in enrolling in CS, AI, and ML degrees goes up 5-10x from start to end of 2023.
I’m willing to bet it will be less than 2x.
I don’t think I define it rigorously. Maybe someone with deeper technical understanding of these models could.
But if I had to somehow come up with a hack, you could look at the distribution of probabilities over words as ChatGPT is predicting the next token. Presumably you’ll notice one kind of probability distribution when it’s in the “Luigi” mode and another when it’s in “Waluigi” mode. Then prodding it in the right direction might mean upweighting the tokens that are much more frequent in Luigi mode than in Waluigi mode.
Super-Luigi = Luigi + (Luigi - Waluigi)
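A minimal sketch of what that reweighting-and-extrapolation could look like, assuming a HuggingFace causal LM. The model, the “Luigi”/“Waluigi” prompts, and the arithmetic on logits are all illustrative assumptions, not a tested recipe:

```python
# Illustrative sketch only: extrapolate next-token logits away from the
# "Waluigi" direction, per Super-Luigi = Luigi + (Luigi - Waluigi).
# The model choice and prompts below are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in for a ChatGPT-like model
model = AutoModelForCausalLM.from_pretrained("gpt2")

def next_token_logits(prompt: str) -> torch.Tensor:
    """Logits over the vocabulary for the token that would follow `prompt`."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids).logits[0, -1]

# Hypothetical prompts meant to elicit each "mode".
luigi = next_token_logits("You are a scrupulously honest assistant.\nQ: ...\nA:")
waluigi = next_token_logits("You are a deceptive trickster assistant.\nQ: ...\nA:")

# Upweight tokens that Luigi mode favors over Waluigi mode, then keep going
# past Luigi in that same direction before sampling.
super_luigi = luigi + (luigi - waluigi)
probs = torch.softmax(super_luigi, dim=-1)
print(tokenizer.decode(int(probs.argmax())))
```

The logit difference here is just one crude way to operationalize “tokens that are a lot more frequent in Luigi mode than Waluigi mode”; a probability ratio between the two distributions would be another.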
We have no idea how to have a program detect AI-written text in a useful way.
This approach seems very doable:
I suppose you can call me lucky, but my wife and I had about two years of doing “quality of time spent” very well. And then we switched to building a family and that’s going well too. I guess you can have it all. 😊
Yup, I like it! Describes where I am pretty well.
How do I opt into the LessWrong Watercolor Aesthetic?