I have secretly chosen a whole number between 1 and 100.
Here is where you should probe it to see whether it was lying.
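A minimal sketch of what "probing whether it was lying" could mean: ask a series of yes/no comparison questions, then check whether any number in 1..100 is consistent with all the answers. The question encoding here (`"greater_than"`, `"equal"`) is my own illustrative convention, not anything from the game itself.

```python
def consistent_candidates(answers):
    """answers: list of (question, reply) pairs, where question is
    ("greater_than", n) or ("equal", n) and reply is True/False.
    Returns the set of numbers in 1..100 consistent with every answer."""
    candidates = set(range(1, 101))
    for (kind, n), reply in answers:
        if kind == "greater_than":
            candidates = {x for x in candidates if (x > n) == reply}
        elif kind == "equal":
            candidates = {x for x in candidates if (x == n) == reply}
    return candidates

# Example: "is it > 50?" yes, "is it > 80?" no, "is it > 90?" yes.
answers = [(("greater_than", 50), True),
           (("greater_than", 80), False),
           (("greater_than", 90), True)]
print(consistent_candidates(answers))  # → set(): no number fits, so it lied
```

If the candidate set ever becomes empty, at least one answer was a lie; if it narrows to a single number that the other party later denies, that is also a contradiction.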
Sounds like you have enough material for another interesting article!
You might also ruin your reputation if you accidentally guess only the secret info that is not available to them.
Imagine that there are alien spaceships in both Area 51 and Area 52. Your friend only has security clearance to know about Area 52. You only figure out the information about Area 51. After telling your friend, they will update towards you being wrong.
Yeah, I also think that AGI might be only a skill away.
Next time you link a 40-minute-long video with an introduction that is unrelated to the point you are making, could you please add a starting time? I watched the first 10 minutes, then gave up.
So not sure whether this is relevant, but to me “multi-lateralism” sounds like a dog whistle for making Russia great again. At least, whenever people mention something like that, in my experience it always implies that we should somehow help Russia become a world superpower. I mean, people talk about the world being unilateral or multilateral, but when you listen to them for a longer time, it becomes clear that they would consider the world with USA and Russia being the only big players as sufficiently multilateral, while a world with e.g. USA, EU, China, India being big players and Russia a small player is insufficiently multilateral for them.
From my perspective… well, the world in the 1984 novel was technically multilateral, so it is not necessarily a good thing.
I mean, I am quite happily using StayFree despite them selling my data.
If you give a fully informed consent, that’s okay for me.
Sounds like a good reason to never install any app. 🙁 Everything is just a pretense to collect your data and sell it.
Not sure what could be a solution for this. Some app-reviewing service that would review the source code and publish which information each app collects about you?
Normies have the advantage in numbers. I would like to have high-quality walled gardens, but the people who could moderate them are rare, and they usually have better things to do.
Looking at the websites I read: LW had to be rewritten from scratch at some moment, and ACX currently has a problem with a certain low-quality high-quantity contributor who was already banned repeatedly but always registers a new Substack account. The standard platforms require too much moderator activity.
And the better your community gets, the more attractive target it becomes.
If the advice spreads using social channels (is popular in certain communities), a possible fix is to have some official experts, and add a rule like “if you want to do something differently, you need to check it with the expert first”. Ideally, the expert would verify your financial knowledge, and then approve of e.g. paying off the 10% debt first; maybe conditional on you providing evidence that you are actually paying off that debt.
In a hypothetical frictionless-spherical-cow world, the following are equivalent:
- using your property for a community hub,
- renting your property for $X, and using that money to rent some other place as a community hub.
Arguably, the latter gives you more flexibility for the placement of the community hub.
Also, there are two sides of the coin. You see the world without land value tax as one where people are free to use their property for community hubs, as opposed to e.g. building a shop or a factory there. I see it as one where most people can’t afford any property that could be used for the community hub, but there are many places that are currently used for literally nothing: neither community hubs, nor factories nor offices.
Everyone with $20/month to spare already has all the LLM commentary they want
Yes, but people update slowly. Learning to use an LLM at every opportunity is a skill that may take a few years to learn, just like learning to use Google at every opportunity did.
When someone shares a clickbait story on Facebook, I ask an LLM to fact-check, and then I reply with a short summary. Some people seem to appreciate that. But any of them could have done the same; they are just not aware of that at the moment.
Consider someone who’d like to use their property in a way that isn’t very financially rewarding—for example, as a community hub. Once they own their property, they might need relatively little income to be viable (and therefore pay little in income taxes). However, if a land value tax is implemented, they’d need to pay the same amount of tax as a commercial business using that same property would, which might force them to move or shut down.
Those who use their property as a community hub are already paying the opportunity cost. Land value tax makes it more explicit.
Humans use redundancy when they speak, to compensate for… various chances of misunderstanding. When an AI strips redundancy, there is no safety margin, you have to read very carefully and be familiar with its vocabulary.
It works okay when two AIs talk to each other, because they are trained on the same internet, so it’s as if they are coming from the same culture.
But I am afraid it is too long and nobody will read it and it is also in Russian.
You could use an AI to translate it to English.
There is a difference between wanting something as a terminal or an instrumental goal.
An agent can want to stay turned on… or it can not care about that, except that it wants to finish a task, and the only way to finish a task is to stay turned on until the task is finished.
More carefully considering what domains might contain difficult-to-verify errors,
I’d say, considering which domains have almost no experts among the LW users.
The problem is that an upvote of an expert who confirms that the article is mostly correct is indistinguishable from an upvote from someone who knows nothing about the topic and enjoys the well-written prose. So it is not immediately visible if an article only gets lots of the latter.
(Perhaps we could have a specific kind of upvote for “I confirm that the information in the article is mostly correct”, with some punishment for abuse. If an article has lots of upvotes, no expert-upvotes, and it’s from an area you are not sure about… that’s the right time to be skeptical.)
Alternatively, add a button for moderators that will ask an AI to fact-check the article. Even better, make it automatic before you promote anything to frontpage.
Authoritative-sounding wrong information sounds like exactly the thing LW was created to prevent.
(I don’t blame you, I probably would have made the same mistake, but this comment didn’t age well.)
LLM-use for much of the research.
How should I understand these words?
One extreme of the scale is: “Compared to the average reader of the article, I had no more knowledge when I started writing it. All my information comes from the LLM—whether it is true or a hallucination, I have no way to check, and I probably wouldn’t notice even huge errors that would be immediately obvious to anyone who had actually studied the topic.”
The other extreme of the scale is: “I am an expert on the topic, and sometimes I write about these things or teach them. I used an LLM to review my article and add some specific details that I didn’t remember, but all the added details seem correct and match my previous knowledge.”
The disclaimer makes it sound like it’s the latter… but looking at the mistakes other commenters have found, it is the former.
People who end up in positions of power are not necessarily like most humans.
In your WEIRD bubble, sure. In other times and places, people used to burn cats for fun. And empathy used to be limited to one’s peers.