That’s too abstract, I have no idea what it is supposed to mean and how it is supposed to be used.
When I think about a good business idea, but end up doing nothing, I often later find out that someone else did it.
How would a language like this survive a change in ontology? You take a category and split it into 5 subcategories. What if two years later you find out that a sixth subcategory exists?
If you update the language, you would have to rewrite all existing texts. The problem would not be that they contain archaic words—it would be that all the words are still used, but now they mean something different.
Seemingly similar words (the same long word or sentence with one syllable prepended) will have wildly different meanings.
I think this article would be much better with many specific examples. (If that would make it too long, just split it into a series of articles.)
I agree. Any punishment in a system has the side effect of punishing you for using the system.
The second suggestion is an interesting one. It would probably work better if you had an AI watching you constantly and summarizing your daily activities. If doing some seemingly unimportant X predictably makes you more likely to do some desirable Y later, you want to know about it. But if you write your diary manually, there is a chance that you won’t notice X, or won’t consider it important enough to mention.
A summary, please?
Young men who make 9 figures by default get driven crazy, all checks and balances on them now gone.
I believe that I would have behaved quite responsibly; probably put all the money in index funds and live on the interest, and probably even keep a fake job (which would allow me as much work from home or vacation as I would need) and generally try to look inconspicuous. But I guess people this conservative usually don’t make 9 figures. (Too late for the experiment, though; I am not young anymore.)
I would like to be able to follow people without worrying about what it looks like.
Perhaps there should be two options: follow publicly (maybe called “share”) and follow privately.
Kelsey Piper discusses the administrative nightmare that is trying to use your home to do essentially anything in America.
I agree, but in the meanwhile, is there a way to outsource the bureaucratic part to someone? Like, if you want to set up a shop in your garage, you could just call such a service; they would tell you the changes you will most likely be required to make, and you could pay them to do the paperwork. So you would still need to spend money and wait for an uncertain outcome, but you wouldn’t have to deal with the paperwork yourself, so you could do something else while waiting.
Tantum has a mostly excellent thread about the difference between a rival and an enemy, or between positive-sum rivalry and competition versus zero-sum hostility
Seems related to the paradox of tolerance. If the reason to allow multiple competing opinions is that empirically it makes the society better on average, this does not need to be extended to opinions that quite predictably make the society worse. Tolerance is a means, not an end (the end is something like human flourishing), so there is no need to be absolutist about it.
And yet, even if some things are clearly harmful, it is difficult to draw the exact line, and often profitable to sacrifice to Moloch by getting closer to the line than your opponent.
Megan McArdle reminds us that Levels of Friction are required elements of many of civilization’s core systems, and without sufficient frictions, those systems break.
Yes, some things can be good if only a few people do them, but a disaster if too many people start doing them. This is difficult to communicate, because many people only think in the categories of “good” and “bad”, and require some consistent principle that if it is okay for 1 person to do something, it is also okay for 1 000 000 people to do the same thing.
It would probably be bad to say that 1 specific person is allowed to do X, but 999 999 other people are not. But it makes perfect sense to say that it is okay when 1 person does X, but the system will collapse when 1 000 000 people decide to do it.
I’m surprised we don’t have a word for the shift when the bids for your time goes above your supply for time vs before, it feels like a pretty fundamental life shift where it changes your default mode of operation.
“Annual income twenty pounds, annual expenditure nineteen pounds nineteen and six, result happiness. Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery.”—This, but about your free time.
cutting corners, lying, and cheating will get you ahead in the short run, and sometimes even in the long run, but tying your own fortunes to someone who behaves this way will go very badly for you.
The difference between being the one who cheats, and associating with someone who cheats: If you cheat, there are (to simplify it a lot) two possible outcomes: you win, or you lose. If you associate with the cheater, the outcome “he wins” still has a very large subset of “he wins, but betrays you”; so the “you win” part is very small.
I guess people overestimate their ability to make a win/win deal with an experienced cheater. They either assume some “honor among thieves” (if I help him scam those idiots, surely he will feel some gratitude), or rely on some kind of mutually assured destruction (if he tried to stab me in the back, I would turn against him and expose him, and he knows that, therefore he wouldn’t try).
But that doesn’t work. The former, because from his perspective, you are just another one in the long line of idiots to be scammed. The latter, because he is already planning this a few moves ahead of you, and probably already had some experience in the past, so when he finally turns against you, you will probably find yourself in some kind of trap, or you will find him immune against your attempts at revenge.
Acid rain is the classic example of a problem that was solved by coordination, thus proving that such coordination only solves imaginary problems. Many such cases.
Is there some (ethically horrible, but justifiable by long-term consequentialism) solution to this? For example, whenever you vaccinate children, always deny the vaccine to randomly selected 1%, so that some children keep dying, so that everyone knows that the disease is real and the vaccine necessary?
we have systematized VC-backed YC-style founders
Commoditize your complement.
Chinese TikTok claims to spill the tea on a bunch of ‘luxury’ brands producing their products in China, then slapping ‘Made in Italy’ style tags on them. I mean, everyone who is surprised raise your hand, that’s what I thought, but also why would the Chinese want to be talking about it if it was true?
Maybe they think it will make people more okay with buying Chinese stuff that doesn’t even pretend to be Italian, because they will realize they were already buying it anyway?
My experience with manipulators is that they understand what you want to hear, and they shamelessly tell you exactly that (even if it’s completely unrelated to the truth). They create some false sense of urgency, etc. When they succeed in making you arrive at the decision they wanted you to, they will keep reminding you that it was your decision if you try to change your mind later. Etc.
The part about telling you exactly what you want to hear gets more tricky when communicating with large groups, because you need to say the same words to everyone. One solution is to find out which words appeal to most people (some politicians secretly conduct polls, and then say what most people want to hear). Another solution is to speak in a sufficiently vague way that will make everyone think that you agree with them.
I could imagine an AI being superhuman at persuasion simply by having the capacity to analyze everyone’s opinions (by reading all their previous communication) and giving them tailored arguments, as opposed to delivering the same speech to everyone.
Imagine a politician spending 15 minutes talking to you in private, and basically agreeing with you on everything. Not agreeing in the sense “you said it, the politician said yes”, but in the sense of “the politician spontaneously keeps saying things that you believe are true and important”. You probably would be tempted to vote for him.
Then the politician would also publish some vague public message for everyone, but after having the private discussion you would be more likely to believe that the intended meaning of the message is what you want.
I use written English much more than spoken English, so I am probably wrong about the pronunciation of many words. I wonder if it would help to have software that would read each sentence aloud immediately after I finished writing it (because that’s when I still remember how I imagined it to sound).
EDIT: I put the previous paragraph in Google Translate, and luckily it was just as I imagined. But that probably only means that I am already familiar with frequent words, and may make lots of mistakes with rare ones.
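For what it’s worth, something like that is easy to prototype. Here is a minimal sketch (not a tool I actually use), assuming the pyttsx3 offline text-to-speech library; the interactive loop and the prompt text are just made up for illustration.

```python
# Minimal sketch: read back each sentence right after it is typed,
# using the pyttsx3 offline text-to-speech library.
import pyttsx3

engine = pyttsx3.init()

print("Type a sentence and press Enter (empty line to quit).")
while True:
    sentence = input("> ")
    if not sentence:
        break
    engine.say(sentence)   # queue the sentence
    engine.runAndWait()    # speak it aloud immediately
```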
This may be obvious, but you can’t learn a language by memorizing words only. You need to speak entire sentences in that language, to train your inner LLM. Maybe try an audiobook.
I guess I have a different model of DOGE. In my opinion, their actual purpose is to have a pretext to fire any government employee who might pose an obstacle to Trump. For example, suppose that you start investigating some illegal activity and… surprise!, the next day you and a dozen other randomly selected people are fired in the name of fighting bureaucracy and increasing government efficiency… and no one will pay attention to this, because too many things like that happen every day.
Just checking if I understood your argument: is the general point that an algorithm that can think about literally everything is simpler and therefore easier to make or evolve than an algorithm that can think about literally everything except for itself and how other agents perceive it?
I approximately see the context of your question, but I am not sure what exactly you are talking about. Maybe try something less abstract, more ELI5, with specific examples of what you mean (and of the adjacent concepts that you don’t mean)?
Is it about which forces direct an agent’s attention in the short term? Like, a human would do X because we have an instinct to do X, or because of a previous experience that doing X leads to pleasure, either immediately or in the longer term. And avoid Y, because of an innate aversion, or a previous experience that Y causes pain.
Seems to me that “genetics” is a different level of abstraction than “pleasure and pain”. If I try to disentangle this, it seems to me that humans
immediately act on a stimulus (including internal, such as “I just remembered that...”)
that is either a hardwired instinct, or learned, i.e. a reaction stored in memory
the memory is updated by things causing pleasant or painful experience (again, including internal experience, e.g. hearing something makes me feel bad, even if the stimulus itself is not painful)
both the instincts and the organization of memory are determined by the genes
which are formed by evolution.
Do you want a similar analysis for LLMs? Do you want to attempt to make a general analysis even for hypothetical AIs based on different principles?
Is the goal to know all the levels of “where we can intervene”? Something like: “we can train the AI, we can upvote or downvote its answers, we can directly edit its memory...”?
(I am not an expert on LLMs, so I can’t tell you more than the previous paragraph contains. I am just trying to figure out what it is that you are interested in. It seems to me that people already study the individual parts of that, but… are you looking for some kind of more general approach?)
The words don’t ring a bell. You don’t provide any explanation or reference, so I am unable to tell whether I am unfamiliar with the concept, or just know it under a different name (or no name at all).
In a more peaceful world, science advanced faster and the AI already killed us?
Some of the problems you mentioned could be solved by creating a wrapper around the AI.
Technologically that feels like taking a step back—instead of throwing everything at the AI and magically getting an answer, it means designing a (very high-level) algorithm. But yeah, taking a step back technologically is the usual response to dealing with limited resources.
For example, after each session, the algorithm could ask the AI to write a short summary. You could send the summary to the AI at the beginning of the new session, so it would kinda remember what happened recently, but also have enough short-term memory left for today.
Or in a separate chat, you could send the summaries of all previous sessions, and ask the AI to make some observations. Those would then be delivered to the main AI.
Timing could be solved by making the wrapper send a message automatically every 20 seconds, something like “M minutes and S seconds have passed since the last user input”. The AI would be instructed to respond with “WAIT” if it chooses to wait a little longer, and with text if it wants to say something.
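To make the wrapper idea concrete, here is a rough sketch in Python. The call_model function is a hypothetical placeholder for whatever chat API you would actually wrap, and the message format, the prompts, and the 20-second interval are just assumptions for illustration.

```python
import time

def call_model(messages):
    """Hypothetical placeholder: send a list of {"role", "content"} dicts
    to your chat API of choice and return the model's reply as a string."""
    raise NotImplementedError("plug in a real chat API here")

def run_session(user_turns, previous_summaries):
    """One session: start from summaries of earlier sessions, relay the user's
    messages, and end by asking the model to summarize the session."""
    messages = [{"role": "system",
                 "content": "Summaries of previous sessions:\n" + "\n".join(previous_summaries)}]
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply})
        print(reply)
    messages.append({"role": "user", "content": "Write a short summary of this session."})
    return call_model(messages)   # store it and prepend it next session

def heartbeat(messages, last_input_time, interval=20):
    """Every `interval` seconds, tell the model how long the user has been silent.
    The model is instructed (in its system prompt) to answer WAIT to stay silent."""
    while True:
        time.sleep(interval)
        elapsed = int(time.time() - last_input_time)
        messages.append({"role": "user",
                         "content": f"{elapsed // 60} minutes and {elapsed % 60} seconds "
                                    "have passed since the last user input."})
        reply = call_model(messages)
        if reply.strip() != "WAIT":
            print(reply)
```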
“AI fiction seems to be in the habit of being interesting only to the person who prompted it”
Most human fiction is only interesting to the human who wrote it. The popular stuff is but a tiny minority out of all that was ever written.
The first AI war will be in your computer
So it’s like a lottery where you can e.g. multiply your possible winnings by 2, at the cost of dividing your chance of winning by 3?
On average a bad move, but if you only look at the people who won the most, it seems like the right choice.
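A toy simulation of that point (all numbers made up for illustration): the riskier ticket has only 2/3 of the baseline expected value, yet the single luckiest player will usually come from the risky group.

```python
import random

def simulate(n_players, p_win, prize, rounds=10):
    """Each player plays `rounds` independent lotteries; return total winnings per player."""
    return [sum(prize for _ in range(rounds) if random.random() < p_win)
            for _ in range(n_players)]

random.seed(0)
safe  = simulate(100_000, p_win=0.30, prize=1)  # baseline ticket
risky = simulate(100_000, p_win=0.10, prize=2)  # winnings x2, chance of winning /3

# On average the risky ticket is worse (expected value per round: 0.30 vs 0.20)...
print(sum(safe) / len(safe), sum(risky) / len(risky))
# ...but the biggest individual winner is usually someone who picked the risky ticket.
print(max(safe), max(risky))
```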
Yes, I think examples could make this much clearer.