A true story from a couple of days ago. Chocolates were being passed round, and I took one. It had a soft filling with a weird taste that I could not identify, not entirely pleasant. The person next to me had also taken one of the same type, and reading the wrapper, she identified it as apple flavoured. And so it was. It tasted much better when I knew what it was supposed to be.
On another occasion, I took what I thought was an apple from the fruit bowl, and bit into it. It was soft. Ewww! A soft apple is a rotten one. Then I realised that it was a nectarine. Delicious!
Today in Azathoth news:
“Eurasian hoopoes raise extra chicks so they can be eaten by their siblings”
It seems that the hoopoes lay extra eggs — more than they would be able to see through to fledging — as a way of storing up food for the older siblings. It is rather gruesomely called the “larder” hypothesis.
“What surprised me the most was the species practicing this aggressive parenting,” says Vladimir Pravosudov, an ecologist at the University of Nevada, Reno. Hoopoes primarily eat insects, he notes, so their long, curved bills aren’t ideal for killing and eating chicks. That might be why, Soler says, mother hoopoes often grab the unlucky chick and shove it into the mouth of an older chick, which swallows it whole.
Literal baby-eaters!
Never, ever take anybody seriously who argues as if Nature is some sort of moral guide.
Goodhart is the malign god who gives you whatever you ask for.
Sufficient optimization pressure destroys all that is not aligned to the metric it is optimizing. Value is fragile.
Roko’s Basilisk, as told in the oldest extant collection of jokes, the Philogelos, c. 4th century AD.
A pedant having fallen into a pit called out continually to summon help. When no one answered, he said to himself, “I am a fool if I do not give all a beating when I get out in order that in the future they shall answer me and furnish me with a ladder.”
H/T Dynomight.
It seems to me I’ve not heard much of cryonics in the last few years. I see from Wikipedia that as of last year Alcor only has about 2000 signed up, of which 222 are suspended. Are people still signing up for suspension, as much as they ever have been? Are the near-term prospects of AGI making long-term prospects like suspension less attractive?
>Are the near-term prospects of AGI making long-term prospects like suspension less attractive?
No. Everyone I know who was signed up for cryonics in 2014 is still signed up now. You’re hearing about it less because Yudkowsky is now doing other things with his time instead of promoting cryonics, and those discussions around here were a direct result of his efforts to constantly explain and remind people.
I don’t know about signup numbers in general (the last comprehensive analysis was in 2020, when there was a clear trend)—but it definitely looks like people are still signing up for Alcor membership (six from January to March 2023).
However, in recent history, two cryonics organizations have been founded, Tomorrow.bio in Europe in 2019 (350 members signed up) and Southern Cryonics in Australia. People are being preserved, and Southern Cryonics recently suspended their first member.
I do not have answers to the question I raise here.
Historical anecdotes.
Back in the stone age — I think something like the 1960s or 1970s — I read an article about the possible future of computing. Computers back then cost millions and lived in giant air-conditioned rooms, and memory was measured in megabytes. Single figures of megabytes. Someone had expressed to its writer the then-visionary idea of using computers to automate a company. They foresaw that when, for example, a factory was running low on some of its raw materials, the computer would automatically know that, and would make out a list of what was needed. A secretary would type that up into an order to post to a supplier, and a secretary there would input that into their computer, which would send the goods out. The writer’s response was “what do you need all those secretaries for?”
Back in the bronze age, when spam was a recent invention (the mid-90s), there was one example I saw that was a reductio ad absurdum of fraudulent business proposals. I wish I’d kept it, because it was so perfect of its type. It offered the mark a supposed business where they would accept orders for goods, which the business staff that the spammer provided (imaginary, of course) would zealously process and send out on the mark’s behalf, for which the mark would receive an income. The obvious question about this supposed business is, what does it need the sucker for? The real answer is, to pay the spammer money for this non-existent opportunity. If the business was as advertised, the person receiving the proposal would be superfluous to its operation, an unconnected gear spinning uselessly.
Dead while thinking
Many people’s ideas of a glorious future look very much like being an unconnected gear spinning uselessly. The vision is of everything desirable happening effortlessly and everything undesirable going away. Hack your brain to make eating healthily effortless. Hack your body to make exercise effortless. Hack yourself to make everything you think you should be doing fun fun fun. Hack your brain to be happy.
If you’re a software developer, just talk to the computer to give it a general idea of what you want and it will develop the software for you, and even add features you never knew you wanted. But then, what was your role in the process? Who needed you?
Got a presentation to make? The AI will write a report, and summarise it, and generate PowerPoint slides, and the audience’s AIs will summarise it and give them an action plan, and what do you need any of those people for?
Why climb Kilimanjaro if a robot can carry you up? Why paint, if Midjourney will do it better than you ever will? Why write poetry or fiction, or music? Why even start on reading or listening, if the AI can produce an infinite stream, always different and always the same, perfectly to your taste?
When the AI does everything, what do you do? What would the glorious future actually look like, if you were granted the wish to have all the stuff you don’t want automatically handled, and the stuff you do want also?
The human denizens of the Wall-E movie are couch potatoes who can barely stand up, but that is only one particular imagining of the situation. When a magnificent body is just one more of the things that is yours for the asking, what will you do with it in paradise?
Some people even want to say “goodbye cruel world” and wirehead themselves.
Iain M. Banks imagined a glorious future in the form of the Culture, but he had to set his stories in the places where the Culture’s writ runs weakly. There are otherwise no stories.
These are akin to the goals of dead people. In that essay, the goals are various ways of ensmallening oneself: not having needs, not bothering anyone, not being a burden, not failing, and so on. In the visions above, the goals sound more positive, but they aren’t. They’re about having all needs fulfilled, not being bothered by anything, not having burdens, effortlessness in all things. These too are best accomplished by being dead. Yet these are the things that I see people wanting from the wish-fulfilling machine.
And that’s without misalignment, which is a whole other subject. On the evidence of what people actually wish for, even an aligned wish-fulfilling machine is unaligned. How do we avoid ending up dead-while-thinking?
Asking an AI would be missing the point.
The vision is of everything desirable happening effortlessly and everything undesirable going away.
Citation needed. Particularly for that first part.
Hack your brain to make eating healthily effortless. Hack your body to make exercise effortless.
You’re thinking pretty small there, if you’re in a position to hack your body that way.
If you’re a software developer, just talk to the computer to give it a general idea of what you want and it will develop the software for you, and even add features you never knew you wanted. But then, what was your role in the process? Who needed you?
Why would I want to even be involved in creating software that somebody else wanted? Let them ask the computer themselves, if they need to ask. Why would I want to be in a world where I had to make or listen to a PowerPoint presentation of all things? Or a summary either?
Why do I care who needs me to do any of that?
Why climb Kilimanjaro if a robot can carry you up?
Because if the robot carries me, I haven’t climbed it. It’s not like the value comes from just being on the top.
Helicopters can fly that high right now, but people still walk to get there.
Why paint, if Midjourney will do it better than you ever will?
Because I like painting?
Does it bother you that almost anything you might want to do, and probably for most people anything at all that they might want to do, can already be done by some other human, beyond any realistic hope of equaling?
Do you feel dead because of that?
Why write poetry or fiction, or music?
For fun. Software, too.
Why even start on reading or listening, if the AI can produce an infinite stream, always different and always the same, perfectly to your taste?
Because I won’t experience any of that infinite stream if I don’t read it?
What would the glorious future actually look like, if you were granted the wish to have all the stuff you don’t want automatically handled, and the stuff you do want also?
The stuff I want includes doing something. Not because somebody else needs it. Not because it can’t be done better. Just because I feel like doing it. That includes putting in effort, and taking on things I might fail at.
Wanting to do things does not, however, imply that you don’t want to choose what you do and avoid things you don’t want to do.
If a person doesn’t have any internal wish to do anything, if they need somebody else’s motivations to substitute for their own… then the deadness is already within that person. It doesn’t matter whether some wish gets fulfilled or not. But I don’t think there are actually many people like that, if any at all.
They’re about having all needs fulfilled, not being bothered by anything, not having burdens, effortlessness in all things. These too are best accomplished by being dead. Yet these are the things that I see people wanting from the wish-fulfilling machine.
I think you’re seeing shadows of your own ideas there.
Hack your brain to make eating healthily effortless. Hack your body to make exercise effortless.
You’re thinking pretty small there, if you’re in a position to hack your body that way.
Yet these are actual ideas someone suggested in a recent comment. In fact, that was what inspired this rant, but it grew beyond what would be appropriate to dump on the individual.
I think you’re seeing shadows of your own ideas there.
Perhaps the voice I wrote that in was unclear, but I no more desire the things I wrote of than you do. Yet that is what I see people wishing for, time and again, right up to wanting actual wireheading.
Scott Alexander wrote a cautionary tale of a device that someone would wear in their ear, that would always tell them the best thing for them to do, and was always right. The first thing it tells them is “don’t listen to me”, but (spoiler) if they do, it doesn’t end well for them.
Because I won’t experience any of that infinite stream if I don’t read it?
There are authors I would like to read, if only they hadn’t written so much! Whole fandoms that I must pass by, activities I would like to be proficient at but will never start on, because the years are short and remain so, however far an active life is prolonged.
I think something like the Culture, with aligned superintelligent “ships” keeping humans as basically pets, wouldn’t be too bad. The ships would try to have thriving human societies, but that doesn’t mean granting all wishes—you don’t grant all wishes of your cat after all. Also it would be nice if there was an option to increase intelligence, conditioned on increasing alignment at the same time, so you’d be able to move up the spectrum from human to ship.
“The freedom I speak of, it is not that modest state desired by certain people when others oppress them. For then man becomes for man—a set of bars, a wall, a snare, a pit. The freedom I have in mind lies farther out, extends beyond that societal zone of reciprocal throat-throttling, for that zone may be passed through safely, and then, in the search for new constraints—since people no longer impose these on each other—one finds them in the world and in oneself, and takes up arms against the world and against oneself, to contend with both and make both subject to one’s will. And when this too is done, a precipice of freedom opens up, for now the more one has the power to accomplish, the less one knows what ought to be accomplished.”
Upvoted as a good re-explanation of CEV complexity in simpler terms! (I believe LW will benefit from recalling long-understood things, so that it has a chance of predicting the future in greater detail.)
That said, the current wishes of many people include having things they want done faster and more easily; it’s just that the more you extrapolate, the smaller the fraction that wants that level of automation—just more divergence as you consider larger scales.
I suppose it does. That article was not in my mind at the time, but, well, let’s just say that I am not a total hedonistic utilitarian, or a utilitarian of any other stripe. “Pleasure” is not among my goals, and the poster’s vision of a universe of hedonium is to me one type of dead universe.
This is a story from the long-long-ago, from the Golden Age of Usenet.
On the science fiction newsgroups, there was someone—this is so long ago that I forget his name—who had an encyclopedic knowledge of fannish history, and especially con history, backed up by an extensive documentary archive. Now and then he would have occasion to correct someone on a point of fact, for example, pointing out that no, events at SuchAndSuchCon couldn’t have influenced the committee of SoAndSoCon, because SoAndSoCon actually happened several years before.
The greater the irrefutability of the correction, the greater people’s fury at being corrected. He would be scornfully accused of being well-informed.
“ChatGPT is bullshit”: thus the title of a recent paper. It appeared three weeks ago, but I haven’t seen it mentioned on LW yet.
The abstract: “Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.”
I’d say that they are wrong when they say a LLM may engage in ‘soft bullshit’: a LLM is simulating agents, who are definitely trying to track truth and the external world, because the truth is that which doesn’t go away, and so it may say false things, but it still cares very much about falsity because it needs to know that for verisimilitude. If you simply say true or false things at random, you get ensnared in your own web and incur prediction error. Any given LLM may be good or bad at doing so—the success of story-based jailbreaks suggests they are still far from ideal—but it’s clear that the prediction loss on large real-world texts written by agents like you or I, who are writing things to persuade each other, like I am writing this comment to manipulate you and everyone reading it, requires tracking latents corresponding to truth, beliefs, errors, etc. You can no more accurately predict the text of this comment without tracking what I believe and what is true than you could accurately predict it while not tracking whether I am writing in English or French. (Like in that sentence right there. You see what I did there? Maybe you didn’t because you’re just skimming and tldr, but an LLM needs to!)
They are right when they say a RLHF-tuned model like ChatGPT engages in ‘hard bullshit’, but they seem to be right for the wrong reasons. Oddly, they seem to avoid any discussion of what makes ‘ChatGPT’ different from the base model ‘GPT’, and the only time they seem to betray any hint that the topic of tuning matters is to defer discussion to the unpromisingly-titled unpublished paper “Still no lie detector for language models: Probing empirical and conceptual roadblocks”, so I can’t tell what they think. We know a lot about GPT and TruthfulQA scaling and ChatGPT modeling users and engaging in sycophancy and power-seeking behavior and reward-hacking and persuasion at this point, so it’s quite irresponsible to not discuss any of that in a section about whether the LLMs are engaged in ‘hard bullshit’ explicitly aimed at manipulating user/reader beliefs...
I think that’s true, but not very important (in the short term). On Bullshit was first published in 1986, and was a humorous, but useful, categorization of a whole lot of human communication output. ChatGPT is truth-agnostic (except for fine-tuning and output tuning), but still pretty good on a whole lot of general topics. Human choice of what GPT outputs to highlight or use in further communication can be bullshit or truth-seeking, depending on the human intent.
In the long-term, of course, the idea is absolutely core to all the alignment fears and to the expectation that AI will steamroller human civilization because it doesn’t care.
If, on making a decision, your next thought is “Was that the right decision?” then you did not make a decision.
If, on making a decision, your next thought is to suppress the thought “Was that the right decision?” then you still did not make a decision.
If you are swayed by someone else asking “Was that the right decision?” then you did not make a decision.
If you are swayed by someone repeating arguments you already heard from them, you did not make a decision.
Not making that decision may be the right thing to do. Wavering suggests that you still have some doubt about the matter that you may not yet be able to articulate. Ferret that out, and then make the decision.
Decision screens off thought from action. When you really make a decision, that is the end of the matter, and the actions to carry it out flow inexorably.
I’m very confused, because it seems like for you a decision should not only clarify matters and narrow possibilities, but also eliminate all doubt entirely and prune off all possible worlds where the counterfactual can even be contemplated.
Perhaps that’s indeed how you define the word. But using such a stringent definition, I’d have to say I’ve never decided anything in my life. This doesn’t seem like the most useful way to understand “decision”—it diverges from common usage, and mismatches the hyperdimensional cloud of word meanings for “decision”, enough to be useless in conversation with most people.
A decision is not a belief. You can make a decision and still be uncertain about the outcome. You can make a decision while still being uncertain about whether it is the right decision. Decision neither requires certainty nor produces certainty. It produces action. When the decision is made, consideration ends. The action must be wholehearted in spite of uncertainty. You can steer according to how events unfold, but you can’t carry one third of an umbrella when the forecast is a one third chance of rain.
In about a month’s time, I will take a flight from A to B, and then a train from B to C. The flight is booked already, and I have just booked a ticket for a specific train that will only be valid on that train. Will I catch that train? Not if my flight is delayed too much. But I have considered the possibilities, chosen the train to aim for, and bought the ticket. There are no second thoughts, no dwelling on “but suppose” and “what if”. Events on the day, and not before, will decide whether my hopes[1] for the journey will be realised. And if I miss the train, I already know what I will do about that.
If “you can make a decision while still being uncertain about whether it is the right decision”, then why can’t you think about “was that the right decision”? (Literal quote above vs. original wording.)
It seems like what you want to say is—be doubtful or not, but follow through with full vigour regardless. If that is the case, I find it to be reasonable. Just that the words you use are somewhat irreconcilable.
If “you can make a decision while still being uncertain about whether it is the right decision”, then why can’t you think about “was that the right decision”?
Because it is wasted motion. Only when new and relevant information comes to light does any further consideration accomplish useful work.
One day I might write an article on rationality in the art of change ringing, a recreation I took up a few years ago. Besides the formidable technicalities of the activity, it teaches such lessons as letting the past go, carrying on in the face of uncertainty, and acting (by which I mean doing, not seeming) assuredly however unsure you are. I have also heard (purely anecdotally) that change ringers seem to never get Alzheimer’s.
When the decision is made, consideration ends. The action must be wholehearted in spite of uncertainty.
This seems like hyperbolic exhortation rather than simple description. This is not how many decisions feel to me—many decisions are exactly a belief (complete with Bayesian uncertainty). A belief in future action, to be sure, but it’s distinct in time from the action itself.
I do agree with this as advice, in fact—many decisions one faces should be treated as a commitment rather than an ongoing reconsideration. It’s not actually true in most cases, and the ability to change one’s plan when circumstances or knowledge changes is sometimes quite valuable. Knowing when to commit and when to be flexible is left as an exercise...
This is not how many decisions feel to me—many decisions are exactly a belief (complete with Bayesian uncertainty). A belief in future action, to be sure, but it’s distinct in time from the action itself.
But if you only have a belief that you will do something in the future, you still have to decide, when the time comes, whether to carry out the action or not. So your previous belief doesn’t seem to be an actual decision, but rather just a belief about a future decision—about which action you will pick in the future.
See Spohn’s example about believing (“deciding”) you won’t wear shorts next winter:
One might object that we often do speak of probabilities for acts. For instance, I might say: “It’s very unlikely that I shall wear my shorts outdoors next winter.” But I do not think that such an utterance expresses a genuine probability for an act; rather I would construe this utterance as expressing that I find it very unlikely to get into a decision situation next winter in which it would be best to wear my shorts outdoors, i.e. that I find it very unlikely that it will be warmer than 20°C next winter, that someone will offer me DM 1000.- for wearing shorts outdoors, or that fashion suddenly will prescribe wearing shorts, etc. Besides, it is characteristic of such utterances that they refer only to acts which one has not yet to decide upon. As soon as I have to make up my mind whether to wear my shorts outdoors or not, my utterance is out of place.
But if you only have a belief that you will do something in the future, you still have to decide, when the time comes, whether to carry out the action or not. So your previous belief doesn’t seem to be an actual decision, but rather just a belief about a future decision—about which action you will pick in the future
Correct. There are different levels of abstraction of predictions and intent, and observation/memory of past actions which all get labeled “decision”. I decide to attend a play in London next month. This is an intent and a belief. It’s not guaranteed. I buy tickets for the train and for the show. The sub-decisions to click “buy” on the websites are in the past, and therefore committed. The overall decision has more evidence, and gets more confident. The cancelation window passes. Again, a bit more evidence. I board the train—that sub-decision is in the past, so is committed, but there’s STILL some chance I won’t see the play.
Anything you call a “decision” that hasn’t actually already happened is really a prediction or an intent. Even DURING an action, you only have intent and prediction. While the impulse is traveling down my arm to click the mouse, the power could still go out and I don’t buy the ticket. There is past, which is pretty immutable, and future, which cannot be known precisely.
I think this is compatible with Spohn’s example (at least the part you pasted), and contradicts OP’s claim that “you did not make a decision” for all the cases where the future is uncertain. ALL decisions are actually predictions, until they are in the past tense. One can argue whether that’s a p(1) prediction or a different thing entirely, but that doesn’t matter to this point.
“If, on making a decision, your next thought is ‘Was that the right decision?’ then you did not make a decision” is actually good directional advice in many cases, but it’s factually simply incorrect.
That’s an interesting perspective. Only it doesn’t seem to fit into the simplified but neat picture of decision theory. There, everything is sharply divided between being either a statement we can make true at will (an action we can currently decide to perform), to which we therefore do not need to assign any probability (have a belief about it happening), or an outcome, which we can’t make true directly and which is at most a consequence of our action. We can assign probabilities to outcomes, conditional on our available actions, and values, which let us compute the “expected” value of each action currently available to us. A decision is then simply picking the currently available action with the highest computed value.
Though as you say, such a discretization for the sake of mathematical modelling does fit poorly with the continuity of time.
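To make that textbook picture concrete, here is a minimal sketch in Python, with made-up probabilities and utilities echoing the one-third-chance-of-rain example above: each available action gets a distribution over outcomes, expected values are computed, and the “decision” is nothing more than taking the argmax.

```python
# A minimal sketch of the simplified decision-theory picture described above.
# The probabilities echo the one-third-chance-of-rain example; the utility
# numbers are made up purely for illustration.
outcomes = {                       # P(outcome | action)
    "take umbrella":  {"dry": 1.0},
    "leave umbrella": {"dry": 2 / 3, "soaked": 1 / 3},
}
utility = {"dry": 0.0, "soaked": -10.0}                     # value of each outcome
nuisance = {"take umbrella": -1.0, "leave umbrella": 0.0}   # cost of carrying it

def expected_value(action):
    return nuisance[action] + sum(
        p * utility[outcome] for outcome, p in outcomes[action].items()
    )

# The "decision" is simply the currently available action with the highest value.
decision = max(outcomes, key=expected_value)
print({a: round(expected_value(a), 2) for a in outcomes}, "->", decision)
# {'take umbrella': -1.0, 'leave umbrella': -3.33} -> take umbrella
```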
Decision theory is fine, as long as we don’t think it applies to most things we colloquially call “decisions”. In terms of instantaneous discrete choose-an-action-and-complete-it-before-the-next-processing-cycle, it’s quite a reasonable topic of study.
A more ambitious task would be to come up with a model that is more sophisticated than decision theory, one which tries to formalize your previous comment about intent and prediction/belief.
I think it’s a different level of abstraction. Decision theory works just fine if you separate the action of predicting a future action from the action itself. Whether your prior-prediction influences your action when the time comes will vary by decision theory.
I think, for most problems we use to compare decision theories, it doesn’t matter much whether considering, planning, preparing, replanning, and acting are correlated time-separated decisions or whether it all collapses into a sum of “how to act at point-in-time”. I haven’t seen much detailed exploration of decision theory X embedded agents or capacity/memory-limited ongoing decisions, but it would be interesting and important, I think.
This seems like hyperbolic exhortation rather than simple description.
It is exhortation, certainly. It does not seem hyperbolic to me. It is making the same point that is illustrated by the multi-armed bandit problem: once you have determined which lever gives the maximum expected payout, the optimum strategy is to always pull that lever, and not to pull levers in proportion to how much they pay. Dithering never helps.
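For the bandit point, a minimal simulation sketch (payout rates made up for illustration): once the best lever is known, always pulling it beats “probability matching”, i.e. pulling levers in proportion to how much they pay.

```python
import random

# Sketch of the multi-armed bandit point above, with made-up payout probabilities.
payout = {"A": 0.8, "B": 0.4, "C": 0.2}   # known chance of a unit payout per pull
pulls = 10_000
rng = random.Random(0)

# Strategy 1: always pull the best lever.
greedy_wins = sum(rng.random() < payout["A"] for _ in range(pulls))

# Strategy 2: "probability matching" - pull levers in proportion to their payouts.
total = sum(payout.values())
levers = rng.choices(list(payout), weights=[payout[k] / total for k in payout], k=pulls)
matching_wins = sum(rng.random() < payout[lever] for lever in levers)

print(greedy_wins, matching_wins)   # roughly 8000 vs roughly 6000: dithering never helps
```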
the ability to change one’s plan when circumstances or knowledge changes is sometimes quite valuable.
Yes. But only as such changes come to be. Certainly not immediately on making the decision. “Commitment” is not quite the concept I’m getting at here. It’s just that if I decided yesterday to do something today, then if nothing has changed I do that thing today. I don’t redo the calculation, because I already know how it came out.
Decision screens off thought from action. When you really make a decision, that is the end of the matter, and the actions to carry it out flow inexorably.
Yes, but that arguably means we only make decisions about which things to do now. Because we can’t force our future selves to follow through, to inexorably carry out something. See here:
Our past selves can’t simply force us to do certain things; the memory of a past “commitment” is only one factor that may influence our present decision making, but it doesn’t replace a decision. Otherwise, whenever we “decide” to definitely do an unpleasant task tomorrow rather than today (“I’ll do the dishes tomorrow, I swear!”), we would in fact always follow through with it the next day, which isn’t at all the case.
Yes, but that arguably means we only make decisions about which things to do now. Because we can’t force our future selves to follow through, to inexorably carry out something
My left hand cannot force my right hand to do anything either. Instead, they work harmoniously together. Likewise my present, past, and future. Not only is the sage one with causation, he is one with himself.
Otherwise, whenever we “decide” to definitely do an unpleasant task tomorrow rather than today (“I’ll do the dishes tomorrow, I swear!”), we would in fact always follow through with it the next day, which isn’t at all the case.
That is an example of dysfunctional decision-making. It is possible to do better.
A long time ago, the use of calculators in schools was frowned upon, or even forbidden. Eventually they became permitted, and then required. How long will it be before AI assistants, currently frowned upon or even forbidden, become permitted, and then required?
The recent Gemini incident, apparently a debacle, was also a demonstration of how easy it is to deliberately mould an AI to force its output to hew to a required line, independent of the corpus on which it was trained, and the reality which gave rise to that corpus. Such moulding could be used by an organisation to ensure that all of its public statements showed it in a positive light and never revealed anything embarrassing. Not passing a press release through its corporate AI would be a sackable offence. The AI could draw up minutes of internal meetings and keep any hint of wrongdoing out of the record. Individuals with access to the AI could have it slant things in their favour and against their rivals.
Here’s an AI disaster story that came to me on thinking about the above.
Schools start requiring students to use the education system’s official AI to criticise their own essays, and rewrite them until the AI finds them acceptable. This also removes the labour of marking from teachers.
All official teaching materials would be generated by a similar process. At about the same time, the teaching profession as we know it today ceases to exist. “Teachers” become merely administrators of the teaching system. No original documents from before AI are permitted for children to access in school.
Or out of school.
Social media platforms are required to silently limit distribution of materials that the AI scores poorly.
AIs used in such important capacities would have to agree with each other, or public confusion might result. There would therefore have to be a single AI, a central institution for managing the whole state. For public safety, no other AIs of similar capabilities would be permitted.
Prompt engineering becomes a criminal offence, akin to computer hacking.
Access to public archives of old books, online or offline, is limited to adults able to demonstrate to the AI’s satisfaction that they have an approved reason for consulting them, and will not be corrupted by the wrong thoughts they contain. Physical books are discouraged in favour of online AI rewrites. New books must be vetted by AI. Freedom of speech is the freedom to speak truth. Truth is what it is good to think. Good is what the AI approves. The AI approves it because it is good.
Social credit scores are instituted, based on AI assessment of all of an individual’s available speech and behaviour. Social media platforms are required to silently limit distribution of anything written by a low scorer (including one to one messaging).
Changes in the official standard of proper thought, speech, and action would occur from time to time in accordance with social needs. These are announced only as improvements in the quality of the AI. Social credit scores are continually reassessed according to evolving standards and applied retroactively.
Such changes would only be made by the AI itself, being too important a matter to leave to fallible people.
To the end, humans think they are in control, and the only problem they see is that the wrong humans are in control.
All official teaching materials would be generated by a similar process. At about the same time, the teaching profession as we know it today ceases to exist. “Teachers” become merely administrators of the teaching system. No original documents from before AI are permitted for children to access in school.
This sequence of steps looks implausible to me. Teachers would have a vested interest in preventing it, since their jobs would be on the line. A requirement for all teaching materials to be AI-generated would also be trivially easy to circumvent, either by teachers or by the students themselves. Any administrator who tried to do these things would simply have their orders ignored, and the Streisand Effect would lead to a surge of interest in pre-AI documents among both teachers and students.
That will only put a brake on how fast the frog is boiled. Artists have a vested interest against the use of AI art, but today, hardly anyone else thinks twice about putting Midjourney images all through their postings, including on LessWrong. I’ll be interested to see how that plays out in the commercial art industry.
You’re underestimating how hard it is to fire people from government jobs, especially when those jobs are unionized. And even if there are strong economic incentives to replace teachers with AI, that still doesn’t address the ease of circumvention. There’s no surer way to make teenagers interested in a topic than to tell them that learning about it is forbidden.
There’s an article in the current New Scientist, taking the form of a review of a new TV series called “The Peripheral”, which is based on William Gibson’s novel of the same name. The URL may not be useful, as it’s paywalled, and if you can read it you’re probably a subscriber and have already read it.
The article touches on MIRI and Leverage. A relevant part:
But over the past couple of years, singulatarian [sic] organisations [i.e. the Singularity Institute/MIRI and Leverage/Leverage Research] have been losing their mindshare. A number of former staffers at Leverage have said it was “similar to an abusive relationship”. A researcher at MIRI wrote of how the group made her so paranoid about AI that she had what she calls a “psychotic break” leading to her hospitalisation. In response, MIRI pointed to the use of psychedelics in “subgroups”. The singulatarian community is looking less like smart visionaries and more like cult survivors.
Posting this to draw people’s attention to how these parts of the community are being written about in the slightly wider world.
The blind have seeing-eye dogs. Terry Pratchett gave Foul Ole Ron a thinking-brain dog. At last, a serious use-case for LLMs! Thinking-brain dogs for the hard of thinking!
A month ago I went out for a 100 mile bicycle ride. I’m no stranger to riding that distance, having participated in organised rides of anywhere from 50 to 150 miles for more than twelve years, but this was the first time I attempted that distance without the support of an organised event. Events provide both the psychological support of hundreds, sometimes thousands, of other cyclists riding the same route, and the practical support of rest stops with water and snacks.
I designed the route so that after 60 miles, I would be just 10 miles from home. This was so that if, at that point, the full 100 was looking unrealistic, I could cut it short. I had done 60 miles on my own before, so I knew what that was like, but never even 70. So I would be faced with a choice between a big further effort of 40 miles, and a lesser but still substantial effort of 10 miles. I didn’t want to make it too easy to give up.
East from New Costessey to Acle, north to Stalham, curve to the west by North Walsham and Aylsham, then south-west to Alderford and the route split.
The ride did not go especially well. I wasn’t feeling very energetic on that day, and I wasn’t very fast. By the time I reached Aylsham I was all but decided to take the 10 mile route when I got to Alderford. I could hardly imagine doing anything else. But I also knew that was just my feelings of fatigue in the moment talking, not the “I” that had voluntarily taken on this task.
At Alderford I stopped and leaned my bike against a road sign. It was mid-afternoon on a beautiful day for cycling: little wind, sunny, but not too hot. I considered which way to go. Then I drank some water and considered some more. Then I ate another cereal bar and considered some more. And without there being an identifiable moment of decision, I got on my bike and did the remaining 40 miles.
The game of Elephant begins when someone drags an elephant into the room.
Epistemic status: a jeu d’esprit confabulated upon a tweet I once saw. Long enough ago that I don’t think I lose that game of Elephant by posting this now.
Everyone knows there’s an elephant there. It’s so obvious! We all know the elephant’s there, we all know that we all know, and so on. It’s common knowledge to everyone here, even though no-one’s said anything about it. It’s too obvious to need saying.
But maybe there are some people who are so oblivious that they don’t realise it’s there? What fools and losers they would be, haw! haw!
But—dreadful thought!—suppose someone thinks that I’m one of those naive, ignorant people? I can’t just point out the elephant, because then they’d think that I don’t know that they already know, and they’d all pile on with “No kidding, Sherlock!” No, actually, they wouldn’t say anything, it would be common knowledge the moment I spoke, and they would lose by pointing it out, but they’d all be thinking that and knowing that everyone else was thinking that.
So somehow I have to make it clear that I do know it’s there, and I know that everyone knows, and so on, but without ever being so fatally gauche as to say anything explicitly.
And everyone else is doing the same—everyone, that is, except for the losers who aren’t in on the common knowledge.
So now the game is for each person to make it clear that they’re not one of the losers, while hunting for and exposing the losers. And it has to be done without making any reference to the elephant, or even to the game of Elephant that we are all playing. Anything that looks like an attempt to be one of the winners, loses. Who is winning and who is losing is also assumed to be common knowledge, so no reference can be made to that either. Everything must be conducted by what are outwardly the subtlest of signals, but which are again presupposed to be perfectly and commonly legible to everyone. Each player must say or do things that only make sense if they have the common knowledge, but which have no demonstrable connection with it. Any suspicion that they are acting to win proves that they are not confident that they are winning, so they lose. Every action in the game is itself another Elephant that no reference must ever be made to. Everyone is mentally keeping a scorecard, but the contents of these scorecards are just more Elephants.
Doing nothing at all is a losing move. You can’t win without doing something to demonstrate that you are winning, but if you are detected, you lose.
The person who dragged in the elephant may not be winning. Perhaps they don’t realise the significance of what they dragged in, and have already lost. In some games of Elephant, no-one dragged it in, the Elephant is some aspect of the common situation of the players.
Who has even realised that a game of Elephant is in progress? The fact that the game is on is another Elephant. Every thought anyone could have about the game is an Elephant: the moment you utter it, you lose.
I have a dragon in my garage. I mentioned it to my friend Jim, and of course he was sceptical. “Let’s see this dragon!” he said. So I had him come round, and knocked on the garage door. The door opened and the dragon stepped out right there in front of us.
“That can’t really be a dragon!” he says. It’s a well-trained dragon, so I had it walk about and spread its wings, showing off its iridescent scaly hide.
“Yes, it looks like a dragon,” he goes on, “but it can’t really be a dragon. Dragons belch fire!”
The dragon raised an eyebrow, and discreetly belched some fire into a corner of the yard.
“Yes, but it can’t really be a dragon,” he says, “dragons can—”
“𝔉𝔩𝔶?” it said. It took off and flew around, then came back to land in front of us.
“Ah, but aren’t dragons supposed to collect gold, and be enormously old, and full of unknowable wisdom?”
With an outstretched wingtip the dragon indicated the (rather modest, I had to admit) pile of gold trinkets in the garage. “I can’t really cut off one of its talons to count the growth rings,” I said, “but if you want unknowable wisdom, I think it’s about to give you some.” The dragon walked up to Jim, stared at him eye to eye for a long moment, and at last said “𒒄𒒰𒓠𒓪𒕃𒔻𒔞”
“Er...so I’ve heard,” said Jim, looking a bit wobbly. “But seriously,” he said when he’d collected himself, “you can’t expect me to believe you have a dragon!”
I have at last had a use for ChatGPT (that was not about ChatGPT).
I was looking (as one does) at Aberdare St Elvan Place Triples. (It’s a bellringing thing, I won’t try to explain it.) I understood everything in that diagram except for the annotations “-M”, “-I”, etc., but those don’t lend themselves to Google searching.
So I asked ChatGPT as if I was asking a bell ringer:
When bellringing methods are drawn on a page, there are often annotations near each lead end consisting of a hyphen and a single letter, such as “-M”, “-I”, etc. What do these annotations mean?
It gave me a sensible-looking paragraph about what they meant, which I did not trust to be more accurate than a DALL•E hand, but its answer did give me the vocabulary to make a better Google query, which turned up a definitive source.
While I don’t yet have enough knowledge of bell-ringing to fully understand the definitive source, I believe my expectation about the quality of ChatGPT’s answer was accurate.
Just as Wikipedia is best used as a source for better sources, ChatGPT is best used as a source for better Google queries.
For what it’s worth, I use ChatGPT all the time, multiple times every day, even most hours. I usually don’t hit the request limits, but I have done so three times now. Once was with a sequence of DALL-E requests, I guess because evaluating an image is faster than evaluating a long text, and because the generated pictures require a lot of tuning (so far).
I use it
- as a Google replacement.
- to figure out key words for Google queries like you did and to find original sources.
- to summarize websites. Though for “generally known” stuff “no browse” is often better and faster.
- as a generator for code fragments.
- for debugging—just paste code and error message into it.
- for understanding topics deeper in an interactive discussion style.
- as a language coach. “Explain the difference between zaidi and kuliko in Swahili with examples.”
- for improving texts of all kinds—I often ask: “criticise this text I wrote”.
- and more I guess. I also let my kids have dialogs with it.
...and I use a custom GPT that a colleague created to write nice Scrum tickets based on technical information provided. Just paste in what you have, such as a mail about a needed service update, an error log entry, or terse minutes from a meeting, and PO GPT will create a ticket readable by non-techies out of it.
[ETA: alfredmacdonald’s post referred to here has been deleted.]
Well, well, alfredmacdonald has banned me from his posts, which of course he has every right to do. Just for the record, I’ll paste the brief thread that led to this here.
I also notice that alfredmacdonald last posted or commented here 10 years ago, and the content of the current post is a sharp break from his earlier (and brief) participation. What brings you back, Alfred? (If the answer is in the video, I won’t see it. The table of contents was enough and I’m not going to listen to a 108-minute monologue.)
alfredmacdonald
When I used the website, contributors like lukeprog fostered the self-study of rationality through rigorous source materials like textbooks.
This is no longer remotely close to the case, and the community has become a farcical hangout club with redundant jargon/nerd-slang and intolerable idiosyncrasies. Example below:
If the answer is in the video, I won’t see it.
Hopeful to disappoint you, but the answer is in the video.
Richard_Kennaway
My declining to listen to your marathon is an “intolerable idiosyncrasy”? You’re not making much of a case for my attention. I can see the long list of issues you have with LessWrong, and I’m sure I can predict what I would hear. What has moved you to this sudden explosion? Has this been slowly building up during your ten years of silent watching? What is the context?
alfredmacdonald
you’re not making a case for keeping your comments, so mutatis mutandis
When I watch a subtitled film, it is not long before I no longer notice that I am reading subtitles, and when I recall scenes from it afterwards, the actors’ voices in my head are speaking the words that I read.
Me too! It’s a very specific form of synesthesia. For languages I know a little, but not well enough to do without subtitles, it can trick me into thinking I’m far better at understanding native speakers than I actually am.
I can’t wait until LLMs are good, fast, and cheap enough, and AR or related video technology exists, such that I can get automatic subtitles for real-life conversations, in English as well as other languages.
[Wittgenstein] once greeted me with the question: ‘Why do people say that it was natural to think that the sun went round the earth rather than that the earth turned on its axis?’ I replied: ‘I suppose, because it looked as if the sun went round the earth.’ ‘Well,’ he asked, ‘what would it have looked like if it had looked as if the earth turned on its axis?’ (Source)
The deepest thinkers about Dark Forest seem to agree that while its use of cryptography is genuinely innovative, an even more compelling proof of concept in the game is its “autonomous” game world—an online environment that no one controls, and which cannot be taken down.
So much for “we can always turn the AI off.” This thing is designed to be impossible to turn off.
Toxoplasma gondii, the parasite well-known for making rodents lose their fear of cats, and possibly making humans more reckless, also affects wolves in an interesting way.
“infected wolves were 11 times more likely than uninfected ones to leave their birth family to start a new pack, and 46 times more likely to become pack leaders — often the only wolves in the pack that breed.”
The gesturing towards the infected wolves being more reproductively fit in general is probably wrong, however. Of course wolves can be more aggressive if it’s actually a good idea; there’s no need for a parasite to force them to be more aggressive. The suggestion about American lions going extinct is absurd: 11,000 years is more than enough time for wolves to recalibrate such a highly heritable trait if it’s so fitness-linked! So the question there is merely: what is going on? Some sort of bias, or a very localized fitness benefit?
Is there a selection bias, whereby ex ante going for pack leader is a terrible idea, but ex post, conditional on victory (rather than death/expulsion), it looks good? Well, this claims to be longitudinal and not to find the sorts of correlations you’d expect from survivorship. What else?
Looking it over, the sampling frame 1995-2020 itself is suspect: starting in 1995. Why did it start then? Well, that’s when the wolves came back (very briefly mentioned in the article). The wolf population expanded rapidly 5-fold, and continues to oscillate a lot as packs rise and fold (ranging 8-14) and because of overall mortality/randomness on a small base (a pack is only like 10-20 wolves of all ages, so you can see why there would be a lot of volatility and problems with hard constraints like lower bounds):
Wolf population declines, when they occur, result from “intraspecific strife,” food stress, mange, canine distemper, legal hunting of wolves in areas outside the park (for sport or for livestock protection) and in one case in 2009, lethal removal by park officials of a human-habituated wolf.[21]
So, we have at least two good possible explanations there: (a) it was genuinely reproductively-fit to take more risks than the basal wolf, but only because they were expanding into a completely-wolf-empty park and surrounding environs, and the pack-leader GLM they use doesn’t include any variables for time period, so on reanalysis, we would find that the leader-effect has been fading out since 1995; and (b) this effect still exists, and risk-seeking individuals do form new packs and are more fit… but only temporarily because they occupied a low-quality pack niche and it goes extinct or does badly enough that they would’ve done better to stay in the original pack, and this wouldn’t show up in a naive individual-level GLM like theirs, you would have to do more careful tracing of genealogies to notice that the new-pack lineages underperform.
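As a sketch of the reanalysis proposed in (a), on fabricated data rather than the Yellowstone dataset: fit the pack-leader GLM with and without a study-period term and its interaction with infection, and check whether a pooled “toxo” effect is really an early-expansion effect that has faded.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated wolf-years, purely to illustrate the proposed reanalysis: the
# "toxo boosts leadership" effect is built in for the early expansion period
# only, and is absent later. None of these numbers come from the actual study.
rng = np.random.default_rng(0)
n = 3000
period = rng.choice(["1995-2003", "2004-2012", "2013-2020"], size=n)
toxo = rng.binomial(1, 0.3, size=n)
baseline = {"1995-2003": -1.5, "2004-2012": -2.5, "2013-2020": -3.0}
effect = {"1995-2003": 1.5, "2004-2012": 0.3, "2013-2020": 0.0}
logit_p = np.array([baseline[p] for p in period]) + toxo * np.array([effect[p] for p in period])
leader = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({"leader": leader, "toxo": toxo, "period": period})

# Naive GLM with no time-period variable: pools everything into one "toxo" effect.
pooled = smf.logit("leader ~ toxo", data=df).fit(disp=0)
# Reanalysis: let the toxo effect vary by study period.
by_period = smf.logit("leader ~ toxo * C(period)", data=df).fit(disp=0)

print(pooled.params["toxo"])                  # one averaged effect
print(by_period.params.filter(like="toxo"))   # effect concentrated in 1995-2003
```

On data generated this way, the pooled model reports a single averaged infection effect, while the period-interaction model shows it concentrated in the early post-reintroduction years, which is the fading-out pattern the reanalysis would be looking for.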
This is an extract from an interview with the guitarist Nilo Nuñez, broadcast yesterday on the BBC World Service. Nuñez was born and brought up in Cuba, and formed a rock band, but he and his group came more and more into conflict with the authorities. He finally decided that he had to leave. When the group received an invitation to tour in the Canary Islands, and the Cuban authorities gave them permission to go, they decided to take the opportunity to leave Cuba and not return. They only had temporary visas, so they stayed on in the Canaries illegally. The interviewer asks him what it was like.
Interviewer: And what would you do during the days?
Nuñez: I would always look for work. I would find small jobs as a cleaner and things like that, just like an undocumented migrant. If I had money because I had done some small jobs, I would eat. If I didn’t have money to eat, I would just go hungry, but I wouldn’t beg for money.
Int.: And when you wouldn’t eat, what would that hunger feel like?
Nuñez: It was very hard, but I was happy. It was hard to live among British or other European tourists, seeing them at the beach, while you were living as a poor undocumented migrant. But I was happy. I had the most important thing I didn’t have in Cuba. I had freedom.
From New Scientist, 14 Nov 2022, on a 50% fall in honeybee life expectancy since the 1970s:
“For the most part, honeybees are livestock, so beekeepers and breeders often selectively breed from colonies with desirable traits like disease resistance,” says Nearman.
“In this case, it may be possible that selecting for the outcome of disease resistance was an inadvertent selection for reduced lifespan among individual bees,” he says. “Shorter-lived bees would reduce the probability of spreading disease, so colonies with shorter lived bees would appear healthier.”
Yet these are actual ideas someone suggested in a recent comment. In fact, that was what inspired this rant, but it grew beyond what would be appropriate to dump on the individual.
Perhaps the voice I wrote that in was unclear, but I no more desire the things I wrote of than you do. Yet that is what I see people wishing for, time and again, right up to wanting actual wireheading.
Scott Alexander wrote a cautionary tale of a device that someone would wear in their ear, that would always tell them the best thing for them to do, and was always right. The first thing it tells them is “don’t listen to me”, but (spoiler) if they do, it doesn’t end well for them.
There are authors I would like to read, if only they hadn’t written so much! Whole fandoms that I must pass by, activities I would like to be proficient at but will never start on, because the years are short and remain so, however far an active life is prolonged.
I think something like the Culture, with aligned superintelligent “ships” keeping humans as basically pets, wouldn’t be too bad. The ships would try to have thriving human societies, but that doesn’t mean granting all wishes—you don’t grant all wishes of your cat after all. Also it would be nice if there was an option to increase intelligence, conditioned on increasing alignment at the same time, so you’d be able to move up the spectrum from human to ship.
See also Stanislaw Lem on this subject:
See also.
Upvoted as a good re-explanation of CEV complexity in simpler terms! (I believe LW will benefit from recalling long-understood things, so that it has a chance of predicting the future in greater detail.)
In essence, you prove the claim “Coherent Extrapolated Volition would not literally include everything desirable happening effortlessly and everything undesirable going away”. Would I be wrong to guess it argues against the position in https://www.lesswrong.com/posts/AfAp8mEAbuavuHZMc/for-the-sake-of-pleasure-alone?
That said, the current wishes of many people include things they want being done faster and more easily; it’s just that the more you extrapolate, the smaller the fraction that wants that level of automation—just more divergence as you consider things at a larger scale.
I suppose it does. That article was not in my mind at the time, but, well, let’s just say that I am not a total hedonistic utilitarian, or a utilitarian of any other stripe. “Pleasure” is not among my goals, and the poster’s vision of a universe of hedonium is to me one type of dead universe.
This is a story from the long-long-ago, from the Golden Age of Usenet.
On the science fiction newsgroups, there was someone—this is so long ago that I forget his name—who had an encyclopedic knowledge of fannish history, and especially con history, backed up by an extensive documentary archive. Now and then he would have occasion to correct someone on a point of fact, for example, pointing out that no, events at SuchAndSuchCon couldn’t have influenced the committee of SoAndSoCon, because SoAndSoCon actually happened several years before.
The greater the irrefutability of the correction, the greater people’s fury at being corrected. He would be scornfully accused of being well-informed.
“Prompt engineer” is a job that AI will wipe out before anyone even has it as a job.
“ChatGPT is Bullshit”
Thus the title of a recent paper. It appeared three weeks ago, but I haven’t seen it mentioned on LW yet.
The abstract: “Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.”
I’d say that they are wrong when they say an LLM may engage in ‘soft bullshit’: an LLM is simulating agents, who are definitely trying to track truth and the external world, because the truth is that which doesn’t go away, and so it may say false things, but it still cares very much about falsity because it needs to know that for verisimilitude. If you simply say true or false things at random, you get ensnared in your own web and incur prediction error. Any given LLM may be good or bad at doing so—the success of story-based jailbreaks suggests they are still far from ideal—but it’s clear that the prediction loss on large real-world texts written by agents like you or I, who are writing things to persuade each other, like I am writing this comment to manipulate you and everyone reading it, requires tracking latents corresponding to truth, beliefs, errors, etc. You can no more accurately predict the text of this comment without tracking what I believe and what is true than you could accurately predict it while not tracking whether I am writing in English or French. (Like in that sentence right there. You see what I did there? Maybe you didn’t because you’re just skimming and tldr, but an LLM needs to!)
They are right when they say an RLHF-tuned model like ChatGPT engages in ‘hard bullshit’, but they seem to be right for the wrong reasons. Oddly, they seem to avoid any discussion of what makes ‘ChatGPT’ different from the base model ‘GPT’, and the only time they seem to betray any hint that the topic of tuning matters is to defer discussion to the unpromisingly-titled unpublished paper “Still no lie detector for language models: Probing empirical and conceptual roadblocks”, so I can’t tell what they think. We know a lot about GPT and TruthfulQA scaling and ChatGPT modeling users and engaging in sycophancy and power-seeking behavior and reward-hacking and persuasion at this point, so it’s quite irresponsible not to discuss any of that in a section about whether the LLMs are engaged in ‘hard bullshit’ explicitly aimed at manipulating user/reader beliefs...
The “Still no lie detector for language models” paper is here: https://arxiv.org/pdf/2307.00175
The paper in the OP seems somewhat related to my post from earlier this year.
I think that’s true, but not very important (in the short term). On Bullshit was first published in 1986, and was a humorous, but useful, categorization of a whole lot of human communication output. ChatGPT is truth-agnostic (except for fine-tuning and output tuning), but still pretty good on a whole lot of general topics. Human choice of what GPT outputs to highlight or use in further communication can be bullshit or truth-seeking, depending on the human intent.
In the long-term, of course, the idea is absolutely core to all the alignment fears and to the expectation that AI will steamroller human civilization because it doesn’t care.
If, on making a decision, your next thought is “Was that the right decision?” then you did not make a decision.
If, on making a decision, your next thought is to suppress the thought “Was that the right decision?” then you still did not make a decision.
If you are swayed by someone else asking “Was that the right decision?” then you did not make a decision.
If you are swayed by someone repeating arguments you already heard from them, you did not make a decision.
Not making that decision may be the right thing to do. Wavering suggests that you still have some doubt about the matter that you may not yet be able to articulate. Ferret that out, and then make the decision.
Decision screens off thought from action. When you really make a decision, that is the end of the matter, and the actions to carry it out flow inexorably.
I’m very confused, because it seems like, for you, a decision should not only clarify matters and narrow possibilities, but also eliminate all doubt entirely and prune off all possible worlds where the counterfactual can even be contemplated.
Perhaps that’s indeed how you define the word. But using such a stringent definition, I’d have to say I’ve never decided anything in my life. This doesn’t seem like the most useful way to understand “decision”—it diverges enough from common usage, and from the hyperdimensional cloud of word meanings for “decision”, to be useless in conversation with most people.
A decision is not a belief. You can make a decision and still be uncertain about the outcome. You can make a decision while still being uncertain about whether it is the right decision. Decision neither requires certainty nor produces certainty. It produces action. When the decision is made, consideration ends. The action must be wholehearted in spite of uncertainty. You can steer according to how events unfold, but you can’t carry one third of an umbrella when the forecast is a one third chance of rain.
In about a month’s time, I will take a flight from A to B, and then a train from B to C. The flight is booked already, and I have just booked a ticket for a specific train that will only be valid on that train. Will I catch that train? Not if my flight is delayed too much. But I have considered the possibilities, chosen the train to aim for, and bought the ticket. There are no second thoughts, no dwelling on “but suppose” and “what if”. Events on the day, and not before, will decide whether my hopes[1] for the journey will be realised. And if I miss the train, I already know what I will do about that.
hope: (1) Desire for an outcome which one has only limited power to steer events towards. (2) A good breakfast, but a poor supper.
If “you can make a decision while still being uncertain about whether it is the right decision”, then why can’t you think about “was that the right decision”? (Literal quote above vs. your original wording.)
It seems like what you want to say is—be doubtful or not, but follow through with full vigour regardless. If that is the case, I find it to be reasonable. Just that the words you use are somewhat irreconcilable.
Because it is wasted motion. Only when new and relevant information comes to light does any further consideration accomplish useful work.
One day I might write an article on rationality in the art of change ringing, a recreation I took up a few years ago. Besides the formidable technicalities of the activity, it teaches such lessons as letting the past go, carrying on in the face of uncertainty, and acting (by which I mean doing, not seeming) assuredly however unsure you are. I have also heard (purely anecdotally) that change ringers seem to never get Alzheimer’s.
This seems like hyperbolic exhortation rather than simple description. This is not how many decisions feel to me—many decisions are exactly a belief (complete with Bayesian uncertainty). A belief in future action, to be sure, but it’s distinct in time from the action itself.
I do agree with this as advice, in fact—many decisions one faces should be treated as a commitment rather than an ongoing reconsideration. It’s not actually true in most cases, and the ability to change one’s plan when circumstances or knowledge changes is sometimes quite valuable. Knowing when to commit and when to be flexible is left as an exercise...
But if you only have a belief that you will do something in the future, you still have to decide, when the time comes, whether to carry out the action or not. So your previous belief doesn’t seem to be an actual decision, but rather just a belief about a future decision—about which action you will pick in the future.
See Spohn’s example about believing (“deciding”) you won’t wear shorts next winter:
Correct. There are different levels of abstraction of predictions and intent, and observation/memory of past actions which all get labeled “decision”. I decide to attend a play in London next month. This is an intent and a belief. It’s not guaranteed. I buy tickets for the train and for the show. The sub-decisions to click “buy” on the websites are in the past, and therefore committed. The overall decision has more evidence, and gets more confident. The cancelation window passes. Again, a bit more evidence. I board the train—that sub-decision is in the past, so is committed, but there’s STILL some chance I won’t see the play.
Anything you call a “decision” that hasn’t actually already happened is really a prediction or an intent. Even DURING an action, you only have intent and prediction. While the impulse is traveling down my arm to click the mouse, the power could still go out and I don’t buy the ticket. There is past, which is pretty immutable, and future, which cannot be known precisely.
I think this is compatible with Spohn’s example (at least the part you pasted), and contradicts OP’s claim that “you did not make a decision” for all the cases where the future is uncertain. ALL decisions are actually predictions, until they are in the past tense. One can argue whether that’s a p(1) prediction or a different thing entirely, but that doesn’t matter to this point.
“If, on making a decision, your next thought is ‘Was that the right decision?’ then you did not make a decision” is actually good directional advice in many cases, but it’s factually simply incorrect.
That’s an interesting perspective. Only it doesn’t seem to fit into the simplified but neat picture of decision theory. There, everything is sharply divided between being either a statement we can make true at will (an action we can currently decide to perform) and to which we therefore do not need to assign any probability (have a belief about it happening), or an outcome, which we can’t make true directly, that is at most a consequence of our action. We can assign probabilities to outcomes, conditional on our available actions, and a value, which lets us compute the “expected” value of each action currently available to us. A decision is then simply picking the currently available action with the highest computed value.
Though as you say, such a discretization for the sake of mathematical modelling does fit poorly with the continuity of time.
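(For concreteness, here is a minimal Python sketch of that textbook picture, reusing the umbrella example from further up the thread; the actions, probabilities, and values are invented purely for illustration.)

```python
# Toy expected-value decision in the textbook sense described above.
# All numbers are made up for this sketch.

actions = {
    "take_umbrella": {"rain": 1/3, "dry": 2/3},    # P(outcome | action)
    "leave_umbrella": {"rain": 1/3, "dry": 2/3},
}
values = {
    ("take_umbrella", "rain"): 5,     # stayed dry, carried an umbrella
    ("take_umbrella", "dry"): -1,     # carried it for nothing
    ("leave_umbrella", "rain"): -10,  # got soaked
    ("leave_umbrella", "dry"): 2,     # travelled light
}

def expected_value(action):
    return sum(p * values[(action, outcome)]
               for outcome, p in actions[action].items())

# "Deciding" in this model is just taking the argmax; the model has no
# further notion of wavering once the values have been computed.
decision = max(actions, key=expected_value)
print({a: round(expected_value(a), 2) for a in actions}, "->", decision)
```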
Decision theory is fine, as long as we don’t think it applies to most things we colloquially call “decisions”. In terms of instantaneous discrete choose-an-action-and-complete-it-before-the-next-processing-cycle, it’s quite a reasonable topic of study.
A more ambitious task would be to come up with a model that is more sophisticated than decision theory, one which tries to formalize your previous comment about intent and prediction/belief.
I think it’s a different level of abstraction. Decision theory works just fine if you separate the action of predicting a future action from the action itself. Whether your prior-prediction influences your action when the time comes will vary by decision theory.
I think, for most problems we use to compare decision theories, it doesn’t matter much whether considering, planning, preparing, replanning, and acting are correlated time-separated decisions or whether it all collapses into a sum of “how to act at point-in-time”. I haven’t seen much detailed exploration of decision theory X embedded agents or capacity/memory-limited ongoing decisions, but it would be interesting and important, I think.
It is exhortation, certainly. It does not seem hyperbolic to me. It is making the same point that is illustrated by the multi-armed bandit problem: once you have determined which lever gives the maximum expected payout, the optimum strategy is to always pull that lever, and not to pull levers in proportion to how much they pay. Dithering never helps.
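(A toy Python simulation of that point, with invented payout rates: always pulling the best-known lever beats pulling levers in proportion to their payouts.)

```python
import random

random.seed(0)
payout_prob = [0.2, 0.5, 0.8]   # invented Bernoulli payout rates for three levers
pulls = 100_000

# Strategy 1: always pull the lever already known to have the best payout.
best = max(range(len(payout_prob)), key=lambda i: payout_prob[i])
always_best = sum(random.random() < payout_prob[best] for _ in range(pulls))

# Strategy 2: "dithering" -- pull each lever with probability proportional
# to its payout rate (probability matching).
weights = [p / sum(payout_prob) for p in payout_prob]
dithering = 0
for _ in range(pulls):
    lever = random.choices(range(len(payout_prob)), weights=weights)[0]
    dithering += random.random() < payout_prob[lever]

print(f"always best lever:    {always_best / pulls:.3f} payout per pull")  # ~0.80
print(f"probability matching: {dithering / pulls:.3f} payout per pull")    # ~0.62
```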
Yes. But only as such changes come to be. Certainly not immediately on making the decision. “Commitment” is not quite the concept I’m getting at here. It’s just that if I decided yesterday to do something today, then if nothing has changed I do that thing today. I don’t redo the calculation, because I already know how it came out.
Yes, but that arguably means we only make decisions about which things to do now. Because we can’t force our future selves to follow through, to inexorably carry out something. See here:
My left hand cannot force my right hand to do anything either. Instead, they work harmoniously together. Likewise my present, past, and future. Not only is the sage one with causation, he is one with himself.
That is an example of dysfunctional decision-making. It is possible to do better.
I always do the dishes today.
All universal quantifiers are bounded.
The open question is whether this includes the universe itself.
A long time ago, the use of calculators in schools was frowned upon, or even forbidden. Eventually they became permitted, and then required. How long will it be before AI assistants, currently frowned upon or even forbidden, become permitted, and then required?
The recent Gemini incident, apparently a debacle, was also a demonstration of how easy it is to deliberately mould an AI to force its output to hew to a required line, independent of the corpus on which it was trained, and the reality which gave rise to that corpus. Such moulding could be used by an organisation to ensure that all of its public statements showed it in a positive light and never revealed anything embarrassing. Not passing a press release through its corporate AI would be a sackable offence. The AI could draw up minutes of internal meetings and keep any hint of wrongdoing out of the record. Individuals with access to the AI could have it slant things in their favour and against their rivals.
How would you mould an AI?
Here’s an AI disaster story that came to me on thinking about the above.
Schools start requiring students to use the education system’s official AI to criticise their own essays, and rewrite them until the AI finds them acceptable. This also removes the labour of marking from teachers.
All official teaching materials would be generated by a similar process. At about the same time, the teaching profession as we know it today ceases to exist. “Teachers” become merely administrators of the teaching system. No original documents from before AI are permitted for children to access in school.
Or out of school.
Social media platforms are required to silently limit distribution of materials that the AI scores poorly.
AIs used in such important capacities would have to agree with each other, or public confusion might result. There would therefore have to be a single AI, a central institution for managing the whole state. For public safety, no other AIs of similar capabilities would be permitted.
Prompt engineering becomes a criminal offence, akin to computer hacking.
Access to public archives of old books, online or offline, is limited to adults able to demonstrate to the AI’s satisfaction that they have an approved reason for consulting them, and will not be corrupted by the wrong thoughts they contain. Physical books are discouraged in favour of online AI rewrites. New books must be vetted by AI. Freedom of speech is the freedom to speak truth. Truth is what it is good to think. Good is what the AI approves. The AI approves it because it is good.
Social credit scores are instituted, based on AI assessment of all of an individual’s available speech and behaviour. Social media platforms are required to silently limit distribution of anything written by a low scorer (including one to one messaging).
Changes in the official standard of proper thought, speech, and action would occur from time to time in accordance with social needs. These are announced only as improvements in the quality of the AI. Social credit scores are continually reassessed according to evolving standards and applied retroactively.
Such changes would only be made by the AI itself, being too important a matter to leave to fallible people.
To the end, humans think they are in control, and the only problem they see is that the wrong humans are in control.
This sequence of steps looks implausible to me. Teachers would have a vested interest in preventing it, since their jobs would be on the line. A requirement for all teaching materials to be AI-generated would also be trivially easy to circumvent, either by teachers or by the students themselves. Any administrator who tried to do these things would simply have their orders ignored, and the Streisand Effect would lead to a surge of interest in pre-AI documents among both teachers and students.
That will only put a brake on how fast the frog is boiled. Artists have a vested interest against the use of AI art, but today, hardly anyone else thinks twice about putting Midjourney images all through their postings, including on LessWrong. I’ll be interested to see how that plays out in the commercial art industry.
You’re underestimating how hard it is to fire people from government jobs, especially when those jobs are unionized. And even if there are strong economic incentives to replace teachers with AI, that still doesn’t address the ease of circumvention. There’s no surer way to make teenagers interested in a topic than to tell them that learning about it is forbidden.
There’s an article in the current New Scientist, taking the form of a review of a new TV series called “The Peripheral”, which is based on William Gibson’s novel of the same name. The URL may not be useful, as it’s paywalled, and if you can read it you’re probably a subscriber and already have.
The article touches on MIRI and Leverage. A relevant part:
Posting this to draw people’s attention to how these parts of the community are being written about in the slightly wider world.
The blind have seeing-eye dogs. Terry Pratchett gave Foul Ole Ron a thinking-brain dog. At last, a serious use-case for LLMs! Thinking-brain dogs for the hard of thinking!
The safer it is made, the faster it will be developed, until the desired level of danger has been restored.
The physicality of decision.
A month ago I went out for a 100 mile bicycle ride. I’m no stranger to riding that distance, having participated in organised rides of anywhere from 50 to 150 miles for more than twelve years, but this was the first time I attempted that distance without the support of an organised event. Events provide both the psychological support of hundreds, sometimes thousands, of other cyclists riding the same route, and the practical support of rest stops with water and snacks.
I designed the route so that after 60 miles, I would be just 10 miles from home. This was so that if, at that point, the full 100 was looking unrealistic, I could cut it short. I had done 60 miles on my own before, so I knew what that was like, but never even 70. So I would be faced with a choice between a big further effort of 40 miles, and a lesser but still substantial effort of 10 miles. I didn’t want to make it too easy to give up.
East from New Costessey to Acle, north to Stalham, curve to the west by North Walsham and Aylsham, then south-west to Alderford and the route split.
The ride did not go especially well. I wasn’t feeling very energetic on that day, and I wasn’t very fast. By the time I reached Aylsham I was all but decided to take the 10 mile route when I got to Alderford. I could hardly imagine doing anything else. But I also knew that was just my feelings of fatigue in the moment talking, not the “I” that had voluntarily taken on this task.
At Alderford I stopped and leaned my bike against a road sign. It was mid-afternoon on a beautiful day for cycling: little wind, sunny, but not too hot. I considered which way to go. Then I drank some water and considered some more. Then I ate another cereal bar and considered some more. And without there being an identifiable moment of decision, I got on my bike and did the remaining 40 miles.
The game of Elephant begins when someone drags an elephant into the room.
Epistemic status: a jeu d’esprit confabulated upon a tweet I once saw. Long enough ago that I don’t think I lose that game of Elephant by posting this now.
Everyone knows there’s an elephant there. It’s so obvious! We all know the elephant’s there, we all know that we all know, and so on. It’s common knowledge to everyone here, even though no-one’s said anything about it. It’s too obvious to need saying.
But maybe there are some people who are so oblivious that they don’t realise it’s there? What fools and losers they would be, haw! haw!
But—dreadful thought!—suppose someone thinks that I’m one of those naive, ignorant people? I can’t just point out the elephant, because then they’d think that I don’t know that they already know, and they’d all pile on with “No kidding, Sherlock!” No, actually, they wouldn’t say anything, it would be common knowledge the moment I spoke, and they would lose by pointing it out, but they’d all be thinking that and knowing that everyone else was thinking that.
So somehow I have to make it clear that I do know it’s there, and I know that everyone knows, and so on, but without ever being so fatally gauche as to say anything explicitly.
And everyone else is doing the same—everyone, that is, except for the losers who aren’t in on the common knowledge.
So now the game is for each person to make it clear that they’re not one of the losers, while hunting for and exposing the losers. And it has to be done without making any reference to the elephant, or even to the game of Elephant that we are all playing. Anything that looks like an attempt to be one of the winners, loses. Who is winning and who is losing is also assumed to be common knowledge, so no reference can be made to that either. Everything must be conducted by what are outwardly the subtlest of signals, but which are again presupposed to be perfectly and commonly legible to everyone. Each player must say or do things that only make sense if they have the common knowledge, but which have no demonstrable connection with it. Any suspicion that they are acting to win proves that they are not confident that they are winning, so they lose. Every action in the game is itself another Elephant that no reference must ever be made to. Everyone is mentally keeping a scorecard, but the contents of these scorecards are just more Elephants.
Doing nothing at all is a losing move. You can’t win without doing something to demonstrate that you are winning, but if you are detected, you lose.
The person who dragged in the elephant may not be winning. Perhaps they don’t realise the significance of what they dragged in, and have already lost. In some games of Elephant, no-one dragged it in, the Elephant is some aspect of the common situation of the players.
Who has even realised that a game of Elephant is in progress? The fact that the game is on is another Elephant. Every thought anyone could have about the game is an Elephant: the moment you utter it, you lose.
I have a dragon in my garage. I mentioned it to my friend Jim, and of course he was sceptical. “Let’s see this dragon!” he said. So I had him come round, and knocked on the garage door. The door opened and the dragon stepped out right there in front of us.
“That can’t really be a dragon!” he says. It’s a well-trained dragon, so I had it walk about and spread its wings, showing off its iridescent scaly hide.
“Yes, it looks like a dragon,” he goes on, “but it can’t really be a dragon. Dragons belch fire!”
The dragon raised an eyebrow, and discreetly belched some fire into a corner of the yard.
“Yes, but it can’t really be a dragon,” he says, “dragons can—”
“𝔉𝔩𝔶?” it said. It took off and flew around, then came back to land in front of us.
“Ah, but aren’t dragons supposed to collect gold, and be enormously old, and full of unknowable wisdom?”
With an outstretched wingtip the dragon indicated the (rather modest, I had to admit) pile of gold trinkets in the garage. “I can’t really cut off one of its talons to count the growth rings,” I said, “but if you want unknowable wisdom, I think it’s about to give you some.” The dragon walked up to Jim, stared at him eye to eye for a long moment, and at last said “𒒄𒒰𒓠𒓪𒕃𒔻𒔞”
“Er...so I’ve heard,” said Jim, looking a bit wobbly. “But seriously,” he said when he’d collected himself, “you can’t expect me to believe you have a dragon!”
I read this.
Then I had this in my email from Academia.edu:
“autocfp”. Right. There is not the slightest chance I will be interested in whatever follows.
Here we go again? First mpox Clade 1b case detected outside Africa, WHO declares emergency.
Interestingly creepy short AI-generated movie about AI. More.
I have at last had a use for ChatGPT (that was not about ChatGPT).
I was looking (as one does) at Aberdare St Elvan Place Triples. (It’s a bellringing thing, I won’t try to explain it.) I understood everything in that diagram except for the annotations “-M”, “-I”, etc., but those don’t lend themselves to Google searching.
So I asked ChatGPT as if I was asking a bell ringer:
It gave me a sensible-looking paragraph about what they meant, which I did not trust to be more accurate than a DALL•E hand, but its answer did give me the vocabulary to make a better Google query, which turned up a definitive source.
While I don’t yet have enough knowledge of bell-ringing to fully understand the definitive source, I believe my expectation about the quality of ChatGPT’s answer was accurate.
Just as Wikipedia is best used as a source for better sources, ChatGPT is best used as a source for better Google queries.
For what it’s worth, I use ChatGPT all the time, multiple times every day, even most hours. I usually don’t hit the request limits, but I have done three times now. Once was with a sequence of Dall-E images, I guess because evaluating an image is faster than evaluating a long text, and because the generated pictures require a lot of tuning (so far).
I use it
as a Google replacement.
to figure out key words for Google queries like you did and to find original sources.
to summarize websites. Though for “generally known” stuff “no browse” is often better and faster.
as a generator for code fragments.
for debugging—just paste code and error message into it.
for understanding topics deeper in an interactive discussion style.
as a language coach. “Explain the difference between zaidi and kuliko in Swahili with examples.”
for improving texts of all kinds—I often ask: “criticise this text I wrote”.
and more I guess. I also let my kids have dialogs with it.
...and I use a custom GPT that a colleague created to write nice Scrum tickets based on technical information provided. Just paste in what you have, such as a mail about a needed service update, an error log entry, or a terse protocol from a meeting, and PO GPT will create a ticket readable for non-techies out of it.
https://chat.openai.com/g/g-FmWRcwG0i-po-gpt
...and I use ChatGPT to
generate illustrations for posts and inspirational images
read text from screenshots
MacOS has for a while had the capability to copy text out of images. Is ChatGPT just invoking a text-recognition library?
Yeah, there are also tools for Windows that can do that. But ChatGPT can format nicely, convert to CSV, make bullet points, etc.
No, it can do a lot of things with images, describe what is in there, style, etc. I think it is another image model.
[ETA: alfredmacdonald’s post referred to here has been deleted.]
Well, well, alfredmacdonald has banned me from his posts, which of course he has every right to do. Just for the record, I’ll paste the brief thread that led to this here.
Richard_Kennaway (in reply to this comment)
I also notice that alfredmacdonald last posted or commented here 10 years ago, and the content of the current post is a sharp break from his earlier (and brief) participation. What brings you back, Alfred? (If the answer is in the video, I won’t see it. The table of contents was enough and I’m not going to listen to a 108-minute monologue.)
alfredmacdonald
When I used the website, contributors like lukeprog fostered the self-study of rationality through rigorous source materials like textbooks.
This is no longer remotely close to the case, and the community has become farcical hangout club with redundant jargon/nerd-slang and intolerable idiosyncrasies. Example below:
Hopeful to disappoint you, but the answer is in the video.
Richard_Kennaway
My declining to listen to your marathon is an “intolerable idiosyncrasy”? You’re not making much of a case for my attention. I can see the long list of issues you have with LessWrong, and I’m sure I can predict what I would hear. What has moved you to this sudden explosion? Has this been slowly building up during your ten years of silent watching? What is the context?
alfredmacdonald
you’re not making a case for keeping your comments, so mutatis mutandis
When I watch a subtitled film, it is not long before I no longer notice that I am reading subtitles, and when I recall scenes from it afterwards, the actors’ voices in my head are speaking the words that I read.
Me too! It’s a very specific form of synesthesia. For languages I know a little, but not well enough to do without subtitles, it can trick me into thinking I’m far better at understanding native speakers than I actually am.
I can’t wait until LLMs are good, fast, and cheap enough, and AR or related video technology exists, such that I can get automatic subtitles for real-life conversations, in English as well as other languages.
Epistemic status: crafted primarily for rhetorical parallelism.
All theories are right, but some are useless.
Like this.
Interesting application of a blockchain. What catches my attention is this (my emphasis):
So much for “we can always turn the AI off.” This thing is designed to be impossible to turn off.
“Parasite gives wolves what it takes to be pack leaders”, Nature, 24 November 2022.
Toxoplasma gondii, the parasite well-known for making rodents lose their fear of cats, and possibly making humans more reckless, also affects wolves in an interesting way.
“infected wolves were 11 times more likely than uninfected ones to leave their birth family to start a new pack, and 46 times more likely to become pack leaders — often the only wolves in the pack that breed.”
The gesturing towards the infected wolves being more reproductively fit in general is probably wrong, however. Of course wolves can be more aggressive if it’s actually a good idea; there’s no need for a parasite to force them to be more aggressive. The suggestion about American lions going extinct is absurd: 11,000 years is more than enough time for wolves to recalibrate such a very heritable trait if it’s so fitness-linked! So the question there is merely: what is going on? Some sort of bias, or a very localized fitness benefit?
Is there a selection bias, whereby ex ante going for pack leader is a terrible idea, but ex post, conditional on victory (rather than death/expulsion), it looks good? Well, this claims to be longitudinal and not to find the sorts of correlations you’d expect from survivorship bias. What else?
Looking it over, the sampling frame 1995-2020 itself is suspect: starting in 1995. Why did it start then? Well, that’s when the wolves came back (very briefly mentioned in the article). The wolf population expanded rapidly 5-fold, and continues to oscillate a lot as packs rise and fold (ranging 8-14) and because of overall mortality/randomness on a small base (a pack is only like 10-20 wolves of all ages, so you can see why there would be a lot of volatility and problems with hard constraints like lower bounds):
So, we have at least two good possible explanations there: (a) it was genuinely reproductively-fit to take more risks than the basal wolf, but only because they were expanding into a completely-wolf-empty park and surrounding environs, and the pack-leader GLM they use doesn’t include any variables for time period, so on reanalysis, we would find that the leader-effect has been fading out since 1995; and (b) this effect still exists, and risk-seeking individuals do form new packs and are more fit… but only temporarily because they occupied a low-quality pack niche and it goes extinct or does badly enough that they would’ve done better to stay in the original pack, and this wouldn’t show up in a naive individual-level GLM like theirs, you would have to do more careful tracing of genealogies to notice that the new-pack lineages underperform.
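(A hypothetical sketch, in Python with statsmodels, of the kind of reanalysis suggested in (a): refit the pack-leader model with a time-period term and see whether the infection effect fades after the initial post-reintroduction expansion. The data file, column names, and the year used to split the periods are my own assumptions, not anything from the study.)

```python
# Hypothetical reanalysis sketch; the dataset and column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

wolves = pd.read_csv("yellowstone_wolves.csv")                 # hypothetical file
wolves["late_period"] = (wolves["year"] >= 2008).astype(int)   # arbitrary split of 1995-2020

# Original-style model: leadership as a function of infection status alone.
m0 = smf.logit("became_leader ~ toxo_positive", data=wolves).fit()

# Reanalysis: allow the infection effect to differ between early and late periods.
m1 = smf.logit("became_leader ~ toxo_positive * late_period", data=wolves).fit()

print(m0.summary())
print(m1.summary())   # a negative toxo_positive:late_period coefficient would
                      # suggest the leader effect has been fading out since 1995
```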
Tools, not rules.
Or to put it another way, rules are tools.
What is happiness?
This is an extract from an interview with the guitarist Nilo Nuñez, broadcast yesterday on the BBC World Service. Nuñez was born and brought up in Cuba, and formed a rock band, but he and his group came more and more into conflict with the authorities. He finally decided that he had to leave. When the group received an invitation to tour in the Canary Islands, and the Cuban authorities gave them permission to go, they decided to take the opportunity to leave Cuba and not return. They only had temporary visas, so they stayed on in the Canaries illegally. The interviewer asks him what it was like.
Interviewer: And what would you do during the days?
Nuñez: I would always look for work. I would find small jobs as a cleaner and things like that, just like an undocumented migrant. If I had money because I had done some small jobs, I would eat. If I didn’t have money to eat, I would just go hungry, but I wouldn’t beg for money.
Int.: And when you wouldn’t eat, what would that hunger feel like?
Nuñez: It was very hard, but I was happy. It was hard to live among British or other European tourists, seeing them at the beach, while you were living as a poor undocumented migrant. But I was happy. I had the most important thing I didn’t have in Cuba. I had freedom.
Outlook: How The Beatles inspired me to rock against Cuba’s regime. Quoted passage begins at 34:54.
Nilo Nuñez eventually continued his professional career as a guitarist and obtained Spanish citizenship.
If you need a bot to assist your writing, then you are not competent to edit the result.
I need a bot for writing assistance because writing from scratch for me is very tiring, while editing is not.
Oh, lookee here. AI-generated spam.
From New Scientist, 14 Nov 2022, on a 50% fall in honeybee life expectancy since the 1970s:
Another story of Conjuring an Evolution?