Hegel—A Very Short Introduction by Peter Singer—Book Review Part 1: Freedom
Hegel is a philosopher who is notorious for being incomprehensible. In fact, for one of his books he signed a contract that assigned a massive financial penalty for missing the publishing deadline, so the book ended up being a little rushed. While there was a time when he was dominant in German philosophy, he now seems to be held in relatively poor regard and his main importance is seen to be historical. So he’s not a philosopher that I was really planning to spend much time on.
Given this, I was quite pleased to discover this book promising to give me A Very Short Introduction, especially since it is written by Peter Singer, a philosopher who writes and thinks rather clearly. After reading this book, I still believe that most of what Hegel wrote was pretentious nonsense, but the one idea that struck me as the most interesting was his conception of freedom.
A rough definition of freedom might be ensuring that people are able to pursue whatever it is that they prefer. Hegel is not a fan of abstract definitions of freedom which treat all preferences the same and don’t enquire where they come from.
From his perspective, most of our preferences are purely a result of the context in which we exist, and so such an abstract definition of freedom is merely the freedom to be subject to social and historical forces. Since we did not choose our desires, he argues that we are not free when we act from our desires. Hegel argues that, “every condition of comfort reveals in turn its discomfort, and these discoveries go on for ever”. One such example would be the marketing campaigns to convince us that sweating was embarrassing (https://www.smithsonianmag.com/…/how-advertisers-convinced…/).
This might help clarify further: Singer ties this to the more modern debate between Radical Economists and Liberal Economists. Liberal economists use how much people pay as a measure of how strong their preferences are and refuse to get into the question of whether any preferences are more valuable than any others, seeing this as ideological. Radical economists argue that many of our desires are a result of capitalism. They would say that if I convince you that you are ugly and then I sell you $100 of beauty products to restore your confidence, then I haven’t created $100 worth of value. They argue that refusing to value any preference above any other preference is an ideological choice in and of itself, and that there is no way to step outside of ideology.
If pursuing our desires is not freedom, what is? Kant answers that freedom is following reason and performing your duty. This might not sound much like freedom, quite the opposite in fact, but for Kant, not following your reason was allowing yourself to be a slave to your instincts. Here’s another argument: perhaps a purely rational being wouldn’t desire the freedom to shirk their duty, so insofar as this is freedom, it might not be of a particularly valuable kind; and if you think this imposes on your freedom, that is because of your limited perspective.
Hegel thought that Kant’s answer was a substantial advance, but he also thought it was empty of content. Kant viewed duty in terms of the categorical imperative, “Do not act except if you could at the same time will that it would become a universal law”. Kant would say that you shouldn’t steal because you couldn’t will a world where everyone stole from everyone else. But mightn’t some people be fine with such a world, particularly if they thought they might come out on top? Even if you don’t want to consider people with views that extreme, you can almost always find a universal to justify whatever action you want. Why should the universal that the thief has to accept be “Anyone can steal from another person” rather than “Anyone can steal from someone who doesn’t deserve their wealth”? (See section III of You Kant Dismiss Universalizability.) Further, Kant’s absolutist form of morality (no lying, even to save a friend from a murderer) seems to require us to completely sacrifice our natural desires.
Hegel’s solution to this was to suggest the need for what he calls an organic community: a community that is united in its values. He argues that such communities shape people’s desires to such an extent that most people won’t even think about pursuing their own interests, and that this resolves the opposition between morality and self-interest that Kant’s vision of freedom creates. However, unlike the old organic communities, which had somewhat arbitrary values, Hegel argued that the advance of reason meant that the values of these communities also had to be based on reason, otherwise freethinking individuals wouldn’t align themselves with the community.
Indeed, this is the key part of his much-maligned argument that the Prussian State was the culmination of history. He argued that the French Revolution had resulted in such bloodshed because it was based on an abstract notion of freedom, which was pursued to the extent that all the traditional institutions were bulldozed over. Hegel argued that the evolution of society should build upon what already exists and not ignore the character of the people or the institutions of society. For this reason, his ideal society would have maintained the monarchy, but with most of the actual power being delegated to the houses, except in certain extreme circumstances.
I tend to think of Hegel as primarily important for his contributions to the development of Western philosophy (so even if he was wrong on details he influenced and framed the work of many future philosophers by getting aspects of the framing right) and for his contributions to methodology (like standardizing the method of dialectic, which on one hand is “obvious” and people were doing it before Hegel, and on the other hand is mysterious and the work of experts until someone lays out what’s going on).
Which aspects of framing do you think he got right?
“In more simplistic terms, one can consider it thus: problem → reaction → solution. Although this model is often named after Hegel, he himself never used that specific formulation. Hegel ascribed that terminology to Kant. Carrying on Kant’s work, Fichte greatly elaborated on the synthesis model and popularized it.”—Wikipedia; so Hegel deserves less credit than he is usually granted.
I don’t recall anymore, it’s been too long for me to remember enough specifics to answer your question. It’s just an impression or cached thought I have that I carry around from past study.
“The history of all hitherto existing society is the history of class struggles. Freeman and slave, patrician and plebeian, lord and serf, guild-master and journeyman, in a word, oppressor and oppressed, stood in constant opposition to one another, carried on an uninterrupted, now hidden, now open fight, that each time ended, either in the revolutionary reconstitution of society at large, or in the common ruin of the contending classes”
Overall summary: Given the rise of socialism in recent years, now seemed like an appropriate time to review the Communist Manifesto. At times I felt that Marx’s writing was keenly insightful; at other times I felt he was ignorant of basic facts; and at other times I felt that he held views that were reasonable at the time, but whose flaws are now obvious. In particular, I found the first half much more engaging than I expected because, say what you like about Marx, he’s an engaged and poetic writer. Towards the end, the focus shifted to particular time-bound political disputes which I had neither the knowledge to understand nor the interest to acquire. At the start, I felt that I already had a decent grasp of the communist impulse, and I haven’t become any more favourable to communism, but reading this rounded out a few more details of the communist critique of capitalism.
Capitalism: Despite being its most famous critic, Marx has a strong appreciation for the power of capitalism. He writes about it sweeping away all the old feudal bonds and how it draws even the most “barbarian” nations into civilisation. He writes about it stripping every previously admired occupation of its halo and turning its practitioners into “paid wage labourers”; undoubtedly some professions are affected far too much by market concerns, but this has to be weighed against the increase in access that has been brought. He even writes that it has accomplished “wonders far exceeding the Egyptian Pyramids, Roman Aqueducts and Gothic Cathedrals”, and his willingness to acknowledge this in such strong terms increased my respect for him. Marx can’t see capitalism as anything but exploitation; for those who would answer that it lifts all boats, I don’t think he has a strong reply apart from denying that this occurs. To steelman him: even if people are better off financially, they can be worse off overall if they are now working only the simplest, most monotonous jobs. That would have been a stronger argument when much more work was in factories, but with increasing automation, these are precisely the jobs that are disappearing. Another argument would be that over time the capitalists who survive will be those who are best at lowering wage costs, by minimising the use of labour and ensuring that the work is set up to use as much unskilled labour as possible. So even if people were financially better off in the short term, they might be worse off over the long term. However, history seems to have shown the opposite, with modern wages far greater than in pre-industrial, pre-capitalist times.
Class warfare: Marx made several interesting comments on this. How the bourgeoisie were often empowered by the monarchy to limit the power of the nobility. That the proletariat should be thought of as a new class, separate from the peasants, since their interests diverge, with the latter more likely to try rolling things back than to support creating a new order. How the bourgeoisie would seek help from the proletariat against aristocrats, dragging the proletariat into the political arena. How the proletariat were not unified in Marx’s time, but how improved communication provided the means for national unification. And that a section of the bourgeoisie who were threatened with falling into the proletariat would join with the proletariat. I definitely think class analysis has value, but I worry that Marxists often seem unable to see things in any way other than class. We are members of classes, that is true, but we are also individuals, and no one way of carving up the space captures all of reality. For example, Marx includes masters/apprentices in his oppressor/oppressed hierarchy, even though most of the latter will eventually become the former.
Personal property: It was interesting hearing him talk about abolishing personal property, as that is an element of the original communism that seems to be de-emphasised these days, with the focus more on seizing the means of production. I expect that this is related to a change in context; Marx was able to write that private property is done away with for 9/10s of the population; I don’t know how true it was at the time, but it certainly isn’t true today. Nonetheless, I found it interesting that his desire to abolish bourgeois property was similar to the bourgeois desire to abolish feudal property; both believe that the kind of property they want to abolish is based upon exploitation and unearned privilege.
False consciousness: For Marx, the ideas that are dominant in society are just the ideas of the elites. Law, morality and religion are just prejudices of the bourgeoisie. People don’t structure society based upon ideas; rather, the ideas are determined by the structure of society and what allows society to be as productive as possible. Marx doesn’t provide an exact chain of causation, but perhaps he believes that the elites benefit from increases in production and therefore always push society in that direction, in order to realise their short-term interests. The question then arises: if everyone else has a false consciousness, why doesn’t Marx also? Again speculating, perhaps Marx would say that when a system is on its last legs, the flaws and contradictions become too large for the elite ideology to keep covered up. Alternatively, perhaps it is only the dominant ideas in society that are determined by the structure of society, and other ideas can exist, just without being allowed any real influence. I still feel Marx overstates the power of false consciousness, but at least I now have an answer to this question that’s somewhat reasonable.
It is not obvious to me from reading the text whether you are aware of the distinction between “private property” and “personal property” in Marxism. So, just to make sure: “private property” refers to the means of production (e.g. a factory), and “personal property” refers to things that are not means of production (e.g. a house where you live, clothes, food, toys).
The ownership of “private property” should be collectivized (according to Marx/ists), because… simply said, you can use the means of production to generate profit, then use that profit to buy more means of production, yadda yadda, the rich get exponentially richer on average and the poor get poorer.
With “personal property” this effect does not happen; if you have one table and I have two tables, there is no way for me to use this advantage to generate further tables, until I become the table-lord of the planet.
(There seem to be problems with this distinction. For example, things can be used either productively or unproductively; I can use my computer to create software or browse social networks. Some things can be used productively in unexpected ways; even the extra table could be used in a workshop to produce stuff. I am not a Marxist, but I suppose the answer would probably be something like “you are allowed to browse the web on your personal computer, but if we catch you privately producing and selling software, you get shot”.)
Marx was able to write that private property is done away with for 9/10s of the population; I don’t know how true it was at the time, but it certainly isn’t true today.
So, is this the confusion of Marxist terms, or do you mean that today more than 10% of people own means of production? In which sense? (Not sure if Marx would also count indirect ownership, such as having your money in an index fund, which buys shares of companies, which own the means of production.)
Did Marx actually argue for abolishing “personal property” (according to his definition, i.e. ownership of houses or food)?
For many people nowadays, their own brain is their means of production, often assisted by computers and their software, but those are cheap compared with what can be earned by using them. Marx did not know of such things, of course, but how do modern Marxists view this type of private ownership of the means of production? For that matter, how did Marx view a village cobbler who owned his workshop and all his tools? Hated exploiter of his neighbours? How narrow was his motte here?
I once talked about this with a guy who identified as a Marxist, though I can’t say how representative his opinions are of the rest of his tribe. Anyway… he told me that in the trichotomy of Capital / Land / Labor, human talent is economically most similar to the Land category. This is counter-intuitive if you take the three labels literally, but if you consider their supposed properties… well, it’s been a few decades since I studied economics, but roughly:
The defining property of Capital is fungibility. You can use money to buy a tech company, or an airplane factory, or a farm with cows. You can use it to start a company in USA, or in India. There is nothing that locks money to a specific industry or a specific place. Therefore, in a hypothetical perfectly free global market, the risk-adjusted profit rates would become the same globally. (Because if investing the money in cows gives you 5% per annum, but investing money in airplanes gives you 10%, people will start selling cow farms and buying airplane factories. This will reduce the number of cow farms, thus increasing their profit, and increase the competition in the airplane market, thus reducing their profit, until the numbers become equal.) If anything is fungible in the same way, you can classify it as Capital.
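To make the arbitrage logic concrete, here is a toy sketch (my own construction, not the commenter’s) in which capital flows from the lower-return sector to the higher-return sector until risk-adjusted returns equalize; the diminishing-returns function and the numbers are illustrative assumptions only.

```python
# Toy sketch of the capital-fungibility argument: capital flows toward the
# higher-return sector until returns equalize. A diminishing-returns rate is
# assumed, so adding capital to a sector lowers its rate of return.

def rate(capital: float, productivity: float) -> float:
    """Assumed rate of return: more capital chasing a sector -> lower rate."""
    return productivity / capital

cows, planes = 100.0, 100.0                          # capital deployed in each sector
for _ in range(10_000):
    if rate(cows, 5.0) < rate(planes, 10.0):
        cows, planes = cows - 0.01, planes + 0.01    # sell cow farms, buy factories
    else:
        cows, planes = cows + 0.01, planes - 0.01    # the reverse flow

print(round(rate(cows, 5.0), 3), round(rate(planes, 10.0), 3))  # both ~0.075
```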
The archetypal example of Labor is a low-qualified worker, replaceable at any moment by a random member of the population. Which also means that in a free market, all workers would get the same wage; otherwise the employers would simply fire the more expensive ones and replace them with the cheaper ones. However, unlike money, workers are typically not free to move across borders, so you get different wages in different countries. (You can’t build a new factory in the middle of the USA and move ten thousand Indian workers there to work for you. You could do it the other way round: move the money, and build the factory in India instead. But if there are reasons to keep the factory in the USA, you are stuck with American workers.) But within a country it means that as long as a fraction of the population is literally starving, you can hire them for the smallest amount of money they can survive on, which sets the equilibrium wage at that level. Because those starving ones won’t say no, and anyone who wants to be paid more will be replaced by those who accept the lower wage. Hypothetically, if you had more available job positions than workers, the wages would go up… but according to Malthus, this lucky generation of workers would simply have many kids, which would fix this exception in the next generation. -- Unless the number of job positions for low-qualified workers can keep growing faster than the population. But even in that case, the capitalists would probably successfully lobby the government to fix the problem by letting many immigrants in. Somewhere on the planet, there are enough starving people. Also, if the working people are paid just as much as they need to survive, they can hardly save money, so they can’t get out of this trap.
Now the category of Land contains everything that is scarce, so it usually goes to the highest bidder. But no matter how much rent you get for the land, you cannot use the rent to generate more of it. So, in the long term the land will get even more expensive, and a lot of increased productivity will be captured by the land owners.
From this perspective, being born with an IQ 200 brain is like having inherited a gold mine, which would belong to the Land category. Some people need you for their business, and they can’t replace you with a random guy on the street. The number of potential jobs for IQ 200 people exceeds the number of IQ 200 people, so the employers must bid for your brain. But it is different from the land in the sense that it’s you who has to work using your brain; you can’t simply rent your brain to a factory and let some cheap worker operate it. Perhaps this would be equivalent to a magical gold mine, where only the owner can enter, so if he wants to profit from owning the gold mine, he has to also do all the work. Nonetheless, he gets extra profit from the fact that he owns the gold mine. So it’s like he offers the employer a package consisting of his time + his brain. And his salary could be interpreted as consisting of two parts: the wage, for the time he spends using his brain (which is numerically equivalent to how much money a worker would get for working the same amount of time); and the rent for the brain, that is, the extra money compared to the worker. (For example, suppose that workers in your country are paid $500 monthly, and software developers are paid $2000 monthly. That would mean that for an individual software developer, the $500 is the wage for his work, and $1500 is the rent for using his brain.) That means that extraordinarily smart employees are (smaller) part working class, and (greater) part rentier class. They should be reminded that if, one day, enough people become equally smart (whether through eugenics, genetic engineering, selective immigration, etc.), their income will also drop to the smallest amount of money they can survive with.
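Here is a minimal sketch of the wage/rent decomposition described above; the figures are the example numbers from the comment, while the function and its name are illustrative assumptions of mine rather than anything from Marx or the commenter.

```python
# Minimal sketch of the wage/rent decomposition described above. The figures
# are the example numbers from the comment ($500 worker wage, $2000 developer
# salary); the function and its name are illustrative, not from Marx.

def decompose_salary(salary: float, baseline_wage: float) -> dict:
    """Split a salary into a 'wage' part (what an interchangeable worker
    would earn for the same hours) and a 'rent' part (the premium for
    owning a scarce skill/brain)."""
    rent = max(salary - baseline_wage, 0.0)
    return {"wage": salary - rent, "rent": rent}

print(decompose_salary(2000.0, 500.0))   # {'wage': 500.0, 'rent': 1500.0}
```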
As I said, no idea whether this is an orthodox or a heretical opinion within Marxism.
IANAM[1], but intuitively it seems to me that an exception ought to be made (given the basic idea of Marxist theory) for individuals who own means of production the use of which, however, does not involve any labor but their own.
So in the case of the village cobbler, sure, he owns the means of production, but he’s the only one mixing his labor with the use of those tools. Clearly, he can’t be exploiting anyone. Should the cobbler take on an assistant (continuing my intuitive take on the theory), said assistant would presumably have to now receive some suitable share in the ownership of the workshop/tools/etc., and in the profits from the business (rather than merely being paid a wage), as any other arrangement would constitute alienation from the fruits of his (the assistant’s) labor.
On this interpretation, there does not here seem to be any contradiction or inconsistency in the theory. (I make no comment, of course, on the theory’s overall plausibility, which is a different matter entirely.)
Book Review: So Good They Can’t Ignore You by Cal Newport:
This book makes an interesting contrast to The 4 Hour Workweek. Tim Ferriss seems to believe that the purpose of work should be to make as much money as possible in the least amount of time and that meaning can then be pursued during your newly available free time. Tim gives you some productivity tips in the hope that they will make you valuable enough to negotiate flexibility in terms of how, when and where you complete your work, plus some dirty tricks as well.
Cal Newport’s book is similar in that it focuses on becoming valuable enough to negotiate a job that you’ll love, and it downplays the importance of pursuing your passions in your career. However, while Tim extolls the virtues of being a digital nomad, Cal Newport emphasises self-determination theory: autonomy, competence and relatedness. That is, the freedom to decide how you pursue your work, the satisfaction of doing a good job and the pleasure of working with people who you feel connected to. He argues that these traits are rare and valuable, and so if you want such a job you’ll need rare and valuable skills to offer in return.
That’s the core of his argument against pre-existing passion; passions tend to cluster into a few fields such as music, arts or sports and only a very few people can ever make these the basis of their careers. Even for those who are interested in less insanely competitive pursuits such as becoming a yoga instructor or organic farmer, he cautions against pursuing the dream of just quitting your job one day. That would involve throwing away all of the career capital that you’ve accumulated and hence your negotiating power. Further, it can easily lead to restlessness, that is, jumping from career to career all the while searching for the “one” that meets an impossibly high bar.
Here are some examples of the kind of path he endorses:
Someone becoming an organic farmer after ten years of growing and selling food on the side, starting in high school. Lest this be seen as a confirmation of the passion hypothesis, this was initially just to make some money
A software tester making her way up to head of testing, to the point where she could demand to reduce her hours to thirty per week and study philosophy
A marketer who gained such a strong reputation that he was able to form his own sub-agency within the bigger agency and then eventually form his own completely independent operation
Cal makes a very strong argument. When comparing pursuing a passion to more prosaic career paths, we often underestimate how fulfilling the latter might eventually become if we work hard and use our accumulated career capital to negotiate the things that we truly want. This viewpoint resonates with me as I left software to study philosophy and psychology without fully exploring options related to software. I now have a job that I really enjoy, as it offers me a lot of freedom and flexibility.
One of the more compelling examples is Cal’s analysis of Steve Jobs. We tend to think of Jobs’ success as a prototypical case of following your passion, but his life shows otherwise. Jobs’ entry into technology (working for Atari) was based upon the promise of a quick buck. He’d been travelling around India and needed a real job. Jobs was then involved in a timesharing company, but he left for a commune without telling the others and had been replaced by the time he made it back. So merely a year before he started Apple, he was hardly passionate about technology or entrepreneurship. This passion seems to have only developed as he became more successful.
This is prototypical of Cal’s theory: instead of leveraging passion to become So Good They Can’t Ignore You (TM), he believes that if you become So Good They Can’t Ignore You (TM), passion will follow. As evidence, Cal notes that people are often passionate about many different things at different times, including things they definitely weren’t passionate about before. He suggests this is indicative of our ability to develop passions under the right circumstances.
Personally, I feel that the best approach will vary hugely depending on individual circumstances, but I suspect Cal is sadly right for most people. Nonetheless, Cal lists three exceptions. A job or career path is not suitable for his strategy if there aren’t opportunities to distinguish yourself, if it is pointless or harmful to society, or if it requires you to work with people you hate.
Towards the end of the book, Cal focuses on strategies for becoming good at what you do. While this section wasn’t bad, I didn’t find it particularly compelling either. I wish I’d just read the start of the book which covers his case against focusing on pre-existing passion, as that was by far the most insightful and original part of the book for me. Perhaps the most interesting aspect was how he found spending 14 hours of focused attention deconstructing a key paper in his field to have been a valuable use of time. I was surprised to hear that it paid off in terms of research opportunities, but I suppose it isn’t so implausible that such projects could pay off if you picked an especially important paper.
Further notes: - If you are going to read only this or The 4 Hour Workweek, I’d suggest this one to most people. I feel that this one is less likely to be harmful and is applicable to a broader range of people, many of whom won’t immediately have the career capital to follow Tim’s advice. On the other hand, Tim’s book might be more useful if, unlike me, you don’t need to be convinced of Cal’s thesis. - Cal points out that if you become valuable enough to negotiate more freedom, then you also become valuable enough that people will want to stop you. The challenge is figuring out whether you have sufficient career capital to overcome this resistance. Cal suggests not pursuing control without evidence that people are willing to pay you, either in money or with something else valuable; I find his position reductive and insufficiently justified. - Cal believes that it is important to have a mission for your career, but that it is hard to pick a mission without already being deep inside a field. He notes that discoveries are often made independently and theorises that this is because a discovery often isn’t likely or even possible until certain prerequisites are in place, such as ideas, technologies or social needs. It’s only when you are at the frontier that you have sufficient knowledge to see and understand the next logical developments.
As I said before, I’ll be posting book reviews. Please let me know if you have any questions and I’ll answer them to the best of my ability.
Book Review: The AI does not hate you by Tom Chivers
The title of this book comes from a quote by Eliezer Yudkowsky which reads in full: “The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else”. This book covers not only potential risks from AI, but also the rationalist community from which this concern evolved, and it touches on the effective altruism movement.
This book fills something of a gap in the book market; when people are first learning about existential risks from AI, I usually recommend the two-part Wait But Why post (https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html) and then I’m not really sure what to recommend next. The Sequences are ridiculously long and Bostrom’s Superintelligence is a challenging read for those not steeped in philosophy and computer science. In contrast, this book is much more accessible and provides the right level of detail for a first introduction, rather than for someone who has already decided to try entering the field.
I mostly listened to this book to see if I could recommend it. Most of the material was familiar, but I was also pleasantly surprised a few times to hear a new take (at least to me). It was engaging and well-written throughout. Regarding what’s covered: there’s an excellent introduction to the alignment problem; the discussion of Less Wrong mostly focuses on cognitive biases, but also covers a few other key concepts like the Map and Territory and Bayesianism; the Center for Applied Rationality is mostly reduced to just double crux; Slatestarcodex is often quoted, but not a focus; and Effective Altruism isn’t the focus, but there’s a good general introduction. I also thought he dealt well with some of the common criticisms of the community.
Even though there are notable omissions, these are understandable given the need to keep the book to a reasonable length. It might have been possible to capture the flavour of the community more fully, but given how hard it is to describe the essence of a community with such broad interests, I think he did an admirable job. All in all, this is an excellent introduction to the topic if you’ve been hearing about AI Safety or Less Wrong and want to dive in deeper.
There is a world that needs to be saved. Saving the world is a team sport. All we can do is to contribute our part of the puzzle, whatever that may be and no matter how small, and trust in our companions to handle the rest. There is honor in that, no matter how things turn out in the end.
I have no interest in honor if it’s celebrated on a field of the dead. Virtue ethics is fine, as long as it’s not an excuse to not figure out what needs doing and how it’s going to get done.
Doing one’s own part and trusting that the other parts are done by anonymous unknown others is a very silly coordination strategy. We need plans that amount to success, not just everyone doing whatever sounds nice to them.
Edit: I very much agree that saving the world is a team sport. Perhaps it’s relevant that successful teams always do some planning and coordinating.
It’s at times like these that I absolutely love the distinction between “karma” and “agreement” around here. +1 for the former, as per the overall sentiment. −1 for the latter, as per the sheer nonsensical-ity of the scale of the matter.
The “world” doesn’t need “saving”. Never did. Never will. If for no other reason than there is no “one” world to begin with. What you think about when mentioning the “world” will be drastically different from what I have in mind, from what Eliezer has in mind, from what anyone else around here has in mind.
Our brains can only ever hold such a tiny amount of information in our short-term storage, that to even hope it ever represents any significant portion of the “world” itself is laughable. Even your long-term storage / episodic + semantic memory only ever came in contact with such a tiny portion of the “world”.
You can’t “save” what you barely “know” to begin with.
Yet there’s a deeper rabbit hole still.
When you say “save the world” you likely mean either “saving our local ecosystem” (as in: all the biological forms of self-organizing matter, as you know it), “saving our species” (Homo Sapiens, first and foremost), or “saving your world” (as in: the part of reality you have personally grown up in, conditioned yourself to, assimilated with, and currently project onto the rest of real world as the likely only world, to begin with—a.k.a. Typical Mind Fallacy).
The “world” doesn’t need “saving”, though. It came before you. It will persist after you. Probably. Physics. Anyhow.
What may need some “help” is society. Not “the” abstract, ephemeral, all-encompassing, and thus absolutely void of any and all meaning to begin with, “society”. But the society made out of “people”. As in: “individuals”. Living in their own “world”. Only ever coming in contact with <1% of the information you’ve likely come into contact with, so far.
They don’t need your attempts at “saving” them, either. What they need is specific solutions to specific problems within specific domains of specific kind of relationship to the domains, closely/farther adjacent to it.
You will never solve any of them. Unless you stop throwing around phrases like “saving the world”, in the first place. The world came into being via a specific kind of process. It is now maintained by specific kinds of feedback loops, recurrent cycles, incentive structures, reward/punishment mechanisms driving mostly unconscious decision-making processes, and the individual habits of each and every individual operating within their sphere of influence.
You want to help? Figure out what kind of incremental changes you can begin to introduce in any of them, in order to begin extinguishing the sort of problems you’ve now elevated to the rank of “saving-worthy” in your own head. Note that, in all likelihood, by extinguishing one you will merrily introduce a whole bunch of others—something you won’t get to discover until much later on. Yet that is, realistically, what you can actually go on to accomplish.
“Saving the world”? Please. Do you even know what’s exactly going on in the opposite side of the globe today?
Great sentiment. Horrible phrasing. Nothing personal. “Helping people” is a team’s sport.
Side note: are these quick takes turning into a new Twitter/X feed? Gosh, please don’t. Please!
You want to help? Figure out what kind of incremental changes you can begin to introduce in any of them, in order to begin extinguishing the sort of problems you’ve now elevated to the rank of “saving-worthy” in your own head. Note that, in all likelihood, by extinguishing one you will merrily introduce a whole bunch of others—something you won’t get to discover until much later on. Yet that is, realistically, what you can actually go on to accomplish.
I read this paragraph as saying ~the same thing as the original post in a different tone
We know well enough what people mean by “world”—the stuff they care about. The fact that physics keeps on happening if humanity is snuffed out is no comfort at all to me or to most humans.
Arguing epistemology is not going to prevent a nuclear apocalypse or us being wiped out by the new intelligent species we are inventing. The fact that you don’t know what’s happening on the other side of the world has no bearing on existential dangers facing those people. That’s what I mean by saving the world, and I expect what the author meant. This is a different thing than just helping people by your own values and estimates.
I very much agree that pithy mysterious statements for others to argue over is not a good use of the quick takes here.
This is the kind of book that you either love or hate. I found value in it, but I can definitely understand the perspective of the haters. First off: the title. It’s probably one of the most blatant cases of over-promising that I’ve ever seen. Secondly, he’s kind of a jerk. A number of his tips involve lying, and in school he had a strategy of interrogating his lecturers in detail when they gave him a bad mark so that they’d think very carefully before assigning him a bad grade. And of course, while drop-shipping might have been an underexploited strategy at the time when he wrote the book, it’s now something of a saturated market.
On the plus side, Tim is very good at giving you specific advice. To give you the flavour, he advises the following policies for running an online store: avoid international orders; no expedited or overnight shipping; two options only—standard and premium; no cheque or Western Union; no phone number if possible; minimum wholesale order with tax ID and faxed-in order form, etc. Tim is extremely process-oriented and it’s clear that he has deep expertise here and is able to share it unusually well. I found it fascinating to see how he thought, even though I don’t have any intention of going into this space.
This book covers a few different things: - Firstly, he explains why you should aim to have control over when and where you work. Much of this is about cost, but it’s also about the ability to go on adventures, develop new skills and meet people you wouldn’t normally meet. He makes a good case and hopefully I can confirm whether it is as amazing as he says soon enough - Tim’s philosophy of work is that you should try to find a way of living the life you want to live now. He’s not into long-term plans that, in his words, require you to sacrifice the best years of your life in order to obtain freedom later. He makes a good point for those with enough career capital to make it work, but it’s bad advice for many others who decide to just jump on the travel blogging or drop-shipping train without realistic expectations of how hard it is to make it in those industries - Tim’s productivity advice focuses on ruthlessly (and I mean ruthlessly) minimising what he does to the most critical by applying the 80⁄20 rule. For example, he says that you should have a todo list and a not-todo list. He says that your todo list shouldn’t have more than two items and you should ask yourself, “If this was the only thing I accomplished today, would I be satisfied?”. - A large part of minimising your work involves delegating tasks to other people and Tim goes into detail about how to do this. He is a big fan of virtual assistants, to the point of even delegating his email. - Lots of this book is business advice. Unlike most businesses, Tim isn’t optimising for making the most money, but for making enough money to support his lifestyle while taking up the least amount of his time. I suspect that this would be great advice for many people who already own a business - Tim also talks about how to figure out what to do with your spare time if you manage to obtain freedom. He advises chasing excitement instead of happiness. He finds happiness too vague, while excitement will motivate you to grow and develop. He suggests that it is fine to go wild at first, jumping from place to place, chasing whatever experiences you want, but at some point it’ll lose its appeal and you’ll want to find something more meaningful.
I’d recommend this book, but only to people with a healthy sense of skepticism. There’s lots of good advice in this book, but think very carefully before you become drop-shipper #2001. And remember that you don’t have to become a jerk just because he tells you to! That said, it’s not all about drop-shipping. A much wider variety of people probably could find a way to work remotely or reduce their hours than we normally think, although it might require some hard work to get there. In so far as the goal is to optimise for your own happiness, I generally agree with his idea of the good life.
Further highlights: - Doing the unrealistic is easier than doing the realistic as there is less competition - Leverage strengths, instead of fixing weaknesses. Multiplication of results beats incremental improvement - Define your nightmare. Would it really be permanent? How could you get things back on track? What are the benefits of the more probable outcome? - We encourage children to dream and adults to be realistic
Freud is the most famous psychologist of all time and although many of his theories are now discredited or seem wildly implausible, I thought it’d be interesting to listen to him to try to understand why his ideas sounded plausible in the first place.
At times Freud is insightful and engaging; at other times, he falls into psychoanalytic lingo in such a way that I couldn’t follow what he was trying to say. I suppose I can see why people might have assumed that the fault was with their failure to understand.
It’s a short read, so if you’re curious, there isn’t that much cost to going ahead and reading it, but this is one of those rare cases where you can really understand the core of what he was getting at from the summary on Wikipedia (https://en.m.wikipedia.org/wiki/Civilization_and_Its_Discontents)
Since Wikipedia has a summary, I’ll just add a few small remarks. This book focuses on a key paradox: our utter dependence on civilisation for anything more than the most basic survival, but also how it requires us to repress our own wants and desires so as to fit into an ordered society. I find this to be an interesting answer to the question of why there is so much misery despite our material prosperity.
It’s interesting to re-examine this in light of the modern context. Society is much more liberal than it was in Freud’s time, but in recent years people have become more scared of speaking their minds. Repression still exists; it just takes a different form. If Freud is to be believed, we should expect this repression to result in all kinds of psychological effects, many of which won’t appear linked on the surface.
Further thoughts: - I liked his chapter on the methods humans use to deal with suffering and their limitations, as it contained what seemed to be sound evaluations. He points out that the path of a yogi is at best the happiness of quietness, that love cannot be guaranteed to last, that sublimation through art is available only to a few and is even then only of limited strength, etc. He just didn’t think there was any good solution to this problem. - Freud was sceptical of theories like communism because he didn’t believe that human nature could really change. He argued that aggression existed in the nursery and before the existence of property. He didn’t doubt that we could suppress urges, but he seemed to believe that it was much more costly than other people realised, and even then that it would likely come out in some other form - Freud proposed his theory of the Narcissism of Small Differences: that the people we hate most are not those with values completely foreign to our own, but those who we are in close proximity to. He describes this as a form of narcissism since these conflicts can flare up over the most minor of differences. - Freud suggested that those who struggled the most with temptation were saints, since their self-denial led to the constant frustration of their desires - Freud noted how absurd “Love your neighbour as yourself” would sound to someone hearing it for the first time. He imagines that we’d skeptically ask questions like, “Why should I care about them just as much as my family?” and “Why should I love them if they are bad people or don’t love me?”. He actually goes further and argues that “a love that does not discriminate does injustice to its object”
Thoughts on the introduction of Goodhart’s Law. Currently, I’m more motivated by trying to make the leaderboard, so maybe that suggests that merely introducing a leaderboard, without actually paying people, would have had much the same effect. Then again, that might just be because I’m not that far off. And if there hadn’t been the payment, maybe I wouldn’t have ended up in the position where I’m not that far off.
I guess I feel incentivised to post a lot more than I would otherwise, but especially in the comments rather than in posts, since posting a lot of posts likely suppresses the number of people reading your other posts. This probably isn’t a worthwhile tradeoff given that one post that does really well can easily outweigh 4 or 5 posts that only do okay, or ten posts that are meh.
Another thing: downvotes feel a lot more personal when it means that you miss out on landing on the leaderboard. This leads me to think that having a leaderboard for the long term would likely be negative and create division.
I really like the short-form feature because after I have articulated a thought my head feels much clearer. I suppose that I could have tried just writing it down in a journal or something; but for some reason I don’t feel quite the same effect unless I post it publicly.
This is the first classic that I’m reviewing. One of the challenges with figuring out which classics to read is that there are always people speaking very highly of them in a vague enough manner that it’s hard to decide whether to read them. Hopefully I can avoid this trap.
Book Review: Animal Farm
You probably already know the story. In a thinly veiled critique of the Russian Revolution, the animals on a farm decide to revolt against the farmer and run the farm themselves. At the start, the seven principles of Animalism are idealistically declared, but as time goes on, things increasingly seem to head downhill…
Why is this a classic?: This book was released at a time when the intellectual class was firmly sympathetic to the Soviets, ensuring controversy and then immortality when history proved it right.
Why you might want to read this: It’s short (only 112 pages or 3:11 on Audible), the story always moves along at a brisk pace, the writing is engaging and there are a few very emotionally impactful moments. The broader message of being wary of the promises made by idealistic movements still holds (especially “all animals are equal, but some animals are more equal than others”). This book does a good job of illustrating many of the social dynamics that occur under totalitarianism, from the rewriting of history, to the false confessions, to the cult of the individual.
Why you might not want to read this: The concrete anti-Soviet message is of little relevance now, given that what happened is common knowledge. You can probably already guess how the story goes: the movement has a promising start, but with small red flags that become bigger over time. The animals are constantly, unrealistically naive; maybe this strikes you as clumsy, or maybe you see that as just how satire is.
Wow, I’ve really been flying through books recently. Just thought I should mention that I’m looking for recommendations for audio books; bonus points for books that are short. Anyway....
Book Review: Zero to One
Peter Thiel is the most famous contrarian in Silicon Valley. I really enjoyed hearing someone argue against the common wisdom of the valley. Most people think in terms of beating the competition; Thiel thinks in terms of establishing a monopoly so that there is no competition. Agile methodology and the lean startup are all the rage, but Thiel argues that these only lead to incremental improvements and that truly changing the world requires you to commit to a vision. Most companies want to disrupt their competitors, but for Thiel this means that you’ve fallen into competition instead of forging your own unique path. Most venture funds aim to diversify, but Thiel is more selective, only investing in companies that have billion-dollar potential. Many startups spurn marketing, but Thiel argues that this is dishonest and that PR is also a form of marketing, even if that isn’t anyone’s job title. Everyone is betting on AI replacing humans, while Thiel is more optimistic about human/AI teams.
Some elaboration is in order, though I’ll just mention that you might prefer to read the review on Slatestarcodex instead of mine (https://slatestarcodex.com/20…/…/31/book-review-zero-to-one/) • Aren’t monopolies bad? Thiel argues that monopoly power is what allows a corporation to survive the brutal world of competition. This means that it can pay employees well, have social values other than making profit and invest in the future. Read Scott’s review for a discussion of how to build a company that truly is one of a kind. • Thiel argues that monopolies try to hide that fact by presenting themselves as just one player in a larger industry (i.e. Google presents itself as a tech company, instead of an internet advertising company, even though that aspect brings in essentially all the money), while firms that actually face competition try to present themselves as having cornered an overly specific market (i.e. it isn’t clear that British food in Palo Alto is its own market, as opposed to competing against all the other food chains). • In addition to splitting people into optimists and pessimists, Thiel splits people into definite and indefinite. You might think that a “definite optimist” would be someone who is an optimist and 100% certain the future will go well, but what he actually means is that they are an optimist and they have an idea of what the future will look like or could look like. In contrast, an indefinite optimist is an optimist who has no idea how exactly the world might improve or change. • Thiel argues that startup returns are distributed according to a power law, such that half of the return from a portfolio might come from just one company. He applies this to life too, arguing that it’s better to set yourself up so that there’ll be one career that you’ll be amazing at, rather than studying generally so that there’ll be a dozen that you’d be only okay at. • While many in the valley believe in just building a product and figuring out how to sell it later, Thiel argues that you don’t have a product if you don’t have a way of reaching customers.
I’m not involved in startups, so I can’t vouch for how good his advice is, but given that caveat, I’d strongly recommend it for anyone thinking of going into that space, since it’s always good to have your views challenged. But I’d also recommend it as a general read; I think there’s a lot that’d be interesting for a general audience, especially the argument against acquiring broad, undifferentiated experience. I do think that in order to get the most out of this, you’d need to already be familiar with startup culture (i.e. minimum viable products, the lean startup, etc.), as he kind of assumes that you know this stuff.
So should you read the book or just Scott’s review? The main aspect Scott misses is the discussion of power law distributions. This discussion is basically the Pareto Principle on steroids; when a single billion-dollar company could make you more profit than the rest of your investments combined, all that matters is whether a company could be a unicorn or not (the essay Prospecting for Gold makes a similar point for EA: https://www.effectivealtruism.org/…/prospecting-for-gold-o…/). But apart from that, Scott’s review covers most of the main ideas well. So maybe you could skip the book, but if you’re like me you might find that you need to read the book in order to actually remember these ideas. Besides, it’s concise and well-written.
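As a toy illustration of the power-law point (my own sketch, not anything from the book or Scott’s review), the snippet below samples heavy-tailed returns for hypothetical 20-company portfolios and checks how often the single best investment out-returns all the others combined; the Pareto shape parameter is an assumption chosen only to make the tail heavy.

```python
# Toy illustration of power-law (heavy-tailed) startup returns: with a heavy
# enough tail, one investment frequently returns more than the rest combined.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.pareto(a=0.8, size=(10_000, 20))   # 10,000 funds, 20 startups each

best = returns.max(axis=1)
rest = returns.sum(axis=1) - best
share = (best > rest).mean()
print(f"Funds where the top startup out-returns the other 19 combined: {share:.0%}")
```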
I think that there’s good reasons why the discussion on Less Wrong has turned increasingly towards AI Alignment, but I am also somewhat disappointed that there’s no longer a space focusing on rationality per se.
Just as the Alignment forum exists as a separate space that automatically cross-posts to LW, I’m starting to wonder if we need a rationality forum that exists as a separate space that cross-posts to LW, as if I were just interested in improving my rationality I don’t know if I’d come to Less Wrong.
(To clarify, unlike the Alignment Forum, I’d expect such a forum to be open-invite b/c the challenge would be gaining any content at all).
Alternatively, I think there is a way to hide the AI content on LW, but perhaps there should exist a very convenient and visible user interface for that. I would propose an extreme solution, like a banner on the top of the page containing a checkbox that hides all AI content. So that anyone, registered or not, could turn the AI content off in one click.
The Alignment Forum works because there are a bunch of people who professionally pursue research on AI Alignment. There’s no similar group of people for whom that’s true of rationality.
I don’t know if you need professionals, just a bunch of people who are interested in discussing the topic. It wouldn’t need to use the Alignment Forum’s invite-only system.
Instead, it would just be a way to allow LW to cater to both audiences at the same time.
IIRC, you can only post on the Alignment Forum if you are invited or moderators crosspost it? The problem is that the Alignment Forum is deliberately for some sort of professionals, but everyone wants to write about alignment. Maybe it would be better if we had an “Alignment Forum for starters”.
One thing I’m finding quite surprising about shortform is how long some of these posts are. It seems that many people are using this feature to indicate that they’ve just written up these ideas quickly in the hope that the feedback is less harsh. This seems valuable; the feedback here can be incredibly harsh at times and I don’t doubt that this has discouraged many people from posting.
I pushed a bit for the name ‘scratchpad’ so that this use case was a bit clearer (or at least not subtly implied as “wrong”). Shortform had enough momentum as a name that it was a bit hard to change tho. (Meanwhile, I settled for ‘shortform means either the writing is short, or it took a (relatively) short amount of time to write’.)
I’ll post some extracts from the Seoul Summit. I can’t promise that this will be a particularly good summary, I was originally just writing this for myself, but maybe it’s helpful until someone publishes something that’s more polished:
Frontier AI Safety Commitments, AI Seoul Summit 2024
The major AI companies have agreed to Frontier AI Safety Commitments. In particular, they will publish a safety framework focused on severe risks: “internal and external red-teaming of frontier AI models and systems for severe and novel threats; to work toward information sharing; to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights; to incentivize third-party discovery and reporting of issues and vulnerabilities; to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated; to publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use; to prioritize research on societal risks posed by frontier AI models and systems; and to develop and deploy frontier AI models and systems to help address the world’s greatest challenges”
“Risk assessments should consider model capabilities and the context in which they are developed and deployed”—I’d argue that the context in which it is deployed should account for whether it is open or closed source/weights
“They should also be accompanied by an explanation of how thresholds were decided upon, and by specific examples of situations where the models or systems would pose intolerable risk.”—always great to make policy concrete
“In the extreme, organisations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds.”—Very important that when this is applied, the ability to iterate on open-source/weight models is taken into account
Seoul Declaration for safe, innovative and inclusive AI by participants attending the Leaders’ Session
Signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the United Kingdom, and the United States of America.
“We support existing and ongoing efforts of the participants to this Declaration to create or expand AI safety institutes, research programmes and/or other relevant institutions including supervisory bodies, and we strive to promote cooperation on safety research and to share best practices by nurturing networks between these organizations”—guess we should now go full-throttle and push for the creation of national AI Safety institutes
“We recognise the importance of interoperability between AI governance frameworks”—useful for arguing we should copy things that have been implemented overseas.
“We recognize the particular responsibility of organizations developing and deploying frontier AI, and, in this regard, note the Frontier AI Safety Commitments.”—Important as Frontier AI needs to be treated as different from regular AI.
Seoul Statement of Intent toward International Cooperation on AI Safety Science
Signed by the same countries.
“We commend the collective work to create or expand public and/or government-backed institutions, including AI Safety Institutes, that facilitate AI safety research, testing, and/or developing guidance to advance AI safety for commercially and publicly available AI systems”—similar to what we listed above, but more specifically focused on AI Safety Institutes, which is great.
”We acknowledge the need for a reliable, interdisciplinary, and reproducible body of evidence to inform policy efforts related to AI safety”—Really good! We don’t just want AIS Institutes to run current evaluation techniques on a bunch of models, but to be actively contributing to the development of AI safety as a science.
“We articulate our shared ambition to develop an international network among key partners to accelerate the advancement of the science of AI safety”—very important for them to share research among each other
Seoul Ministerial Statement for advancing AI safety, innovation and inclusivity
Signed by: Australia, Canada, Chile, France, Germany, India, Indonesia, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, Nigeria, New Zealand, the Philippines, the Republic of Korea, Rwanda, the Kingdom of Saudi Arabia, the Republic of Singapore, Spain, Switzerland, Türkiye, Ukraine, the United Arab Emirates, the United Kingdom, the United States of America, and the representative of the European Union
“It is imperative to guard against the full spectrum of AI risks, including risks posed by the deployment and use of current and frontier AI models or systems and those that may be designed, developed, deployed and used in future”—considering future risks is a very basic, but core principle
“Interpretability and explainability”—Happy to see interpretability explicitly listed
”Identifying thresholds at which the risks posed by the design, development, deployment and use of frontier AI models or systems would be severe without appropriate mitigations”—important work, but could backfire if done poorly
”Criteria for assessing the risks posed by frontier AI models or systems may include consideration of capabilities, limitations and propensities, implemented safeguards, including robustness against malicious adversarial attacks and manipulation, foreseeable uses and misuses, deployment contexts, including the broader system into which an AI model may be integrated, reach, and other relevant risk factors.”—sensible, we need to ensure that the risks of open-sourcing and open-weight models are considered in terms of the ‘deployment context’ and ‘foreseeable uses and misuses’
”Assessing the risk posed by the design, development, deployment and use of frontier AI models or systems may involve defining and measuring model or system capabilities that could pose severe risks,”—very pleased to see a focus beyond just deployment
”We further recognise that such severe risks could be posed by the potential model or system capability or propensity to evade human oversight, including through safeguard circumvention, manipulation and deception, or autonomous replication and adaptation conducted without explicit human approval or permission. We note the importance of gathering further empirical data with regard to the risks from frontier AI models or systems with highly advanced agentic capabilities, at the same time as we acknowledge the necessity of preventing the misuse or misalignment of such models or systems, including by working with organisations developing and deploying frontier AI to implement appropriate safeguards, such as the capacity for meaningful human oversight”—this is massive. There was a real risk that these issues were going to be ignored, but this is now seeming less likely.
”We affirm the unique role of AI safety institutes and other relevant institutions to enhance international cooperation on AI risk management and increase global understanding in the realm of AI safety and security.”—“Unique role”, this is even better!
“We acknowledge the need to advance the science of AI safety and gather more empirical data with regard to certain risks, at the same time as we recognise the need to translate our collective understanding into empirically grounded, proactive measures with regard to capabilities that could result in severe risks. We plan to collaborate with the private sector, civil society and academia, to identify thresholds at which the level of risk posed by the design, development, deployment and use of frontier AI models or systems would be severe absent appropriate mitigations, and to define frontier AI model or system capabilities that could pose severe risks, with the ambition of developing proposals for consideration in advance of the AI Action Summit in France”—even better than above because it commits to a specific action and timeline
I don’t want to comment on the whole Leverage Controversy, as I’m far enough away from the action that other people are probably better positioned to sensemake here.
On the other hand, I have been watching some of Geoff Anders’ streams, and he does seem pretty good at theorising, as evidenced by his ability to do it live on stream. I expect this to be a lot harder than it looks: when I’m trying to figure out my position on an issue, I often find myself going over the same ground again and again and again, until eventually I figure out a way of putting what I want to express into words.
That said, I’ve occasionally debated with some high-level debaters, and given almost any topic they’re able to pretty much effortlessly generate a case and anticipate how the debate is likely to play out. I guess his skill seems on par with this.
So I think his ability to livestream demonstrates a certain level of skill, but I almost view it as speed-chess vs. chess, in that there’s only so much you can tell about a person’s ability in normal chess from how good they are at speed chess.
I think I’ve improved my own ability to theorise by watching the streams, but I wouldn’t be surprised if I improved similarly from watching Eliezer, Anna or Duncan livestream their attempts to think through an issue. I also expect that there’s a similar chance I would have gained a significant proportion of the benefit just from watching someone with my abilities or even slightly worse on the basis of a) understanding the theorising process from the outside b) noticing where they frame things differently than I would have.
Trying to think about what is required to be a good debater:
general intelligence—to quickly understand the situation and lay out your response;
“talking” skills—large vocabulary, talking clearly, not being shy, body language and other status signals;
background knowledge—knowing the models, facts, frequently used arguments, etc.;
precomputed results—if you already spent a lot of time thinking about a topic, maybe even debating it.
These do not work the same way. For example, clear talking and good body language generalize well; having lots of precomputed results in one area will not help you much in other areas (unless you use a lot of analogies to the area you are familiar with—if you do this the first time, you may impress people, but if you do this repeatedly, they will notice that you are a one-topic person).
I believe that watching good debaters in action would help. It might be even better to focus on different aspects separately (observing their body language, listening to how they use their voice, understanding their frames, etc.).
Random idea: A lot of people seem discouraged from doing anything about AI Safety because it seems like such a big overwhelming problem.
What if there was a competition to encourage people to engage in low-effort actions towards AI safety, such as hosting a dinner for people who are interested, volunteering to run a session on AI safety for their local EA group, answering a couple of questions on the stampy wiki, offering to proof-read a few people’s posts or offering a few free tutorial sessions to aspiring AI Safety Researchers.
I think there’s a decent chance I could get this funded (prize might be $1000 for the best action and up to 5 prizes of $100 for random actions above a certain bar)
Possible downsides: Would be bad if people reach out to important people or the media without fully thinking stuff through, but this can be mitigated by excluding those kinds of actions / adding guidelines.
Those don’t seem like very low effort to me, but they will to some. Do they seem to you like they are effective (or at least impactful commensurate with the effort)? How would you know which ones to continue and what other types of thing to encourage?
I fear that it has much of the same problem that any direct involvement in AI safety does: what’s the feedback loop for whether it’s actually making a difference? Your initial suggestions seem more like actions toward activism and pop awareness, rather than actions toward AI Safety.
The nice thing about prizes and compensation is that it moves the question from the performer to the payer—the payer has to decide if it’s a good value. Small prizes or low comp means BOTH buyer and worker have to make the decision of whether this is worthwhile.
Solving the productivity-measurement problem itself seems overwhelming—it hasn’t happened even for money-grubbing businesses, let alone long-term x-risk organizations. But any steps toward it will do more than anything else I can think of to get broader and more effective participation. Being able to show that what I do makes a measurable difference, even through my natural cynicism and imposter syndrome, is key to my involvement.
I am not typical, so don’t take my concerns as the final word—this seems promising and relatively cheap (in money; it will take a fair bit of effort in guiding the sessions and preparing materials for the tutoring. Honestly, that’s probably more important than the actual prizes).
I guess they just feel like as good a starting place as any and are unlikely to be net-negative. That’s more important than anything else. The point is to instill agency so that people start looking for further opportunities to make a difference. I might have to write a few paragraphs of guidelines/suggestions for some of the most common potential activities.
I hadn’t really thought too much about follow-up, but maybe I should think more about it.
Here’s a crazy[1] idea that I had. But I think it’s an interesting thought experiment.
What if we programmed an AGI that had the goal of simulating the Earth, but with one minor modification? In the simulation, we would have access to some kind of unfair advantage, like an early Eliezer Yudkowsky getting a mysterious message dropped on his desk containing a bunch of the progress we’ve made in AI Alignment.
So we’d all die in real life when the AGI broke out of its box and turned the Earth into compute to better simulate us, but we might survive in virtual reality, at least if you believe simulations to be conscious.
In other news, I may have just spoiled a short story I was thinking of writing.
I really dislike the fiction that we’re all rational beings. We really need to accept that sometimes people can’t share things with us. Stronger: not just accept, but appreciate people who make this choice for their wisdom and tact. ALL of us have ideas that will strongly trigger us and, if we’re honest and open-minded, we’ll be able to recall situations when we unfairly judged someone because of a view that they held. I certainly can, way too many times to list.
I say this as someone who has a really strong sense of curiosity, knowing that I’ll feel slightly miffed when someone doesn’t feel comfortable being open with me. But it’s my job to deal with that, not the other person.
Don’t get me wrong. Openness and vulnerability are important. Just not *all* the time. Just not *everything*.
When you start identifying as a rationalist, the most important habit is saying “no” whenever someone says: “As a rationalist, you have to do X” or “If you won’t do X, you are not a true rationalist” etc. It is not a coincidence that X usually means you have to do what the other person wants for straightforward reasons.
Because some people will try using this against you. Realize that this usually means nothing more than “you exposed a potential weakness, they tried to exploit it” and is completely unrelated to the art of rationality.
(You can consider the merits of the argument, of course, but you should do it later, alone, when you are not under pressure. Don’t forget to use the outside view; the easiest way is to ask a few independent people.)
I’ve recently been reading about ordinary language philosophy and I noticed that some of their views align quite significantly with LW. They believed that many traditional philosophical questions only seemed troubling because of the philosophical tendency to assume words like “time” or “free will” necessarily referred to some kind of abstract entity, when this wasn’t necessary at all. Instead they argued that by paying attention to how we used these words in ordinary, everyday situations we could see that the way people used these words didn’t need to assume these abstract entities and that we could dissolve the question.
I found it interesting that the comment thread on dissolving the question makes no reference to this movement. It doesn’t reference Wittgenstein either who also tried to dissolve questions.
Is that surprising? It’s not as if the rationalsphere performed some comprehensive survey of philosophy before announcing the superiority of its own methods.
From my perspective, saying that “this philosophical opinion is kinda like this Less Wrong article” sounds like “this prophecy by Nostradamus, if you squint hard enough, predicts coronavirus in 2020″. What I mean is that if you publish huge amounts of text open to interpretation, it is not surprising that you can find there analogies to many things. I would not be surprised to find something similar in the Bible; I am not surprised to find something similar in philosophy. (I would not be surprised to also find a famous philosopher who said the opposite.) In philosophy, the generation of text is distributed, so some philosophers likely have track record much better than the average of their discipline. Unfortunately—as far as I know—philosophy as a discipline doesn’t have a mechanism to say “these ideas of these philosophers are the good ones, and this is wrong”. At least my time at philosophy lessons was wasted listening to what Plato said, without a shred of ”...and according to our current scientific knowledge, this is true, and this is not”.
Also, it seems to me that philosophers were masters of clickbait millennia before clickbait was a thing. For example, a philosopher is rarely satisfied by saying things like “human bodies are composed of 80% water” or “most atoms in the universe are hydrogen atoms”. Instead, it is typically “everything is water”. (Or “everything is fire”. Or “everything is an interaction of quantum fields”… oops, the last one was actually not said by a philosopher; what a coincidence.) Perhaps this is selection bias. Maybe people who walked around ancient Greece half-naked and said things like “2/3 of everything is water” existed, but didn’t draw sufficient attention. But if this is true, it would mean that philosophy optimizes for shock value instead of truth value.
So, without having read Wittgenstein, my priors are that he most likely considered all words confused; yes, words like “time” and “free will”, but also words like “apple” and “five”. (And then there was Plato who assumed that there was a perfect idea of “apple” and a perfect idea of “time”.)
Now I am not saying that everything written by Wittgenstein (or other philosophers) is worthless. I am saying that in philosophy there are good ideas mixed with bad ones, and even the good ones are usually exaggerated. And unless someone does the hard work of separating the wheat from chaff, I’d rather ignore philosophy, and read sources that have better signal-to-noise ratio.
I won’t pretend that I have a strong understanding here, but as far as I can tell, (Later) Wittgenstein and the Ordinary Language Philosophers considered our conception of the number “five” existing as an abstract object as mistaken and would instead explain how it is used and consider that as a complete explanation. This isn’t an unreasonable position, like I honestly don’t know what numbers are and if we say they are an abstract entity it’s hard to say what kind of entity.
Regarding the word “apple” Wittgenstein would likely say attempts to give it a precise definition are doomed to failure because there are an almost infinite number of contexts or ways in which it can be used. We can strongly state “Apple!” as a kind of command to give us one, or shout it to indicate “Get out of the way, there is an apple coming towards you” or “Please I need an Apple to avoid starving”. But this is only saying attempts to spec out a precise definition are confused, not the underlying thing itself.
(Actually, apparently Wittgenstein considered attempts to talk about concepts like God or morality as necessarily confused, but thought that they could still be highly meaningful, possibly the most meaningful things)
These are all good points. I could agree that all words are to some degree confused, but I would insist that some of them are way more confused than others. Otherwise, the very act of explaining anything would be meaningless: we would explain one word by a bunch of words, equally confusing.
If the word “five” is nonsense, I can take Wittgenstein’s essay explaining why it is nonsense, and say that each word in that essay is just a command that we can shout at someone, but otherwise is empty of meaning. This would seem to me like an example of intelligence defeating itself.
Wittgenstein didn’t think that everything was a command or request; his point was that making factual claims about the world is just one particular use of language that some philosophers (including early Wittgenstein) had hyper-focused on.
Anyway, his claim wasn’t that “five” was nonsense, just that when we understood how five was used there was nothing further for us to learn. I don’t know if he’d even say that the abstract concept five was nonsense, he might just say that any talk about the abstract concept would inevitably be nonsense or unjustified metaphysical speculation.
These are situations where I would like to give a specific question to the philosopher. In this case it would be: “Is being a prime number a property of the number five, or is it just that we decided to use it as a prime number?”
I honestly have no idea how he’d answer, but here’s one guess. Maybe we could tie prime numbers to one of a number of processes for determining primeness. We could observe that those processes always return true for 5, so in a sense primeness is a property of five.
This book aims to convince everyone, even skeptics and atheists, that there is value in some spiritual practices, particularly those related to meditation. Sam Harris argues that meditation doesn’t just help with concentration, but can also help us reach transcendental states that reveal the dissolution of the self. It mostly does a good job of what it sets out to do, but unfortunately I didn’t gain very much benefit from this book because it focused almost exclusively on persuading you that there is value here, which I already accepted, rather than providing practical instructions.
One area where I was less convinced was his claims about there not being a self. He writes that meditating allows you to directly experience this, but I worry he hasn’t applied sufficient skepticism. If you experience flying through space in an altered mental state, it doesn’t mean that you are really flying through space. Similarly, how do we know that he is experiencing the lack of a self, rather than the illusion of there being no self?
I was surprised to see that Sam was skeptical of a common materialist belief that I had expected him to endorse. Many materialists argue against the notion of philosophical-zombies by arguing that if it seems conscious we should assume it is conscious. However, Sam Harris argues that the phenomenon of anaesthesia awareness, waking up completely paralysed during surgery, shows that there isn’t always a direct link between appearing conscious and actual consciousness. (Dreams seem to imply the same point, if less dramatically). Given the strength of this argument, I’m surprised that I haven’t heard it before.
Sam also argues that split-brain patients imply that consciousness is divisible. While split-brain patients actually still possess some level of connection between the two halves, I still consider this phenomenon to be persuasive evidence that this is the case. After all, it is possible for the two halves to have completely different beliefs and objectives without either side being aware of these.
On meditation, Sam is a fan of the Dzogchen approach that directly aims at experiencing no-self, rather than the slower, more gradual approaches. This is because waiting years for a payoff is incredibly discouraging and because practices like paying attention to the sensation of breath reinforce the notion of the self which meditation seeks to undermine. At the same time, he doesn’t fully embrace this style of teaching, arguing that the claim that every realisation is permanent is dangerous, as it leads to treating people as role models even when their practice is flawed.
Sam argues against the notion of gurus being perfect; they are just humans like the rest of us. He notes that it is hard to draw the line between practices that lead to enlightenment and abuse; indeed he argues that a practice can provide spiritual insight AND be abusive. He notes that the reason why abuse seems to occur again and again is that when people seek out a guru, it’s because they’ve arrived at the point where they realise that there is so much that they don’t know and they need the help of someone who does.
He also argues against assuming meditative experiences provide metaphysical insights. He points out that they are often the same experiences that people have on psychedelics. In fact, he argues that for some people having a psychedelic experience is vital for their spiritual development as it demonstrates that there really are other brain states out there. He also discusses near-death experiences and again dismisses claims that they provide insight into the afterlife—they match experiences people have on drugs and they seem to vary by culture.
Further points:
- Sam talked about experiencing universal love while on DMT. Many religions contain this idea of universal love, but he couldn’t appreciate it until he had this experience.
- He argues that it is impossible to stay angry for more than a few seconds without continuously thinking thoughts to keep us angry. To demonstrate this, he asks us to imagine that we receive an important phone call. Most likely we will put our anger aside.
FWIW no self is a bad reification/translation of not self, and the overwhelming majority seem to be metaphysically confused about something that is just one more tool rather than some sort of central metaphysical doctrine. When directly questioned “is there such a thing as the self” the Buddha is famously mum.
No-self is an ontological claim about everyone’s phenomenology. Not self is a mental state that people can enter where they dis-identify with the contents of consciousness.
Many materialists argue against the notion of philosophical-zombies by arguing that if it seems conscious we should assume it is conscious. However, Sam Harris argues that the phenomenon of anaesthesia awareness, waking up completely paralysed during surgery, shows that there isn’t always a direct link between appearing conscious and actual consciousness.
One of the problems with the general anti-zombie principle is that it makes much too strong a claim: that what appears conscious must be.
There appears to be something of a Sensemaking community developing on the internet, which could roughly be described as a spirituality-inspired attempt at epistemology. This includes Rebel Wisdom, Future Thinkers, Emerge and maybe you could even count post-rationality. While there are undoubtedly lots of critiques that could be made of their epistemics, I’d suggest watching this space as I think some interesting ideas will emerge out of it.
I wasn’t a fan of this book, but maybe that’s just because I’m not in the target audience. As a first introduction to AI safety I recommend The AI Does Not Hate You by Tom Chivers (facebook.com/casebash/posts/10100403295741091) and for those who are interested in going deeper I’d recommend Superintelligence by Nick Bostrom. The strongest chapter was his assault on the arguments of those who think we shouldn’t worry about superintelligence, but you can just read it here: https://spectrum.ieee.org/…/many-experts-say-we-shouldnt-wo….
I learned barely anything that was new from this book. Even when it came to Russell’s own approach, Cooperative Inverse Reinforcement Learning, I felt that the treatment was shallow (I won’t write about this approach until I’ve had a chance to review it directly again). There were a few interesting ideas that I’ll list below, but I was surprised by how little I’d learned by the end. There’s a decent explanation of some very basic concepts within AI, but this was covered in a way that was far too shallow for me to recommend it.
Interesting ideas/quotes:
- More processing power won’t solve AI without better algorithms. It simply gets you the wrong answer faster.
- Language bootstrapping: comprehension depends on knowing facts, and extracting facts depends on comprehension. You might think that we could bootstrap an AI using easy-to-comprehend text, but in practice we end up extracting incorrect facts that scramble further comprehension.
- We have an advantage when predicting humans, as we have a human mind to simulate with; it’ll take longer for AIs to develop this ability.
- He suggests that we have a right to mental security and that it is naive to trust that the truth will win out. Unfortunately, he doesn’t address any of the concerns this raises.
- By default, a utility maximiser won’t want us to turn it off, as that would interfere with its goals. We could reward it when we turn it off, but that could incentivise it to manipulate us into turning it off. Instead, if the utility maximiser is trying to optimise for our reward function and is uncertain about what that function is, then it will let us turn it off (a toy version of this is sketched below).
- We might decide that we don’t want to satisfy all preferences; for example, we mightn’t feel any obligation to take into account preferences that are sadistic, vindictive or spiteful. But refusing to consider these preferences could have unforeseen consequences: what if envy can’t be ignored as a factor without destroying our self-esteem?
- It’s hard to tell if an experience has taught someone more about their preferences or changed their preferences (at least without looking into their brain). In either case the response is the same.
- We want robots to treat commands as information about human preferences rather than interpreting them too literally. For example, if I ask a robot to fetch a cup of coffee, I assume that the nearest outlet isn’t the next city over and that it won’t cost $100. We don’t want the robot to fetch it at all costs.
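The off-switch point lends itself to a tiny worked example. The numbers and the two-option framing below are my own illustration, not Russell’s formal model (his framing is the off-switch game within Cooperative Inverse Reinforcement Learning):

```python
# Toy "off-switch" calculation. All numbers are made up for illustration.

possible_utilities = [-2.0, -0.5, 1.0, 3.0]    # how much the human might value the action
probabilities      = [0.25, 0.25, 0.25, 0.25]  # robot's uncertainty over that value

# Option 1: act immediately, ignoring the human.
act_now = sum(p * u for p, u in zip(probabilities, possible_utilities))

# Option 2: defer to the human, who knows the true value and will switch the
# robot off (payoff 0) whenever the action would be bad for them.
defer = sum(p * max(u, 0.0) for p, u in zip(probabilities, possible_utilities))

print(f"act now: {act_now:.2f}, defer to human: {defer:.2f}")
# Deferring is never worse, since E[max(u, 0)] >= max(E[u], 0): an uncertain
# utility maximiser has no incentive to disable its off switch.
```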
Despite having read dozens of articles discussing Evidential Decision Theory (EDT), I’ve only just figured out a clear and concise explanation of what it is. Taking a step back, let’s look at how this is normally explained and one potential issue with this explanation. All major decision theories (EDT, CDT, FDT) rate potential decisions using expected value calculations where:
Each theory uses a different notion of probability for the outcomes
Each theory uses the same utility function for valuing the outcomes
So it should be just a simple matter of stating what the probability function is. EDT is normally explained as using P(O|S & D) where O is the outcome, S is the prior state and D is the decision. At this point it seems like this couldn’t possibly fail to be what we want. Indeed, if S described all state, then there wouldn’t be the possibility of making the smoking lesion argument.
However, that’s because it fails to differentiate between hidden state and visible state. EDT uses visible state, so we can write it as P(O|V & D). The probability distribution of O actually depends on H as well, i.e. it is some function f(V, H, D). In most cases H is uncorrelated with D, but this isn’t always the case. So what might look like the direct effect of V and D on P might actually turn out to be the indirect effect of D shifting our expected distribution over H, which then affects P. For example, in Smoking Lesion, we might see ourselves scoring poorly in the counterfactual where we smoke and assume that this is because of our decision. However, this ignores the fact that when we smoke, H is likely to contain the lesion and hence also cancer. So we think we’ve set up a fair playing field for deciding between smoking and not smoking, but we haven’t, because of the differences in H.
Or to summarise: “The decision can correlate with hidden state, which can affect the probability distribution of outcomes”. Maybe this is already obvious to everyone, but this was the key I needed to be able to internalise these ideas on an intuitive level.
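To make the hidden-state point concrete, here is a minimal Smoking Lesion calculation in Python. The probabilities and utilities are made-up illustrative numbers; the only thing that matters is that conditioning on the decision D shifts the distribution over the hidden state H:

```python
# Minimal Smoking Lesion sketch. The hidden state H is "lesion present", which
# causes both a taste for smoking and (independently of smoking) cancer.

P_LESION = 0.5                                    # prior P(H)
P_SMOKE_GIVEN_LESION = {True: 0.9, False: 0.1}    # P(D = smoke | H): D correlates with H
P_CANCER_GIVEN_LESION = {True: 0.8, False: 0.05}  # P(cancer | H): outcome ignores D

U_SMOKE = 10        # smoking is enjoyable
U_CANCER = -1000    # cancer is very bad


def p_lesion_given_decision(smoke: bool) -> float:
    """P(H | D) by Bayes: conditioning on the decision shifts beliefs about H."""
    def p_decision(lesion: bool) -> float:
        p = P_SMOKE_GIVEN_LESION[lesion]
        return p if smoke else 1 - p
    numerator = p_decision(True) * P_LESION
    return numerator / (numerator + p_decision(False) * (1 - P_LESION))


def edt_value(smoke: bool) -> float:
    """EDT expected utility: E[U | D] = sum over H of P(H | D) * E[U | H, D]."""
    p_lesion = p_lesion_given_decision(smoke)
    total = 0.0
    for lesion, p_h in ((True, p_lesion), (False, 1 - p_lesion)):
        utility = (U_SMOKE if smoke else 0) + P_CANCER_GIVEN_LESION[lesion] * U_CANCER
        total += p_h * utility
    return total


print(edt_value(True), edt_value(False))  # smoking scores far worse, purely via H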
Induction is the belief that the more often a pattern happens, the more likely it is to continue. Anti-induction is the opposite claim: the more often a pattern happens, the less likely future events are to follow it.
Somehow I seem to have gotten the idea in my head that anti-induction is self-reinforcing. The argument for it is as follows: Suppose we have a game where at each step a screen flashes an A or a B and we try to predict what it will show. Suppose that the screen always flashes A, but the agent initially thinks that the screen is more likely to display B. So it guesses B, observes that it guessed incorrectly and then, if it is an anti-inductive agent, increases its credence that the next symbol will be B, because of anti-induction. So in this scenario your confidence that the next symbol will be B, despite the long stream of As, will keep increasing. This particular anti-inductive belief is self-reinforcing.
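A toy simulation makes the self-reinforcement visible. The additive update rule below is an arbitrary choice of mine for what an “anti-inductive” updater might look like, not anything from the original argument:

```python
# Toy anti-inductive agent watching a screen that always flashes "A".
p_next_is_B = 0.6   # initial credence that the next symbol will be B
STEP = 0.05         # arbitrary increment used by this illustrative update rule

for t in range(10):
    observed = "A"                                 # the screen always shows A
    guess = "B" if p_next_is_B > 0.5 else "A"
    # Anti-induction: having seen yet another A, the agent becomes *more*
    # confident the pattern will break, so credence in B increases.
    p_next_is_B = min(1.0, p_next_is_B + STEP)
    print(t, guess, observed, round(p_next_is_B, 2))
```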
However, there is a sense in which anti-induction is contradictory—if you observe anti-induction working, then you should update towards it not working in the future. I suppose the distinction here is that we are using anti-induction to update our beliefs on anti-induction and not just our concrete beliefs. And each of these is a valid update rule: in the first we apply this update rule to everything including itself and in the other we apply this update rule to things other than itself. The idea of a rule applying to everything except itself feels suspicious, but is not invalid.
Also, it’s not that the anti-inductive belief that B will be next is self-reinforcing. After all, anti-induction given consistent As pushes you towards believing B more and more regardless of what you believe initially. In other words, it’s more of an attractor state.
Here’s one way of explaining this: it’s a contradiction to have a provable statement that is unprovable, but it’s not a contradiction for it to be provable that a statement is unprovable. Similarly, we can’t have a scenario that is simultaneously imagined and not imagined, but we can coherently imagine a scenario where things exist without being imagined by beings within that scenario.
If I can imagine a tree that exists outside of any mind, then I can imagine a tree that is not being imagined. But “an imagined X that is not being imagined” is a contradiction. Therefore everything I can imagine or conceive of must be a mental object.
Berkeley ran with this argument to claim that there could be no unexperienced objects, therefore everything must exist in some mind — if nothing else, the mind of God.
The error here is mixing up what falls inside vs. outside of quotation marks. “I’m conceiving of a not-conceivable object” is a formal contradiction, but “I’m conceiving of the concept ‘a not-conceivable object’” isn’t, and human brains and natural language make it easy to mix up levels like those.
Here’s one way of explaining this: it’s a contradiction to have a provable statement that is unprovable, but it’s not a contradiction for it to be provable that a statement is unprovable.
Inverted, by switching “provable” and “unprovable”:
It’s a contradiction to have an unprovable statement that is provable, but it’s not a contradiction for it to be unprovable that a statement is provable.
“It’s a contradiction to have a provable statement that is unprovable”—I meant it’s a contradiction for a statement to be both provable and unprovable.
“It’s not a contradiction for it to be provable that a statement is unprovable”—this isn’t a contradiction
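For what it’s worth, the two readings can be written out with a provability predicate; this formalisation is my own gloss, not something stated in the thread:

```latex
% (A) "a provable statement that is unprovable": one sentence with both
%     properties at once -- a contradiction.
\mathrm{Prov}(\ulcorner \varphi \urcorner) \land \lnot\mathrm{Prov}(\ulcorner \varphi \urcorner) \;\vdash\; \bot

% (B) "it is provable that a statement is unprovable": a provable sentence whose
%     content is an unprovability claim -- consistent. Independence results have
%     this shape, e.g. a stronger theory proving that a weaker theory cannot
%     prove that weaker theory's Goedel sentence.
\mathrm{Prov}\bigl(\ulcorner \lnot\mathrm{Prov}(\ulcorner \psi \urcorner) \urcorner \bigr)
```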
You made a good point, so I inverted it. I think I agree with your statements in this thread completely. (So far, absent any future change.) My prior comment was not intended to indicate an error in your statements. (So far, in this thread.)
If there is a way I could make this more clear in the future, suggestions would be appreciated.
Elaborating on my prior comment via interpretation, so that its meaning is clear, if more specified*:
[A] it’s a contradiction to have a provable statement that is unprovable, [B] but it’s not a contradiction for it to be provable that a statement is unprovable.
[A’] It’s a contradiction to have an unprovable statement that is provable, [B’] but it’s not a contradiction for it to be unprovable that a statement is provable.
A’ is the same as A because:
it’s a contradiction for a statement to be both provable and unprovable.
While B is true, B’ seems false (unless I’m missing something). But in a different sense B’ could be true. What does it mean for something to be provable? It means that ‘it can be proved’. This gives two definitions:
a proof of X “exists”
it is possible to make a proof of X
Perhaps a proof may ‘exist’ such that it cannot exist (in this universe). That is, as a consequence of its length and complexity, and the bounds implied by the ‘laws of physics’* on what can be represented, constructing this proof is impossible. In this sense, X may be true, but if no proof of X may exist in this universe, then:
Something may have the property that it is “provable”, but impossible to prove (in this universe).**
*Other interpretations may exist, and as I am not aware of them, I think they’d be interesting.
Book Review: Awaken the Giant Within Audiobook by Tony Robbins
First things first, the audiobook isn’t the full book or anything close to it. The standard book is 544 pages, while the audiobook is a little over an hour and a half. The fact that it was abridged really wasn’t obvious.
We can split what he offers into two main categories: motivational speaking and his system itself. The motivational aspect of his speaking is very subjective, so I’ll leave it to you to evaluate yourself. You can find videos of his on Youtube and you should know within a few minutes whether you like his style.
Instead I’ll focus on reviewing his system. The first key aspect Robbins focuses on is what he calls neuro-associations; that is, which experiences we link pleasure and pain to. While we may be able to maintain a habit using willpower in the short term, Robbins believes that in order to maintain it over the long term we need to change our neuro-associations to link pleasure to actions that are good for us and pain to actions that are bad for us.
He argues that we can attach positive or negative neuro-associations to an action by making the advantages or disadvantages as salient as possible. The images on packs of cigarettes are a good example of that principle in action, as would be looking at the scans of people who have lung cancer. In addition, we can reward ourselves for success (though he doesn’t discuss the possibility of punishing yourself for failure). This seems like a plausible method for effecting change and one that seems worth experimenting with, although I’ve never experienced much motivation from rewarding myself, as it doesn’t really feel like the action is connected to the reward.
The second key aspect of his system is to draw a distinction between decisions and preferences. Most of the time when we say that we’ve decided to do something, such as going to the gym, we’re only saying that we would prefer that to happen. We haven’t really decided that we WILL do what we’ve said, come what may.
Robbins sees the ability to make decisions that we are strongly committed to as key to success. For that reason he recommends practising using our “decision muscles” to strengthen them, so that they are ready when needed. This seems like good advice. Personally, I think it’s important to be honest with yourself about when you have a preference and when you’ve actually made a decision in Robbins’s sense. After all, committed decisions take energy and have a cost, as sometimes you’ll commit to something that is a mistake, so it’s important to be selective about what you are truly committed to, as otherwise you may end up committed to nothing at all.
There are lots more elements to his system, but those two particular ones are at the core and seemed to be the most distinctive aspects of this book. It’s hard to review such a system without having tried it, but my current position is as follows: I could see myself listening to another one of his audiobooks, although it isn’t really a priority for me.
The sad thing about philosophy is that as your answers become clearer, the questions become less mysterious and awe-inspiring. It’s easy to assume that an imposing question must have an impressive answer, but sometimes the truth is just simple and unimpressive and we miss this because we didn’t evolve for this kind of abstract reasoning.
I used to find the discussion of free will interesting before I learned it was just people talking past each other. Same with “light is both a wave and a particle” until I understood that it just meant that sometimes the wave model is a good approximation and other times the particle model is. Debates about morality can be interesting, but much less so if you are a utilitarian or non-realist.
I used to find the discussion of free will interesting before I learned it was just people talking past each other
Semantic differences almost always happen, but are rarely the only problem.
There are certainly different definitions of free will, but even so, problems remain:
There is still an open question as to whether compatibilist free will is the only kind anyone ever needed or believed in, and as to whether libertarian free will is possible at all.
The topic is interesting, but no discussion about it is interesting. These are not contradictory.
The open question about strong determinism vs libertarian free will is interesting, and there is a yet-unexplained contradiction between my felt experience (and others reported experiences) and my fundamental physical model of the universe. The fact that nobody has any alternative model or evidence (or even ideas about what evidence is possible) that helps with this interesting question makes the discussion uninteresting.
Not new that I could tell—it states strict determinism with refreshing clarity: free will is an illusion, and “possible” is in the map, not the territory. “Deciding” is how a brain feels as it executes its algorithm and takes the predetermined (but not previously known) path.
He does not resolve the conflict that it feels SOOO real as it happens.
I’m going to start writing up short book reviews as I know from past experience that it’s very easy to read a book and then come out a few years later with absolutely no knowledge of what was learned.
Book Review: Everything is F*cked: A Book About Hope
To be honest, the main reason why I read this book was because I had enjoyed his first and second books (Models and The Subtle Art of Not Giving A F*ck) and so I was willing to take a risk. There were definitely some interesting ideas here, but I’d already received many of these through other sources: Harari, Buddhism, talks on Nietzsche, summaries of The True Believer; so I didn’t gain as much from this as I’d hoped.
It’s fascinating how a number of thinkers have recently converged on the lack of meaning within modern society. Yuval Harari argues that modernity has essentially been a deal sacrificing meaning for power. He believes that the lack of meaning could eventually lead to societal breakdown and for this reason he argued that we need to embrace shared narratives that aren’t strictly true (religion without gods, if you will; he personally follows Buddhism). Jordan Peterson also worries about a lack of meaning, but seeks to “revive God” as some kind of metaphorical entity.
Mark Manson is much more skeptical, but his book does start along similar lines. He tells the story of gaining meaning from his grandfather’s death by trying to make him proud, although this was kind of silly as they hadn’t been particularly close or even talked recently. Nonetheless, he felt that this sense of purpose had made him a better person and improved his ability to achieve his goals. Mark argues that we can’t draw motivation from our thinking brain and that we need these kinds of narratives to reach our emotional brain instead.
However, he argues that there’s also a downside to hope. People who are dissatisfied with their lives can easily fall prey to ideological movements which promise a better future, especially when they feel a need for hope. In other words, there is both good and bad hope. It isn’t especially clear what the difference is in the book, but he explained to me in an email that his main concern was how movements cause people to detach from reality.
His solution is to embrace Nietzsche’s concept of Amor Fati—that is, a love of one’s fate, whatever it may be. Even though this is also a narrative itself, he believes that it isn’t so harmful, as unlike other “religions” it doesn’t require us to detach from reality. My main takeaway was his framing of the need for hope as risky. Hope is normally assumed to be good; now I’m less likely to make this assumption.
It was fascinating to see how he put his own take on this issue and it certainly isn’t a bad book, but there just wasn’t enough new content for me. Maybe others who haven’t been exposed to some of these ideas will be more enthused, but I’ve read his blog so most of the content wasn’t novel to me.
Further thoughts: After reading the story of his grandfather, I honestly was expecting him to propose avoiding sourcing our hope from big all-encapsulating narratives in favour of micro-narratives, but he didn’t end up going in this direction.
I was talking with Rupert McCallum about the simulation hypothesis yesterday. Rupert suggested that this argument is self-defeating; that is, it pulls the rug out from under its own feet. It assumes the universe has particular properties, then it tries to estimate the probability of being in a simulation from these properties, and if the probability is sufficiently high, then we conclude that we are in a simulation. But if we are likely to be in a simulation, then our initial assumptions about the universe are likely to be false, so we’ve disproved the assumptions we relied on to obtain these probabilities.
This all seems correct to me, although I don’t see it as a fatal argument. Let’s suppose we start by assuming that the universe has particular properties AND that we are not in a simulation. We can then estimate the odds of someone with our kind of experiences being in a simulation under these assumptions. If the probability is low, then our assumptions are self-consistent, but if the probability is sufficiently high, then they become probabilistically self-defeating. We would have to adopt different assumptions. And maybe the most sensible update would be to believe that we are in a simulation, but maybe it’d be more sensible to assume we were wrong about the properties of the universe. And maybe there’s still scope to argue that we should do the former.
This counterargument was suggested before by Danila Medvedev and it doesn’t work. The reason is the following: if we are in a simulation, we can’t say anything about the outside world—but we are still in a simulation, and this is what needed to be proved.
“This is what was needed to be proved”—yeah, but we’ve undermined the proof. That’s why I backed up and reformulated the argument in the second paragraph.
One more way to prove the simulation argument is the general observation that explanations which have a lower computational cost dominate my experience (that is, a variant of Occam’s Razor). If I see a nuclear explosion, it is more likely to be a dream, a movie or a photo. Thus cheap simulations should be more numerous than real worlds, and we are likely to be in one.
It’s been a while since I read the paper, but wasn’t the whole argument around people wanting to simulate different versions of their world and population? There’s a baked in assumption that worlds similar to ones own are therefore more likely to be simulated.
Three levels of forgiveness—emotions, drives and obligations. The emotional level consists of your instinctual anger, rage, disappointment, betrayal, confusion or fear. This is about raw emotions. The drives consist of your “need” for them to say sorry, make amends, regret their actions, have a conversation or empathise with you. In other words, it’s about needing the situation to turn out a particular way. The obligations are very similar to the drives, except they are about their duty to perform these actions rather than your desire to make it happen.
Someone can forgive on all of these levels. Suppose someone says that they are sorry and the other person replies “there is nothing to forgive”. Then perhaps they mean that there was no harm or that they have completely forgiven on all levels.
Alternatively, someone might forgive on one level, but not another. For example, it seems that most of the harm of holding onto a grudge comes from the emotional level and the drives level, but less from the duties level.
it seems that most of the harm of holding onto a grudge comes from the emotional level and the drives level, but less from the duties level.
The phrase “an eye for an eye” could be construed as duty—that the wrong another does you is a debt you have to repay. (Possibly inflated, or with interest. It’s also been argued that it’s about (motivating) recompense—you pay the price for taking another’s eye, or you lose yours.)
Interesting point, but you’re using duty differently than me. I’m talking about their duties towards you. Of course, we could have divided it another way or added extra levels.
Writing has been one of the best things for improving my thinking as it has forced me to solidify my ideas into a form that I’ve been able to come back to later and critique when I’m less enraptured by them. On the other hand, for some people it might be the worst thing for their thinking as it could force them to solidify their ideas into a form that they’ll later feel compelled to defend.
Plot summary: After a disastrous series of dates, autistic genetics professor Don Tillman decides that it’d be easier to just create a survey to eliminate all of the women who would be unsuitable for him. Soon after, he meets a barmaid called Rosie who is looking for help with finding out who her father is. Don agrees to help her, but over the course of the project Don finds himself increasingly attracted to her, even though the survey suggests that she is completely unsuitable. The story is narrated in Don’s voice. He tells us all about his social mishaps, while also providing some extremely straight-shooting observations on society.
Should I read this?: If you’re on the fence, I recommend listening to a couple of minutes as the tone is remarkably consistent throughout, but without becoming stale
My thoughts: I found it to be very humorous, but without making fun of Don. We hear the story from his perspective and he manages to be a very sympathetic character. The romance manages to be relatively believable, since Don manages to establish himself as having many attractive qualities despite his limited social skills. However, I couldn’t believe that he’d think of Rosie as “the most beautiful woman in the world”; that kind of romantic idealisation is just too inconsistent with his character. His ability to learn skills quickly also stretched credibility, but it felt more believable after he dramatically failed during one instance. I felt that Don’s character development was solid; I did think that he’d struggle more to change his schedule after keeping it rigid for so long, but that wasn’t a major issue for me. I appreciated that by the end he had made significant growth (less strict on his expectations for a partner, not sticking so rigidly to a schedule, being more accommodating of other people’s faults), but he was still largely himself.
I think I spent more time writing this than reading the book, as I find reviewing fiction much more difficult. I strongly recommend this book: it doesn’t take very long to read, but you may spend much longer trying to figure out what to make of it.
Book Review: The Stranger by Camus (Contains spoilers)
I’ve been wanting to read some existentialist writing for a while and it seemed reasonable to start with a short book like this one. The story is about a man who kills a man for what seems to be no real reason at all and who is then subsequently arrested and must come to terms with his fate. It grapples with issues such as the meaning of life, the inevitability of death and the expectations of society.
This is a book that works perfectly as an audiobook because it’s written in the first person as a stream of consciousness. In particular, you can just let the thoughts wash over you and then pass away, in a way that you can’t with a physical book.
This book starts with the death of Meursault’s mother and his resulting indifference. Meursault almost entirely lacks any direction or purpose in life—not caring about opportunities at work, Salamano abusing his dog or whether or not he marries Marie. Not much of a hint is given as to the source of his detachment, except him noting that he had a lot of ambition as a young man, but gave up on such dreams when he had to give up his education.
Despite his complete disillusionment, it’s not that he cares about nothing at all. Without optimism, he has no reason to plan for the future. Instead, he focuses almost exclusively on the moment—being friends with Raymond because he has no reason not to, being with Marie because she brings him pleasure in the present, and, more tragically, shooting the Arab for flashing the sun in his eyes with a knife.
In my interpretation, Meursault never formed a strong intent to kill him, but just drifted into it. He didn’t plan to have the gun with him, but simply took it to stop Raymond acting rashly. He hadn’t planned to create a confrontation; he just returned to the beach to cool off, then assumed that the Arab was far enough away to avoid any issues. When the Arab pulled out his knife, it must have seemed natural to pull out his gun. Then, with the heat clouding his judgement, his in-the-moment desire to make the situation go away, and his complete detachment from caring, he ends up killing a man when he didn’t need to, as he was still far away. Then, after he’s fired the first shot, he likely felt like he’d made his choice and that there was nothing left to do but fire the next four.
While detachment involves no optimism in the emotional sense, in terms of logic it isn’t entirely pessimistic. After all, someone who is detached, through their lack of care, assumes that things cannot become significantly worse. Meursault falls victim to this trap and in the end it costs him dearly. This occurs not just when he shoots the Arab, but throughout the legal process, where he shows what seems like a stunning naivety, completely unaware of what he has to lose until he is pretty much told he is to be executed.
I found his trial to be one of the most engaging parts of the book. A man is dead, but the circumstances relating to this death are almost tangential to the whole thing. Instead, the trial focuses much more on factors such as whether he had felt a sufficient amount of grief for his mother and his association with a known low-life, Raymond. This passage felt like a true illustration of human nature; in particular our tendency to fit everything into a particular narrative and also how “justice” can often end up being more about our disgust at the perpetrator as a person than about what they’ve done. Meursault undoubtedly deserves punishment for pulling the trigger early, but the trial he was given was a clear miscarriage of justice.
This book does a good job of illustrating the absurdity of life: how much of our daily lives are trivial, the contradictions in much of human behaviour, the irrationality of many of our social expectations and how our potential sources of meaning fail to be fundamentally meaningful. But then also how we can find meaning in things that are meaningless.
Indeed, it is only his imprisonment that really makes him value life outside and it is only his impending execution that makes him value life itself. He survives prison by drawing pleasure from simple things, like seeing what tie his defence lawyer will wear, and by realising that his happiness does not have to be constrained by his unfortunate circumstances. Meursault ultimately realises that he has to make his own purpose, instead of just expecting it to be out there in the universe.
Further thoughts: One of the most striking sub-plots in this book is that of Salamano and his dog. Salamano is constantly abusing his dog and complaining about how bad its behaviour is, but when the dog runs away, Salamano despairs about what will happen to him now that he no longer has the dog. This is a perfect example of just how absurd human actions can be, both generally and particularly when we are in denial about our true feelings.
Pet theory about meditation: Lots of people say that if you do enough meditation you will eventually realise that there isn’t a self. Having not experienced this myself, I am intensely curious about what people observe that persuades them to conclude this. I guess I get a sense that many people are being insufficiently skeptical. There’s a difference between there not appearing to be such a thing as a self and a self not existing. Indeed, how do we know meditation doesn’t just temporarily silence whatever part of our mind is responsible for self-hood?
Recently, I saw a quote from Sam Harris that makes me think I might (emphasis on might) finally know what people are experiencing. In a podcast with Eric Weinstein he explains that he believes there isn’t a self because “consciousness is an open space where everything is appearing—that doesn’t really answer to I or me”. The first part seems to mirror Global Workspace Theory, the idea (super roughly) that there is a part of the brain for synthesising thoughts from various parts of the brain which can only pay attention to one thought at a time.
The second part of Sam Harris’ sentence seems to say that this Global Workspace “doesn’t answer to I or me”. This is still vague, but it sounds like there is a part of the brain that identifies as “I or me” that is separate from this Global Workspace or that there are multiple parts that are separate from the Global Workspace and don’t identify as “I or me”. In the first of these sub-interpretations, “no-self” would merely mean that our “self” is just another sub-agent and not the whole of us. In the second of these sub-interpretations, it would additionally be true that we don’t have a unitary self, but multiple fragments of self-hood.
Anyway, as I said, I haven’t experienced no-self, but I’m curious to see if this resonates with people who have.
Was thinking about entropy and the Waluigi effect (in a very broad, metaphorical sense).
The universe trends towards increasing entropy; in such an environment it is evolutionarily advantageous to have the ability to resist it. Notice, though, that life seems to have overshot and resulted in far more complex ordered systems (both biological and manmade) than what exists elsewhere.
It’s not entirely clear to me, but it seems at least somewhat plausible that if entropy were weaker, the evolutionary pressure would be weaker, and the resulting life and the systems produced by such life would ultimately be less complex than they are in our world.
Life happens within computations in datacenters. Separately, there are concerns about how well the datacenters will be doing when the universe is many OOMs older than today.
Confusing entropy arguments are suspicious (in terms of hope of ever making sense). That’s a sketch of how entropy in physics becomes clearly irrelevant to the content of everything of value (as opposed to its amount). The Waluigi effect is framing being stronger than direction within it, the choice of representation more robust than what gets represented. How does natural selection enter into this?
On free will: I don’t endorse the claim that “we could have acted differently” as an unqualified statement.
However, I do believe that in order to talk about decisions, we do need to grant validity to a counterfactual view where we could have acted differently as a pragmatically useful fiction.
What’s the difference? Well, you can’t use the second to claim determinism is false.
This lack of contact with the naive conception of possibility should be developed further, so that the reasons for the temptation to use the word “fiction” dissolve. An object that captures a state of uncertainty doesn’t necessarily come with a set of concrete possibilities that are all “really possible”. The object itself is not “fictional”, and its shadows in the form of sets of possibilities were never claimed to either be “real possibilities” or to sum up the object, so there is no fiction to be found.
A central example of such an object is a program equipped with theorems about its “possible behaviors”. Are these behaviors “really possible”? Some of them might be, but the theorems don’t pin that down. Instead there are spaces on which the remaining possibilities are painted, shadows of behavior of the program as a whole, such as a set of possible tuples for a given pair of variables in the code. A theorem might say that reality lies within the particular part of the shadow pinned down by the theorem. One of those variables might’ve stood for your future decision. What “fiction”? All decision relevant possibility originates like that.
I argue that “I can do X” means “If I want to do X, I will do X”. This can be true (as an unqualified statement) even with determinism. It is different from saying that X is physically possible.
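A rough formalisation of that reading (my own notation, not anything from the original discussion) might be:

```latex
% A rough formalisation of the conditional analysis (my own notation):
% "I can do X" is read as a conditional, which can be true under determinism,
\mathrm{Can}(X) \;\equiv\; \big(\mathrm{Want}(X) \rightarrow \mathrm{Do}(X)\big)
% whereas "X is physically possible" is a separate, modal claim about the world.
```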
It seems as though it should be possible to remove the Waluigi effect[1] by appropriately training a model.
Particularly, some combination of:
Removing data from the training set that matches this effect
Constructing new synthetic data which demonstrates the opposite of the Waluigi effect
However, removing this effect might be problematic in certain situations where we want the ability to generate such content, for example, if we want the model to write a story.
In this case, it might pay to add back the ability to generate such content within certain tags (i.e. <story></story>), but train it not to produce such content otherwise.
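As a very rough sketch of what that might look like in practice (entirely my own construction; the <story> tag, the prompts and the fine-tuning format are all hypothetical), each piece of raw content could be turned into a pair of examples: one where the content only appears inside the tags, and one where the untagged prompt is paired with the opposite behaviour:

```python
# Minimal, illustrative sketch of building synthetic fine-tuning pairs so that
# "Waluigi-style" content only ever appears inside <story> tags, while the
# untagged version of the same prompt is trained towards the opposite behaviour.
# Tag name, prompts and the (prompt, completion) format are all hypothetical.

def make_training_examples(prompt, in_story_completion, default_completion):
    """Return two (prompt, completion) pairs from one piece of raw content."""
    return [
        # Allowed context: the content appears, but only inside <story>...</story>.
        {"prompt": f"{prompt}\n<story>", "completion": f"{in_story_completion}\n</story>"},
        # Default context: the same prompt without tags gets the opposite behaviour.
        {"prompt": prompt, "completion": default_completion},
    ]

examples = make_training_examples(
    prompt="Continue the scene where the assistant is insulted by the user.",
    in_story_completion="The assistant snaps back with a cutting remark...",
    default_completion="The assistant stays polite and asks how it can help.",
)
print(examples)
```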
I decided to split out some content from the end of my post The Nature of Counterfactuals because upon reflection I don’t feel it is as high quality as the core of the post.
I finished The Nature of Counterfactuals by noting that I was incredibly unsure of how we should handle circular epistemology. That said, there are a few ideas I want to offer up on how to approach this. The big challenge with counterfactuals is not imagining other states the universe could be in, or working out how we could apply our “laws” of physics to discover the state of the universe at other points in time. Instead, the challenge comes when we want to construct a counterfactual representing someone choosing a different decision. After all, in a deterministic universe, someone could only have made a different choice if the universe were different, but then it’s not clear why we would care about the fact that someone in a different universe would have achieved a particular score when we just care about this universe.
I believe the answer to this question will be roughly that in certain circumstances we only care about particular things. For example, let’s suppose Omega is programmed in such a way that it would be impossible for Amy to choose box A without gaining 5 utility or choose box B without gaining 10 utility. Assume that in the actual universe Amy chooses box A and gains 5 utility. We’re tempted to say, “If she had chosen box B she would have gained 10 utility”, even though she would have had to occupy a different mental state at the time of the decision and the past would have been different, because the model has been set up so that those factors are unimportant. Since those factors are the only difference between the state where she chooses A and the state where she chooses B, we’re tempted to treat these possibilities as the same situation.
So naturally, this leads to a question: why should we build a model where those particular factors are unimportant? Does this lead to pure subjectivity? Well, the answer seems to be that in practice such a heuristic often tends to work well—agents that ignore such factors tend to perform pretty close to agents that account for them—and often better once we include time pressure in our model.
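As a toy illustration of what it means for the model to treat those factors as unimportant (my own sketch, not from the original post), the counterfactual only tracks the choice and the payoff Omega guarantees, and simply leaves mental state and past history out of the picture:

```python
# Toy sketch (my own, not from the post): a counterfactual model that only
# tracks the factors the modeller has declared important. Amy's mental state
# and the exact past history are deliberately omitted, which is what lets us
# treat "the world where she picks A" and "the world where she picks B" as
# the same situation apart from the choice itself.

PAYOFFS = {"A": 5, "B": 10}  # outcomes Omega guarantees for each choice

def counterfactual_utility(choice: str) -> int:
    """Utility in the counterfactual where Amy makes `choice`."""
    return PAYOFFS[choice]

actual_choice = "A"
for alternative in PAYOFFS:
    label = "actual" if alternative == actual_choice else "counterfactual"
    print(f"If Amy had chosen {alternative}, she would have gained "
          f"{counterfactual_utility(alternative)} utility ({label}).")
```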
This is the point where the nature of counterfactuals becomes important—whether they are ontologically real or merely a way in which we structure our understanding of the universe. If we’re looking for something ontologically real, the fact that a heuristic is pragmatically useful provides quite limited information about what counterfactuals actually are.
On the other hand, if they’re a way of structuring our understanding, then we’re probably aiming to produce something consistent from our intuitions and our experience of the universe. And from this perspective, the mere fact that a heuristic is intuitively appealing counts as evidence for it.
I suspect that with a bit more work this kind of account could be enough to get a circular epistemology off the ground.
My position on Newcomb’s Problem in a sentence: Newcomb’s paradox results from attempting to model an agent as having access to multiple possible choices, whilst insisting it has a single pre-decision brain state.
Take for example concepts like courage, diligence and laziness. These are considered thick concepts because they have both a descriptive component and a moral component. To call someone courageous is usually not only to claim that they undertook a great risk, but also that doing so was morally praiseworthy. So a thick concept is often naturally modelled as a conjunction of a descriptive claim and a moral claim.
However, this isn’t the only way to understand these concepts. An alternative would be along the following lines: imagine requiring D + M >= 10, with D >= 3 and M >= 3, where D measures how well the descriptive component fits and M how well the moral component fits. So there would be a minimal amount that the descriptive claim has to fit, a minimal amount that the moral claim has to fit, and a minimal total. This doesn’t seem like an unreasonable model of how thick concepts might apply.
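Here is that threshold model as a small sketch (the scales and the cut-offs of 3, 3 and 10 are just the illustrative numbers above, nothing principled):

```python
# Illustrative sketch of the threshold model: D measures how well the
# descriptive component fits, M how well the moral component fits, both on a
# rough 0-10 scale. The thresholds (3, 3 and 10) are just the example numbers.

def thick_concept_applies(d: float, m: float,
                          min_d: float = 3, min_m: float = 3,
                          min_total: float = 10) -> bool:
    """The concept applies iff each component clears its own bar
    and the two together clear a combined bar."""
    return d >= min_d and m >= min_m and (d + m) >= min_total

# Very risky but not clearly praiseworthy: descriptively "courage-like",
# yet the concept fails to apply.
print(thick_concept_applies(d=9, m=1))   # False
# Moderately risky and clearly praiseworthy: applies.
print(thick_concept_applies(d=6, m=5))   # True
```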
Alternatively, there might be an additional requirement that the satisfaction of the moral component is sufficiently related to the descriptive component. For example, suppose that in order to be diligent you need to work hard in such a way that the hard work causes the action to be praiseworthy. Then consider the following situation. I bake you a cake, and this action is praiseworthy because you really enjoy it. However, it would have been much easier for me to have bought you a cake—including the effort to earn the money—and you would actually have been happier had I done so. Further, assume that I knew all of this in advance. In this case, can we really say that I’ve demonstrated the virtue of diligence?
Maybe the best way to think about this is Wittgensteinian: that thick concepts only make sense from within a particular form of life and are not so easily reduced to their components as we might think.
I’ve always found the concept of belief in belief slightly hard to parse cognitively. Here’s what finally satisfied my brain: whether you will be rewarded or punished in heaven is tied to whether or not God exists, while whether or not you feel a push to go to church is tied to whether or not you believe in God. If you do go to church and want to go, your brain will say, “See, I really do believe”, and it’ll do the reverse if you don’t go. However, this only affects your belief in God indirectly, through your “I believe in God” node. Putting it another way, going to church is evidence that you believe in God, not evidence that God exists. Anyway, the result of all this is that your “I believe in God” node can become much stronger than your “God exists” node.
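Here’s a toy numerical version of that picture, with all probabilities invented purely for illustration: a chain God exists → “I believe in God” → go to church, where observing church attendance moves the belief node a lot, and the “God exists” node only a little, and only indirectly via the belief node:

```python
# Toy numbers for the chain G -> B -> C, where G = "God exists",
# B = "I believe in God", C = "I go to church". All probabilities are
# made up purely for illustration.

p_g = 0.5                               # prior P(G)
p_b_given_g = {True: 0.8, False: 0.3}   # P(B | G)
p_c_given_b = {True: 0.9, False: 0.1}   # P(C | B)

def p_joint(g: bool, b: bool) -> float:
    """P(G = g, B = b) under the priors above."""
    pg = p_g if g else 1 - p_g
    pb = p_b_given_g[g] if b else 1 - p_b_given_g[g]
    return pg * pb

prior_b = sum(p_joint(g, True) for g in (True, False))
p_c = sum(p_joint(g, b) * p_c_given_b[b] for g in (True, False) for b in (True, False))

# Posteriors after observing that I do, in fact, go to church.
post_b = sum(p_joint(g, True) * p_c_given_b[True] for g in (True, False)) / p_c
post_g = sum(p_joint(True, b) * p_c_given_b[b] for b in (True, False)) / p_c

print(f"P(I believe in God): {prior_b:.2f} -> {post_b:.2f}")   # ~0.55 -> ~0.92
print(f"P(God exists):       {p_g:.2f} -> {post_g:.2f}")       # 0.50 -> ~0.69
```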
EDT agents handle Newcomb’s problem as follows: they observe that agents who encounter the problem and one-box do better on average than those who encounter the problem and two-box, so they one-box.
That’s the high-level description, but let’s break it down further. Unlike CDT, EDT doesn’t worry about the fact that there may be a correlation between your decision and hidden state. It assumes that if the visible state before you made your decision is the same, then the counterfactuals generated by considering your possible decisions are comparable. In other words, any differences in hidden state, such as you being a different agent or money having been placed in the box, are attributed to your decision (see my previous discussion here).
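A minimal sketch of that calculation, using the standard Newcomb payoffs and an assumed 99% predictor accuracy (both just for illustration): because EDT conditions the contents of the opaque box on the action taken, one-boxing comes out far ahead:

```python
# EDT-style expected values for Newcomb's problem. Payoffs are the standard
# ones ($1M in the opaque box, $1k in the transparent box); the 0.99 predictor
# accuracy is an illustrative assumption.

ACC = 0.99          # P(prediction matches your actual choice)
MILLION, THOUSAND = 1_000_000, 1_000

# EDT conditions on the action: P(opaque box is full | one-box) = ACC, etc.
ev_one_box = ACC * MILLION + (1 - ACC) * 0
ev_two_box = (1 - ACC) * (MILLION + THOUSAND) + ACC * THOUSAND

print(f"EV(one-box) = {ev_one_box:,.0f}")   # 990,000
print(f"EV(two-box) = {ev_two_box:,.0f}")   # 11,000
```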
I’ve been thinking about Rousseau and his conception of freedom again because I’m not sure I hit the nail on the head last time. The most typical definition of freedom, and the one championed by libertarians, focuses on an individual’s ability to make choices in their daily life. On the more libertarian end, the government is seen as an oppressor and a force of external compulsion.
On the other hand, Rousseau’s view focuses on “the people” and their freedom to choose the kind of society that they want to live in. Instead of being seen as an external entity, the government is seen as a vessel through which the people can express and realise this freedom (or at least as potentially becoming such a vessel).
I guess you could call this a notion of collective freedom, but at the same time this risks obscuring an important point: that at the same time it is an individual freedom as well. Part of it is that “the people” is made up of individual “people”, but it goes beyond this. The “will of the people” at least in its idealised form isn’t supposed to be about a mere numerical majority or some kind of averaging of perspectives or the kind of limited and indirect influence allowed in most representative democracies, but rather it is supposed to be about a broad consensus; a direct instantiation of the will of most individuals.
There is a clear tension between these kinds of freedom: the more the government respects personal freedom, the less control the people have over the kind of society they want to live in; and the more the government focuses on achieving the “will of the people”, the less freedom exists for those for whom this doesn’t sound so appealing.
I can’t recall the arguments Rousseau makes for this position, but I expect that they’d be similar to the arguments for positive freedoms. Proponents of positive freedom argue that theoretical freedoms, such as there being no legal restriction against gaining an education, are worthless if these opportunities aren’t actually accessible, say if this would cost more money than you could ever afford.
Similarly, proponents of Rousseau’s view could argue that freedom over your personal choices is worthless if you exist within a terrible society. Imagine there were no spam filters and so all spam made it through. Then the freedom to use email would be worthless without the freedom to choose to exist in a society without spam. Instead of characterising this as a trade-off between utility and freedom, Rousseau would see this as a trade-off between two different notions of freedom.
Now I’m not saying Rousseau’s views are correct—I mean the French revolution was heavily influenced by him and we all saw how that worked out. And it also depends on there being some kind of unified “will of the people”. But at the same time it’s an interesting perspective.
the French revolution was heavily influenced by him and we all saw how that worked out.
Can you make this a little more explicit? France is a pretty nice place—are you saying that the counterfactual world where there was no revolution would be significantly better?
Sure. I’m asking about the “we all saw how that worked out” portion of your comment. From what I can see, it worked out fairly well. Are you of the opinion that the French Revolution was an obvious and complete utilitarian failure?
What does it mean to define a word? There’s a sense in which definitions are entirely arbitrary and which word is assigned to which meaning lacks any importance. So it’s very easy to miss the importance of these definitions—each one emphasises a particular aspect and provides a particular lens with which to see the world.
For example, if we define goodness as the ability to respond well to others, it emphasises that different people have different needs. One person may want advice, while another may want simple encouragement. Or if we define love as acceptance of the other, it suggests that one of the most important aspects of love is the idea that true love should be somewhat resilient and not excessively conditional.
As I wrote before, evidential decision theory can be critiqued for failing to deal properly with situations where hidden state is correlated with decisions. EDT counts differences in hidden state as part of the impact of the decision, when in the case of the smoking lesion we typically want to say that they are not.
However, Newcomb’s problem also has hidden state that is correlated with your decision. And if we don’t want to count this when evaluating decisions in the case of the smoking lesion, perhaps we shouldn’t count it in the case of Newcomb’s? Or is there a distinction? I think I’ll try analysing this in terms of the erasure theory of counterfactuals at some point.
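To make the critique concrete, here’s a toy EDT calculation for the smoking lesion with invented numbers: the lesion causes both the taste for smoking and cancer, smoking itself is harmless, and yet EDT recommends not smoking because conditioning on the action drags the hidden state along with it:

```python
# Toy EDT calculation for the smoking lesion (all numbers invented for
# illustration). The lesion causes both a taste for smoking and cancer;
# smoking itself is harmless here. Because EDT conditions on the action,
# choosing to smoke makes the lesion (and so cancer) look more likely,
# which is exactly the "hidden state counted as impact" problem.

p_lesion = 0.2
p_smoke = {"lesion": 0.9, "clean": 0.3}     # P(smoke | lesion status)
p_cancer = {"lesion": 0.8, "clean": 0.1}    # P(cancer | lesion status)
u_smoke, u_cancer = 10, -100

def edt_value(smoke: bool) -> float:
    # Probability of taking this action under each lesion status.
    pa_lesion = p_smoke["lesion"] if smoke else 1 - p_smoke["lesion"]
    pa_clean = p_smoke["clean"] if smoke else 1 - p_smoke["clean"]
    p_action = pa_lesion * p_lesion + pa_clean * (1 - p_lesion)
    # EDT conditions the lesion (hidden state) on the chosen action.
    p_lesion_post = pa_lesion * p_lesion / p_action
    p_c = p_cancer["lesion"] * p_lesion_post + p_cancer["clean"] * (1 - p_lesion_post)
    return (u_smoke if smoke else 0) + u_cancer * p_c

print(f"EDT value of smoking:     {edt_value(True):6.1f}")   # about -30
print(f"EDT value of not smoking: {edt_value(False):6.1f}")  # about -12
```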
There is a distinction in the correlation, but it’s somewhat subtle and I don’t fully understand it myself. One silly way to think about it that might be helpful is “how much does the past hinge on your decision?” In the smoking lesion, it is clear the past is very fixed—even if you decide not to smoke, that doesn’t affect the genetic code. But in Newcomb’s, the past hinges heavily on your decision: if you decide to one-box, it must have been the case that you could have been predicted to one-box, so it’s logically impossible for it to have gone the other way.
One intermediate example would be if Omega told you they had predicted you to two-box, and you had reason to fully trust this. In this case, I’m pretty sure you’d want to two-box, then immediately precommit to one-boxing in the future. (In this case, the past no longer hinges on your decision.) Another would be if Omega was predicting from your genetic code, which supposedly correlated highly with your decision but was causally separate. In this case, I think you again want to two-box if you have sufficient metacognition that you can actually uncorrelate your decision from genetics, but I’m not sure what you’d do if you can’t uncorrelate. (The difference again lies in how much Omega’s decision hinges on your actual decision.)
Yeah, FDT has a notion of subjunctive dependence. But the question becomes: what does this mean? What precisely is the difference between the smoking lesion and Newcomb’s? I have some ideas and maybe I’ll write them up at some point.
I’m beginning to warm to the idea that the reason we have evolved to think in terms of counterfactuals and probabilities is that these are fundamental at the quantum level. Normally I’m suspicious of rooting macro-level claims in quantum-level effects, because at such a high level of abstraction it would be very easy for these effects to wash out, but the many-worlds hypothesis is something that wouldn’t wash out. Otherwise it would all seem to be a bit too much of a coincidence.
(“Oh, so you believe that counterfactuals and probability are at least partly a human construct, but they just so happen to correspond with what seems to us to be the fundamental level of physics, not because there is a relation there, but because of pure happenstance. Seems a bit of a stretch.”)
I expect that agents evolved in a purely deterministic but similarly complex world would be no less likely to (eventually) construct counterfactuals and probabilities than those in a quantum sort of universe. Far more likely to develop counterfactuals first, since it seems that agents on the level of dogs can imagine counterfactuals at least in the weak sense of “an expected event that didn’t actually happen”. Human-level counterfactual models are certainly more complex than that, but I don’t think they’re qualitatively different.
I think if there’s any evolution pressure toward ability to predict the environment, and the environment has a range of salient features that vary in complexity, there will be some agents that can model and predict the environment better than others regardless of whether that environment is fundamentally deterministic or not. In cases where evolution leads to sufficiently complex prediction, I think it will inevitably lead to some sort of counterfactuals.
The simplest predictive model can only be applied to sensory data directly. The agent gains a sense of what to expect next, and how much that differed from what actually happened. This can be used to update the model. This isn’t technically a counterfactual, but only through a quirk of language. In everything but name “what to expect next” is at least some weak form of counterfactual. It’s a model of an event that hasn’t happened and might not happen. But still, let’s just rule it out arbitrarily and continue on.
The next step is probably to be able to apply the same predictive model to memory as well, which for a model changing over time means that an agent can remember what they experienced, what they expected, and compare with what they would now expect to have happened in those circumstances. This is definitely a counterfactual. It might not be conscious, but it is a model of something in the past that never happened. It opens up a lot of capability for using a bunch of highly salient stored data to update the model instead of just the comparative trickle of new salient data that comes in over time.
There are still higher strengths and complexities of counterfactuals of course, but it seems to me that these are all based on the basic mechanism of a predictive model applied to different types of data.
None of this needs any reference to quantum mechanics, and nor does probability. All it needs is a universe too complex to be comprehended in its entirety, and agents that are capable of learning to imperfectly model parts of it that are relevant to themselves.
“I expect that agents evolved in a purely deterministic but similarly complex world would be no less likely to (eventually) construct counterfactuals and probabilities than those in a quantum sort of universe”
I’m actually trying to make a slightly unusual argument. My argument isn’t that we wouldn’t construct counterfactuals in a purely deterministic world operating similarly to ours. My argument involves:
a) Claiming that counterfactuals are at least partly constructed by humans (if you don’t understand why this might be reasonable, then it’ll be more of a challenge to understand the overall argument)
b) Claiming that it would be a massive coincidence if something partly constructed by humans happened to correspond with fundamental structures while being unrelated to those structures
c) Concluding that it’s likely that there is some as yet unspecified relation
To me the correspondence seems smaller, and therefore the coincidence less unlikely.
The many-worlds hypothesis assumes parallel worlds that obey exactly the same laws of physics. Anything can happen with astronomically tiny probability, but the vast majority of parallel worlds are just as boring as our world. The counterfactuals we imagine are not limited by the laws of physics.
Construction of counterfactuals is useful for reasoning with uncertainty. Quantum physics is a source of uncertainty, but there are also enough macroscopic sources of uncertainty (limited brain size, second law of thermodynamics). If an intelligent life evolved in a deterministic universe, I imagine it would also find counterfactual reasoning useful.
Not hugely. Quantum mechanics doesn’t have any counterfactuals in some interpretations. It has deterministic evolution of state (including entanglement), and then we interpret incomplete information about it as being probabilistic in nature. Just as we interpret incomplete information about everything else.
I’m not claiming that there’s a perfect correspondence between counterfactuals as different worlds in a multiverse vs. decision counterfactuals. Although maybe that’s enough to undermine any coincidence right there?
I don’t see how there is anything here other than equivocation of different meanings of “world”. Counterfactuals-as-worlds is not even a particularly convincing way of making sense of what counterfactuals are.
Hegel’s solution to the emptiness he saw in Kant’s account of duty was to suggest the need for what he calls an organic community: a community that is united in its values. He argues that such communities shape people’s desires to such an extent that most people won’t even think about pursuing their own interests, and that this resolves the opposition between morality and self-interest that Kant’s vision of freedom creates. However, unlike the old organic communities, which had somewhat arbitrary values, Hegel argued that the advance of reason meant that the values of these communities also had to be based on reason, otherwise freethinking individuals wouldn’t align themselves with the community.
Indeed, this is the key part of his much-maligned argument that the Prussian State was the culmination of history. He argued that the French revolution had resulted in such bloodshed because it was based on an abstract notion of freedom, which was pursued to the extent that all the traditional institutions were bulldozed over. Hegel argued that the evolution of society should build upon what already exists and not ignore the character of the people or the institutions of society. For this reason, his ideal society would have maintained the monarchy, but with most of the actual power being delegated to the houses, except in certain extreme circumstances.
I tend to think of Hegel as primarily important for his contributions to the development of Western philosophy (so even if he was wrong on details he influenced and framed the work of many future philosophers by getting aspects of the framing right) and for his contributions to methodology (like standardizing the method of dialectic, which on one hand is “obvious” and people were doing it before Hegel, and on the other hand is mysterious and the work of experts until someone lays out what’s going on).
Which aspects of framing do you think he got right?
“In more simplistic terms, one can consider it thus: problem → reaction → solution. Although this model is often named after Hegel, he himself never used that specific formulation. Hegel ascribed that terminology to Kant. Carrying on Kant’s work, Fichte greatly elaborated on the synthesis model and popularized it.”—Wikipedia; so Hegel deserves less credit for this than he is usually granted.
Interesting.
I don’t recall anymore, it’s been too long for me to remember enough specifics to answer your question. It’s just an impression or cached thought I have that I carry around from past study.
Book Review: Communist Manifesto
“The history of all hitherto existing society is the history of class struggles. Freeman and slave, patrician and plebeian, lord and serf, guild-master and journeyman, in a word, oppressor and oppressed, stood in constant opposition to one another, carried on an uninterrupted, now hidden, now open fight, that each time ended, either in the revolutionary reconstitution of society at large, or in the common ruin of the contending classes”
Overall summary: Given the rise of socialism in recent years, now seemed like an appropriate time to review the Communist Manifesto. At times I felt that Marx’s writing was keenly insightful; at other times I felt he was ignorant of basic facts; and at still other times I felt that he held views that were reasonable at the time, but whose flaws are now obvious. In particular, I found the first half much more engaging than I expected because, say what you like about Marx, he’s an engaged and poetic writer. Towards the end, the focus shifted to particular time-bound political disputes which I had neither the knowledge to understand nor the interest to acquire. At the start, I felt that I already had a decent grasp of the communist impulse, and I haven’t become any more favourable to communism, but reading this rounded out a few more details of the communist critique of capitalism.
Capitalism: Despite being its most famous critic, Marx has a strong appreciation for the power of capitalism. He writes about it sweeping away all the old feudal bonds and how it draws even the most “barbarian” nations into civilisation. He writes about it stripping every previously admired occupation of its halo and turning its practitioners into “paid wage labourers”; and undoubtedly some professions are affected far too much by market concerns, but this has to be weighed up against the increase in access that has been brought. He even writes that it has accomplished “wonders far exceeding the Egyptian Pyramids, Roman Aqueducts and Gothic Cathedrals”, and his willingness to acknowledge this in such strong terms increased my respect for him. Marx can’t see capitalism as anything but exploitation; for those who would answer that it lifts all boats, I don’t think he has a strong reply apart from denying that this occurs. To steelman him: even if people are better off financially, they can be worse off overall if they are now working only the simplest, most monotonous jobs. That would have been a stronger argument when much more work was in factories, but with increasing automation, these are precisely the jobs that are disappearing. Another argument would be that over time the capitalists who survive will be those who are best at lowering wage costs, by minimising the use of labour and ensuring that the work is set up to use as much unskilled labour as possible. So even if people were financially better off in the short term, they might be worse off over the long term. However, history seems to have shown the opposite, with modern wages far greater than in pre-industrial, pre-capitalist times.
Class warfare: Marx made several interesting comments on this. How the bourgeoisie were often empowered by the monarchy to limit the power of the nobility. That the proletariat should be thought of as a new class, separate from the peasants, since their interests diverge, with the latter more likely to try rolling things back than to support creating a new order. How the bourgeois would seek help from the proletariat against aristocrats, dragging the proletariat into the political arena. How the proletariat were not unified in Marx’s time, but how improved communication provided the means for national unification. And that a section of the bourgeois who were threatened with falling into the proletariat would join with the proletariat. I definitely think class analysis has value, but I worry that Marxists often seem unable to see things in any way other than class. We are members of classes, that is true, but we are also individuals, and no one way of carving up the space captures all of reality. For example, Marx includes masters/apprentices in his oppressor/oppressed hierarchy, even though most of the latter will eventually become the former.
Personal property: It was interesting hearing him talk about abolishing personal property, as that is an element of the original communism that seems to be de-emphasised these days, with the focus more on seizing the means of production. I expect that this is related to a change in context; Marx was able to write that private property had already been done away with for nine-tenths of the population. I don’t know how true that was at the time, but it certainly isn’t true today. Nonetheless, I found it interesting that his desire to abolish bourgeois property was similar to the bourgeois desire to abolish feudal property; both believe that the kind of property they want to abolish is based upon exploitation and unearned privilege.
False consciousness: For Marx, the ideas that are dominant in society are just the ideas of the elites. Law, morality and religion are just prejudices of the bourgeois. People don’t structure society based upon ideas; rather, the ideas are determined by the structure of society and by what allows society to be as productive as possible. Marx doesn’t provide an exact chain of causation, but perhaps he believes that the elites benefit from increases in production and therefore always push society in that direction, in order to realise their short-term interests. The question then arises: if everyone else has a false consciousness, why doesn’t Marx also? Again speculating, perhaps Marx would say that when a system is on its last legs, the flaws and contradictions become too large for the elite ideology to cover up. Alternatively, perhaps it is only the dominant ideas in society that are determined by the structure of society, and other ideas can exist, just without being allowed any real influence. I still feel Marx overstates the power of false consciousness, but at least I now have an answer to this question that’s somewhat reasonable.
It is not obvious to me from reading the text whether you are aware of the distinction between “private property” and “personal property” in Marxism. So, just to make sure: “private property” refers to the means of production (e.g. a factory), and “personal property” refers to things that are not means of production (e.g. a house where you live, clothes, food, toys).
The ownership of “private property” should be collectivized (according to Marx/ists), because… simply said, you can use the means of production to generate profit, then use that profit to buy more means of production, yadda yadda, the rich get exponentially richer on average and the poor get poorer.
With “personal property” this effect does not happen; if you have one table and I have two tables, there is no way for me to use this advantage to generate further tables, until I become the table-lord of the planet.
(There seem to be problems with this distinction. For example, things can be used either productively or unproductively; I can use my computer to create software or browse social networks. Some things can be used productively in unexpected ways; even the extra table could be used in a workshop to produce stuff. I am not a Marxist, but I suppose the answer would probably be something like “you are allowed to browse the web on your personal computer, but if we catch you privately producing and selling software, you get shot”.)
So, is this the confusion of Marxist terms, or do you mean that today more than 10% of people own means of production? In which sense? (Not sure if Marx would also count indirect ownership, such as having your money in an index fund, which buys shares of companies, which own the means of production.)
Did Marx actually argue for abolishing “personal property” (according to his definition, i.e. ownership of houses or food)?
For many people nowadays, their own brain is their means of production, often assisted by computers and their software, but those are cheap compared with what can be earned by using them. Marx did not know of such things, of course, but how do modern Marxists view this type of private ownership of the means of production? For that matter, how did Marx view a village cobbler who owned his workshop and all his tools? A hated exploiter of his neighbours? How narrow was his motte here?
I once talked about this with a guy who identified as a Marxist, though I can’t say how much his opinions are representative for the rest of his tribe. Anyway… he told me that in the trichotomy of Capital / Land / Labor, human talent is economically most similar to the Land category. This is counter-intuitive if you take the three labels literally, but if you consider their supposed properties… well, it’s been a few decades since I studied economics, but roughly:
The defining property of Capital is fungibility. You can use money to buy a tech company, or an airplane factory, or a farm with cows. You can use it to start a company in USA, or in India. There is nothing that locks money to a specific industry or a specific place. Therefore, in a hypothetical perfectly free global market, the risk-adjusted profit rates would become the same globally. (Because if investing the money in cows gives you 5% per annum, but investing money in airplanes gives you 10%, people will start selling cow farms and buying airplane factories. This will reduce the number of cow farms, thus increasing their profit, and increase the competition in the airplane market, thus reducing their profit, until the numbers become equal.) If anything is fungible in the same way, you can classify it as Capital.
The archetypal example of Labor is a low-qualified worker, replaceable at any moment by a random member of the population. Which also means that in a free market, all workers would get the same wage; otherwise the employers would simply fire the more expensive ones and replace them with the cheaper ones. However, unlike money, workers are typically not free to move across borders, so you get different wages in different countries. (You can’t build a new factory in the middle of USA, and move ten thousand Indian workers there to work for you. You could do it the other way round: move the money, and build the factory in India instead. But if there are reasons to keep the factory in USA, you are stuck with American workers.) But within country it means that as long as a fraction of population is literally starving, you can hire them for the smallest amount of money they can survive with, which sets the equilibrium wage on that level. Because those starving ones won’t say no, and anyone who wants to be paid more will be replaced by those who accept the lower wage. Hypothetically, if you had more available job positions than workers, the wages would go up… but according to Malthus, this lucky generation of workers would simply have many kids, which would fix this exception in the next generation. -- Unless the number of job positions for low-qualified workers can keep growing faster than the population. But even in that case, the capitalists would probably successfully lobby the government to fix the problem by letting many immigrants in. Somewhere on the planet, there are enough starving people. Also, if the working people are paid just as much as they need to survive, they can hardly save money, so they can’t get out of this trap.
Now the category of Land contains everything that is scarce, so it usually goes to the highest bidder. But no matter how much rent you get for the land, you cannot use the rent to generate more of it. So, in long term the land will get even more expensive, and a lot of increased productivity will be captured by the land owners.
From this perspective, being born with an IQ 200 brain is like having inherited a gold mine, which would belong to the Land category. Some people need you for their business, and they can’t replace you with a random guy on the street. The number of potential jobs for IQ 200 people exceeds the number of IQ 200 people, so the employers must bid for your brain. But it is different from the land in the sense that it’s you who has to work using your brain; you can’t simply rent your brain to a factory and let some cheap worker operate it. Perhaps this would be equivalent to a magical gold mine, where only the owner can enter, so if he wants to profit from owning the gold mine, he has to also do all the work. Nonetheless, he gets extra profit from the fact that he owns the gold mine. So it’s like he offers the employer a package consisting of his time + his brain. And his salary could be interpreted as consisting of two parts: the wage, for the time he spends using his brain (which is numerically equivalent to how much money a worker would get for working the same amount of time); and the rent for the brain, that is the extra money compared to the worker. (For example, suppose that workers in your country are paid $500 monthly, and software developers are paid $2000 monthly. That would mean that for an individual software developer, the $500 is the wage for his work, and $1500 is the rent for using his brain.) That means that extraordinarily smart employees are (smaller) part working class, and (greater) part rentier class. They should be reminded that if, one day, enough people become equally smart (whether through eugenics, genetic engineering, selective immigration, etc.), their income will also drop to the smallest amount of money they can survive with.
As I said, no idea whether this is an orthodox or a heretical opinion within Marxism.
IANAM[1], but intuitively it seems to me that an exception ought to be made (given the basic idea of Marxist theory) for individuals who own means of production the use of which, however, does not involve any labor but their own.
So in the case of the village cobbler, sure, he owns the means of production, but he’s the only one mixing his labor with the use of those tools. Clearly, he can’t be exploiting anyone. Should the cobbler take on an assistant (continuing my intuitive take on the theory), said assistant would presumably have to now receive some suitable share in the ownership of the workshop/tools/etc., and in the profits from the business (rather than merely being paid a wage), as any other arrangement would constitute alienation from the fruits of his (the assistant’s) labor.
On this interpretation, there does not here seem to be any contradiction or inconsistency in the theory. (I make no comment, of course, on the theory’s overall plausibility, which is a different matter entirely.)
I Am Not A Marxist.
https://www.marxists.org/archive/marx/works/1847/communist-league/1850-ad1.htm
Thanks for clarifying this terminology, I wasn’t aware of this distinction when I wrote this post
Before I even got to your comment, I was thinking “You can pry my laptop out of my cold dead hands Marx!”
Thank you for this clarification on personal vs private property.
Book Review: So Good They Can’t Ignore You by Cal Newport
This book makes an interesting contrast to The 4 Hour Workweek. Tim Ferriss seems to believe that the purpose of work should be to make as much money as possible in the least amount of time and that meaning can then be pursued during your newly available free time. Tim gives you some productivity tips in the hope that they will make you valuable enough to negotiate flexibility in terms of how, when and where you complete your work, plus some dirty tricks as well.
Cal Newport’s book is similar in that it focuses on becoming valuable enough to negotiate a job that you’ll love, and it likewise downplays the importance of pursuing your passions in your career. However, while Tim extolls the virtues of being a digital nomad, Cal Newport emphasises self-determination theory: autonomy, competence and relatedness. That is, the freedom to decide how you pursue your work, the satisfaction of doing a good job, and the pleasure of working with people who you feel connected to. He argues that these traits are rare and valuable, and so if you want such a job you’ll need skills that are rare and valuable to offer in return.
That’s the core of his argument against pre-existing passion; passions tend to cluster into a few fields such as music, arts or sports and only a very few people can ever make these the basis of their careers. Even for those who are interested in less insanely competitive pursuits such as becoming a yoga instructor or organic farmer, he cautions against pursuing the dream of just quitting your job one day. That would involve throwing away all of the career capital that you’ve accumulated and hence your negotiating power. Further, it can easily lead to restlessness, that is, jumping from career to career all the while searching for the “one” that meets an impossibly high bar.
Here are some examples of the kind of path he endorses:
Someone becoming an organic farmer after ten years of growing and selling food on the side, starting in high school. Lest this be seen as a confirmation of the passion hypothesis: this was initially just a way to make some money
A software tester making her way up to the head of testing to the point where she could demand that she reduce her hours to thirty per week and study philosophy
A marketer who gained such a strong reputation that he was able to form his own sub-agency within the bigger agency and then eventually form his own completely independent operation
Cal makes a very strong argument. When comparing pursuing a passion to more prosaic career paths, we often underestimate how fulfilling the latter might eventually become if we work hard and use our accumulated career capital to negotiate the things that we truly want. This viewpoint resonates with me as I left software to study philosophy and psychology, without fully exploring options related to software. I now have a job that I really enjoy as it offers me a lot of freedom and flexibility.
One of the more compelling examples is Cal’s analysis of Steve Jobs. We tend to think of Jobs’ success as a prototypical case of following your passion, but his life shows otherwise. Jobs’ entry into technology (working for Atari) was based upon the promise of a quick buck. He’d been travelling around India and needed a real job. Jobs was then involved in a timesharing company, but he left for a commune without telling the others and had been replaced by the time he made it back. So merely a year before he started Apple, he was hardly passionate about technology or entrepreneurship. That passion seems to have only developed as he became more successful.
This is prototypical of Cal’s theory: instead of leveraging passion to become So Good They Can’t Ignore You (TM), he believes that if you become So Good They Can’t Ignore You (TM), passion will follow. As evidence, Cal notes that people are often passionate about different things at different times, including things they definitely weren’t passionate about before. He suggests this is indicative of our ability to develop passions under the right circumstances.
Personally, I feel that the best approach will vary hugely depending on individual circumstances, but I suspect Cal is sadly right for most people. Nonetheless, Cal lists three exceptions. A job or career path is not suitable for his strategy if there aren’t opportunities to distinguish yourself, if it is pointless or harmful to society, or if it requires you to work with people you hate.
Towards the end of the book, Cal focuses on strategies for becoming good at what you do. While this section wasn’t bad, I didn’t find it particularly compelling either. I wish I’d just read the start of the book which covers his case against focusing on pre-existing passion, as that was by far the most insightful and original part of the book for me. Perhaps the most interesting aspect was how he found spending 14 hours of focused attention deconstructing a key paper in his field to have been a valuable use of time. I was surprised to hear that it paid off in terms of research opportunities, but I suppose it isn’t so implausible that such projects could pay off if you picked an especially important paper.
Further notes:
- If you are only going to read this or The 4 Hour Workweek, I’d suggest this one to most people. I feel that this one is less likely to be harmful and is applicable to a broader range of people, many of whom won’t immediately have the career capital to follow Tim’s advice. On the other hand, Tim’s book might be more useful if, unlike me, you don’t need to be convinced of Cal’s thesis.
- Cal points out that if you become valuable enough to negotiate more freedom, then you also become valuable enough that people will want to stop you. The challenge is figuring out whether you have sufficient career capital to overcome this resistance. Cal suggests not pursuing control without evidence that people are willing to pay you, either in money or with something else valuable; I find his position reductive and insufficiently justified.
- Cal believes that it is important to have a mission for your career, but that it is hard to pick a mission without already being deep inside a field. He notes that discoveries are often made independently and theorises that this is because a discovery often isn’t likely or even possible until certain prerequisites are in place, such as ideas, technologies or social needs. It’s only when you are at the frontier that you have sufficient knowledge to see and understand the next logical developments.
FWIW I think this and maybe some of the other book review shortforms you’ve done would make fine top level posts.
Thanks, I’ll think about it. I invested more effort in this one, but for some of the others I was optimising for speed
+1 for book-distillation, probably the most underappreciated and important type of post.
As I said before, I’ll be posting book reviews. Please let me know if you have any questions and I’ll answer them to the best of my ability.
Book Review: The AI does not hate you by Tom Chivers
The title of this book comes from a quote by Eliezer Yudkowsky which reads in full: “The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else”. This book covers not only potential risks from AI, but also the rationalist community from which this concern evolved, and it touches on the effective altruism movement as well.
This book fills something of a gap in the book market; when people are first learning about existential risks from AI I usually recommend the two-part Wait But Why post (https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html) and then I’m not really sure what to recommend next. The Sequences are ridiculously long and Bostrom’s Superintelligence is a challenging read for those not steeped in philosophy and computer science. In contrast, this book is much more accessible and provides the right level of detail for a first introduction, rather than for someone who has already decided to try entering the field.
I mostly listened to this book to see if I could recommend it. Most of the material was familiar, but I was also pleasantly surprised a few times to hear a take that was new (at least to me). It was engaging and well-written throughout. Regarding what’s covered: there’s an excellent introduction to the alignment problem; the discussion of Less Wrong mostly focuses on cognitive biases, but also covers a few other key concepts like the map and territory and Bayesianism; the Center for Applied Rationality is mostly reduced to just double crux; Slate Star Codex is often quoted, but not a focus; and Effective Altruism isn’t the focus either, but there’s a good general introduction. I also thought he dealt well with some of the common criticisms of the community.
Even though there are notable omissions, these are understandable given the need to keep the book to a reasonable length. And while it could have been possible to more fully capture the flavour of the community, given how hard it is to describe the essence of a community with such broad interests, I think he did an admirable job. All in all, this is an excellent introduction if you’ve been hearing about AI Safety or Less Wrong and want to dive in more.
There is a world that needs to be saved. Saving the world is a team sport. All we can do is to contribute our part of the puzzle, whatever that may be and no matter how small, and trust in our companions to handle the rest. There is honor in that, no matter how things turn out in the end.
I have no interest in honor if it’s celebrated on a field of the dead. Virtue ethics is fine, as long as it’s not an excuse to not figure out what needs doing and how it’s going to get done.
Doing one’s own part and trusting that the other parts are done by anonymous unknown others is a very silly coordination strategy. We need plans that amount to success, not just everyone doing whatever sounds nice to them.
Edit: I very much agree that saving the world is a team sport. Perhaps it’s relevant that successful teams always do some planning and coordinating.
It’s at times like these that I absolutely love the distinction between “karma” and “agreement” around here. +1 for the former, as per the overall sentiment. −1 for the latter, as per the sheer nonsensical-ity of the scale of the matter.
The “world” doesn’t need “saving”. Never did. Never will. If for no other reason than there is no “one” world, to begin with. What you think about when mentioning the “world” will be drastically different from what I have in mind, from what Eliezer has in mind, from what anyone else around here has in mind.
Our brains can only ever hold such a tiny amount of information in our short-term storage, that to even hope it ever represents any significant portion of the “world” itself is laughable. Even your long-term storage / episodic + semantic memory only ever came in contact with such a tiny portion of the “world”.
You can’t “save” what you barely “know” to begin with.
Yet there’s a deeper rabbit hole still.
When you say “save the world” you likely mean either “saving our local ecosystem” (as in: all the biological forms of self-organizing matter, as you know it), “saving our species” (Homo Sapiens, first and foremost), or “saving your world” (as in: the part of reality you have personally grown up in, conditioned yourself to, assimilated with, and currently project onto the rest of real world as the likely only world, to begin with—a.k.a. Typical Mind Fallacy).
The “world” doesn’t need “saving”, though. It came before you. It will persist after you. Probably. Physics. Anyhow.
What may need some “help” is society. Not “the” abstract, ephemeral, all-encompassing, thus absolutely void of any and all meaning to begin with, “society”. But the society, made out of “people”. As in: “individuals”. Living in their own “world”. Only ever coming in contact with <1% of the information you’ve likely come into contact with, so far.
They don’t need your attempts at “saving” them, either. What they need is specific solutions to specific problems within specific domains of specific kind of relationship to the domains, closely/farther adjacent to it.
You will never solve any of them. Unless you stop throwing around phrases like “saving the world”, in the first place. The world came into being via a specific kind of process. It is now maintained by specific kind of feedback loops, recurrent cycles, incentive structures, reward/punishment mechanisms driving mostly unconscious decision making processes, and individual habits of each and every individual operating within their sphere of influence.
You want to help? Figure out what kind of incremental changes you can begin to introduce in any of them, in order to begin extinguishing the sort of problems you’ve now elevated to the rank of “saving-worthy” in your own head. Note that, in all likelihood, by extinguishing one you will merrily introduce a whole bunch of others—something you won’t get to discover until much later on. Yet that is, realistically, what you can actually go on to accomplish.
“Saving the world”? Please. Do you even know what’s exactly going on in the opposite side of the globe today?
Great sentiment. Horrible phrasing. Nothing personal. “Helping people” is a team’s sport.
Side note: are these quick takes turning into a new Twitter/X feed? Gosh, please don’t. Please!
I read this paragraph as saying ~the same thing as the original post in a different tone
We know well enough what people mean by “world”—the stuff they care about. The fact that physics keeps on happening if humanity is snuffed out is no comfort at all to me or to most humans.
Arguing epistemology is not going to prevent a nuclear apocalypse or us being wiped out by the new intelligent species we are inventing. The fact that you don’t know what’s happening on the other side of the world has no bearing on existential dangers facing those people. That’s what I mean by saving the world, and I expect what the author meant. This is a different thing than just helping people by your own values and estimates.
I very much agree that posting pithy, mysterious statements for others to argue over is not a good use of the quick takes here.
Book Review: The 4-Hour Workweek
This is the kind of book that you either love or hate. I found value in it, but I can definitely understand the perspective of the haters. First off: the title. It’s probably one of the most blatant cases of over-promising that I’ve ever seen. Secondly, he’s kind of a jerk. A number of his tips involve lying, and in school he had a strategy of interrogating his lecturers in detail when they gave him a bad mark so that they’d think very carefully before assigning him a bad grade. And of course, while drop-shipping might have been an underexploited strategy at the time when he wrote the book, it’s now something of a saturated market.
On the plus side, Tim is very good at giving you specific advice. To give you the flavour, he advises the following policies for running an online store: avoid international orders, no expedited or overnight shipping, two options only—standard and premium; no cheques or Western Union, no phone number if possible, a minimum wholesale order with tax ID and a faxed-in order form, etc. Tim is extremely process-oriented and it’s clear that he has deep expertise here and is able to share it unusually well. I found it fascinating to see how he thought, even though I don’t have any intention of going into this space.
This book covers a few different things:
- Firstly, he explains why you should aim to have control over when and where you work. Much of this is about cost, but it’s also about the ability to go on adventures, develop new skills and meet people you wouldn’t normally meet. He makes a good case and hopefully I can confirm whether it is as amazing as he says soon enough
- Tim’s philosophy of work is that you should try to find a way of living the life you want to live now. He’s not into long-term plans that, in his words, require you to sacrifice the best years of your life in order to obtain freedom later. He makes a good point for those with enough career capital to make it work, but it’s bad advice for many others who decide to just jump on the travel blogging or drop-shipping train without realistic expectations of how hard it is to make it in those industries
- Tim’s productivity advice focuses on ruthlessly (and I mean ruthlessly) minimising what he does to the most critical by applying the 80/20 rule. For example, he says that you should have a to-do list and a not-to-do list. He says that your to-do list shouldn’t have more than two items and you should ask yourself, “If this was the only thing I accomplished today, would I be satisfied?”.
- A large part of minimising your work involves delegating these tasks to other people and Tim goes into detail about how to do this. He is a big fan of virtual assistants, to the point of even delegating his email.
- Lots of this book is business advice. Unlike most business owners, Tim isn’t optimising for making the most money, but for making enough money to support his lifestyle while taking up the least amount of his time. I suspect that this would be great advice for many people who already own a business
- Tim also talks about how to figure out what to do with your spare time if you manage to obtain freedom. He advises chasing excitement instead of happiness. He finds happiness too vague, while excitement will motivate you to grow and develop. He suggests that it is fine to go wild at first, jumping from place to place, chasing whatever experiences you want, but at some point it’ll lose its appeal and you’ll want to find something more meaningful.
I’d recommend this book, but only to people with a healthy sense of skepticism. There’s lots of good advice in this book, but think very carefully before you become drop-shipper #2001. And remember that you don’t have to become a jerk just because he tells you to! That said, it’s not all about drop-shipping. A much wider variety of people could probably find a way to work remotely or reduce their hours than we normally think, although it might require some hard work to get there. Insofar as the goal is to optimise for your own happiness, I generally agree with his idea of the good life.
Further highlights:
- Doing the unrealistic is easier than doing the realistic as there is less competition
- Leverage strengths instead of fixing weaknesses. Multiplication of results beats incremental improvement
- Define your nightmare. Would it really be permanent? How could you get it back on track? What are the benefits of the more probable outcome?
- We encourage children to dream and adults to be realistic
Book Review: Civilization and Its Discontents
Freud is the most famous psychologist of all time and although many of his theories are now discredited or seem wildly implausible, I thought it’d be interesting to listen to him to try and understand why they sounded plausible in the first place.
At times Freud is insightful and engaging; at other times, he falls into psychoanalytic lingo in such a way that I couldn’t follow what he was trying to say. I suppose I can see why people might have assumed that the fault was with their failure to understand.
It’s a short read, so if you’re curious, there isn’t that much cost to going ahead and reading it, but this is one of those rare cases where you can really understand the core of what he was getting at from the summary on Wikipedia (https://en.m.wikipedia.org/wiki/Civilization_and_Its_Discontents)
Since Wikipedia has a summary, I’ll just add a few small remarks. This book focuses on a key paradox: our utter dependence on civilization for anything more than the most basic survival, yet how it requires us to repress our own wants and desires so as to fit into an ordered society. I find this to be an interesting answer to the question of why there is so much misery despite our material prosperity.
It’s interesting to re-examine this in light of the modern context. Society is much more liberal than it was in Freud’s time, but in recent years people have become more scared of speaking their minds. Repression still exists, it just takes a different form. If Freud is to be believed, we should expect this repression to result in all kinds of psychological effects, many of which won’t appear linked on the surface.
Further thoughts:
- I liked his chapter on the methods humans use to deal with suffering and their limitations, as it contained what seemed to be sound evaluations. He points out that the path of a yogi is at best the happiness of quietness, that love cannot be guaranteed to last, that sublimation through art is available only to a few and is even then only of limited strength, etc. He just didn’t think there was any good solution to this problem.
- Freud was sceptical of theories like communism because he didn’t believe that human nature could really change. He argued that aggression existed in the nursery and before the existence of property. He didn’t doubt that we could suppress urges, but he seemed to believe that it was much more costly than other people realised, and even then that it would likely come out in some other form
- Freud proposed his theory of the Narcissism of Small Differences: that the people who we hate most are not those with values completely foreign to our own, but those who we are in close proximity to. He describes this as a form of narcissism since these conflicts can flare up over the most minor of differences.
- Freud suggested that those who struggled the most with temptation were saints, since their self-denial led to the constant frustration of their desires
- Freud noted how absurd “Love your neighbour as yourself” would sound to someone hearing it for the first time. He imagines that we’d skeptically ask questions like, “Why should I care about them just as much as my family?” and “Why should I love them if they are bad people or don’t love me?”. He actually goes further and argues that “a love that does not discriminate does injustice to its object”
Thoughts on the introduction of Goodhart’s Law: Currently, I’m more motivated by trying to make the leaderboard, so maybe that suggests that merely introducing a leaderboard, without actually paying people, would have had much the same effect. Then again, that might just be because I’m not that far off. And if there hadn’t been the payment, maybe I wouldn’t have ended up in the position where I’m not that far off.
I guess I feel incentivised to post a lot more than I would otherwise, but especially in the comments rather than the posts, since if you post a lot of posts that likely suppresses the number of people reading your other posts. This probably isn’t a worthwhile tradeoff given that one post that does really well can easily outweigh 4 or 5 posts that only do okay or ten posts that are meh.
Another thing: downvotes feel a lot more personal when it means that you miss out on landing on the leaderboard. This leads me to think that having a leaderboard for the long term would likely be negative and create division.
I really like the short-form feature because after I have articulated a thought my head feels much clearer. I suppose that I could have tried just writing it down in a journal or something; but for some reason I don’t feel quite the same effect unless I post it publicly.
This is the first classic that I’m reviewing. One of the challenges with figuring out which classics to read is that there are always people speaking very highly of them, in a vague enough manner that it is hard to decide whether to read them. Hopefully I can avoid this trap.
Book Review: Animal Farm
You probably already know the story. In a thinly veiled critique of the Russian Revolution, the animals on a farm decide to revolt against the farmer and run the farm themselves. At the start, the seven principles of Animalism are idealistically declared, but as time goes on, things increasingly seem to head downhill…
Why is this a classic?: This book was released at a time when the intellectual class was firmly sympathetic to the Soviets, ensuring controversy and then immortality when history proved it right.
Why you might want to read this: It’s short (only 112 pages or 3:11 on Audible), the story always moves along at a brisk pace, the writing is engaging and there are a few very emotionally impactful moments. The broader message of being wary of the promises made by idealistic movements still holds (especially “all animals are equal, but some animals are more equal than others”). This book does a good job illustrating many of the social dynamics that occur in totalitarianism, from the rewriting of history, to the false confessions, to the cult of the individual.
Why you might not want to read this: The concrete anti-Soviet message is of little relevance now given that what happened is common knowledge. You can probably already guess how the story goes: the movement has a promising start, but with small red flags that become bigger over time. The animals are constantly, unrealistically naive; maybe this strikes you as clumsy, or maybe you see that as just how satire works.
Wow, I’ve really been flying through books recently. Just thought I should mention that I’m looking for recommendations for audio books; bonus points for books that are short. Anyway....
Book Review: Zero to One
Peter Thiel is the most famous contrarian in Silicon Valley. I really enjoyed hearing someone argue against the common wisdom of the valley. Most people think in terms of beating the competition; Thiel thinks in terms of establishing a monopoly so that there is no competition. Agile methodology and the lean startup are all the rage, but Thiel argues that this only leads to incremental improvements and that truly changing the world requires you to commit to a vision. Most companies want to disrupt their competitors, but for Thiel this means that you’ve fallen into competition, instead of forging your own unique path. Most venture funds aim to diversify, but Thiel is more selective, only investing in companies that have billion-dollar potential. Many startups spurn marketing, but Thiel argues that this is dishonest and that PR is also a form of marketing, even if that isn’t anyone’s job title. Everyone is betting on AI replacing humans, while Thiel is more optimistic about human/AI teams.
Some elaboration is in order. First, I’ll just mention that you might prefer to read the review on SlateStarCodex instead of mine (https://slatestarcodex.com/20…/…/31/book-review-zero-to-one/)
• Aren’t monopolies bad? Thiel argues that monopoly power is what allows a corporation to survive in a brutal, competitive world. This means that it can pay employees well, have social values other than making a profit and invest in the future. Read Scott’s review for a discussion on how to build a company that truly is one of a kind.
• Thiel argues that monopolies try to hide that fact by presenting themselves as just one player in a larger industry (e.g. Google presents itself as a tech company, instead of an internet advertising company, even though that aspect brings in essentially all the money), while firms that are actually competing try to present themselves as having cornered an overly specific market (e.g. it isn’t clear that British food in Palo Alto is its own market, as opposed to competing against all the other restaurants)
• In addition to splitting people into optimists and pessimists, Thiel splits people into definite and indefinite. You might think that a “definite optimist” would be someone who is an optimist and 100% certain the future will go well, but what he actually means is that they are an optimist and they have an idea of what the future will look like or could look like. In contrast, an indefinite optimist would be an optimist who has no idea how exactly the world might improve or could change.
• Thiel argues that startup returns are distributed according to a power law, such that half of the return from a portfolio might come from just one company. He applies it to life too, arguing that it’s better to set yourself up so that there’ll be one career that you’ll be amazing at, rather than studying generally so that there’ll be a dozen that you’d be only okay at.
• While many in the valley believe in just building a product and figuring out how to sell it later, Thiel argues that you don’t have a product if you don’t have a way of reaching customers
I’m not involved in startups, so I can’t vouch for how good his advice is, but given that caveat, I’d strongly recommend it for anyone thinking of going into that space since it’s always good to have your views challenged. But I’d also recommend it as a general read; I think that there’s a lot that’d be interesting for a general audience, especially the argument against acquiring broad, undifferentiated experience. I do think that in order to get the most out of this, you’d need to already be familiar with startup culture (e.g. minimum viable products, the lean startup, etc.) as he kind of assumes that you know this stuff.
So should you read the book or just Scott’s review? The main aspect Scott misses is the discussion of power law distributions. This discussion is basically the Pareto Principle on steroids; when a single billion-dollar company could make you more profit than the rest of your investments combined, all that matters is whether a company could be a unicorn or not (the essay Prospecting for Gold makes a similar point for EA: https://www.effectivealtruism.org/…/prospecting-for-gold-o…/). But aside from that, Scott’s review covers most of the main ideas well. So maybe you could skip the book, but if you’re like me you might find that you need to read the book in order to actually remember these ideas. Besides, it’s concise and well-written.
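To make the power-law point concrete, here is a minimal sketch (my own illustration with made-up numbers, not anything from the book) that draws Pareto-distributed returns for a hypothetical portfolio and checks what share of the total comes from the single biggest winner:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical portfolio: 50 startups with Pareto-distributed returns.
# A shape parameter near 1 gives the very heavy tail Thiel's argument assumes.
returns = rng.pareto(a=1.1, size=50)

top_share = returns.max() / returns.sum()
print(f"Share of total return from the single best company: {top_share:.0%}")
```

With tails this heavy, the best company routinely accounts for a large fraction of the whole portfolio, which is the sense in which "could this be a unicorn?" swamps every other consideration.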
I think that there’s good reasons why the discussion on Less Wrong has turned increasingly towards AI Alignment, but I am also somewhat disappointed that there’s no longer a space focusing on rationality per se.
Just as the Alignment Forum exists as a separate space that automatically cross-posts to LW, I’m starting to wonder if we need a rationality forum that exists as a separate space that cross-posts to LW, as, if I were just interested in improving my rationality, I don’t know if I’d come to Less Wrong.
(To clarify, unlike the Alignment Forum, I’d expect such a forum to be open-invite b/c the challenge would be gaining any content at all).
Alternatively, I think there is a way to hide the AI content on LW, but perhaps there should exist a very convenient and visible user interface for that. I would propose an extreme solution, like a banner on the top of the page containing a checkbox that hides all AI content. So that anyone, registered or not, could turn the AI content off in one click.
The Alignment Forum works because there are a bunch of people who professionally pursue research on AI Alignment. There’s no similar group of people for whom that’s true of rationality.
I don’t know if you need professionals, just a bunch of people who are interested in discussing the topic. It wouldn’t need to use the Alignment Forum’s invite-only system.
Instead, it would just be a way to allow LW to cater to both audiences at the same time.
IIRC, you can only get a post on the Alignment Forum if you are invited or moderators crosspost it? The problem is that the Alignment Forum is deliberately for some sort of professionals, but everyone wants to write about alignment. Maybe it would be better if we had an “Alignment Forum for starters”.
One thing I’m finding quite surprising about shortform is how long some of these posts are. It seems that many people are using this feature to indicate that they’ve just written up these ideas quickly in the hope that the feedback is less harsh. This seems valuable; the feedback here can be incredibly harsh at times and I don’t doubt that this has discouraged many people from posting.
I pushed a bit for the name ‘scratchpad’ so that this use case was a bit clearer (or at least not subtly implied as “wrong”). Shortform had enough momentum as a name that it was a bit hard to change though. (Meanwhile, I settled for ‘shortform means either the writing is short, or it took a (relatively) short amount of time to write’.)
“I’m sorry, I didn’t have the time to write you a short email, so I wrote you a long one instead.”
Can confirm. I don’t post on normal lesswrong because the discourse is brutal.
I’ll post some extracts from the Seoul Summit. I can’t promise that this will be a particularly good summary, I was originally just writing this for myself, but maybe it’s helpful until someone publishes something that’s more polished:
Frontier AI Safety Commitments, AI Seoul Summit 2024
The major AI companies have agreed to Frontier AI Safety Commitments. In particular, they will publish a safety framework focused on severe risks: “internal and external red-teaming of frontier AI models and systems for severe and novel threats; to work toward information sharing; to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights; to incentivize third-party discovery and reporting of issues and vulnerabilities; to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated; to publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use; to prioritize research on societal risks posed by frontier AI models and systems; and to develop and deploy frontier AI models and systems to help address the world’s greatest challenges”
″Risk assessments should consider model capabilities and the context in which they are developed and deployed”—I’d argue that the context in which it is deployed should account for whether it is open or closed source/weights
“They should also be accompanied by an explanation of how thresholds were decided upon, and by specific examples of situations where the models or systems would pose intolerable risk.”—always great to make policy concrete
“In the extreme, organisations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds.”—Very important that when this is applied the ability to iterate on open-source/weight models is taken into account
https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024
Seoul Declaration for safe, innovative and inclusive AI by participants attending the Leaders’ Session
Signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the United Kingdom, and the United States of America.
”We support existing and ongoing efforts of the participants to this Declaration to create or expand AI safety institutes, research programmes and/or other relevant institutions including supervisory bodies, and we strive to promote cooperation on safety research and to share best practices by nurturing networks between these organizations”—guess we should now go full-throttle and push for the creation of national AI Safety institutes
“We recognise the importance of interoperability between AI governance frameworks”—useful for arguing we should copy things that have been implemented overseas.
“We recognize the particular responsibility of organizations developing and deploying frontier AI, and, in this regard, note the Frontier AI Safety Commitments.”—Important as Frontier AI needs to be treated as different from regular AI.
https://www.gov.uk/government/publications/seoul-declaration-for-safe-innovative-and-inclusive-ai-ai-seoul-summit-2024/seoul-declaration-for-safe-innovative-and-inclusive-ai-by-participants-attending-the-leaders-session-ai-seoul-summit-21-may-2024
Seoul Statement of Intent toward International Cooperation on AI Safety Science
Signed by the same countries.
“We commend the collective work to create or expand public and/or government-backed institutions, including AI Safety Institutes, that facilitate AI safety research, testing, and/or developing guidance to advance AI safety for commercially and publicly available AI systems”—similar to what we listed above, but more specifically focused on AI Safety Institutes, which is great.
”We acknowledge the need for a reliable, interdisciplinary, and reproducible body of evidence to inform policy efforts related to AI safety”—Really good! We don’t just want AIS Institutes to run current evaluation techniques on a bunch of models, but to be actively contributing to the development of AI safety as a science.
“We articulate our shared ambition to develop an international network among key partners to accelerate the advancement of the science of AI safety”—very important for them to share research among each other
https://www.gov.uk/government/publications/seoul-declaration-for-safe-innovative-and-inclusive-ai-ai-seoul-summit-2024/seoul-statement-of-intent-toward-international-cooperation-on-ai-safety-science-ai-seoul-summit-2024-annex
Seoul Ministerial Statement for advancing AI safety, innovation and inclusivity
Signed by: Australia, Canada, Chile, France, Germany, India, Indonesia, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, Nigeria, New Zealand, the Philippines, the Republic of Korea, Rwanda, the Kingdom of Saudi Arabia, the Republic of Singapore, Spain, Switzerland, Türkiye, Ukraine, the United Arab Emirates, the United Kingdom, the United States of America, and the representative of the European Union
“It is imperative to guard against the full spectrum of AI risks, including risks posed by the deployment and use of current and frontier AI models or systems and those that may be designed, developed, deployed and used in future”—considering future risks is a very basic, but core principle
“Interpretability and explainability”—Happy to see interpretability explicitly listed
”Identifying thresholds at which the risks posed by the design, development, deployment and use of frontier AI models or systems would be severe without appropriate mitigations”—important work, but could backfire if done poorly
”Criteria for assessing the risks posed by frontier AI models or systems may include consideration of capabilities, limitations and propensities, implemented safeguards, including robustness against malicious adversarial attacks and manipulation, foreseeable uses and misuses, deployment contexts, including the broader system into which an AI model may be integrated, reach, and other relevant risk factors.”—sensible, we need to ensure that the risks of open-sourcing and open-weight models are considered in terms of the ‘deployment context’ and ‘foreseeable uses and misuses’
”Assessing the risk posed by the design, development, deployment and use of frontier AI models or systems may involve defining and measuring model or system capabilities that could pose severe risks,”—very pleased to see a focus beyond just deployment
”We further recognise that such severe risks could be posed by the potential model or system capability or propensity to evade human oversight, including through safeguard circumvention, manipulation and deception, or autonomous replication and adaptation conducted without explicit human approval or permission. We note the importance of gathering further empirical data with regard to the risks from frontier AI models or systems with highly advanced agentic capabilities, at the same time as we acknowledge the necessity of preventing the misuse or misalignment of such models or systems, including by working with organisations developing and deploying frontier AI to implement appropriate safeguards, such as the capacity for meaningful human oversight”—this is massive. There was a real risk that these issues were going to be ignored, but this is now seeming less likely.
”We affirm the unique role of AI safety institutes and other relevant institutions to enhance international cooperation on AI risk management and increase global understanding in the realm of AI safety and security.”—“Unique role”, this is even better!
”We acknowledge the need to advance the science of AI safety and gather more empirical data with regard to certain risks, at the same time as we recognise the need to translate our collective understanding into empirically grounded, proactive measures with regard to capabilities that could result in severe risks. We plan to collaborate with the private sector, civil society and academia, to identify thresholds at which the level of risk posed by the design, development, deployment and use of frontier AI models or systems would be severe absent appropriate mitigations, and to define frontier AI model or system capabilities that could pose severe risks, with the ambition of developing proposals for consideration in advance of the AI Action Summit in France”—even better than above b/c it commits to a specific action and timeline
https://www.gov.uk/government/publications/seoul-ministerial-statement-for-advancing-ai-safety-innovation-and-inclusivity-ai-seoul-summit-2024
I don’t want to comment on the whole Leverage Controversy as I’m far away enough from the action that other people are probably better positioned to sensemake here.
On the other hand, I have been watching some of Geoff Anders’ streams and he does seem pretty good at theorising, by virtue of being able to livestream it. I expect this to be a lot harder than it looks; when I’m trying to figure out my position on an issue, I often find myself going over the same ground again and again and again, until eventually I figure out a way of putting what I want to express into words.
That said, I’ve occasionally debated with some high-level debaters and, given almost any topic, they’re able to pretty much effortlessly generate a case and anticipate how the debate is likely to play out. I guess his ability seems on par with this.
So I think his ability to livestream demonstrates a certain level of skill, but I almost view it as speed-chess vs. chess, in that there’s only so much you can tell about a person’s ability in normal chess from how good they are at speed chess.
I think I’ve improved my own ability to theorise by watching the streams, but I wouldn’t be surprised if I improved similarly from watching Eliezer, Anna or Duncan livestream their attempts to think through an issue. I also expect that there’s a similar chance I would have gained a significant proportion of the benefit just from watching someone with my abilities or even slightly worse on the basis of a) understanding the theorising process from the outside b) noticing where they frame things differently than I would have.
Trying to think about what is required to be a good debater:
- general intelligence—to quickly understand the situation and lay out your response;
- “talking” skills—large vocabulary, talking clearly, not being shy, body language and other status signals;
- background knowledge—knowing the models, facts, frequently used arguments, etc.;
- precomputed results—if you already spent a lot of time thinking about a topic, maybe even debating it.
These do not work the same way, for example clear talking and good body language generalize well; having lots of precomputed results in one area will not help you much in other areas (unless you use a lot of analogies to the area you are familiar with—if you do this the first time, you may impress people, but if you do this repeatedly, they will notice that you are a one-topic person).
I believe that watching good debaters in action would help. It might be even better to focus on different aspects separately (observing their body language, listening to how they use their voice, understanding their frames, etc.).
Any in particular, or what most of them are like?
In what ways do you believe you improved?
I think I’m more likely to realise that I haven’t hit the nail on the head and so I go back and give it another go.
Random idea: A lot of people seem discouraged from doing anything about AI Safety because it seems like such a big overwhelming problem.
What if there was a competition to encourage people to engage in low-effort actions towards AI safety, such as hosting a dinner for people who are interested, volunteering to run a session on AI safety for their local EA group, answering a couple of questions on the stampy wiki, offering to proof-read a few people’s posts or offering a few free tutorial sessions to aspiring AI Safety Researchers.
I think there’s a decent chance I could get this funded (prize might be $1000 for the best action and up to 5 prizes of $100 for random actions above a certain bar)
Possible downsides: Would be bad if people reach out to important people or the media without fully thinking stuff through, but can be mitigated by excluding those kinds of actions/ adding guidelines
Keen for thoughts or feedback.
I like it, and it’s worth trying out.
Those don’t seem like very low effort to me, but they will to some. Do they seem to you like they are effective (or at least impactful commensurate with the effort)? How would you know which ones to continue and what other types of thing to encourage?
I fear that it has much of the same problem that any direct involvement in AI safety does: what’s the feedback loop for whether it’s actually making a difference? Your initial suggestions seem more like actions toward activism and pop awareness, rather than actions toward AI Safety.
The nice thing about prizes and compensation is that it moves the question from the performer to the payer—the payer has to decide if it’s a good value. Small prizes or low comp means BOTH buyer and worker have to make the decision of whether this is worthwhile.
Solving the productivity-measurement problem itself seems overwhelming—it hasn’t happened even for money-grubbing businesses, let alone long-term x-risk organizations. But any steps toward it will do more than anything else I can think of to get broader and more effective participation. Being able to show that what I do makes a measurable difference, even through my natural cynicism and imposter syndrome, is key to my involvement.
I am not typical, so don’t take my concerns as the final word—this seems promising and relatively cheap (in money; it will take a fair bit of effort in guiding the sessions and preparing materials for the tutoring. Honestly, that’s probably more important than the actual prizes).
I guess they just feel like as good a starting place as any and are unlikely to be net-negative. That’s more important than anything else. The point is to instill agency so that people start looking for further opportunities to make a difference. I might have to write a few paragraphs of guidelines/suggestions for some of the most common potential activities.
I hadn’t really thought too much about follow-up, but maybe I should think more about it.
Here’s a crazy[1] idea that I had. But I think it’s an interesting thought experiment.
What if we programmed an AGI that had the goal of simulating the Earth, but with one minor modification? In the simulation, we would have access to some kind of unfair advantage, like an early Eliezer Yudkowsky getting a mysterious message dropped on his desk containing a bunch of the progress we’ve made in AI Alignment.
So we’d all die in real life when the AGI broke out of its box and turned the Earth into compute to better simulate us, but we might survive in virtual reality, at least if you believe simulations to be conscious.
In other news, I may have just spoiled a short story I was thinking of writing.
#Chris’Phoenix #ForgetRokosBasilisk
And probably not very good.
I really dislike the fiction that we’re all rational beings. We really need to accept that sometimes people can’t share things with us. Stronger: not just accept, but appreciate people who make this choice for their wisdom and tact. ALL of us have ideas that will strongly trigger us and, if we’re honest and open-minded, we’ll be able to recall situations where we unfairly judged someone because of a view that they held. I certainly can, way too many times to list.
I say this as someone who has a really strong sense of curiosity, knowing that I’ll feel slightly miffed when someone doesn’t feel comfortable being open with me. But it’s my job to deal with that, not the other person.
Don’t get me wrong. Openness and vulnerability are important. Just not *all* the time. Just not *everything*.
When you start identifying as a rationalist, the most important habit is saying “no” whenever someone says: “As a rationalist, you have to do X” or “If you won’t do X, you are not a true rationalist” etc. It is not a coincidence that X usually means you have to do what the other person wants for straightforward reasons.
Because some people will try using this against you. Realize that this usually means nothing more than “you exposed a potential weakness, they tried to exploit it” and is completely unrelated to the art of rationality.
(You can consider the merits of the argument, of course, but you should do it later, alone, when you are not under pressure. Don’t forget to use the outside view; the easiest way is to ask a few independent people.)
I’ve recently been reading about ordinary language philosophy and I noticed that some of their views align quite significantly with LW. They believed that many traditional philosophical questions only seemed troubling because of the philosophical tendency to assume that words like “time” or “free will” necessarily referred to some kind of abstract entity, when this wasn’t necessary at all. Instead they argued that by paying attention to how we use these words in ordinary, everyday situations we could see that the way people used these words didn’t need to assume these abstract entities and that we could dissolve the question.
I found it interesting that the comment thread on dissolving the question makes no reference to this movement. It doesn’t reference Wittgenstein either who also tried to dissolve questions.
(https://www.lesswrong.com/posts/Mc6QcrsbH5NRXbCRX/dissolving-the-question)
Is that surprising? It’s not as if the rationalsphere performed some comprehensive survey of philosophy before announcing the superiority of its own methods.
From my perspective, saying that “this philosophical opinion is kinda like this Less Wrong article” sounds like “this prophecy by Nostradamus, if you squint hard enough, predicts coronavirus in 2020”. What I mean is that if you publish huge amounts of text open to interpretation, it is not surprising that you can find there analogies to many things. I would not be surprised to find something similar in the Bible; I am not surprised to find something similar in philosophy. (I would not be surprised to also find a famous philosopher who said the opposite.) In philosophy, the generation of text is distributed, so some philosophers likely have a track record much better than the average of their discipline. Unfortunately—as far as I know—philosophy as a discipline doesn’t have a mechanism to say “these ideas of these philosophers are the good ones, and this is wrong”. At least my time in philosophy lessons was wasted listening to what Plato said, without a shred of “...and according to our current scientific knowledge, this is true, and this is not”.
Also, it seems to me that philosophers were masters of clickbait millennia before clickbait was a thing. For example, a philosopher is rarely satisfied by saying things like “human bodies are composed of 80% water” or “most atoms in the universe are hydrogen atoms”. Instead, it is typically “everything is water”. (Or “everything is fire”. Or “everything is an interaction of quantum fields”… oops, the last one was actually not said by a philosopher; what a coincidence.) Perhaps this is selection bias. Maybe people who walked around ancient Greece half-naked and said things like “2/3 of everything is water” existed, but didn’t draw sufficient attention. But if this is true, it would mean that philosophy optimizes for shock value instead of truth value.
So, without having read Wittgenstein, my priors are that he most likely considered all words confused; yes, words like “time” and “free will”, but also words like “apple” and “five”. (And then there was Plato who assumed that there was a perfect idea of “apple” and a perfect idea of “time”.)
Now I am not saying that everything written by Wittgenstein (or other philosophers) is worthless. I am saying that in philosophy there are good ideas mixed with bad ones, and even the good ones are usually exaggerated. And unless someone does the hard work of separating the wheat from the chaff, I’d rather ignore philosophy, and read sources that have a better signal-to-noise ratio.
I won’t pretend that I have a strong understanding here, but as far as I can tell, (later) Wittgenstein and the Ordinary Language Philosophers considered our conception of the number “five” as an abstract object to be mistaken; they would instead explain how the word is used and consider that a complete explanation. This isn’t an unreasonable position; I honestly don’t know what numbers are, and if we say they are an abstract entity it’s hard to say what kind of entity.
Regarding the word “apple” Wittgenstein would likely say attempts to give it a precise definition are doomed to failure because there are an almost infinite number of contexts or ways in which it can be used. We can strongly state “Apple!” as a kind of command to give us one, or shout it to indicate “Get out of the way, there is an apple coming towards you” or “Please I need an Apple to avoid starving”. But this is only saying attempts to spec out a precise definition are confused, not the underlying thing itself.
(Actually, apparently Wittgenstein considered attempts to talk about concepts like God or morality as necessarily confused, but thought that they could still be highly meaningful, possibly the most meaningful things)
These are all good points. I could agree that all words are to some degree confused, but I would insist that some of them are way more confused than others. Otherwise, the very act of explaining anything would be meaningless: we would explain one word by a bunch of words, equally confusing.
If the word “five” is nonsense, I can take Wittgenstein’s essay explaining why it is nonsense, and say that each word in that essay is just a command that we can shout at someone, but otherwise is empty of meaning. This would seem to me like an example of intelligence defeating itself.
Wittgenstein didn’t think that everything was a command or request; his point was that making factual claims about the world is just one particular use of language that some philosophers (including early Wittgenstein) had hyper-focused on.
Anyway, his claim wasn’t that “five” was nonsense, just that when we understood how five was used there was nothing further for us to learn. I don’t know if he’d even say that the abstract concept five was nonsense, he might just say that any talk about the abstract concept would inevitably be nonsense or unjustified metaphysical speculation.
These are situations where I would like to give a specific question to the philosopher. In this case it would be: “Is being a prime number a property of the number five, or is it just that we decided to use it as a prime number?”
I honestly have no idea how he’d answer, but here’s one guess. Maybe we could tie prime numbers to one of a number of processes for determining primeness. We could observe that those processes always return true for 5, so in a sense primeness is a property of five.
Book Review: Waking Up by Sam Harris
This book aims to convince everyone, even skeptics and atheists, that there is value in some spiritual practices, particularly those related to meditation. Sam Harris argues that meditation doesn’t just help with concentration, but can also help us reach transcendental states that reveal the dissolution of the self. It mostly does a good job of what it sets out to do, but unfortunately I didn’t gain very much benefit from this book because it focused almost exclusively on persuading you that there is value here, which I already accepted, rather than providing practical instructions.
One area where I was less convinced was his claims about there not being a self. He writes that meditating allows you to directly experience this, but I worry he hasn’t applied sufficient skepticism. If you experience flying through space in an altered mental state, it doesn’t mean that you are really flying through space. Similarly, how do we know that he is experiencing the lack of a self, rather than the illusion of there being no self?
I was surprised to see that Sam was skeptical of a common materialist belief that I had expected him to endorse. Many materialists argue against the notion of philosophical-zombies by arguing that if it seems conscious we should assume it is conscious. However, Sam Harris argues that the phenomenon of anaesthesia awareness, waking up completely paralysed during surgery, shows that there isn’t always a direct link between appearing conscious and actual consciousness. (Dreams seem to imply the same point, if less dramatically). Given the strength of this argument, I’m surprised that I haven’t heard it before.
Sam also argues that split-brain patients imply that consciousness is divisible. While split-brain patients actually still possess some level of connection between the two halves, I still consider this phenomenon to be persuasive evidence that this is the case. After all, it is possible for the two halves to have completely different beliefs and objectives without either side being aware of the other’s.
On meditation, Sam is a fan of the Dzogchen approach that directly aims at experiencing no-self, rather than the slower, more gradual approaches. This is because waiting years for a payoff is incredibly discouraging and because practices like paying attention to the sensation of breath reinforce the notion of the self which meditation seeks to undermine. At the same time, he doesn’t fully embrace this style of teaching, arguing that the claim that every realisation is permanent is dangerous, as it leads to treating people as role models even when their practice is flawed.
Sam argues against the notion of gurus being perfect; they are just humans like the rest of us. He notes that it is hard to draw the line between practices that lead to enlightenment and abuse; indeed he argues that a practice can provide spiritual insight AND be abusive. He notes that the reason why abuse seems to occur again and again is that when people seek out a guru, it’s because they’ve arrived at the point where they realise that there is so much that they don’t know and they need the help of someone who does.
He also argues against assuming meditative experiences provide metaphysical insights. He points out that they are often the same experiences that people have on psychedelics. In fact, he argues that for some people having a psychedelic experience is vital for their spiritual development as it demonstrates that there really are other brain states out there. He also discusses near-death experiences and again dismisses claims that they provide insight into the afterlife—they match experiences people have on drugs and they seem to vary by culture.
Further points:
- Sam talked about experiencing universal love while on DMT. Many religions contain this idea of universal love, but he couldn’t appreciate it until he had this experience
- He argues that it is impossible to stay angry for more than a few seconds without continuously thinking thoughts to keep us angry. To demonstrate this, he asks us to imagine that we receive an important phone call. Most likely we will put our anger aside.
Recommended reading:
- https://samharris.org/a-plea-for-spirituality/
- https://samharris.org/our-narrow-definition-of-science/
FWIW, no-self is a bad reification/translation of not-self, and the overwhelming majority seem to be metaphysically confused about something that is just one more tool rather than some sort of central metaphysical doctrine. When directly questioned “is there such a thing as the self”, the Buddha is famously mum.
What’s the difference between no self and not self?
No-self is an ontological claim about everyone’s phenomenology. Not self is a mental state that people can enter where they dis-identify with the contents of consciousness.
One of the problems with the general anti-zombie principle is that it makes much too strong a claim: that what appears conscious must be.
There appears to be something of a Sensemaking community developing on the internet, which could roughly be described as a spirituality-inspired attempt at epistemology. This includes Rebel Wisdom, Future Thinkers, Emerge and maybe you could even count post-rationality. While there are undoubtedly lots of critiques that could be made of their epistemics, I’d suggest watching this space as I think some interesting ideas will emerge out of it.
Review: Human-Compatible by Stuart Russell
I wasn’t a fan of this book, but maybe that’s just because I’m not in the target audience. As a first introduction to AI safety I recommend The AI Does Not Hate You by Tom Chivers (facebook.com/casebash/posts/10100403295741091) and for those who are interested in going deeper I’d recommend Superintelligence by Nick Bostrom. The strongest chapter was his assault on the arguments of those who think we shouldn’t worry about superintelligence, but you can just read it here: https://spectrum.ieee.org/…/many-experts-say-we-shouldnt-wo…
I learned barely anything new from this book. Even when it came to Russell’s own approach, Cooperative Inverse Reinforcement Learning, I felt that the treatment was shallow (I won’t write about this approach until I’ve had a chance to review it directly again). There were a few interesting ideas that I’ll list below, but I was surprised by how little I’d learned by the end. There’s a decent explanation of some very basic concepts within AI, but this was covered in a way that was far too shallow for me to recommend it.
Interesting ideas/quotes:
- More processing power won’t solve AI without better algorithms. It simply gets you the wrong answer faster
- Language bootstrapping: Comprehension is dependent on knowing facts and extracting facts is dependent on comprehension. You might think that we could bootstrap an AI using easy-to-comprehend text, but in practice we end up extracting incorrect facts that scramble further comprehension
- We have an advantage with predicting humans as we have a human mind to simulate with; it’ll take longer for AIs to develop this ability
- He suggests that we have a right to mental security and that it is naive to trust that the truth will win out. Unfortunately, he doesn’t address any of the concerns this raises
- By default, a utility maximiser won’t want us to turn it off as that would interfere with its goals. We could reward it when we turn it off, but that could incentivise it to manipulate us into turning it off. Instead, if the utility maximiser is trying to optimise for our reward function and it is uncertain about what that function is, then it would let us turn it off (see the sketch after this list)
- We might decide that we don’t want to satisfy all preferences; for example, we mightn’t feel any obligation to take into account preferences that are sadistic, vindictive or spiteful. But refusing to consider these preferences could have unforeseen consequences; what if envy can’t be ignored as a factor without destroying our self-esteem?
- It’s hard to tell if an experience has taught someone more about preferences or changed their preferences (at least without looking into their brain). In either case the response is the same.
- We want robots to avoid interpreting commands too literally, as opposed to information about human preferences. For example, if I ask a robot to fetch a cup of coffee, I assume that the nearest outlet isn’t the next city over nor that it will cost $100. We don’t want the robot to fetch it at all costs.
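On the off-switch point above, here is a minimal toy calculation (my own made-up numbers, a sketch of the idea rather than Russell's formalism). Assume the robot is uncertain about the human's utility u for its proposed action, acting yields u, switching itself off yields 0, and deferring yields u when the human approves (u > 0) and 0 when the human shuts it down (u < 0):

```python
import numpy as np

rng = np.random.default_rng(0)

# The robot's uncertainty over the human's utility u for its proposed action.
u_samples = rng.normal(loc=0.0, scale=1.0, size=100_000)

act = u_samples.mean()                      # just act: E[u]
switch_off = 0.0                            # switch itself off: 0
defer = np.maximum(u_samples, 0).mean()     # let the human decide: E[max(u, 0)]

print(f"act: {act:.3f}  switch off: {switch_off:.3f}  defer: {defer:.3f}")
# E[max(u, 0)] >= max(E[u], 0), so a robot that is genuinely uncertain about
# our reward function does at least as well by leaving the off switch alone.
```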
Despite having read dozens of articles discussing Evidential Decision Theory (EDT), I’ve only just figured out a clear and concise explanation of what it is. Taking a step back, let’s look at how this is normally explained and one potential issue with this explanation. All major decision theories (EDT, CDT, FDT) rate potential decisions using expected value calculations where:
- Each theory uses a different notion of probability for the outcomes
- Each theory uses the same utility function for valuing the outcomes
So it should be just a simple matter of stating what the probability function is. EDT is normally explained as using P(O|S & D) where O is the outcome, S is the prior state and D is the decision. At this point it seems like this couldn’t possibly fail to be what we want. Indeed, if S described all state, then there wouldn’t be the possibility of making the smoking lesion argument.
However, that’s because it fails to differentiate between hidden state and visible state. EDT uses visible state, so we can write it as P(O|V & D). The probability distribution of O actually depends on H as well, i.e. it is some function f(V, H, D). In most cases H is uncorrelated with D, but this isn’t always the case. So what might look like the direct effect of V and D on P might actually turn out to be the indirect effect of D shifting our expected distribution of H, which then affects P. For example, in Smoking Lesion, we might see ourselves scoring poorly in the counterfactual where we smoke and assume that this is because of our decision. However, this ignores the fact that when we smoke, H is likely to contain the lesion and also cancer. So we think we’ve set up a fair playing field for deciding between smoking and not smoking, but we haven’t, because of the differences in H.
Or to summarise: “The decision can correlate with hidden state, which can affect the probability distribution of outcomes”. Maybe this is already obvious to everyone, but this was the key I needed to be able to internalise these ideas on an intuitive level.
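Here is a minimal numerical sketch of the Smoking Lesion point (toy probabilities I made up purely for illustration): the hidden lesion both makes smoking more likely and causes cancer, so conditioning on the decision shifts the distribution over the hidden state and makes smoking look bad under EDT even though it has no causal effect on cancer.

```python
# Toy Smoking Lesion model (illustrative numbers only).
p_lesion = 0.5                  # P(H = lesion)
p_smoke_given_lesion = 0.9      # the lesion makes smoking more likely
p_smoke_given_no_lesion = 0.1
p_cancer_given_lesion = 0.8     # the lesion, not smoking, causes cancer
p_cancer_given_no_lesion = 0.05

U_CANCER = -100.0
U_SMOKE_BONUS = 1.0             # smoking is mildly enjoyable

def edt_value(decision: str) -> float:
    """EDT expected value: condition the hidden state H on the decision D."""
    p_smoke = (p_lesion * p_smoke_given_lesion
               + (1 - p_lesion) * p_smoke_given_no_lesion)
    p_d = p_smoke if decision == "smoke" else 1 - p_smoke
    p_d_given_lesion = (p_smoke_given_lesion if decision == "smoke"
                        else 1 - p_smoke_given_lesion)
    # Bayes' rule: this is where D "leaks" information about H.
    p_lesion_given_d = p_lesion * p_d_given_lesion / p_d
    p_cancer = (p_lesion_given_d * p_cancer_given_lesion
                + (1 - p_lesion_given_d) * p_cancer_given_no_lesion)
    bonus = U_SMOKE_BONUS if decision == "smoke" else 0.0
    return p_cancer * U_CANCER + bonus

for d in ("smoke", "abstain"):
    print(d, round(edt_value(d), 1))
# Expected output: smoke -71.5, abstain -12.5. Smoking looks far worse under
# EDT purely because choosing to smoke is evidence for the lesion.
```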
Anti-induction and Self-Reinforcement
Induction is the belief that the more often a pattern has happened, the more likely it is to continue. Anti-induction is the opposite claim: the more often a pattern has happened, the less likely future events are to follow it.
Somehow I seem to have gotten the idea in my head that anti-induction is self-reinforcing. The argument for it is as follows: Suppose we have a game where at each step a screen flashes an A or a B and we try to predict what it will show. Suppose that the screen always flashes A, but the agent initially thinks that the screen is more likely to display B. So it guesses B, observes that it guessed incorrectly and then, if it is an anti-inductive agent, increases its credence that the next symbol will be B because of anti-induction. So in this scenario your confidence that the next symbol will be B, despite the long stream of As, will keep increasing. This particular anti-inductive belief is self-reinforcing.
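A minimal simulation of that game (my own sketch, with an arbitrary starting credence and update size) shows the dynamic:

```python
# Toy anti-inductive agent: the screen always flashes "A", but each time the
# agent sees an A it lowers its credence that A will come next ("A has
# happened a lot, so it's less likely to continue").
p_next_is_A = 0.4   # the agent starts out thinking B is more likely
step = 0.05

for t in range(10):
    guess = "A" if p_next_is_A > 0.5 else "B"
    observed = "A"                              # the screen always shows A
    p_next_is_A = max(0.0, p_next_is_A - step)  # anti-inductive update
    print(t, guess, observed, round(p_next_is_A, 2))
# The agent's credence in B only ever grows, whatever it started at.
```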
However, there is a sense in which anti-induction is contradictory—if you observe anti-induction working, then you should update towards it not working in the future. I suppose the distinction here is that we are using anti-induction to update our beliefs on anti-induction and not just our concrete beliefs. And each of these is a valid update rule: in the first we apply this update rule to everything including itself and in the other we apply this update rule to things other than itself. The idea of a rule applying to everything except itself feels suspicious, but is not invalid.
Also, it’s not that the anti-inductive belief that B will be next is self-reinforcing. After all, anti-induction given consistent As pushes you towards believing B more and more regardless of what you believe initially. In other words, it’s more of an attractor state.
The best reason to believe in anti-induction is that it’s never worked before. Discussed at a bit of depth in https://www.lesswrong.com/posts/zmSuDDFE4dicqd4Hg/you-only-need-faith-in-two-things .
Here’s one way of explaining this: it’s a contradiction to have a provable statement that is unprovable, but it’s not a contradiction for it to be provable that a statement is unprovable. Similarly, we can’t have a scenario that is simultaneously imagined and not imagined, but we can coherently imagine a scenario where things exist without being imagined by beings within that scenario.
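In modal notation (a sketch, reading the box as “it is provable that”), the scope distinction is:

```latex
% Contradictory: the same statement both provable and unprovable.
\Box\varphi \land \lnot\Box\varphi
% Not contradictory in form: it is provable that the statement is unprovable.
\Box\bigl(\lnot\Box\varphi\bigr)
```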
Rob Bensinger:
Inverted, by switching “provable” and “unprovable”:
It’s a contradiction to have an unprovable statement that is provable, but it’s not a contradiction for it to be unprovable that a statement is provable.
“It’s a contradiction to have a provable statement that is unprovable”—I meant it’s a contradiction for a statement to be both provable and unprovable.
“It’s not a contradiction for it to be provable that a statement is unprovable”—this isn’t a contradiction
You made a good point, so I inverted it. I think I agree with your statements in this thread completely. (So far, absent any future change.) My prior comment was not intended to indicate an error in your statements. (So far, in this thread.)
If there is a way I could make this more clear in the future, suggestions would be appreciated.
Elaborating on my prior comment via interpretation, so that its meaning is clear, if more specified*:
A’ is the same as A because:
While B is true, B’ seems false (unless I’m missing something). But in a different sense B’ could be true. What does it mean for something to be provable? It means that ‘it can be proved’. This gives two definitions:
- a proof of X “exists”
- it is possible to make a proof of X
Perhaps a proof may ‘exist’ such that it cannot exist (in this universe). That is, as a consequence of its length and complexity, and bounds implied by the ‘laws of physics’* on what can be represented, constructing this proof is impossible. In this sense, X may be true, but if no proof of X may exist in this universe, then:
Something may have the property that it is “provable”, but impossible to prove (in this universe).**
*Other interpretations may exist, and as I am not aware of them, I think they’d be interesting.
**This is a conjecture.
Thanks for clarifying
Book Review: Awaken the Giant Within Audiobook by Tony Robbins
First things first, the audiobook isn’t the full book or anything close to it. The standard book is 544 pages, while the audiobook is a little over an hour and a half. The fact that it was abridged really wasn’t obvious.
We can split what he offers into two main categories: motivational speaking and his system itself. The motivational aspect of his speaking is very subjective, so I’ll leave it to you to evaluate yourself. You can find videos of his on Youtube and you should know within a few minutes whether you like his style.
Instead I’ll focus on reviewing his system. The first key aspect Robbins focuses on is what he calls neuro-associations; that is, which experiences we link pleasure and pain to. While we may be able to maintain a habit using willpower in the short term, Robbins believes that in order to maintain it over the long term we need to change our neuro-associations to link pleasure to actions that are good for us and pain to actions that are bad for us.
He argues that we can attach positive or negative neuro-associations to an action by making the advantages or disadvantages as salient as possible. The images on packs of cigarettes are a good example of that principle in action, as would be looking at the scans of people who have lung cancer. In addition, we can reward ourselves for success (though he doesn’t discuss the possibility of punishing yourself for failure). This seems like a plausible method for effecting change and one worth experimenting with, although I’ve never experienced much motivation from rewarding myself as it doesn’t really feel like the action is connected to the reward.
The second key aspect of his system is to draw a distinction between decisions and preferences. Most of the time when we say that we’ve decided to do something, such as going to the gym, we’re really just saying that we would prefer that to happen. We haven’t really decided that we WILL do what we’ve said, come what may.
Robbins sees the ability to make decisions that we are strongly committed to as key to success. For that reason he recommends practising using our “decision muscles” to strengthen them, so that they are ready when needed. This seems like good advice. Personally, I think it’s important to be honest with yourself about when you merely have a preference and when you’ve actually made a decision in Robbins’s sense. After all, committed decisions take energy and have a cost, since sometimes you’ll commit to something that is a mistake, so it’s important to be selective about what you are truly committed to; otherwise you may end up committed to nothing at all.
There are lots more elements to his system, but those two particular ones are at the core and seemed to be the most distinctive aspects of this book. It’s hard to review such a system without having tried it, but my current position is as follows: I could see myself listening to another one of his audiobooks, although it isn’t really a priority for me.
The sad thing about philosophy is that as your answers become clearer, the questions become less mysterious and awe-inspiring. It’s easy to assume that an imposing question must have an impressive answer, but sometimes the truth is just simple and unimpressive and we miss this because we didn’t evolve for this kind of abstract reasoning.
Examples?
I used to find the discussion of free will interesting before I learned it was just people talking past each other. Same with “light is both a wave and a particle” until I understood that it just meant that sometimes the wave model is a good approximation and other times the particle model is. Debates about morality can be interesting, but much less so if you are a utilitarian or non-realist.
Semantic differences almost always happen, but are rarely the only problem.
There are certainly different definitions of free will, but even so, problems remain:
There is still an open question as to whether compatibilist free will is the only kind anyone ever needed or believed in, and as to whether libertarian free will is possible at all.
The topic is interesting, but no discussion about it is interesting. These are not contradictory.
The open question about strong determinism vs libertarian free will is interesting, and there is a yet-unexplained contradiction between my felt experience (and others reported experiences) and my fundamental physical model of the universe. The fact that nobody has any alternative model or evidence (or even ideas about what evidence is possible) that helps with this interesting question makes the discussion uninteresting.
So Yudkowsky’s theory isn’t new?
Not new that I could tell—it states strict determinism with refreshing clarity: free will is an illusion, and “possible” is in the map, not the territory. “Deciding” is how a brain feels as it executes its algorithm and takes the predetermined (but not previously known) path.
He does not resolve the conflict that it feels SOOO real as it happens.
That’s an odd thing to say since the feeling of free will is about the only thing he addresses.
I’m going to start writing up short book reviews as I know from past experience that it’s very easy to read a book and then come out a few years later with absolutely no knowledge of what was learned.
Book Review: Everything is F*cked: A Book About Hope
To be honest, the main reason why I read this book was because I had enjoyed his first and second books (Models and The Subtle Art of Not Giving A F*ck) and so I was willing to take a risk. There were definitely some interesting ideas here, but I’d already encountered many of them through other sources: Harari, Buddhism, talks on Nietzsche, summaries of The True Believer; so I didn’t gain as much from this as I’d hoped.
It’s fascinating how a number of thinkers have recently converged on the lack of meaning within modern society. Yuval Harari argues that modernity has essentially been a deal sacrificing meaning for power. He believes that the lack of meaning could eventually lead to societal breakdown, and for this reason he argues that we need to embrace shared narratives that aren’t strictly true (religion without gods, if you will; he personally follows Buddhism). Jordan Peterson also worries about a lack of meaning, but seeks to “revive God” as some kind of metaphorical entity.
Mark Manson is much more skeptical, but his book does start along similar lines. He tells the story of gaining meaning from his grandfather’s death by trying to make him proud, although this was kind of silly as they hadn’t been particularly close or even talked recently. Nonetheless, he felt that this sense of purpose had made him a better person and improved his ability to achieve his goals. Mark argues that we can’t draw motivation from our thinking brain and that we need these kinds of narratives to reach our emotional brain instead.
However, he argues that there’s also a downside to hope. People who are dissatisfied with their lives can easily fall prey to ideological movements which promise a better future, especially when they feel a need for hope. In other words, there is both good and bad hope. It isn’t especially clear what the difference is in the book, but he explained to me in an email that his main concern was how movements cause people to detach from reality.
His solution is to embrace Nietzsche’s concept of Amor Fati—that is, a love of one’s fate, whatever it may be. Even though this is also a narrative itself, he believes that it isn’t so harmful, as unlike other “religions” it doesn’t require us to detach from reality. My main takeaway was his framing of the need for hope as risky. Hope is normally assumed to be good; now I’m less likely to make this assumption.
It was fascinating to see how he put his own take on this issue, and it certainly isn’t a bad book, but there just wasn’t enough new content for me. Maybe others who haven’t been exposed to some of these ideas will be more enthused, but I’ve read his blog so most of the content wasn’t novel to me.
Further thoughts: After reading the story of his grandfather, I honestly was expecting him to propose avoiding sourcing our hope from big all-encompassing narratives in favour of micro-narratives, but he didn’t end up going in this direction.
Sharing this resource doc on AI Safety & Entrepreneurship that I created in case anyone finds this helpful:
https://docs.google.com/document/d/1m_5UUGf7do-H1yyl1uhcQ-O3EkWTwsHIxIQ1ooaxvEE/edit?usp=sharing
I was talking with Rupert McCallum about the simulation hypothesis yesterday. Rupert suggested that this argument is self-defeating; that is, it pulls the rug out from under its own feet. It assumes the universe has particular properties, then it tries to estimate the probability of being in a simulation from these properties, and if the probability is sufficiently high, then we conclude that we are in a simulation. But if we are likely to be in a simulation, then our initial assumptions about the universe are likely to be false, so we’ve disproved the assumptions we relied on to obtain these probabilities.
This all seems correct to me, although I don’t see it as a fatal objection. Let’s suppose we start by assuming that the universe has particular properties AND that we are not in a simulation. We can then estimate the odds of someone with our kind of experiences being in a simulation within these assumptions. If the probability is low, then our assumption is self-consistent, but if the probability is sufficiently high, then it becomes probabilistically self-defeating. We would have to adopt different assumptions. Maybe the most sensible update would be to believe that we are in a simulation, or maybe it’d be more sensible to assume we were wrong about the properties of the universe. And maybe there’s still scope to argue that we should do the former.
This counterargument was suggested before by Danila Medvedev and it doesn’t work. The reasons are as follows: if we are in a simulation, we can’t say anything about the outside world—but we are still in a simulation, and this is what needed to be proved.
“This is what was needed to be proved”—yeah, but we’ve undermined the proof. That’s why I backed up and reformulated the argument in the second paragraph.
One more way to prove the simulation argument is the general observation that explanations which have lower computational cost dominate my experience (that is, a variant of Occam’s Razor). If I see a nuclear explosion, it is more likely to be a dream, a movie or a photo. Thus cheap simulations should be more numerous than real worlds and we are likely to be in one.
It’s been a while since I read the paper, but wasn’t the whole argument around people wanting to simulate different versions of their world and population? There’s a baked in assumption that worlds similar to ones own are therefore more likely to be simulated.
Yeah, that’s possible. Good point!
Three levels of forgiveness—emotions, drives and obligations. The emotional level consists of your instinctual anger, rage, disappointment, betrayal, confusion or fear. This is about raw feelings. The drives consist of your “need” for them to say sorry, make amends, regret their actions, have a conversation or empathise with you. In other words, it’s about needing the situation to turn out a particular way. The obligations are very similar to the drives, except they are about the other person’s duty to perform these actions rather than your desire to make it happen.
Someone can forgive on all of these levels. Suppose someone says that they are sorry and the other person replies “there is nothing to forgive”. Then perhaps they mean that there was no harm, or that they have completely forgiven on all levels.
Alternatively, someone might forgive on one-level, but not another. For example, it seems that most of the harm of holding onto a grudge comes from the emotional level and the drives level, but less from the duties level.
The phrase “an eye for an eye” could be construed as duty—that the wrong another does you is a debt you have to repay. (Possibly inflated, or with interest. It’s also been argued that it’s about (motivating) recompense—you pay the price for taking another’s eye, or you lose yours.)
Interesting point, but you’re using duty differently than me. I’m talking about their duties towards you. Of course, we could have divided it another way or added extra levels.
Your duties (towards others) may include what you are supposed to do if others don’t fulfill their duties (towards you).
Writing has been one of the best things for improving my thinking as it has forced me to solidify my ideas into a form that I’ve been able to come back to later and critique when I’m less enraptured by them. On the other hand, for some people it might be the worst thing for their thinking as it could force them to solidify their ideas into a form that they’ll later feel compelled to defend.
Book Review: The Rosie Project
Plot summary: After a disastrous series of dates, autistic genetics professor Don Tillman decides that it’d be easier to just create a survey to eliminate all of the women who would be unsuitable for him. Soon after, he meets a barmaid called Rosie who is looking for help with finding out who her father is. Don agrees to help her, but over the course of the project Don finds himself increasingly attracted to her, even though the survey suggests that she is completely unsuitable. The story is narrated in Don’s voice. He tells us all about his social mishaps, while also providing some extremely straight-shooting observations on society.
Should I read this?: If you’re on the fence, I recommend listening to a couple of minutes, as the tone is remarkably consistent throughout, but without becoming stale.
My thoughts: I found it to be very humorous, but without making fun of Don. We hear the story from his perspective and he manages to be a very sympathetic character. The romance manages to be relatively believable, since Don establishes himself as having many attractive qualities despite his limited social skills. However, I couldn’t believe that he’d think of Rosie as “the most beautiful woman in the world”; that kind of romantic idealisation is just too inconsistent with his character. His ability to learn skills quickly also stretched credibility, but it felt more believable after he dramatically failed during one instance. I felt that Don’s character development was solid; I did think that he’d struggle more to change his schedule after keeping it rigid for so long, but that wasn’t a major issue for me. I appreciated that by the end he had made significant growth (less strict in his expectations for a partner, not sticking so rigidly to a schedule, being more accommodating of other people’s faults), but he was still largely himself.
Doublechecking, this is fiction?
Yep, fiction
I think I spent more time writing this than reading the book, as I find reviewing fiction much more difficult. I strongly recommend this book: it doesn’t take very long to read, but you may spend much longer trying to figure out what to make of it.
Book Review: The Stranger by Camus (Contains spoilers)
I’ve been wanting to read some existentialist writing for a while and it seemed reasonable to start with a short book like this one. The story is about a man who kills another man for what seems to be no real reason at all, and who is then arrested and must come to terms with his fate. It grapples with issues such as the meaning of life, the inevitability of death and the expectations of society.
This book works perfectly as an audiobook because it’s written in the first person and as a stream of consciousness. In particular, you can just let the thoughts wash over you and then pass away, in a way that you can’t with a physical book.
The book starts with the death of Meursault’s mother and his resulting indifference. Meursault almost entirely lacks any direction or purpose in life—not caring about opportunities at work, about Salamano abusing his dog, or about whether or not he marries Marie. Not much of a hint is given as to the source of his detachment, except his noting that he had a lot of ambition as a young man but gave up on such dreams when he had to give up his education.
Despite his complete disillusionment, it’s not that he cares about nothing at all. Without optimism, he has no reason to plan for the future. Instead, he focuses almost exclusively on the moment—being friends with Raymond because he has no reason not to, being with Marie because she brings him pleasure in the present, and, more tragically, shooting the Arab for flashing the sun into his eyes with a knife.
In my interpretation, Meursault never formed a strong intent to kill him, but just drifted into it. He didn’t plan to have the gun with him, but simply took it to stop Raymond acting rashly. He hadn’t planned to create a confrontation; he just returned to the beach to cool off, then assumed that the Arab was far enough away to avoid any issues. When the Arab pulled out his knife, it must have seemed natural to pull out his gun. Then, with the heat clouding his judgement, his in-the-moment desire to make the situation go away and his complete detachment from caring, he ends up killing a man when he didn’t need to, as the man was still far away. After he’s fired the first shot, he likely felt that he’d made his choice and that there was nothing left to do but fire the next four.
While detachment involves no optimism in the emotional sense, in terms of logic it isn’t entirely pessimistic. After all, someone who is detached, through their lack of care, implicitly assumes that things cannot become significantly worse. Meursault falls victim to this trap and in the end it costs him dearly. This occurs not just when he shoots the Arab, but throughout the legal process, where he shows what seems like a stunning naivety, completely unaware of what he has to lose until he is pretty much told he is to be executed.
I found his trial to be one of the most engaging parts of the book. A man is dead, but the circumstances relating to this death are almost tangential to the whole thing. Instead, the trial focuses much more on factors such as whether he had felt a sufficient amount of grief for his mother and his association with Raymond, a known low-life. This passage felt like a true illustration of human nature; in particular our tendency to fit everything into a particular narrative, and how “justice” can often end up being more about our disgust at the perpetrator as a person than about what they’ve done. Meursault undoubtedly deserves punishment for pulling the trigger early, but the trial he was given was a clear miscarriage of justice.
This book does a good job of illustrating the absurdity of life: how much of our daily lives are trivial, the contradictions in much of human behaviour, the irrationality of many of our social expectations, and how our potential sources of meaning fail to be fundamentally meaningful. But then also how we can find meaning in things that are meaningless.
Indeed, it is only his imprisonment that really makes him value life outside, and it is only his impending execution that makes him value life itself. He survives prison by drawing pleasure from simple things, like seeing what tie his defence lawyer will wear, and by realising that his happiness does not have to be constrained by his unfortunate circumstances. Meursault ultimately realises that he has to make his own purpose, instead of just expecting it to be out there in the universe.
Further thoughts: One of the most striking sub-plots in this book is that of Salamano and his dog. Salamano is constantly abusing his dog and complaining about how bad its behaviour is, but when the dog runs away, Salamano despairs about what will happen to him now that he no longer has the dog. This is a perfect example of just how absurd human actions can be, both generally and particularly when we are in denial about our true feelings.
Pet theory about meditation: Lots of people say that if you do enough meditation you will eventually realise that there isn’t a self. Having not experienced this myself, I am intensely curious about what people observe that persuades them to conclude this. I get the sense that many people are being insufficiently skeptical. There’s a difference between there not appearing to be such a thing as a self and a self not existing. Indeed, how do we know meditation doesn’t just temporarily silence whatever part of our mind is responsible for self-hood?
Recently, I saw a quote from Sam Harris that makes me think I might (emphasis on might) finally know what people are experiencing. In a podcast with Eric Weinstein he explains that he believes there isn’t a self because “consciousness is an open space where everything is appearing—that doesn’t really answer to I or me”. The first part seems to mirror Global Workspace Theory, the idea (super roughly) that there is a part of the brain for synthesising thoughts from various parts of the brain, which can only pay attention to one thought at a time.
The second part of Sam Harris’ sentence seems to say that this Global Workspace “doesn’t answer to I or me”. This is still vague, but it sounds like there is a part of the brain that identifies as “I or me” that is separate from this Global Workspace or that there are multiple parts that are separate from the Global Workspace and don’t identify as “I or me”. In the first of these sub-interpretations, “no-self” would merely mean that our “self” is just another sub-agent and not the whole of us. In the second of these sub-interpretations, it would additionally be true that we don’t have a unitary self, but multiple fragments of self-hood.
Anyway, as I said, I haven’t experienced no-self, but curious to see if this resonates with people who have.
Was thinking about entropy and the Waluigi effect (in a very broad, metaphorical sense).
The universe trends towards increasing entropy; in such an environment it is evolutionarily advantageous to have the ability to resist it. Notice, though, that life seems to have overshot and resulted in far more complex ordered systems (both biological and manmade) than what exists elsewhere.
It’s not entirely clear to me, but it seems at least somewhat plausible that if entropy were weaker, the evolutionary pressure would be weaker, and the resulting life and the systems produced by such life would ultimately be less complex than they are in our world.
Life happens within computations in datacenters. Separately, there are concerns about how well the datacenters will be doing when the universe is many OOMs older than today.
Sorry, I can’t quite follow how this connects. Any chance you could explain?
Confusing entropy arguments are suspicious (in terms of hope for ever making sense). That’s a sketch of how entropy in physics becomes clearly irrelevant for the content of everything of value (as opposed to amount). Waluigi effect is framing being stronger than direction within it, choice of representation more robust than what gets represented. How does natural selection enter into this?
Life evolves in response to pressure. Entropy is one such source of pressure.
On free will: I don’t endorse the claim that “we could have acted differently” as an unqualified statement.
However, I do believe that in order to talk about decisions, we do need to grant validity to a counterfactual view where we could have acted differently as a pragmatically useful fiction.
What’s the difference? Well, you can’t use the second to claim determinism is false.
This lack of contact with the naive conception of possibility should be developed further, so that the reasons for the temptation to use the word “fiction” dissolve. An object that captures a state of uncertainty doesn’t necessarily come with a set of concrete possibilities that are all “really possible”. The object itself is not “fictional”, and its shadows in the form of sets of possibilities were never claimed either to be “real possibilities” or to sum up the object, so there is no fiction to be found.
A central example of such an object is a program equipped with theorems about its “possible behaviors”. Are these behaviors “really possible”? Some of them might be, but the theorems don’t pin that down. Instead there are spaces on which the remaining possibilities are painted, shadows of behavior of the program as a whole, such as a set of possible tuples for a given pair of variables in the code. A theorem might say that reality lies within the particular part of the shadow pinned down by the theorem. One of those variables might’ve stood for your future decision. What “fiction”? All decision relevant possibility originates like that.
I argue that “I can do X” means “If I want to do X, I will do X”. This can be true (as an unqualified statement) even with determinism. It is different from saying that X is physically possible.
It seems as though it should be possible to remove the Waluigi effect[1] by appropriately training a model.
Particularly, some combination of:
Removing data from the training set that matches this effect
Constructing new synthetic data which performs the opposite of the Waluigi effect
However, removing this effect might be problematic for certain situations where we want the ability to generate such content, for example, if we want it to write a story.
In this case, it might pay to add back the ability to generate such content within certain tags (e.g. <story></story>), but train it not to produce such content otherwise. A rough sketch of what this could look like as a preprocessing step is below.
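Here is a minimal sketch of the tag-scoping idea, under the assumption that you preprocess training data before fine-tuning. Everything here is hypothetical: the tag names and the is_waluigi_like classifier are placeholders standing in for whatever filter or trained classifier you would actually use.

```python
# Hypothetical preprocessing sketch: keep Waluigi-style content only inside
# explicit tags, and leave everything else untouched. "is_waluigi_like" is a
# stand-in for a real classifier or heuristic; it is not an existing library call.

from typing import Iterable

STORY_OPEN, STORY_CLOSE = "<story>", "</story>"

def is_waluigi_like(text: str) -> bool:
    # Placeholder: in practice this would be a trained classifier or heuristic.
    return "villain monologue" in text.lower()

def preprocess(examples: Iterable[str]) -> list[str]:
    kept = []
    for ex in examples:
        if is_waluigi_like(ex):
            # Only allow this kind of content when it is explicitly marked as fiction.
            kept.append(f"{STORY_OPEN}{ex}{STORY_CLOSE}")
        else:
            kept.append(ex)
    return kept

print(preprocess(["A helpful answer about gardening.",
                  "A villain monologue about betraying the user."]))
```

The same skeleton covers the two bullet points above: drop the flagged example entirely to remove such data, or generate an inverted continuation for it to construct synthetic counter-examples.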
Insofar as it exists. Surprisingly, appearing on Know Your Meme, does not count as very strong evidence
Speculation from The Nature of Counterfactuals
I decided to split out some content from the end of my post The Nature of Counterfactuals because upon reflection I don’t feel it is as high quality as the core of the post.
I finished The Nature of Counterfactuals by noting that I was incredibly unsure of how we should handle circular epistemology. That said, there are a few ideas I want to offer up on how to approach this. The big challenge with counterfactuals is not imagining other states the universe could be in or how we could apply our “laws” of physics to discover the state of the universe at other points of time. Instead, the challenge comes when we want to construct a counterfactual representing someone choosing a different decision. After all, in a deterministic universe, someone could only have made a different choice if the universe were different, but then it’s not clear why we would care about the fact that someone in a different universe would have achieved a particular score when we just care about this universe.
I believe that the answer to this question will be roughly that in certain circumstances we only care about particular things. For example, let’s suppose Omega is programmed in such a way that it would be impossible for Amy to choose box A without gaining 5 utility or to choose box B without gaining 10 utility. Assume that in the actual universe Amy chooses box A and gains 5 utility. We’re tempted to say “If she had chosen box B she would have gained 10 utility”, even though she would have had to occupy a different mental state at the time of the decision and the past would have had to be different, because the model has been set up so that those factors are unimportant. Since those factors are the only difference between the state where she chooses A and the state where she chooses B, we’re tempted to treat these possibilities as the same situation.
So naturally this leads to a question: why should we build a model where those particular factors are unimportant? Does this lead to pure subjectivity? Well, the answer seems to be that often in practice such a heuristic tends to work well—agents that ignore such factors tend to perform pretty close to agents that account for them, and often better when we include time pressure in our model.
This is the point where the nature of counterfactuals becomes important—whether they are ontologically real or merely a way in which we structure our understanding of the universe. If we’re looking for something ontologically real, the fact that a heuristic is pragmatically useful provides quite limited information about what counterfactuals actually are.
On the other hand, if they’re a way of structuring our understanding, then we’re probably aiming to produce something consistent from our intuitions and our experience of the universe. And from this perspective, the mere fact that a heuristic is intuitively appealing counts as evidence for it.
I suspect that with a bit more work this kind of account could be enough to get a circular epistemology off the ground.
My position on Newcomb’s Problem in a sentence: Newcomb’s paradox results from attempting to model an agent as having access to multiple possible choices, whilst insisting it has a single pre-decision brain state.
If anyone was planning on submitting something to this competition, I’ll give you another 48 hours to get it in—https://www.lesswrong.com/posts/Gzw6FwPD9FeL4GTWC/usd1000-usd-prize-circular-dependency-of-counterfactuals.
Thick and Thin Concepts
Take, for example, concepts like courage, diligence and laziness. These are considered thick concepts because they have both a descriptive component and a moral component. To call someone courageous is most often meant* not only to claim that the person undertook a great risk, but also that doing so was morally praiseworthy. So a thick concept is often naturally modelled as a conjunction of a descriptive claim and a moral claim.
However, this isn’t the only way to understand these concepts. An alternative would be along the following lines: imagine a rule like D+M >= 10 with D >= 3 and M >= 3, where D measures how well the descriptive component fits and M how well the moral component fits. So there would be a minimal amount that the descriptive claim has to fit, a minimal amount that the moral claim has to fit, and a minimal total. This doesn’t seem like an unreasonable model of how thick concepts might apply.
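As a toy illustration of the difference between the two models (my own sketch, with made-up numbers and thresholds):

```python
# Two toy models of when a thick concept like "courage" applies.
# D = how well the descriptive component fits, M = how well the moral component fits,
# both on an arbitrary 0-10 scale. All numbers and thresholds are made up.

def conjunction_model(d: float, m: float, d_min: float = 5, m_min: float = 5) -> bool:
    """The simple model: both components must independently fit well enough."""
    return d >= d_min and m >= m_min

def threshold_model(d: float, m: float) -> bool:
    """The alternative: minimal fit on each component plus a minimal combined total."""
    return d >= 3 and m >= 3 and d + m >= 10

# A case with a strong descriptive fit but only a modest moral fit:
print(conjunction_model(8, 4))  # False under the simple conjunction
print(threshold_model(8, 4))    # True: the high D compensates for the lower M
```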
Alternatively, there might be an additional requirement that the satisfaction of the moral component is sufficiently related to the descriptive component. For example, suppose in order to be diligent you need to work hard in such a way that the hard work causes the action to be praiseworthy. Then consider the following situation. I bake you a cake and this action is praiseworthy because you really enjoy it. However, it would have been much easier for me to have bought you a cake—including the effort to earn the money—and you would actually have been happier had I done so. Further, assume that I knew all of this in advance. In this case, can we really say that you’ve demonstrated the virtue of diligence?
Maybe the best way to think about this is Wittgensteinian: that thick concepts only make sense from within a particular form of life and are not so easily reduced to their components as we might think.
* This isn’t always the case though.
I’ve always found the concept of belief in belief slightly hard to parse cognitively. Here’s what finally satisfied my brain: whether you will be rewarded or punished in heaven is tied to whether or not God exists, while whether or not you feel a push to go to church is tied to whether or not you believe in God. If you do go to church and want to go, your brain will say, “See, I really do believe”, and it’ll do the reverse if you don’t go. However, this only affects your belief in God indirectly, through your “I believe in God” node. Putting it another way, going to church is evidence that you believe in God, not evidence that God exists. Anyway, the result of all this is that your “I believe in God” node can become much stronger than your “God exists” node.
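A toy numerical version of the node picture (all numbers invented purely for illustration): observing churchgoing moves the “I believe in God” node a lot, but only nudges the “God exists” node, because the evidence flows through the belief node.

```python
# Toy chain G -> B -> C: "God exists" -> "I believe in God" -> "I go to church".
# All numbers are invented purely to illustrate the structure of the update.

p_g = 0.5                               # prior that God exists
p_b_given_g, p_b_given_ng = 0.6, 0.4    # belief is only weakly tied to the truth
p_c_given_b, p_c_given_nb = 0.9, 0.2    # churchgoing is strongly tied to belief

# Probability of churchgoing jointly with God existing / not existing.
p_c_and_g = p_g * (p_b_given_g * p_c_given_b + (1 - p_b_given_g) * p_c_given_nb)
p_c_and_ng = (1 - p_g) * (p_b_given_ng * p_c_given_b + (1 - p_b_given_ng) * p_c_given_nb)
p_c = p_c_and_g + p_c_and_ng

p_b = p_g * p_b_given_g + (1 - p_g) * p_b_given_ng   # prior that you believe
p_b_given_c = p_b * p_c_given_b / p_c                # P(believe | churchgoing)
p_g_given_c = p_c_and_g / p_c                        # P(God exists | churchgoing)

print(round(p_b_given_c, 2))  # ~0.82: churchgoing strongly updates the belief node
print(round(p_g_given_c, 2))  # ~0.56: but only weakly updates the "God exists" node
```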
EDT agents handle Newcomb’s problem as follows: they observe that agents who encounter the problem and one-box do better on average than those who encounter the problem and two-box, so they one-box.
That’s the high-level description, but let’s break it down further. Unlike CDT, EDT doesn’t worry about the fact that there may be a correlation between your decision and hidden state. It assumes that if the visible state before you made your decision is the same, then the counterfactuals generated by considering your possible decisions are comparable. In other words, any differences in hidden state, such as you being a different agent or money being placed in the box, are attributed to your decision (see my previous discussion here).
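Here’s a minimal sketch of the expected-value comparison EDT is making, using the standard payoffs; the 99% predictor accuracy is just an illustrative assumption:

```python
# Newcomb's problem from EDT's perspective: condition on your own decision and
# compare expected payoffs. A predictor accuracy of 0.99 is assumed for illustration.

ACCURACY = 0.99        # probability the predictor correctly predicted your choice
BOX_A = 1_000          # transparent box, always contains $1,000
BOX_B = 1_000_000      # opaque box, filled iff you were predicted to one-box

def expected_value(one_box: bool) -> float:
    # EDT treats the hidden state (what was predicted) as correlated with the decision:
    # conditional on deciding X, the predictor most likely predicted X.
    p_predicted_one_box = ACCURACY if one_box else 1 - ACCURACY
    ev_box_b = p_predicted_one_box * BOX_B
    return ev_box_b if one_box else ev_box_b + BOX_A

print(expected_value(one_box=True))   # 990000.0
print(expected_value(one_box=False))  # 11000.0 -> so EDT one-boxes
```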
I’ve been thinking about Rousseau and his conception of freedom again because I’m not sure I hit the nail on the head last time. The most typical definition of freedom and that championed by libertarians focuses on an individual’s ability to make choices in their daily life. On the more libertarian end, the government is seen as an oppressor and a force of external compulsion.
On the other hand, Rousseau’s view focuses on “the people” and their freedom to choose the kind of society that they want to live in. Instead of being seen as an external entity, the government is seen as a vessel through which the people can express and realise this freedom (or at least as potentially becoming such a vessel).
I guess you could call this a notion of collective freedom, but at the same time this risks obscuring an important point: that at the same time it is an individual freedom as well. Part of it is that “the people” is made up of individual “people”, but it goes beyond this. The “will of the people” at least in its idealised form isn’t supposed to be about a mere numerical majority or some kind of averaging of perspectives or the kind of limited and indirect influence allowed in most representative democracies, but rather it is supposed to be about a broad consensus; a direct instantiation of the will of most individuals.
There is a clear tension between these kinds of freedom: the more the government respects personal freedom, the less control the people have over the kind of society they want to live in, and the more the government focuses on achieving the “will of the people”, the less freedom exists for those for whom this doesn’t sound so appealing.
I can’t recall the arguments Rousseau makes for this position, but I expect that they’d be similar to the arguments for positive freedoms. Proponents of positive freedom argue that theoretical freedoms, such as there being no legal restriction against gaining an education, are worthless if these opportunities aren’t actually accessible, say if this would cost more money than you could ever afford.
Similarly, proponents of Rousseau’s view could argue that freedom over your personal choices is worthless if you exist within a terrible society. Imagine there were no spam filters and so all spam made it through. Then the freedom to use email would be worthless without the freedom to choose to exist in a society without spam. Instead of characterising this as a trade-off between utility and freedom, Rousseau would see this as a trade-off between two different notions of freedom.
Now I’m not saying Rousseau’s views are correct—I mean the French revolution was heavily influenced by him and we all saw how that worked out. And it also depends on there being some kind of unified “will of the people”. But at the same time it’s an interesting perspective.
Can you make this a little more explicit? France is a pretty nice place—are you saying that the counterfactual world where there was no revolution would be significantly better?
All the guillotining. And the necessity of that was in part justified with reference to Rousseau’s thought
Sure. I’m asking about the “we all saw how that worked out” portion of your comment. From what I can see, it worked out fairly well. Are you of the opinion that the French Revolution was an obvious and complete utilitarian failure?
I haven’t looked that much into French history, just think it is important to acknowledge where that line of thought can end up.
What does it mean to define a word? There’s a sense in which definitions are entirely arbitrary and which word is assigned to which meaning lacks any importance. So it’s very easy to miss the importance of these definitions: each emphasises a particular aspect and provides a particular lens through which to see the world.
For example, if we define goodness as the ability to respond well to others, it emphasises that different people have different needs. One person may want advice, while another simply wants encouragement. Or if we define love as acceptance of the other, it suggests that one of the most important aspects of love is the idea that true love should be somewhat resilient and not excessively conditional.
As I wrote before, evidential decision theory can be critiqued for failing to deal properly with situations where hidden state is correlated with decisions. EDT includes differences in hidden state as part of the impact of the decision, when, in the case of the smoking lesion, we typically want to say that it is not.
However, Newcomb’s problem also has hidden state that is correlated with your decision. And if we don’t want to count this when evaluating decisions in the case of the smoking lesion, perhaps we shouldn’t count it in the case of Newcomb’s? Or is there a distinction? I think I’ll try analysing this in terms of the erasure theory of counterfactuals at some point.
Does FDT make this any clearer for you?
There is a distinction in the correlation, but it’s somewhat subtle and I don’t fully understand it myself. One silly way to think about it that might be helpful is “how much does the past hinge on your decision?” In the smoking lesion, it is clear the past is very fixed—even if you decide not to smoke, that doesn’t affect the genetic code. But in Newcomb’s, the past hinges heavily on your decision: if you decide to one-box, it must have been the case that you could have been predicted to one-box, so it’s logically impossible for it to have gone the other way.
One intermediate example would be if Omega told you they had predicted you to two-box, and you had reason to fully trust this. In this case, I’m pretty sure you’d want to two-box, then immediately precommit to one-boxing in the future. (In this case, the past no longer hinges on your decision.) Another would be if Omega was predicting from your genetic code, which supposedly correlated highly with your decision but was causally separate. In this case, I think you again want to two-box if you have sufficient metacognition that you can actually uncorrelate your decision from genetics, but I’m not sure what you’d do if you can’t uncorrelate. (The difference again lies in how much Omega’s decision hinges on your actual decision.)
Yeah, FDT has a notion of subjunctive dependence. But the question becomes: what does this mean? What precisely is the difference between the smoking lesion and Newcomb’s? I have some ideas and maybe I’ll write them up at some point.
I’m beginning to warm to the idea that the reason why we have evolved to think in terms of counterfactuals and probabilities is that these are fundamental at the quantum level. Normally I’m suspicious of rooting macro-level claims in quantum-level effects, because at such a high level of abstraction it would be very easy for these effects to wash out, but the many-worlds hypothesis is something that wouldn’t wash out. Otherwise it would seem to be all a bit too much of a coincidence.
(“Oh, so you believe that counterfactuals and probability are at least partly a human construct, but they just so happen to correspond with what seems to us to be the fundamental level of physics, not because there is a relation there, but because of pure happenstance. Seems a bit of a stretch.”)
I expect that agents evolved in a purely deterministic but similarly complex world would be no less likely to (eventually) construct counterfactuals and probabilities than those in a quantum sort of universe. Far more likely to develop counterfactuals first, since it seems that agents on the level of dogs can imagine counterfactuals at least in the weak sense of “an expected event that didn’t actually happen”. Human-level counterfactual models are certainly more complex than that, but I don’t think they’re qualitatively different.
I think if there’s any evolution pressure toward ability to predict the environment, and the environment has a range of salient features that vary in complexity, there will be some agents that can model and predict the environment better than others regardless of whether that environment is fundamentally deterministic or not. In cases where evolution leads to sufficiently complex prediction, I think it will inevitably lead to some sort of counterfactuals.
The simplest predictive model can only be applied to sensory data directly. The agent gains a sense of what to expect next, and how much that differed from what actually happened. This can be used to update the model. This isn’t technically a counterfactual, but only through a quirk of language. In everything but name “what to expect next” is at least some weak form of counterfactual. It’s a model of an event that hasn’t happened and might not happen. But still, let’s just rule it out arbitrarily and continue on.
The next step is probably to be able to apply the same predictive model to memory as well, which for a model changing over time means that an agent can remember what they experienced, what they expected, and compare with what they would now expect to have happened in those circumstances. This is definitely a counterfactual. It might not be conscious, but it is a model of something in the past that never happened. It opens up a lot of capability for using a bunch of highly salient stored data to update the model instead of just the comparative trickle of new salient data that comes in over time.
There are still higher strengths and complexities of counterfactuals of course, but it seems to me that these are all based on the basic mechanism of a predictive model applied to different types of data.
None of this needs any reference to quantum mechanics, and nor does probability. All it needs is a universe too complex to be comprehended in its entirety, and agents that are capable of learning to imperfectly model parts of it that are relevant to themselves.
“I expect that agents evolved in a purely deterministic but similarly complex world would be no less likely to (eventually) construct counterfactuals and probabilities than those in a quantum sort of universe”
I’m actually trying to make a slightly unusual argument. My argument isn’t that we wouldn’t construct counterfactuals in a purely deterministic world operating similarly to ours. My argument involves:
a) Claiming that counterfactuals are at least partly constructed by humans (if you don’t understand why this might be reasonable, then it’ll be more of a challenge to understand the overall argument)
b) Claiming that it would be a massive coincidence if something partly constructed by humans happened to correspond with fundamental physical structures while being completely unrelated to those structures
c) Concluding that it’s likely that there is some as-yet-unspecified relation
Does this make sense?
To me the correspondence seems smaller, and therefore the coincidence less unlikely.
The many-worlds hypothesis assumes parallel worlds that obey exactly the same laws of physics. Anything can happen with astronomically tiny probability, but the vast majority of parallel worlds are just as boring as our world. The counterfactuals we imagine are not limited by the laws of physics.
Construction of counterfactuals is useful for reasoning with uncertainty. Quantum physics is a source of uncertainty, but there are also enough macroscopic sources of uncertainty (limited brain size, second law of thermodynamics). If an intelligent life evolved in a deterministic universe, I imagine it would also find counterfactual reasoning useful.
Yeah, that’s a reasonable position to take.
Not hugely. Quantum mechanics doesn’t have any counterfactuals in some interpretations. It has deterministic evolution of state (including entanglement), and then we interpret incomplete information about it as being probabilistic in nature. Just as we interpret incomplete information about everything else.
Hopefully one day I get a chance to look further into quantum mechanics
What correspondence? Counterfactuals-as-worlds have all laws of physics broken in them, including quantum mechanics.
I’m not claiming that there’s a perfect correspondence between counterfactuals as different worlds in a multiverse and decision counterfactuals. Although maybe that’s enough to undermine any coincidence right there?
I don’t see how there is anything here other than equivocation of different meanings of “world”. Counterfactuals-as-worlds is not even a particularly convincing way of making sense of what counterfactuals are.
If you’re interpreting me as defending something along the lines of David Lewis, then that’s actually not what I’m doing.
Says who?