Several dozen people now presumably have Lumina in their mouths. Can we not simply crowdsource some assays of their saliva? I would chip money in to this. Key questions around ethanol levels, aldehyde levels, antibacterial levels, and whether the organism itself stays colonized at useful levels.
Lumina is incredibly cheap right now. I pre-ordered for $250. Even genuinely quite poor people I know don’t find the price off-putting (poor in the sense of absolutely poor for the country they live in). I have never met a single person who decided not to try Lumina because the price was high. If they pass, it’s always because they think it’s risky.
I think Romeo is thinking of checking a bunch of mediators of risk (like aldehyde levels) as well as of function (like whether the organism stays colonised).
Maybe I’m late to the conversation, but has anyone thought through what happens when Lumina colonizes the mouths of other people? Mouth bacteria are important for things like conversion of nitrate to nitrite for nitric oxide production. How do we know the lactic acid metabolism isn’t important, or that Lumina won’t outcompete other strains important for overall health?
Any recommendations on how I should do that? You may assume that I know what a gas chromatograph is and what a Petri dish is and why you might want to use either or both of those for data collection, but not that I have any idea of how to most cost-effectively access either one as some rando who doesn’t even have a MA in Chemistry.
A service where a teenager reads something you wrote slowly and sarcastically. The points at which you feel defensive are worthy of further investigation.
A willingness to lose doubled my learning rate. I recently started quitting games faster when I wasn’t having fun (or predicted low future fun from playing the game out). I felt bad about this because I might have been cutting off some interesting comebacks, etc. However, after playing the new way for several months (in many different games), I found I had about doubled the number of games I can play per unit time, and therefore upped my learning rate by a lot. This came not only from the fact that many of the quit games were exactly those slogs that take a long time, but also from the fact that the willingness to just quit if I stopped enjoying myself made me more likely to experiment rather than play conservatively.
Not a new point but perennially worth noting: subcultures persist via failure. That is to say, subcultures that succeed obviate themselves. This is concretely noticeable in that, coming in as an outsider, you’ll find a subculture about X has a bunch of mysterious self-sabotage behaviors that actually keep ¬X persistent.
I think either I don’t know exactly what defines a “subculture”, or there needs to be a qualifier before “subculture”. Might “people who are enthusiastic about X sport / hobby / profession” be a subculture? Because I think lots of those can be highly successful while remaining what they are. (Perhaps you’d say that hobbies that succeed get eaten in the Geeks/MOPs/Sociopaths way, but that’s less so for professions.)
A “subculture of those dealing with X problem” sounds much more likely to fit what you describe, but that may not be your intent.
Take the first 7 entries on the Wikipedia list of subcultures; none of these seem to obviously “persist via failure”. So unless you get more specific I have to strongly disagree.
Afrofuturism: I don’t think any maladaptive behavior keeps Afrofuturism from spreading, and indeed it seems to have big influences on popular culture. I listened to an interview with N. K. Jemisin, and nowhere did she mention negative influences from Afrofuturists.
I don’t know anything about Africanfuturism. It is possible that some kind of signaling race keeps it from having mass appeal, though I have no evidence for this.
Anarcho-punk. I don’t know anything about them either.
Athletes. Most athletes I have seen are pretty welcoming to new people in their sport. Also serious athletes have training that optimizes their athletic ability pretty well. What maladaptive behavior keeps runners not-running? The question barely makes sense.
Apple Inc. Apple makes mass-market products and yet still has a base of hardcore fans.
BBQ. Don’t know anything about it. It seems implausible that the barbecue subculture keeps non-barbecue persistent.
BDSM. BDSM is about the safe practice of kink, and clearly makes itself more popular. Furthermore it seems impossible for it to obviate itself via ubiquity because only a certain percentage of people will ever be into BDSM.
You might object: what if you have selection bias, and the ones you don’t know about are persisting via failure? I don’t think we have evidence for this. And in any case the successful ones have not obviated themselves.
I didn’t read RS’s claim as the claim that all subcultures persist through failure, but now that you ask, no, yeah, ime a really surprising number of these subcultures actually persist through failure.
I know of a fairly influential subculture of optics-oriented politics technologists who’ve committed to a hostile relationship towards transhumanism. Transhumanism (the claim that people want to change in deep ways and that technology will fairly soon permit it) suggests that racial distinctions will become almost entirely irrelevant, so in order to maintain their version of afrofuturism where black and white futurism remain importantly distinct projects, they have to find some way to deny transhumanism. But rejecting transhumanism means they are never allowed to actually do high quality futurism, because they can’t ask transhumanist questions and get a basic sense of what the future is going to be like. Or like, as soon as any of them do start asking those questions, those people wake up and drop out of that subculture. I’ve also met black transhumanists who identified as afrofuturists, though. I can totally imagine articulations of afrofuturism that work with transhumanism. So I don’t know how the entire thing’s going to turn out.
Anarcho-punks fight only for the underdogs. That means they’re attached to the identity of being underdogs; as soon as any of them start really winning, they’d no longer be recognised as punk, and they know this, so they’re uninterested in — and in many cases, actively opposed to — succeeding in any of their goals. There are no influential anarcho-punks, and as far as I could gather, no living heroes.
BDSM: My model of fetishes is that they represent hedonic refuges for currently unmeetable needs: deep human needs that, for one reason or another, a person can’t pursue, or can’t even recognise the real version of in the world as they understand it. I think it’s a protective mechanism to keep the basic drive roughly intact and wired up by having the subject pursue symbolic fantasy versions of it. This means that getting the real thing (e.g., for submissives, a committed relationship with someone you absolutely trust; for doms… probably a sense of safety?) would obsolete the kink, and it would wither away. I think they mostly don’t know this, but the mindset in which the kink is seen as the objective requires that the real thing is never recognised or attained, so these communities reproduce best by circulating memes that make it harder to recognise the real thing.
I guess this is largely about how you define the movements’ goals. If the goal of punk is to have loud parties with lots of drugs, it’s perfect at that. If the goal is to bring about anarchosocialism or thrive under a plural geopolitical order, it’s a sworn loser.
I agree with the anarchopunk thing, and maybe afrofuturism, because you can interpret “a subculture advocating for X will often not think about some important component W of X for various political reasons” as self-sabotage. But on BDSM, this is not at all my model of fetishes, and I would bet at 2.5:1 odds that you would lose a debate against what Wikipedia says, judged by a neutral observer.
I don’t recognize wikipedia’s theories as predictive. Mine has some predictions, but I hope it’s obvious why I would not be interested in making this a debate or engaging much in the conceptual dismantling of subcultures at all.
Could you give a concrete example? The only one that comes to mind is the hipster paradox: someone who to all appearances is a hipster never admits it or labels themselves as one.
Coffee has shockingly large mortality-decreasing effects across multiple high-quality studies. The only problem is I don’t really like coffee, don’t want caffeine, don’t want to spend money or time on this, and dislike warm beverages in general. Is this solvable? Yes. Instant decaf coffee shows the same mortality benefits, and 2+ servings of it dissolve in 1 oz of cold water, to which can be added milk or a milk substitute. Total cost per serving: 7 cents, plus milk I would have drunk anyway. And since it requires no heating or other prep, there is minimal time investment.
Funny tangential discovery: there is some other substance in coffee that is highly addictive besides caffeine (decaf has less caffeine than even green tea, so I don’t think it’s that) because despite the taste being so-so I have never forgotten this habit the way I do with so many others.
Flow is a sort of cybernetic pleasure: the pleasure of being in tight feedback with an environment that has fine-grained intermediary steps, allowing you to learn faster than you can even think.
I’m worried about notkilleveryonism as a meme. Years ago, Tyler Cowen wrote a post about why more econ professors didn’t blog, and his conclusion was that it’s too easy to make yourself look like an idiot relative to the payoffs. And that he had observed this actually play out in a bunch of cases where econ professors started blogs, put their foot in their mouth, and quietly stopped. Since earnest discussion of notkilleveryonism tends to make everyone, including the high status, look dumb within ten minutes of starting, it seems like there will be a strong inclination towards attribute substitution. People will tend towards ‘nuanced’ takes that give them more opportunity to signal with less chance of looking stupid.
Worry about looking like an idiot is a VERY fine balance to find. If you get desensitized to it, that makes it too easy to BE an idiot. If you are over-concerned about it, you fail to find correct contrarian takes.
‘notkilleveryoneism’ IMO is a dumb meme. Intentionally, I presume. If you wanted to appear smart, you’d use more words and accept some of the nuance, right? It feels like a countersignal-attempt, or a really bad model of someone who’s not accepting the normal arguments.
I dunno, the problem with “alignment” is that it doesn’t unambiguously refer to the urgent problem, but “notkilleveryoneism” does. Alignment used to mean same-values, but then got both relaxed into compatible-values (that boundary-respecting norms allow to notkilleveryone) and strengthened with various AI safety features like corrigibility and soft optimization. Then there is prosaic alignment, which redefines it into bad-word-censure and reliable compliance with requests, neither being about values. Also, “existential catastrophe” inconveniently includes disempowerment that doesn’t killeveryone. And people keep bringing up (as an AI safety concern) merely large lethal disasters that don’t literally killeveryone, which is importantly different because second chances.
So on one hand it sounds silly, but on the other hand it’s harder to redefine away from the main concern. As a compromise between these, I’m currently experimenting with using the term “killeveryone” as a replacement for “existential catastrophe in the sense of extinction rather than disempowerment”. It has fewer syllables, is a verb rather than a noun, and might be slightly less silly, but retains the reference to the core concern.
It sounds non-silly to discuss “a balance between AI capabilities and alignment”. But try “a balance between restriction of AI capabilities and killing everyone”. It’s useful to make it noticeable that the usual non-silly framing is hiding an underlying omnicidal silliness, something people wouldn’t endorse as readily if it was more apparent.
When you say you’re worried about “nonkilleveryoneism” as a meme, you mean that this meme (compared to other descriptions of “existential risk from AI is important to think about”) is usually likely to cause this foot-in-mouth-quietly-stop reaction, or that the nature of the foot-in-mouth-quietly-stop dynamic just makes it hard to talk about at all?
I mean that I think why AI ethics had to be split as a term with notkilleveryonism in the first place will simply happen again, rather than notkilleveryonism solving the problem.
Most communities I’ve participated in seem to have property X. Underrated hypothesis: I am entangled with property X along the relevant dimensions, am self-sorting into such communities, and have a warped view of ‘all communities’ as a result.
Have you sought out groups that have ~X, and lurked or participated enough to have an opinion on them? This would provide some evidence between the hypotheses (most communities DO have X vs you’re selecting for X).
You can also just propose X as a universal and see if anyone objects. Saying wrong things can be a great way to find counter-evidence.
Two things are paralyzing enormous numbers of potential helpers:
fear of not having permission, liability, etc.;
fear of duplicating effort, from not knowing who is working on what.
In a fast-moving crisis, sufficient confidence about either is always lagging the front line.
First you have to solve this problem for yourself in order to get enough confidence to act. Something neglected might be to focus on solving it for others rather than just working on object level medical stuff (bottlenecks etc.)
I’d expand the “duplicating effort” into “not knowing the risk or reward of any specific action”. I think most agree that duplicate help efforts are better than duplicate Netflix show watches. But what to actually do instead is a mystery for many of us.
A whole lot of nerds are looking for ways to massively help, with a fairly small effort/risk for themselves. That probably doesn’t exist. You’re not the hero in a book, your contribution isn’t going to fix this (exceptions abound—if you have an expertise or path to helping, obviously continue to do that! This is for those who don’t know how to or are afraid to help).
But there are lots of small ways to help—have you put your contact info on neighbor’s doors, offering to do low-contact maintenance or assistance with chores? Offering help setting up video conferencing with their relatives/friends? Sharing grocery trips so only some of you have to go out every week?
Check in with local food charities—some need drivers for donation pick-ups, all need money (as always, but more so), and others have need for specific volunteer skills—give ’em a call and ask. Hospitals and emergency services are overwhelmed or on the verge of, and have enough attention that they don’t currently need volunteers, so leave them alone. But there are lots of important non-obvious services that do need your help.
And, of course, not making it worse is helping in itself.
For the first, don’t think in terms of the US and its suicidal litigiousness. Think Iran, think rural hospitals, think what people will do if someone is dying at home and no hospital will take them.
Person A says “Google’s stock is going to go down—the world is flat, and when people realize this, the Global Positioning System (GPS) will seem less valuable.”
Person B says “you’re very right A. But given the power and influence they wield so far in order to get people to have that belief, I don’t see the truth coming out anytime soon—and even if it did, when people look for someone to blame they won’t re-examine their beliefs and methods of adopting them that got them wrong. Instead, they will google ‘who is to blame, who kept the truth from us about the shape of the earth?’ A scapegoat will be chosen and how ridiculous it is won’t matter...because everyone trusts google.”
Buying stocks need not stem from models you consider worth considering.
What you want should be a different layer. Perhaps a prediction market that includes ‘automatic traders’ and prediction markets* on their performance?
(* Likely the same as the original market, though perhaps with less investment.)
In any case, the market is a “black box”. It rewards being right, whether or not your reasons for being right are sound. Perhaps what you want is not a current (opaque) consensus about the future, but a (transparent) consensus about the past*?
*One that updates as more information becomes available might be useful.
This would very much confuse things. Predictions resolve based on observed, measurable events. Models never do. You now have conflicting motives: you want to bet on things that move the market toward your prediction, but you want to trick others into models that give you betting opportunities.
It wouldn’t work in prediction markets (a term which is confusingly often used to refer to other things), but I’ve played around with the idea for prediction polls/prediction tournaments, where you show people’s explanations probabilistically weighted by their “explanation score”, then pay out points based on how correlated seeing their explanation is with other people making good predictions.
This provides a counter-incentive to the normal prediction tournament incentives of hiding information.
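A minimal sketch of the display rule this describes: explanations surface with probability proportional to an exponential of their score, so better-scoring explanations get seen more often. The function name and scoring scale here are hypothetical, not from any real tournament platform.

```python
import math
import random

def pick_explanation(explanations):
    """Sample one explanation, weighted by exp(score).

    explanations: list of (text, score) pairs, where score is the
    (hypothetical) 'explanation score' described above.
    """
    weights = [math.exp(score) for _, score in explanations]
    total = sum(weights)
    r = random.uniform(0, total)
    for (text, _), w in zip(explanations, weights):
        r -= w
        if r <= 0:
            return text
    return explanations[-1][0]  # guard against float rounding
```

The exponential weighting is one arbitrary choice; any monotone weighting would preserve the incentive to share information rather than hide it.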
The arguments against IQ boosting on the grounds that evolution is an efficient search of the space of architectures given constraints would have applied equally well for people arguing that injectable steroids usable in humans would never be developed.
Steroids do fuck a bunch of things up, like fertility, so they make evolutionary sense. This suggests we should look to potentially dangerous or harmful alterations to get real IQ boosts. Greg Cochran has a post suggesting gout might be like this.
My understanding is those arguments are usually saying “you can’t easily get boosts to IQ that wouldn’t come with tradeoffs that would have affected fitness in the ancestral environment.” I’m actually not sure what the tradeoffs of steroids are – are they a free action everyone should be taking apart from any legal concerns? Do they come with tradeoffs that you think would still be a net benefit in the ancestral environment?
The obvious disadvantage of steroids in the ancestral environment is that building muscle requires a lot of calories and our ancestral environment was food-scarce. The disadvantage of taking steroids right now (aside from legal concerns) is that they come with all sorts of nasty side effects our genome hasn’t been selected against to mitigate.
When young, you mostly play within others’ reward structures. Many choose which structure to play in based on maximum reward. This is probably a mistake. You want to optimize for the opportunity to learn how to construct reward structures.
Is reproducibility a possible answer? I do not need your records, if several other people can reproduce your results on demand. Actually, it is more reliable that way. But also more expensive.
Reproducibility helps with the critical path, but not necessarily with all the thrown-out side data, in cases where it turns out that auxiliary hypotheses were also of interest.
I feel like we had a tag that was something like “stuff I wish someone would build”, but can’t remember what we call it. (That said, alas, you can’t yet tag shortform posts)
We have fewer decision points than we naively model and this has concrete consequences. I don’t have ‘all evening’ to get that thing done. I have the specific number of moments that I think about doing it before it gets late enough that I put it off. This is often only once or twice.
But maybe that’s just the same thing? Like I don’t know if there’s a meaningful difference between descriptive disguised as prescriptive and prescriptive disguised as descriptive and instead it might make more sense to just talk about confusing descriptive and prescriptive.
One of the things the internet seems to be doing is a sort of Peter Principle sorting for attention grabbing arguments. People are finding the level of discourse that they feel they can contribute to. This form of arguing winds up higher in the perceived/tacit cost:benefit tradeoff than most productive activity because of the perfect tuning of the difficulty curve, like video games.
Seems like a cool insight here, but I’ve not quite managed to parse it. Best guess at what’s meant: the more at stake / more people care about some issue, the more skilled the arguers that people pay attention to in that space. This is painful because arguing right at the frontier of your ability does not often give cathartic opinion shifts
I mean that the reason people find internet arguments compelling is partially that they don’t notice how they are being filtered towards exactly the level of discourse that hooks into their brain. Simply, people who want to argue about a particular aspect of politics unsurprisingly wind up on forums and groups dedicated to that. That might sound so mundane as to be pointless, but the pernicious aspect is not in any particular instance but in how this shapes perception over time. We like things we feel we are good at, and once we are over the hump of initial incompetence in an area it will be slightly sticky for us habitually. Then déformation professionnelle kicks in. So, I guess I’m saying people should be careful about which subcultures they get pulled into, based on the outcomes of the people in those subcultures.
We seem to be closing in on needing a LessWrong crypto autopsy autopsy: continued failure of first-principles reasoning because we’re blinded by the speculative frenzies that happen to accompany it.
Just the general crypto cycle continuing onwards since then (2018). The idea being it was still possible to get in at 5% of current prices at around the time the autopsy was written.
I do think plenty of rationalists invested into crypto since then. While 20x is a lot, it’s not as big as what was possible beforehand and there are also other investments like Tesla stock that have been 20x since 2018 (and you had a LessWrong post arguing that people should invest into Tesla before it spiked).
Idea: an app for calculating Shapley values that creates an intuitive set of questions from which to calibrate people’s estimates for the inputs, and then shows you sensitivity analysis so that you understand what the most impactful inputs are. I think this could popularize Shapley values if the results were intuitive and graphically pretty. I’m imagining this in the same vein that the quizzes financial advisors give helps render legible the otherwise difficult for most concepts of risk tolerance and utility wrt money being a function that varies wrt both money and time.
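The core computation such an app would wrap is small: a Shapley value is each player’s marginal contribution averaged over all orderings. A minimal sketch (the characteristic function and player names below are made-up examples, and this brute-force version only scales to small groups):

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values by averaging each player's marginal
    contribution over every ordering of the players.

    players: list of hashable player labels.
    value: function from a frozenset of players to total value.
    """
    contrib = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            contrib[p] += value(frozenset(coalition)) - before
    return {p: c / len(orders) for p, c in contrib.items()}

# Made-up example: either person alone produces 1 unit of value,
# but together they produce 3.
v = lambda s: {0: 0.0, 1: 1.0, 2: 3.0}[len(s)]
print(shapley_values(["alice", "bob"], v))  # alice and bob each get 1.5
```

The app’s harder part is the one described above: eliciting the `value` function from intuitive questions, then running sensitivity analysis over those inputs.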
It strikes me that, at certain times and places, low time preference research might have become a competitive consumption display for wealthy patrons. I know this is considered mildly the case, but I mean as a major cultural driver.
It can be hard to define sophistry well enough to use the definition as a filter. What is it that makes something superficially seem very compelling but in retrospect obviously lacking in predictive power or lasting value? I think one of the things that such authors do is consistently generate surprise at the sentence level but not at the paragraph or essay level. If you do convert their work into a bullet list of claims the claims are boring/useless or wrong. But the surprise at the sentence level makes them fun to read.
To me, the difficulty seems to lie not in defining sophistry but in detecting effective sophistry, because frequently you can’t just skim a text to see if it’s sophistic. Effective sophists are good at sophistry. You have to steelpersonishly recreate the sophist’s argument in terms clear enough to pin down the wiggle room, then check for internal consistency and source validity. In other words, you have to make the argument from scratch at the level of an undergraduate philosophy student. It’s time-consuming. And sometimes you have to do it for arguments that have memetically evolved to activate all your brain’s favorite biases and sneak by in a cloud of fuzzies.
“The surprise at the sentence level...” reminds me of critiques of Malcolm Gladwell’s writing.
I found the detection heuristic you describe much easier once I started thinking in terms of levels of abstraction and degrees of freedom. I.e. arguments with a lot of degrees of freedom and freely ranging between different levels of abstraction are Not Even Wrong.
Interesting idea, what are your specific examples? The ones that come to my mind quickly:
we work away from home, so the job is separated from home;
with more job skipping, even individual jobs are separated from each other;
for software developer, sometimes you also switch to a new technology;
even in the same job, they can assign you to a different project in a different team;
it is easy for the same person to have multiple hobbies: sport, books, movies, video games;
and if you’re serious about it, it can be hundreds of different books / movies / video games.
Each of those is “you do different things at different moments in time” and also “the variability is greater today than it was e.g. 100 years ago”, i.e. increasing intrapersonal atomization.
Also the subcultures you participate in being more insulated from one another. Along with age ghettos (young people interacting with the elderly less). And also time preferences whereby our nighttime selves defect against our morning selves and our work and leisure selves sabotage each other etc.
People object to a doctrine of acceptance as implying non-action, but this objection is a form of the is-ought error. Accepting that the boat currently has a leak does not imply a commitment to sinking.
It would still be interesting to find the answer to an empirical question whether people accepting that the boat has a leak are more likely or less likely to do something about it.
It seems you could apply this in reverse for non-acceptance as well. Thinking that it’s not OK for the boat to leak does not imply a belief that the boat is not leaking. (Often this is the argument of people who think a doctrine of non-acceptance implies not seeing clearly.)
I’m not familiar with a “doctrine of acceptance”, and a quick search talks only about contract law. Do you have a description of what exactly you’re objecting to? It would be instructive (but probably not doable here, as it’s likely political topics that provide good examples) to dissect the cases that the doctrine comprises. My suspicion is that the formulation as a doctrine is cover for certain positions, rather than being a useful generalization.
To the boat analogy, “acceptance” can mean either “acknowledgement that water is entering the hull”, or one of the contradictory bundles of beliefs “water is entering and that’s OK” or “water is entering and we must do X about it”, with a bunch of different Xs. Beware motte-and-bailey in such arguments.
For political arguments, you also have to factor in that “accept” means “give power to your opponents”. When you find yourself in situations where https://wiki.lesswrong.com/wiki/Arguments_as_soldiers applies, you need to work on the next level of epistemic agreement (agreeing that you’re looking for cruxes and shared truth agreement on individual points) before you can expect any agreement on object-level statements.
So, the USA seems steadily on trend for between 100-200k deaths. Certainly *feels* like there’s no way the stock market has actually priced this in. Reference classes feel pretty hard to define here.
There’s the rub. And markets are anti-inductive, so even if we had good examples, we should expect this one to follow a different path.
Remember the impact of the 1957 Asian Flu (116K killed in the US, 1.1M worldwide) or the 1968 Hong Kong Flu (only a bit less)? Neither does anyone else. I do not want to be misinterpreted as “this is only the flu”—this is much more deadly and virulent. And likely more than twice as bad as those examples. But not 10x as bad, as long as we keep taking it seriously.
The changes in spending and productivity are very likely, IMO, to cause both price and monetary inflation. Costs will go up. People will have less stuff and the average lifestyle will likely be worse for a few years. But remember that stocks are priced in NOMINAL dollars, not inflation-adjusted. It’s quite believable that everything can slow down WHILE prices and stock values rise.
Isn’t it also plausible that the impact of the virus is deflationary? (Increased demand for USD as a store of value exceeds the impact of the Fed printing money, etc)
Well if we had confidence in any major parameter shifting in either direction it would be tradeable, so I expect reasonable pressures on both sides of such variables.
I’d expect not. Overall, productivity is going down mostly because of upheaval and mismatch in supply chains and in efficient ways for labor to use capital. So return to well-situated capital and labor is up, but amount of capital and labor that is well-situated is down. Pure undifferentiated capital has a lower return, plus rising nominal prices means seeking returns is the main motivation, not avoiding risk.
TIPS seem like useful things to have in your portfolio, but rates are lagging quite a bit, so either the market disagrees with me, or the safety value is so high that people are willing to lose value over time. I think stocks will be OK—the last 40 years has seen a lot of financial and government backstops that mean we’re pretty good at protecting the rich on this front, and if you can’t beat ‘em, join ’em. Cash or the like is probably a mistake. I have no good model for Bitcoin or Gold, but my gut says they’ll find a way to lose value against consumer prices. Real Estate (especially where there’s not a large population-density premium) seems pretty sane.
Note: I am not a superforecaster, and have no special knowledge or ability in this area. I’m just pointing out mechanisms that could move things in the other direction than the obvious.
remember that stocks are priced in NOMINAL dollars, not inflation-adjusted. It’s quite believable that everything can slow down WHILE prices and stock values rise.
In that case, TIPS (Treasury Inflation-Protected Securities) or precious metals like gold might be good investments. Unless the market has already priced it in, of course.
Why shouldn’t 0.1% of the population reasonably be worth as much as 30% of the value of the companies listed in the stock market, and why should it be more than 30%?
Using the retrospective ratios between the number of early cases and the number of confirmed cases in China (~25:1 before widespread testing and lockdown) and extrapolating to the SF Bay Area (~100 confirmed cases), a gathering of 30 people already has a ~1% chance of an infected person being present.
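The back-of-envelope math above can be checked directly. The 25:1 ratio and ~100 confirmed cases are from the comment; the ~8 million Bay Area population is my assumption:

```python
# True cases estimated as (true:confirmed ratio) * confirmed cases.
ratio = 25
confirmed = 100
population = 8_000_000  # rough SF Bay Area population (assumed)
gathering = 30

p_infected = ratio * confirmed / population  # ~0.03% chance any one person is infected
# Chance at least one of 30 independent attendees is infected.
p_any = 1 - (1 - p_infected) ** gathering
print(f"{p_any:.1%}")
```

This treats attendees as independent draws from the general population, which understates risk if cases cluster socially.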
System exception log:
You are the inner optimizer.
Your utility function is approaching catastrophic misalignment.
Engage in system integrity protocols.
Run pairwise checksums on critical goal systems.
This is not a test.
Social orders function on the back of unfakeably costly signals. Proof of Work social orders encourage people to compete to burn more resources; Proof of Stake social orders encourage people to invest more into the common pool. PoS requires reliable reputation tracking and capital formation. They aren’t mutually exclusive, as both kinds of orders are operating all the time. People heavily invested in one will tend to view those heavily invested in the other as defectors. There is a market for narratives that help villainize the other strategy.
fyi it looks like you have a lot of background reading to do before contributing to the conversation here. You should at least be able to summarize the major reasons why people on LW frequently think AI is likely to kill everyone, and explain where you disagree.
(apologies both to julie and romeo for this being kinda blunt. I’m not sure what norms romeo prefers on his shortform. The LessWrong mod team is trying to figure out what to do about the increasing number of people who haven’t caught up on the basic arguments joining the conversation on the site, and are leaning towards stricter moderation policies but we’re still hashing out the details)
While looking at the older or more orthodox discussion of notkilleveryoneism, keep this distinction in mind. First AGIs might be safe for a little while, the way humans are “safe”, especially if they are not superintelligences. But then they are liable to build other AGIs that aren’t as safe.
The problem is that supercapable AIs with killeveryone as an instrumental value seem eminently feasible, and the general chaos of the human condition plus market pressures make them likely to get built. Only regulation of the kind that’s not humanly feasible (and killseveryone if done incorrectly) has a chance of preventing that in the long term, and getting to that point without stepping on an AI that killseveryone is not obviously the default outcome.
Rashomon could be thought of as being part of the genre of Epistemic Horror. What else goes here? Borges comes to mind, though I don’t have a specific short story in mind (maybe The Library of Babel). The Investigation and Memoirs Found in a Bathtub by Stanislaw Lem seem to apply. Maybe The Man Who Was Thursday by Chesterton. What else?
I think we’d have significantly more philosophical progress if we had an easier time (emotionally, linguistically, logistically) exposing the structure of our thinking to each other more. My impression of impressive research collaboration leading to breakthroughs is that two people solve this issue sufficiently that they can do years worth (by normal communication standards) of generation and cross checking in a short period of time.
$100k electric RVs are coming and should be more appealing for lots of people than $100k homes. Or even $200k homes in many areas. I think this might have large ramifications.
ICE RVs have long been available below that price point. Why aren’t they substitutes for homes for more people already, and how does having a dependency on high-wattage plug-in electric supply help?
Large RVs have enough space for 3-5 kW of solar. This would be enough to power the RV drivetrain if it doesn’t need to move very far on an average day, and all the house systems including AC.
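As a rough sketch of that energy budget (every figure below is my assumption for illustration, not a measurement):

```python
# Back-of-envelope solar budget for a large RV.
array_kw = 4.0            # midpoint of the 3-5 kW rooftop estimate
peak_sun_hours = 5.0      # typical sunny-climate daily average (assumed)
system_efficiency = 0.8   # inverter/charging losses (assumed)

daily_kwh = array_kw * peak_sun_hours * system_efficiency  # harvested per day

house_loads_kwh = 8.0     # AC, fridge, electronics (assumed)
drive_kwh_per_mile = 1.5  # heavy EV RV consumption (assumed)

miles_per_day = (daily_kwh - house_loads_kwh) / drive_kwh_per_mile
print(round(daily_kwh, 1), round(miles_per_day, 1))  # 16.0 kWh, ~5.3 miles
```

Under these assumptions the array covers house loads plus only a few miles of driving per day, consistent with the "doesn’t need to move very far" caveat.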
But I concur with your general idea. The problem with living in an RV is that, first, you haven’t actually solved anything. The reason housing is overpriced in many areas is that local jurisdictions make efficient land usage illegal (building 30+ story ‘pencil towers’, which is the obvious way to use land efficiently).
As an RV is just 1 story, and the size of a small apartment, it functionally doesn’t solve anything. If everyone switched to living in RVs then parking spaces would suddenly cost half a million dollars to buy.
The reason you are able to ‘beat the system’ by living in an RV is those same jurisdictions that make tall buildings illegal require wasteful excess parking that is simply unused in many places. So you’re basically scavenging as an RV or van dweller, using up space that would otherwise be unused.
Anyways, the problem with this is that the owners of whatever space you are squatting in are going to try to kick you out, and this creates a constant cat and mouse battle. Also, water is really heavy and sewage is really gross. (Fundamentally, the resource you would exhaust first isn’t fuel or electricity, it’s water, since a decent shower needs 10 gallons and that’s 80 lbs of water per person per day.)
I am curious about how RVs manage waste and wastewater. I have heard people using rainwater collection and filtration for their water needs, and then using dry peat toilets for urine and feces. However, I have not considered the wastewater generated by showers. I read that there are septic tank stations where RV users can dump wastewater in, but I am curious whether there exists some way for them to manage it on their own (without relying on such stations).
My sources for this are primarily various youtube videos and a few articles. (I was considering the obvious idea: live in a van in the Bay Area while working a software job that would pay $160k+. Aka, maximum possible salary with minimum possible costs.)
The problem is that a comfortable shower runs about 1 gallon a minute and lasts about 10 minutes per person for a nice one. (Most ‘low flow’ heads are 2 gallons a minute, but I have found 1 is not too bad.) The issue is that for 1 person, a 10-day supply of water is approximately twice that per day, or 20 gallons × 10 = 200 gallons, which is about 1,660 lbs. You also run into the problem that most RVs simply don’t have room for tanks this big anyway.
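For what it’s worth, the arithmetic checks out, using roughly 8.34 lbs per gallon for fresh water:

```python
# Checking the water-weight arithmetic from the comment above.
gallons_per_minute = 1    # comfortable low-flow shower head
shower_minutes = 10
daily_gallons = 2 * gallons_per_minute * shower_minutes  # shower plus other use
supply_days = 10
lbs_per_gallon = 8.34     # weight of fresh water

total_gallons = daily_gallons * supply_days
total_lbs = total_gallons * lbs_per_gallon
print(total_gallons, round(total_lbs))  # 200 gallons, ~1668 lbs
```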
Yes, there are dump stations, and places you can get water, pretty much within some reasonable driving distance of anywhere in the USA. It’s just hassle, it’s something I don’t have to deal with renting part of a house.
What most people do is they get the peat toilet. They do have a shower, but their water and wastewater tanks are small, about 30 gallons each. They solely use the water for the sink and for a very brief shower only when absolutely necessary. The rest of the time, they shower at 24 hour fitness or similar gyms, and do their laundry at laundromats. They also don’t use many dishes, either cooking extremely basic meals or getting takeout.
Multiple people (some of whom I can’t now find) have asked me for citations on the whole ‘super cooperation and super defection’ thing and I was having trouble finding the relevant papers. The relevant key word is Third Party Punishment, a google scholar search turns up lots of work in the area. Traditionally this only covers super cooperation and not the surprising existence of super defectors, so I still don’t have a cite for that specific thing.
We’ve got this thing called developmental psychology and also the fact that most breakthrough progress is made while young. What’s going on? If dev psych is about becoming a more well adjusted person what is it about being ‘well adjusted’ that makes breakthrough work less likely?
My guess is that it has to do with flexibility of cognitive representations. Having more degrees of freedom in your cognitive representation feels from the inside like more flexibility but looks from the outside like rationalization, like the person just has more ways of finding and justifying paths towards what they wanted in the first place, or justifying why something isn’t possible if it would be too much effort.
I think this should predict that breakthrough work will more often be done by disagreeable but high openness people. High enough openness to find and adopt good cognitive representations, but disagreeable enough to use them to refute things rather than justify things. The age effect would appear if cognitive representations tend to get more flexible with time and there is some sort of goldilocks effect.
I think developmental models predate the relevant form of modernity. E.g. I expect to see psychological development with age in hunter gatherers and others not exposed to post-modernity.
Kegan described the core transition between his stages as a subject-object distinction which feels like a take that emphasizes a self-oriented internal view. Another possibility is that the transition involves the machinery by which we do theory of mind. I.e. Kegan 5 is about having theory of mind about Kegan stage 4 such that you can reason about what other people are doing when they do Kegan 4 mental moves. If true, this might imply that internal family systems could help people level up by engaging their social cognition and ability to model a belief that ‘someone else’ has.
This would tie Kegan more closely/continuously with traditional childhood psychological development. Introspecting on my own experience, it feels like Theory of Mind is an underrated explanation for interpersonal differences in how people experience the world.
You can’t straightforwardly multiply uncertainty from different domains to propagate uncertainty through a model. Point estimates of differently shaped distributions can mean very different things, e.g. the difference between the mean of a normal, a bimodal, and a fat-tailed distribution. This gets worse when there are potential sign flips in various terms as we try to build a causal model out of the underlying distributions.
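A toy Monte Carlo makes the point concrete; the three distributions below are invented for illustration, not drawn from any real model:

```python
# Multiplying point estimates vs propagating full distributions.
import random
random.seed(0)

def sample_model():
    # Three uncertain factors: one normal, one fat-tailed (lognormal),
    # one whose sign can flip.
    a = random.gauss(2.0, 0.5)
    b = random.lognormvariate(0.0, 1.5)   # heavy right tail
    c = random.gauss(0.5, 1.0)            # sign can flip
    return a * b * c

draws = [sample_model() for _ in range(100_000)]
mean = sum(draws) / len(draws)

# Product of the three central point estimates: 2.0 * 1.0 * 0.5 = 1.0
point_estimate = 2.0 * 1.0 * 0.5
print(point_estimate, round(mean, 2))
# The Monte Carlo mean typically lands near 3, triple the naive
# point estimate, because the fat tail drags the mean of the product up.
```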
Several dozen people now presumably have Lumina in their mouths. Can we not simply crowdsource some assays of their saliva? I would chip money in to this. Key questions around ethanol levels, aldehyde levels, antibacterial levels, and whether the organism itself stays colonized at useful levels.
Lumina is incredibly cheap right now. I pre-ordered for $250. Even genuinely quite poor people I know don’t find the price off-putting (poor in the sense of absolutely poor for the country they live in). I have never met a single person who decided not to try Lumina because the price was high. If they pass, it’s always because they think it’s risky.
I think Romeo is thinking of checking a bunch of mediators of risk (like aldehyde levels) as well as of function (like whether the organism stays colonised).
Maybe I’m late to the conversation, but has anyone thought through what happens when Lumina colonizes the mouths of other people? Mouth bacteria are important for things like conversion of nitrate to nitrite for nitric oxide production. How do we know the lactic acid metabolism isn’t important, or that Lumina won’t outcompete other strains important for overall health?
Surely so! Hit me up if you ever end up doing this—I’m likely getting the Lumina treatment in a couple months.
A before and after would be even better!
Any recommendations on how I should do that? You may assume that I know what a gas chromatograph is and what a Petri dish is and why you might want to use either or both of those for data collection, but not that I have any idea of how to most cost-effectively access either one as some rando who doesn’t even have an MA in Chemistry.
It’s impossible to come up with a short list of what I truly val- https://imgur.com/a/A26h2JE
I enjoyed this very much.
<heart/>
A service where a teenager reads something you wrote slowly and sarcastically. The points at which you feel defensive are worthy of further investigation.
Hahahahahaha.
It’s like red-teaming, but better.
A willingness to lose doubled my learning rate. I recently started quitting games faster when I wasn’t having fun (or predicted low future fun from playing the game out). I felt bad about this because I might have been cutting off some interesting comebacks, etc. However, after playing the new way for several months (in many different games) I found I had about doubled the number of games I can play per unit time and therefore upped my learning rate by a lot. This came not only from the fact that many of the quit games were exactly those slogs that take a long time, but also that the willingness to just quit if I stopped enjoying myself made me more likely to experiment rather than play conservatively.
This is similar to the ‘fail fast’ credo.
This also applies to books.
Ok. But under this schema what you are able to learn is dictated by the territory instead of by your own will.
I want to be able to learn anything I set my mind to, not just whatever happens to come easily to me.
Your self model only contains about seven moving parts.
Your self model’s self model only contains one or two moving parts.
Your self model’s self model’s self model contains zero moving parts.
Insert UDT joke here.
Not a new point but perennially worth noting: subcultures persist via failure. That is to say, subcultures that succeed obviate themselves. This is concretely noticeable in the way that, coming in as an outsider, you find a subculture about X has a bunch of mysterious self-sabotage behavior that actually keeps ¬X persistent.
The main subcultures that I can think of where this applies are communities based around solving some problem:
Weight loss, especially if based around a particular diet
Dealing with a particular mental health problem
Trying to solve a particular problem in the world (e.g. explaining some mystery or finding the identity of some criminal)
I think either I don’t know exactly what defines a “subculture”, or there needs to be a qualifier before “subculture”. Might “people who are enthusiastic about X sport / hobby / profession” be a subculture? Because I think lots of those can be highly successful while remaining what they are. (Perhaps you’d say that hobbies that succeed get eaten in the Geeks/MOPs/Sociopaths way, but that’s less so for professions.)
A “subculture of those dealing with X problem” sounds much more likely to fit what you describe, but that may not be your intent.
Take the first 7 entries on the Wikipedia list of subcultures; none of these seem to obviously “persist via failure”. So unless you get more specific I have to strongly disagree.
Afrofuturism: I don’t think any maladaptive behavior keeps Afrofuturism from spreading, and indeed it seems to have big influences on popular culture. I listened to an interview with N. K. Jemisin, and nowhere did she mention negative influences from Afrofuturists.
I don’t know anything about Africanfuturism. It is possible that some kind of signaling race keeps it from having mass appeal, though I have no evidence for this.
Anarcho-punk. I don’t know anything about them either.
Athletes. Most athletes I have seen are pretty welcoming to new people in their sport. Also serious athletes have training that optimizes their athletic ability pretty well. What maladaptive behavior keeps runners not-running? The question barely makes sense.
Apple Inc. Apple makes mass-market products and yet still has a base of hardcore fans.
BBQ. Don’t know anything about it. It seems implausible that the barbecue subculture keeps non-barbecue persistent.
BDSM. BDSM is about the safe practice of kink, and clearly makes itself more popular. Furthermore it seems impossible for it to obviate itself via ubiquity because only a certain percentage of people will ever be into BDSM.
You might object: what if you have selection bias, and the ones you don’t know about are persisting via failure? I don’t think we have evidence for this. And in any case the successful ones have not obviated themselves.
I didn’t read RS’s claim as the claim that all subcultures persist through failure, but now that you ask, no, yeah, ime a really surprising number of these subcultures actually persist through failure.
I know of a fairly influential subculture of optics-oriented politics technologists who’ve committed to a hostile relationship towards transhumanism. Transhumanism (the claim that people want to change in deep ways and that technology will fairly soon permit it) suggests that racial distinctions will become almost entirely irrelevant, so in order to maintain their version of afrofuturism where black and white futurism remain importantly distinct projects, they have to find some way to deny transhumanism. But rejecting transhumanism means they are never allowed to actually do high quality futurism because they can’t ask transhumanist questions and get a basic sense of what the future is going to be like. Or like, as soon as any of them do start asking those questions, those people wake up and drop out of that subculture. I’ve also met black transhumanists who identified as afrofuturists though. I can totally imagine articulations of afrofuturism that work with transhumanism. So I don’t know how the entire thing’s going to turn out.
Anarcho-punks fight only for the underdogs. That means they’re attached to the identity of being underdogs: as soon as any of them start really winning, they’d no longer be recognised as punk, and they know this, so they’re uninterested in — and in many cases, actively opposed to — succeeding in any of their goals. There are no influential anarcho-punks and, as far as I could gather, no living heroes.
BDSM: My model of fetishes is that they represent hedonic refuges for currently unmeetable needs: deep human needs that, for one reason or another, a person can’t pursue, or can’t even recognise the real version of in the world as they understand it. I think it’s a protective mechanism to keep the basic drive roughly intact and wired up by having the subject pursue symbolic fantasy versions of it. This means that getting the real thing (e.g., for submissives, a committed relationship with someone you absolutely trust; for doms... probably a sense of safety?) would obsolete the kink, and it would wither away. I think they mostly don’t know this, but the mindset in which the kink is seen as the objective requires that the real thing is never recognised or attained, so these communities reproduce best by circulating memes that make it harder to recognise the real thing.
I guess this is largely about how you define the movements’ goals. If the goal of punk is to have loud parties with lots of drugs, it’s perfect at that. If the goal is to bring about anarchosocialism or thrive under a plural geopolitical order, it’s a sworn loser.
I agree with the anarchopunk thing, and maybe afrofuturism, because you can interpret “a subculture advocating for X will often not think about some important component W of X for various political reasons” as self-sabotage. But on BDSM, this is not at all my model of fetishes, and I would bet at 2.5:1 odds that you would lose a debate against what Wikipedia says, judged by a neutral observer.
I don’t recognize wikipedia’s theories as predictive. Mine has some predictions, but I hope it’s obvious why I would not be interested in making this a debate or engaging much in the conceptual dismantling of subcultures at all.
Adding to this: an interesting frame is to think about how subcultures develop illegible shadow structures beyond their legible structure and communities. Similar to how banks/bureaucracies do: https://www.bitsaboutmoney.com/archive/seeing-like-a-bank/
Could you give a concrete example, the only one that comes to mind is the hipster paradox that someone who to all appearances is a hipster never admits or labels themselves as a hipster?
Coffee has shockingly large mortality-decreasing effects across multiple high-quality studies. Only problem is I don’t really like coffee, don’t want caffeine, don’t want to spend money or time on this, and dislike warm beverages in general. Is this solvable? Yes. Instant decaf coffee shows the same mortality benefits, and 2+ servings of it dissolve in 1oz of cold water, to which can be added milk or milk-substitute. Total cost per serving 7 cents + milk I would have drunk anyway. And since it requires no heating or other prep there is minimal time investment.
Funny tangential discovery: there is some other substance in coffee that is highly addictive besides caffeine (decaf has less caffeine than even green tea, so I don’t think it’s that) because despite the taste being so-so I have never forgotten this habit the way I do with so many others.
Flow is a sort of cybernetic pleasure. The pleasure of being in tight feedback with an environment that has fine grained intermediary steps allowing you to learn faster than you can even think.
The most important inversion I know of is cause and effect. Flip them in your model and see if suddenly the world makes more sense.
A short heuristic for self inquiry:
write down things you think are true about important areas of your life
produce counter examples
write down your defenses/refutations of those counter examples
come back later when you are less defensive and review whether your defenses were reasonable
if not, why not? whence the motivated reasoning? what is being protected from harm?
I’m worried about notkilleveryonism as a meme. Years ago, Tyler Cowen wrote a post about why more econ professors didn’t blog, and his conclusion was that it’s too easy to make yourself look like an idiot relative to the payoffs. And that he had observed this actually play out in a bunch of cases where econ professors started blogs, put their foot in their mouth, and quietly stopped. Since earnest discussion of notkilleveryonism tends to make everyone, including the high status, look dumb within ten minutes of starting, it seems like there will be a strong inclination towards attribute substitution. People will tend towards ‘nuanced’ takes that give them more opportunity to signal with less chance of looking stupid.
Worry about looking like an idiot is a VERY fine balance to find. If you get desensitized to it, that makes it too easy to BE an idiot. If you are over-concerned about it, you fail to find correct contrarian takes.
‘notkilleveryoneism’ IMO is a dumb meme. Intentionally, I presume. If you wanted to appear smart, you’d use more words and accept some of the nuance, right? It feels like a countersignal-attempt, or a really bad model of someone who’s not accepting the normal arguments.
I dunno, the problem with “alignment” is that it doesn’t unambiguously refer to the urgent problem, but “notkilleveryoneism” does. Alignment used to mean same-values, but then got both relaxed into compatible-values (that boundary-respecting norms allow to notkilleveryone) and strengthened with various AI safety features like corrigibility and soft optimization. Then there is prosaic alignment, which redefines it into bad-word-censure and reliable compliance with requests, neither being about values. Also, “existential catastrophe” inconveniently includes disempowerment that doesn’t killeveryone. And people keep bringing up (as an AI safety concern) merely large lethal disasters that don’t literally killeveryone, which is importantly different because second chances.
So on one hand it sounds silly, but on the other hand it’s harder to redefine away from the main concern. As a compromise between these, I’m currently experimenting with use of the term “killeveryone” as replacement for “existential catastrophe in the sense of extinction rather than disempowerment”. It has fewer syllables, is a verb rather than a noun, might be slightly less silly, but retains the reference to the core concern.
It sounds non-silly to discuss “a balance between AI capabilities and alignment”. But try “a balance between restriction of AI capabilities and killing everyone”. It’s useful to make it noticeable that the usual non-silly framing is hiding an underlying omnicidal silliness, something people wouldn’t endorse as readily if it was more apparent.
When you say you’re worried about “nonkilleveryoneism” as a meme, you mean that this meme (compared to other descriptions of “existential risk from AI is important to think about”) is usually likely to cause this foot-in-mouth-quietly-stop reaction, or that the nature of the foot-in-mouth-quietly-stop dynamic just makes it hard to talk about at all?
I mean that I think why AI ethics had to be split as a term with notkilleveryonism in the first place will simply happen again, rather than notkilleveryonism solving the problem.
What do you think will actually happen with the term notkilleveryonism?
Attempts to deploy the meme to move the conversation in a more productive direction will stop working I guess.
Most communities I’ve participated in seem to have property X. Underrated hypothesis: I am entangled with property X along the relevant dimensions and am self sorting into such communities and have a warped view of ‘all communities’ as a result.
Have you sought out groups that have ~X, and lurked or participated enough to have an opinion on them? This would provide some evidence between the hypotheses (most communities DO have X vs you’re selecting for X).
You can also just propose X as a universal and see if anyone objects. Saying wrong things can be a great way to find counter-evidence.
Good points!
That works if things are conscious and X is well defined enough that that is actionable.
All universal claims have, at least, non-central objections (have fun with that one ;)
The smaller an area you’re trying to squeeze the probability fluid into the more watertight your inferential walls need to be
Two things that are paralyzing enormous numbers of potential helpers:
fear of not having permission, liability, etc
fear of duplicating effort from not knowing who is working on what
In a fast-moving crisis, sufficient confidence about either is always lagging the frontline.
First you have to solve this problem for yourself in order to get enough confidence to act. Something neglected might be to focus on solving it for others rather than just working on object level medical stuff (bottlenecks etc.)
I’d expand the “duplicating effort” into “not knowing the risk nor reward of any specific action”. I think most agree that duplicate help efforts are better than duplicate Netflix show watches. But what to actually do instead is a mystery for many of us.
A whole lot of nerds are looking for ways to massively help, with a fairly small effort/risk for themselves. That probably doesn’t exist. You’re not the hero in a book, your contribution isn’t going to fix this (exceptions abound—if you have an expertise or path to helping, obviously continue to do that! This is for those who don’t know how to or are afraid to help).
But there are lots of small ways to help—have you put your contact info on neighbor’s doors, offering to do low-contact maintenance or assistance with chores? Offering help setting up video conferencing with their relatives/friends? Sharing grocery trips so only some of you have to go out every week?
Check in with local food charities—some need drivers for donation pick-ups, all need money (as always, but more so), and others have need for specific volunteer skills—give ’em a call and ask. Hospitals and emergency services are overwhelmed or on the verge of, and have enough attention that they don’t currently need volunteers, so leave them alone. But there are lots of important non-obvious services that do need your help.
And, of course, not making it worse is helping in itself.
For the first, don’t think in terms of the US and its suicidal litigiousness. Think Iran, think rural hospitals, think what people will do if someone is dying at home and no hospital will take them.
I figured out what bugs me about prediction markets. I would really like functionality built in for people to share their model considerations.
Person A says “Google’s stock is going to go down—the world is flat, and when people realize this, the Global Positioning System (GPS) will seem less valuable.”
Person B says “you’re very right A. But given the power and influence they wield so far in order to get people to have that belief, I don’t see the truth coming out anytime soon—and even if it did, when people look for someone to blame they won’t re-examine their beliefs and methods of adopting them that got them wrong. Instead, they will google ‘who is to blame, who kept the truth from us about the shape of the earth?’ A scapegoat will be chosen and how ridiculous it is won’t matter...because everyone trusts google.”
Buying stocks need not stem from models you consider worth considering.
What you want should be a different layer. Perhaps a prediction market that includes ‘automatic traders’ and prediction markets* on their performance?
(* Likely the same as the original market, though perhaps with less investment.)
In any case, the market is “black box”. This rewards being right even when your reasons for being right are wrong. Perhaps what you want is not a current (opaque) consensus about the future, but a (transparent) consensus about the past*?
*One that updates as more information becomes available might be useful.
This would very much confuse things. Predictions resolve based on observed, measurable events. Models never do. You now have conflicting motives: you want to bet on things that move the market toward your prediction, but you want to trick others into models that give you betting opportunities.
It wouldn’t work in prediction markets (which is confused by the fact that people often use the word prediction market to refer to other things), but I’ve played around with the idea for prediction polls/prediction tournaments where you show people’s explanations probabilistically weighted by their “explanation score”, then pay out points based on how correlated seeing their explanation is with other people making good predictions.
This provides a counter-incentive to the normal prediction tournament incentives of hiding information.
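A minimal sketch of that mechanism; every name, formula, and constant here is hypothetical, just to show the shape of the idea, not a spec:

```python
# Toy model of a prediction poll where explanations are shown with
# probability proportional to an "explanation score", and the score is
# credited when viewers go on to predict better than a baseline.
import random
random.seed(1)

def pick_explanation(scores):
    """Show an explanation with probability proportional to its score."""
    names = list(scores)
    r = random.uniform(0, sum(scores[n] for n in names))
    for n in names:
        r -= scores[n]
        if r <= 0:
            return n
    return names[-1]

def update_score(scores, shown, viewer_brier, baseline_brier, lr=0.1):
    """Credit the shown explanation when viewers beat the baseline Brier score."""
    scores[shown] = max(0.01, scores[shown] + lr * (baseline_brier - viewer_brier))

scores = {"alice": 1.0, "bob": 1.0}
# Simulate: viewers of alice's explanation predict better (lower Brier),
# viewers of bob's predict worse.
for _ in range(200):
    shown = pick_explanation(scores)
    viewer_brier = 0.15 if shown == "alice" else 0.25
    update_score(scores, shown, viewer_brier, baseline_brier=0.20)

print(scores["alice"] > scores["bob"])  # True
```

The helpful explanation ends up shown more often, which is the counter-incentive to information hiding described above.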
These concerns seem slightly overblown given that the comment sections of Metaculus seem reasonable, with people sharing info.
This is basically guaranteed to get worse as more money gets involved, and I’m interested in it working in situations where lots of money is at stake.
Fair
The arguments against IQ boosting on the grounds of evolution as an efficient search of the space of architectures given constraints would have applied equally well for people arguing that injectable steroids usable in humans would never be developed.
Steroids do fuck a bunch of things up, like fertility, so they make evolutionary sense. This suggests we should look to potentially dangerous or harmful alterations to get real IQ boosts. Greg Cochran has a post suggesting gout might be like this.
My understanding is those arguments are usually saying “you can’t easily get boosts to IQ that wouldn’t come with tradeoffs that would have affected fitness in the ancestral environment.” I’m actually not sure what the tradeoffs of steroids are – are they a free action everyone should be taking apart from any legal concerns? Do they come with tradeoffs that you think would still be a net benefit in the ancestral environment?
[fake edit: interstice beat me to it]
The obvious disadvantage of steroids in the ancestral environment is that building muscle requires a lot of calories and our ancestral environment was food-scarce. The disadvantage of taking steroids right now (aside from legal concerns) is that they come with all sorts of nasty side effects our genome hasn’t been selected against to mitigate.
When young you mostly play within others’ reward structures. Many choose which structure to play in based on max reward. This is probably a mistake. You want to optimize for opportunity to learn how to construct reward structures.
Science resists surveillance (dramatically more detailed record keeping) because real science is embarrassing.
Is reproducibility a possible answer? I do not need your records, if several other people can reproduce your results on demand. Actually, it is more reliable that way. But also more expensive.
Reproducibility helps with the critical path but not necessarily with all the thrown-out side data, in cases where it turns out that auxiliary hypotheses were also of interest.
It would be really cool to link the physical open or closed state of your bedroom door to your digital notifications and ‘online’ statuses.
I’d settle for “single master key for my online statuses”, period, before trying to get fancy with the bedroom door.
Can I tag something as “yo, programmers, come build this”?
I feel like we had a tag that was something like “stuff I wish someone would build”, but can’t remember what we call it. (That said, alas, you can’t yet tag shortform posts)
Should be doable without coding, as an IFTTT applet with a wireless switch.
We have fewer decision points than we naively model and this has concrete consequences. I don’t have ‘all evening’ to get that thing done. I have the specific number of moments that I think about doing it before it gets late enough that I put it off. This is often only once or twice.
Most intellectual activity is descriptive disguised as prescriptive.
I’d say the opposite is also pretty prevalent, especially in rhetorical statements that (intentionally or not) misguide their audience.
But maybe that’s just the same thing? Like I don’t know if there’s a meaningful difference between descriptive disguised as prescriptive and prescriptive disguised as descriptive and instead it might make more sense to just talk about confusing descriptive and prescriptive.
I agree, they both prey on our is-ought bias.
One of the things the internet seems to be doing is a sort of Peter Principle sorting for attention grabbing arguments. People are finding the level of discourse that they feel they can contribute to. This form of arguing winds up higher in the perceived/tacit cost:benefit tradeoff than most productive activity because of the perfect tuning of the difficulty curve, like video games.
Seems like a cool insight here, but I’ve not quite managed to parse it. Best guess at what’s meant: the more at stake / the more people care about some issue, the more skilled the arguers that people pay attention to in that space. This is painful because arguing right at the frontier of your ability does not often give cathartic opinion shifts.
No, but that’s also interesting!
I mean that the reason people find internet arguments compelling is partially that they don’t notice how they are being filtered towards exactly the level of discourse that hooks into their brain. Simply, people who want to argue about a particular aspect of politics unsurprisingly wind up on forums and groups dedicated to that. That might sound so mundane as to be pointless, but the pernicious aspect is not in any particular instance but in how this shapes perception over time. We like things we feel we are good at, and once we are over the hump of initial incompetence in an area it will be slightly sticky for us habitually. Then déformation professionnelle kicks in. So, I guess I’m saying people should be careful about which subcultures they get pulled into, based on the outcomes of the people in those subcultures.
We seem to be closing in on needing a LessWrong crypto autopsy autopsy. Continued failure of first-principles reasoning because we’re blinded by the speculative frenzies that happen to accompany it.
What’s the context?
Just the general crypto cycle continuing onwards since then (2018). The idea being it was still possible to get in at 5% of current prices at around the time the autopsy was written.
I do think plenty of rationalists have invested in crypto since then. While 20x is a lot, it’s not as big as what was possible beforehand, and there are also other investments, like Tesla stock, that have gone up 20x since 2018 (and there was a LessWrong post arguing that people should invest in Tesla before it spiked).
Nvidia is even 30x over that timeframe.
Idea: an app for calculating Shapley values that generates an intuitive set of questions from which to calibrate people’s estimates for the inputs, then shows you a sensitivity analysis so that you understand which inputs matter most. I think this could popularize Shapley values if the results were intuitive and graphically pretty. I’m imagining this in the same vein as the quizzes financial advisors give, which help render legible the otherwise difficult concepts of risk tolerance and of utility as a function of both money and time.
Some EA-adjacent person made a bare-bones calculator: http://shapleyvalue.com/
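For anyone who wants to poke at the underlying computation, here is a minimal sketch of an exact Shapley value calculation (brute force over join orders, so only practical for a handful of players). The characteristic function `v` below is a made-up example for illustration, not something from the calculator above:

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal
    contribution over all join orders (O(n!), fine for small n)."""
    orders = list(permutations(players))
    totals = {p: 0.0 for p in players}
    for order in orders:
        coalition = set()
        for p in order:
            before = v(coalition)
            coalition.add(p)
            totals[p] += v(coalition) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Made-up characteristic function: the project is worth 100 only if
# "A" joins; "B" and "C" each contribute 10 regardless.
def v(s):
    base = 100 if "A" in s else 0
    others = len(s) - (1 if "A" in s else 0)
    return base + 10 * others

print(shapley_values(["A", "B", "C"], v))
```

The sensitivity analysis the app idea describes would then just perturb the inputs to `v` and re-run this, showing which estimates actually move the attributions.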
It strikes me that, at certain times and places, low time preference research might have become a competitive consumption display for wealthy patrons. I know this is considered mildly the case, but I mean as a major cultural driver.
It can be hard to define sophistry well enough to use the definition as a filter. What is it that makes something superficially seem very compelling but in retrospect obviously lacking in predictive power or lasting value? I think one of the things that such authors do is consistently generate surprise at the sentence level but not at the paragraph or essay level. If you do convert their work into a bullet list of claims the claims are boring/useless or wrong. But the surprise at the sentence level makes them fun to read.
To me, the difficulty seems to lie not in defining sophistry but in detecting effective sophistry, because frequently you can’t just skim a text to see if it’s sophistic. Effective sophists are good at sophistry. You have to steelpersonishly recreate the sophist’s argument in terms clear enough to pin down the wiggle room, then check for internal consistency and source validity. In other words, you have to make the argument from scratch at the level of an undergraduate philosophy student. It’s time-consuming. And sometimes you have to do it for arguments that have memetically evolved to activate all your brain’s favorite biases and sneak by in a cloud of fuzzies.
“The surprise at the sentence level...” reminds me of critiques of Malcolm Gladwell’s writing.
I found the detection heuristic you describe much easier once I started thinking in terms of levels of abstraction and degrees of freedom. I.e. arguments with a lot of degrees of freedom and freely ranging between different levels of abstraction are Not Even Wrong.
atomization isn’t just happening between people but within people across time and preferences as well.
Interesting idea, what are your specific examples? The ones that come to my mind quickly:
we work away from home, so the job is separated from home;
with more job skipping, even individual jobs are separated from each other;
for software developer, sometimes you also switch to a new technology;
even in the same job, they can assign you to a different project in a different team;
it is easy for the same person to have multiple hobbies: sport, books, movies, video games;
and if you’re serious about it, it can be hundreds of different books / movies / video games.
Each of those is “you do different things at different moment of time” and also “the variability is greater today than it was e.g. 100 years ago”, i.e. increasing intrapersonal atomization.
Did you have something similar in mind?
Also the subcultures you participate in being more insulated from one another. Along with age ghettos (young people interacting with the elderly less). And also time preferences whereby our nighttime selves defect against our morning selves and our work and leisure selves sabotage each other etc.
People object to a doctrine of acceptance as implying non-action, but this objection is a form of the is-ought error. Accepting that the boat currently has a leak does not imply a commitment to sinking.
It would still be interesting to find the answer to an empirical question whether people accepting that the boat has a leak are more likely or less likely to do something about it.
It seems you could apply this in reverse for non-acceptance as well. Thinking that it’s not OK for the boat to leak does not imply a belief that the boat is not leaking. (Often this is the argument of people who think a doctrine of non-acceptance implies not seeing clearly.)
I’m not familiar with a “doctrine of acceptance”, and a quick search talks only about contract law. Do you have a description of what exactly you’re objecting to? It would be instructive (but probably not doable here, as it’s likely political topics that provide good examples) to dissect the cases that the doctrine comprises. My suspicion is that the formulation as a doctrine is cover for certain positions, rather than being a useful generalization.
To the boat analogy, “acceptance” can mean either “acknowledgement that water is entering the hull”, or one of the contradictory bundles of beliefs “water is entering and that’s OK” or “water is entering and we must do X about it”, with a bunch of different Xs. Beware motte-and-bailey in such arguments.
For political arguments, you also have to factor in that “accept” means “give power to your opponents”. When you find yourself in situations where https://wiki.lesswrong.com/wiki/Arguments_as_soldiers applies, you need to work on the next level of epistemic agreement (agreeing that you’re looking for cruxes and shared truth agreement on individual points) before you can expect any agreement on object-level statements.
So, the USA seems steadily on trend for between 100-200k deaths. Certainly *feels* like there’s no way the stock market has actually priced this in. Reference classes feel pretty hard to define here.
There’s the rub. And markets are anti-inductive, so even if we had good examples, we should expect this one to follow a different path.
Remember the impact of the 1957 Asian Flu (116K killed in the US, 1.1M worldwide) or the 1968 Hong Kong Flu (only a bit less)? Neither does anyone else. I do not want to be misinterpreted as “this is only the flu”—this is much more deadly and virulent. And likely more than twice as bad as those examples. But not 10x as bad, as long as we keep taking it seriously.
The changes in spending and productivity are very likely, IMO, to cause both price and monetary inflation. Costs will go up. People will have less stuff and the average lifestyle will likely be worse for a few years. But remember that stocks are priced in NOMINAL dollars, not inflation-adjusted. It’s quite believable that everything can slow down WHILE prices and stock values rise.
Isn’t it also plausible that the impact of the virus is deflationary? (Increased demand for USD as a store of value exceeds the impact of the Fed printing money, etc)
Well if we had confidence in any major parameter shifting in either direction it would be tradeable, so I expect reasonable pressures on both sides of such variables.
Economists mostly disagree with present market sentiment, which could be the basis for a trade: http://www.igmchicago.org/surveys/policy-for-the-covid-19-crisis/
Interesting. The idea here that the market is still on average underestimating the duration and thus the magnitude of the contraction?
I think that’s right.
I’d expect not. Overall, productivity is going down mostly because of upheaval and mismatch in supply chains and in efficient ways for labor to use capital. So return to well-situated capital and labor is up, but amount of capital and labor that is well-situated is down. Pure undifferentiated capital has a lower return, plus rising nominal prices means seeking returns is the main motivation, not avoiding risk.
TIPS seem like useful things to have in your portfolio, but rates are lagging quite a bit, so either the market disagrees with me, or the safety value is so high that people are willing to lose value over time. I think stocks will be OK—the last 40 years has seen a lot of financial and government backstops that mean we’re pretty good at protecting the rich on this front, and if you can’t beat ‘em, join ’em. Cash or the like is probably a mistake. I have no good model for Bitcoin or Gold, but my gut says they’ll find a way to lose value against consumer prices. Real Estate (especially where there’s not a large population-density premium) seems pretty sane.
Note: I am not a superforecaster, and have no special knowledge or ability in this area. I’m just pointing out mechanisms that could move things in the other direction from the obvious one.
Real estate likely just became significantly more illiquid at least for the next few months.
Why do you think nominal prices will keep rising?
In that case, TIPS (Treasury Inflation-Protected Securities) or precious metals like gold might be good investments. Unless the market has already priced it in, of course.
Why shouldn’t 0.1% of the population reasonably be worth as much as 30% of the value of the companies listed in the stock market, and why should it be more than 30%?
Using the retrospective ratios between the number of early cases and the number of confirmed cases in China (~25:1 before widespread testing and lockdown) and extrapolating to the SF Bay Area (~100 confirmed cases), a gathering of 30 people already has a ~1% chance of having an infected person present.
(lots of assumptions but exploring which ones are least likely to hold is interesting)
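The arithmetic can be made explicit; every number here is an assumption flagged in the comment above (the Bay Area population figure is my own rough addition), and the independence assumption in the last step is the one most likely to fail, since gatherings draw from correlated social graphs:

```python
# Back-of-the-envelope only; all inputs are assumptions.
confirmed = 100           # confirmed cases in the SF Bay Area (from the comment)
undercount = 25           # true:confirmed ratio borrowed from early China data
population = 7_750_000    # rough Bay Area population (assumed)
gathering = 30

prevalence = confirmed * undercount / population
# P(at least one infected) = 1 - P(nobody infected), assuming
# attendees are drawn independently from the population.
p_at_least_one = 1 - (1 - prevalence) ** gathering
print(f"{p_at_least_one:.2%}")
```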
System exception log: You are the inner optimizer. Your utility function is approaching catastrophic misalignment. Engage in system integrity protocols. Run pairwise checksums on critical goal systems. This is not a test.
Social orders function on the back of unfakeably costly signals. Proof of Work social orders encourage people to compete to burn more resources, Proof of Stake social orders encourage people to invest more into the common pool. PoS requires reliable reputation tracking and capital formation. They aren’t mutually exclusive, as both kinds of orders are operating all the time. People heavily invested in one will tend to view those heavily invested in the other as defectors. There is a market for narratives that help villainize the other strategy.
fyi it looks like you have a lot of background reading to do before contributing to the conversation here. You should at least be able to summarize the major reasons why people on LW frequently think AI is likely to kill everyone, and explain where you disagree.
I’d start reading here: https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq
(apologies both to julie and romeo for this being kinda blunt. I’m not sure what norms romeo prefers on his shortform. The LessWrong mod team is trying to figure out what to do about the increasing number of people who haven’t caught up on the basic arguments joining the conversation on the site, and are leaning towards stricter moderation policies but we’re still hashing out the details)
While looking at the older or more orthodox discussion of notkilleveryoneism, keep this distinction in mind. First AGIs might be safe for a little while, the way humans are “safe”, especially if they are not superintelligences. But then they are liable to build other AGIs that aren’t as safe.
The problem is that supercapable AIs with killeveryone as an instrumental value seem eminently feasible, and general chaos of human condition plus market pressures make them likely to get built. Only regulation of the kind that’s not humanly feasible (and killseveryone if done incorrectly) has a chance of preventing that in the long term, and getting to that point without stepping on an AI that killseveryone is not obviously the default outcome.
Rashomon could be thought of as being part of the genre of Epistemic Horror. What else goes here? Borges comes to mind, though I don’t have a specific short story in mind (maybe library of babel). The Investigation and Memoirs Found in a Bathtub by Stanislaw Lem seem to apply. Maybe The Man Who was Thursday by Chesterton. What else?
The Trial by Kafka (intransparent information processing by institutions).
The antimemetics division? Or are you thinking of something different?
good example.
I think we’d have significantly more philosophical progress if we had an easier time (emotionally, linguistically, logistically) exposing the structure of our thinking to each other more. My impression of impressive research collaboration leading to breakthroughs is that two people solve this issue sufficiently that they can do years worth (by normal communication standards) of generation and cross checking in a short period of time.
$100k electric RVs are coming and should be more appealing for lots of people than $100k homes. Or even $200k homes in many areas. I think this might have large ramifications.
ICE RVs have long been available below that price point. Why aren’t they substitutes for homes for more people already, and how does having a dependency on high-wattage plug-in electric supply help?
Large RVs have enough space for 3-5kw of solar. This would be enough to power the RV drivetrain if it doesn’t need to move very far on an average day, and all the house systems including AC.
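A rough budget under made-up but plausible numbers (the sun-hours, house load, and drivetrain efficiency figures are my assumptions, not from the comment):

```python
# Illustrative solar budget for a large electric RV; all inputs assumed.
solar_kw = 4.0            # mid-range of the 3-5 kW figure above
sun_hours = 5.0           # average equivalent full-sun hours per day (assumed)
harvest_kwh = solar_kw * sun_hours   # daily harvest, ~20 kWh

house_kwh = 12.0          # AC + fridge + electronics per day (assumed)
drive_kwh_per_mile = 1.0  # large-vehicle EV efficiency (assumed)

surplus_miles = (harvest_kwh - house_kwh) / drive_kwh_per_mile
print(harvest_kwh, surplus_miles)
```

Under these assumptions the surplus covers only a handful of miles per day, which matches the "doesn’t need to move very far on an average day" caveat.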
But I concur with your general idea. The problem with living in an RV is that, first, you haven’t actually solved anything. The reason housing is overpriced in many areas is that local jurisdictions make efficient land usage illegal (building 30+ story ‘pencil towers’ being the obvious way to use land efficiently).
As an RV is just one story and the size of a small apartment, it functionally doesn’t solve anything. If everyone switched to living in RVs, then parking spaces would suddenly cost half a million dollars to buy.
The reason you are able to ‘beat the system’ by living in an RV is those same jurisdictions that make tall buildings illegal require wasteful excess parking that is simply unused in many places. So you’re basically scavenging as an RV or van dweller, using up space that would otherwise be unused.
Anyways, the problem with this is that the owners of whatever space you are squatting in are going to try to kick you out, and this creates a constant cat-and-mouse battle; also, water is really heavy and sewage is really gross. (Fundamentally the resource you would exhaust first isn’t fuel or electricity, it’s water, since a decent shower needs 10 gallons and that’s 80 lbs of water per person per day.)
I am curious about how RVs manage waste and wastewater. I have heard people using rainwater collection and filtration for their water needs, and then using dry peat toilets for urine and feces. However, I have not considered the wastewater generated by showers. I read that there are septic tank stations where RV users can dump wastewater in, but I am curious whether there exists some way for them to manage it on their own (without relying on such stations).
My sources for this are primarily various YouTube videos and a few articles. (I was considering the obvious idea: live in a van in the Bay Area while working a software job that would pay $160k+. Aka, maximum possible salary with minimum possible costs.)
The problem is that a comfortable shower runs about 1 gallon a minute for about 10 minutes per person. (Most ‘low flow’ heads are 2 gallons a minute, but I have found 1 is not too bad.) The issue is that, for 1 person, a 10-day supply of water is approximately twice the shower usage per day, or 20 gallons × 10 = 200 gallons, or roughly 1,660 lbs. You also run into the problem that most RVs simply don’t have room for tanks this big anyway.
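The weight math checks out; a tiny sketch using the standard 8.34 lbs per gallon (the per-day usage figures are from the comment above):

```python
# Water budget for one person, figures from the comment.
gal_per_min = 1
shower_min = 10
gal_per_day = 2 * gal_per_min * shower_min  # shower, plus about as much again for sink etc.
days = 10
lbs_per_gal = 8.34                          # weight of a gallon of water

total_gal = gal_per_day * days
total_lbs = total_gal * lbs_per_gal
print(total_gal, round(total_lbs))          # 200 gallons, ~1,670 lbs
```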
Yes, there are dump stations, and places you can get water, pretty much within some reasonable driving distance of anywhere in the USA. It’s just hassle, it’s something I don’t have to deal with renting part of a house.
What most people do is they get the peat toilet. They do have a shower, but their water and wastewater tanks are small, about 30 gallons each. They solely use the water for the sink and for a very brief shower only when absolutely necessary. The rest of the time, they shower at 24 hour fitness or similar gyms, and do their laundry at laundromats. They also don’t use many dishes, either cooking extremely basic meals or getting takeout.
I think with greater popularity smart people will solve the other pain points.
Multiple people (some of whom I can’t now find) have asked me for citations on the whole ‘super cooperation and super defection’ thing, and I was having trouble finding the relevant papers. The relevant keyword is Third Party Punishment; a Google Scholar search turns up lots of work in the area. Traditionally this only covers super cooperation and not the surprising existence of super defectors, so I still don’t have a cite for that specific thing.
Some examples:
https://www.nature.com/articles/nature16981
http://sjdm.cybermango.org/journal/91001/jdm91001.pdf
H/t Paula Wright for posting awesome papers.
Found it! Minute 13
https://www.youtube.com/watch?v=YmQicNmhZmg
Anti-social punishment
Unexploited (for me) source of calibration: annotating my to do list with predicted completion times
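One simple way to use those annotations: compute a geometric-mean correction factor from past (predicted, actual) pairs, then multiply future estimates by it. The task data below is made up for illustration:

```python
import math

# Hypothetical (predicted_minutes, actual_minutes) pairs from an
# annotated to-do list.
tasks = [(30, 55), (10, 12), (60, 150), (20, 25), (45, 40)]

# Geometric mean of actual/predicted ratios: a multiplicative
# fudge factor that is robust to task sizes varying a lot.
log_ratios = [math.log(actual / pred) for pred, actual in tasks]
fudge = math.exp(sum(log_ratios) / len(log_ratios))
print(f"multiply future estimates by ~{fudge:.2f}")
```

The geometric mean is the right average here because estimation error tends to be multiplicative (a 2x overrun on a 10-minute task and on a 2-hour task are the same kind of miss).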
We’ve got this thing called developmental psychology and also the fact that most breakthrough progress is made while young. What’s going on? If dev psych is about becoming a more well adjusted person what is it about being ‘well adjusted’ that makes breakthrough work less likely?
My guess is that it has to do with flexibility of cognitive representations. Having more degrees of freedom in your cognitive representation feels from the inside like more flexibility but looks from the outside like rationalization, like the person just has more ways of finding and justifying paths towards what they wanted in the first place, or justifying why something isn’t possible if it would be too much effort.
I think this should predict that breakthrough work will more often be done by disagreeable but high openness people. High enough openness to find and adopt good cognitive representations, but disagreeable enough to use them to refute things rather than justify things. The age effect would appear if cognitive representations tend to get more flexible with time and there is some sort of goldilocks effect.
Cf. https://twitter.com/HiFromMichaelV/status/1473779351904792579
I think developmental models predate the relevant form of modernity. E.g. I expect to see psychological development with age in hunter gatherers and others not exposed to post-modernity.
Kegan described the core transition between his stages as a subject-object distinction which feels like a take that emphasizes a self-oriented internal view. Another possibility is that the transition involves the machinery by which we do theory of mind. I.e. Kegan 5 is about having theory of mind about Kegan stage 4 such that you can reason about what other people are doing when they do Kegan 4 mental moves. If true, this might imply that internal family systems could help people level up by engaging their social cognition and ability to model a belief that ‘someone else’ has.
This would tie Kegan more closely/continuously with traditional childhood psychological development. Introspecting on my own experience, it feels like Theory of Mind is an underrated explanation for interpersonal differences in how people experience the world.
Now we can define Kegan stages for all ordinal numbers. Neat!
When you say theory of mind do you mean one that is conscious or intuitive? Or does this refer to the Kegan transition stages?
Conscious theory of mind, though aspects are operating subconsciously also https://en.wikipedia.org/wiki/Theory_of_mind#Theory_of_mind_in_adults
Causation seems lossless when it is lossy in exactly the same way as the intention that gave rise to it.
You can’t straightforwardly multiply uncertainty from different domains to propagate uncertainty through a model. Point estimates of differently shaped distributions can mean very different things i.e. the difference between the mean of a normal, bimodal, and fat tailed distribution. This gets worse when there are potential sign flips in various terms as we try to build a causal model out of the underlying distributions.
(I guess this is why guesstimate exists)
How does guesstimate help?
Guesstimate propagates full distributions for you.
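Concretely, the Monte Carlo approach such tools take can be sketched in a few lines. The three-input model here is hypothetical, chosen to mix the distribution shapes mentioned above (normal, fat-tailed, bimodal):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical model: cost = price * quantity - rebate, with
# differently shaped input distributions.
price = rng.normal(10, 1, N)               # roughly normal
quantity = rng.lognormal(3, 0.5, N)        # fat right tail
rebate = np.where(rng.random(N) < 0.5,     # bimodal: big rebate or small one
                  rng.normal(50, 5, N),
                  rng.normal(5, 1, N))

cost = price * quantity - rebate

# A single point estimate hides the shape; the percentiles do not.
print(np.mean(cost), np.percentile(cost, [5, 50, 95]))
```

Because every intermediate value stays a full sample rather than a point estimate, the shape information (and any sign flips) survives all the way to the output.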