If you dig deep enough, temperatures should be much cooler than on / near the surface of the earth. (Unless the heat gets very intense; I don’t know enough to rule that out.) How much digging that deep (as opposed to the depths we usually dig to) would cost, though, is another question.
MikkW
(The mentioned ACX post is https://www.astralcodexten.com/p/a-theoretical-case-against-education )
A recent Astral Codex Ten post contained this bit:
Fewer than 50% (ie worse than chance) can correctly answer a true-false question about whether electrons are bigger than atoms.
The linked source seems to indicate that the survey’s expected answer to the question “electrons are smaller than atoms” is “true”. However, I think this is likely based on a faulty understanding of reality, and in any case the question has a trickier nature than the survey or Scott Alexander give it credit for.
There’s a common misconception that electrons (as well as e.g. protons and neutrons) are point particles, that is to say, that they can be said to exist at some precise location, just like a dot on a piece of graph paper.
Even when people talk about the uncertainty principle, they often lean into this misconception by suggesting that the wave function indicates “the probability that the (point-like) particle is found at a given location”.
However, an electron is not a point, but rather a wavefunction which has a wide extent in space. If you were to examine the electron at the tip of my pinky finger, there is in fact a non-zero (but very, very small) part of that electron that can be found at the base of the flag which Neil Armstrong planted on the moon (n.b. I’m not looking up whether it was actually Armstrong who did that), or even all the way out in the core of Alpha Centauri.
We could still try to talk about the size of an electron (and the size of an atom, which is a similar question) by considering the volume that contains 99% of the electron (and likewise a volume that contains 99% of a proton or neutron).
Considering this volume, the largest part of any given atom would be an electron, with the nuclear particles occupying a much smaller volume (something something strong force). In this sense, the size of the atom is in fact coextensive with the size of its “largest” electron, and that electron is by no means smaller than the atom. Many atoms of course contain multiple electrons, and some of these may be “smaller” than the largest electron. However, I do not think the survey had this in mind as the justification for the answer it considered “correct” for the question.
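To make the “volume that contains 99% of the electron” idea concrete, here is a rough sketch for the simplest case, the hydrogen ground state (a 1s orbital), where the enclosed probability has a standard closed form. This is my own illustrative calculation, not something from the survey; the formula and the Bohr radius value are textbook quantities, and the bisection solver is just one simple way to invert the function:

```python
import math

A0_PM = 52.9177  # Bohr radius in picometers (CODATA value, rounded)

def enclosed_probability(r_over_a0: float) -> float:
    """Probability that a hydrogen 1s electron lies within radius r of the
    nucleus, with r in units of the Bohr radius a0.
    Standard result: P = 1 - e^(-x) * (1 + x + x^2 / 2), where x = 2 r / a0."""
    x = 2.0 * r_over_a0
    return 1.0 - math.exp(-x) * (1.0 + x + 0.5 * x * x)

def radius_for(p: float) -> float:
    """Find r / a0 enclosing probability p, by bisection (P is monotonic in r)."""
    lo, hi = 0.0, 50.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if enclosed_probability(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r99 = radius_for(0.99)
print(f"99% radius: {r99:.2f} a0 = {r99 * A0_PM:.0f} pm")
```

The 99% radius comes out to roughly 4.2 Bohr radii (~220 pm), which is comparable to commonly quoted atomic radii; on this way of assigning sizes, the electron cloud really is coextensive with the atom rather than smaller than it.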
I think the appropriate next step for Scott Alexander is to retract the relevant sentence from his post.
“authors will get hurt by people not appreciating their work” is something we just have to accept, even if it’s very harsh
I don’t really agree with this. Sure, some people are going to write stuff that’s not very good, but that doesn’t mean that we have to go overboard on negative feedback, or be stingy with positive feedback.
Humans are animals which learn by reinforcement learning, and the lesson they learn when punished is often “stay away from the thing / person / group that gave the punishment”, much more strongly than “don’t do the thing that made that person / thing / group punish me”.
Whereas when they are rewarded, the lesson is “seek out the circumstances / context that let me be rewarded (and also do the thing that will make it reward me)”. Nobody is born writing amazingly; they have to learn it over time, and it comes more naturally to some, less to others.
I don’t want bad writers (who are otherwise intelligent and intellectually engaged, which describes almost everybody who posts on LW) to learn the lesson “stay away from LW”. I want them to receive encouragement (mostly in forms other than karma, e.g. encouraging comments, or inclusion in the community, etc.), leading them to be more motivated to figure out the norms of LW and the art of writing, and try again, with new learning and experience behind them.
I think the threshold of 0 is largely arbitrary
It’s not all that arbitrary. Besides the fact that it’s one of the simplest numbers, which makes for an easy to remember / communicate heuristic (a great reason that isn’t arbitrary), I actually think it’s quite defensible as a threshold. If I write a post that has a +6 starting karma, and I see it drop down to 1 or 2 (or, yeah, −1), my thought is “that kinda sucked, but whatever, I’ll learn from my mistake and do better next time”.
But if I see it drop down to, say, −5 or −6, my thought starts to become “why am I even posting on this stupid website that’s so full of anti-social jerks?”. And then I have to talk myself down from deleting my account and removing LW and the associated community from my life.
(Not that I think LW is actually so full of jerks. There’s a lot of lovable people here who talk about interesting things, and I believe in LW’s raison d’être, which is why I keep forcing myself to come back)
An Invitation to Refrain from Downvoting Posts into Net-Negative Karma
I would like to make a meta-comment, not directly related to this post.
When I came upon this post, it had a negative karma score. I don’t think it’s good form to have posts receiving negative net karma (except in extreme cases), so I upvoted to provide this with a positive net karma.
It is unpleasant for an author when they receive a negative karma score on a post which they spent time and effort to make (even when that effort was relatively small), much more so than receiving no karma beyond the starting score. This makes the author less likely to post again in the future, which prevents communication of ideas, and keeps the author from getting better at writing. In particular this creates a risk of LessWrong becoming more like an echo chamber (which I don’t think is desirable), and makes the community less likely to hear valuable ideas that go against the grain of the local culture.
A writer who is encouraged to write more will become clearer in their communication, as well as in their thoughts. And they will also get more used to the particular expectations of the culture of LessWrong: norms that have good reason to exist, but which also go against some people’s intuitions or what has worked well for them in other, more “normie” contexts.
Karma serves as a valuable signal to authors about the extent to which they are doing a good job of writing clearly about interesting topics in a way that provides value to members of the community, but the range of non-negative integers provides enough signal. There isn’t much lost in excluding the negative range (except in extreme cases).
Let’s be nice to people who are still figuring out writing: I encourage you to refrain from downvoting them into negative karma.
That statement of fact is indeed true. Would you mind saying more about your thoughts regarding it? There seems to be an unstated implication that this is bad. There is a part of me that agrees with that implication, but there are also parts of me that want to say “so what? that’s irrelevant”. (I feel ⌞explaining what the second set of shards is pointing to would take more time and energy to write up than I am prepared to spend right now⌝)
On the other side, there’s the cost of ~10min of boredom, for every passenger, on every flight. Instead of playing games, watching movies, or reading, people would mostly be talking, looking out the window, or staring off into space.
Tangent: I’m not completely sure that this is actually a cost and not an unintended benefit
Sharing my impression of the comic:
Insofar as it supports sides, I’d say the first part of the meme is criticism of Eliezer
The comic does not parse (to my eyes and probably most people’s) as the author intending to criticize Eliezer at any point
Insofar as it supports sides, I’d say [...] the last part is criticism of those who reject His message
Only in the most strawman way. It basically feels equivalent to me to “They disagree with the guy I like, therefore they’re dumb / unsympathetic”. There’s basically no meat on the bones of the criticism
This subjectively seems to me to be the case.
The board’s statement doesn’t mention them having made such a request to Altman which was denied; that’s a strong signal against things having played out that way.
In the case of the lawyers, this is actually not an example of non-niceness being good for society. The job of a defense attorney who defends a guilty party is not to be a jerk to the prosecutor or to the judge. It is to, as you say, provide the judge with information (including counter-arguments to the other side’s arguments). While his job involves working in an opposite direction from his counterpart, it does not involve being non-nice to his counterpart (and it is indeed most pro-society if the two sides treat each other well / nicely outside of their equal-and-opposite professional duties), and it does not involve being non-nice to the judge, whose job the attorney (as you point out) is actually assisting with. Again, society expects maximum niceness from both attorneys towards the judge outside of ⌞their professional duty to imperfectly represent the truth⌝.
Society expects niceness to be provided from each of these parties to each of the others: {the judge, the defense attorney, the prosecution attorney}
This is important news. I personally desire to be kept updated on this, and LW is a convenient (and appropriate) place to get this information. And I expect other users feel similarly.
What’s different between this and e.g. the developments with Nonlinear, is that the developments here will have a big impact on how the AI field (and by one layer of indirection, the fate of the world) develops.
I am curious to hear people’s opinions, for my reference:
Is epistemic rationality or instrumental rationality more important?
Do you believe epistemic rationality is a requirement for instrumental rationality?
Not directly tied to the core of what you’re saying, but I will note that I am an example of someone who doesn’t strongly prefer such foods warm. I do weakly prefer them warm, as long as they’re not too hot (that’s worse than them being cold, because it hurts / causes minor injury), but I’m happy eating them room temperature or a bit cold (not necessarily cold steak though)
My model says that a lot of the changing occurs by gradient descent, which can be interrupted randomly without causing problems. And there’s enough redundancy that the reorganization part can be interrupted without the core information being removed completely from the brain, and the redundancy will be replenished (one of the copies, I imagine, is “locked” while the reorganization happens, and is reorganized later with another copy “locked”). I also expect this replenishing can happen while awake, though not as effectively as when asleep.
But I will also note that forgetting is a thing that happens, which is indistinguishable from “data corruption”. We’re actually quite good at forgetting things.
Choosing unambiguous pointers to values is likely not possible
I had previously posted thoughts that suggested that the main psychoactive effect of chocolate is due to theobromine (which is chemically similar to caffeine). In the interests of publicly saying “oops”:
Chocolate also contains substantial amounts of caffeine, and caffeine has a stronger effect per gram, so most of the caffeine-adjacent effect of chocolate comes from caffeine rather than theobromine.
Theobromine may still contribute to chocolate hitting differently than other caffeinated substances, though I expect there are also other chemicals that contribute to the effect of chocolate. I assign less than 50% probability to ⌞theobromine being the cause of the bulk of the difference between chocolate’s psychoactive effects vs. pure caffeine⌝
I strong-downvoted this post because sentences like
use these insights to derive two methods for provably avoiding Goodharting
tend to be misleading: they pretend that mathematical precision describes the complex and chaotic nature of the real world, where it shouldn’t be assumed to (see John Wentworth’s comment), and in this case that could potentially lead to very bad consequences if misunderstood.
This post does a good job of laying out compelling arguments for thoughts adjacent to areas I’ve previously already enjoyed thinking about.
For the record, this sentence popped into my head while reading this: “Wait, but what if I’m Omega-V, and [Valentine] is a two boxer?”
(Edit: the context for this thought is my previous thoughts from having read other posts by Valentine, which I find quite elucidating, but which have also somehow left me feeling a bit creeped out; that being said, my opinion about this post itself is strongly positive)