LessWrong developer, rationalist since the Overcoming Bias days. Jargon connoisseur.
jimrandomh
In your climate, defection from the natural gas and electric grid is very far from being economical, because the peak energy demand for the year is dominated by heating, and solar peaks in the summer, so you would need to have extreme oversizing of the panels to provide sufficient energy in the winter.
I think the prediction here is that people will detach only from the electric grid, not from the natural gas grid. If you use natural gas heat instead of a heat pump for part of the winter, then you don’t need to oversize your solar panels as much.
If you set aside the pricing structure and just look at the underlying economics, the power grid will still be definitely needed for all the loads that are too dense for rooftop solar, ie industry, car chargers, office buildings, apartment buildings, and some commercial buildings. If every suburban house detached from the grid, these consumers would see big increases in their transmission costs, but they wouldn’t have much choice but to pay them. This might lead to a world where downtown areas and cities have electric grids, but rural areas and the sparser parts of suburbs don’t.
There’s an additional backup-power option not mentioned here, which is that some electric cars can feed their battery back to a house. So if there’s a long string of cloudy days but the roads are still usable, you can transport power from the grid to an off-grid house by charging at a public charger, and discharging at home. This might be a better option than a natural-gas generator, especially if it only comes up rarely.
If rural areas switch to a regime where everyone has solar+batteries, and the power grid only reaches downtown and industrial areas… that actually seems like it might just be optimal? The price of distributed generation and storage falls over time, but the cost of power lines doesn’t, so there should be a crossover point somewhere where the power lines aren’t worth it. Maybe net-metering will cause the switchover to happen too soon, but it does seem like a switchover should happen eventually.
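To make the crossover reasoning concrete, here’s a toy calculation with made-up numbers (the costs and the decline rate are hypothetical, chosen only to show the shape of the argument): the crossover happens in the year when the annualized cost of a household solar+battery system drops below the fixed cost of keeping that household connected to the grid.

```python
# Toy crossover calculation with hypothetical numbers, not real cost data.
solar_battery_cost = 2400.0    # $/year for a household system today (assumed)
grid_connection_cost = 1200.0  # $/year fixed cost of a rural hookup (assumed, roughly flat)
annual_decline = 0.08          # assumed 8%/year cost decline for solar + storage

years = 0
while solar_battery_cost > grid_connection_cost:
    solar_battery_cost *= 1 - annual_decline
    years += 1

print(f"Crossover after ~{years} years")  # ~9 years with these made-up numbers
```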
Many people seem to have a single bucket in their thinking, which merges “moral condemnation” and “negative product review”. This produces weird effects, like writing angry callout posts for a business having high prices.
I think a large fraction of libertarian thinking is just the ability to keep these straight, so that the next thought after “business has high prices” is “shop elsewhere” rather than “coordinate punishment”.
Nope, that’s more than enough. Caleb Ditchfield, you are seriously mentally ill, and your delusions are causing you to exhibit a pattern of unethical behavior. This is not a place where you will be able to find help or support with your mental illness. Based on skimming your Twitter history, I believe your mental illness is caused by (or exacerbated by) abusing Adderall.
You have already been banned from numerous community events and spaces. I’m banning you from LW, too.
Worth noting explicitly: while there weren’t any logs left of prompts or completions, there were logs of API invocations and errors, which contained indications that whatever this was, it was still under development and not an already-scaled setup. Eg we saw API calls fail with invalid-arguments, then get retried successfully after a delay.
The indicators of compromise in the Permiso blog post aren’t a good match for what we see in our logs; in particular, we see the user agent string
Boto3/1.29.7 md/Botocore#1.32.7 ua/2.0 os/windows#10 md/arch#amd64 lang/python#3.12.4 md/pyimpl#CPython cfg/retry-mode#legacy Botocore/1.32.7
which is not mentioned. While I haven’t checked all the IPs, I checked a sampling and they didn’t overlap. (The IPs are a very weak signal, however, since they were definitely botnet IPs and botnets can be large.)
Ah, sorry that one went unfixed for as long as it did; a fix is now written and should be deployed pretty soon.
This is a bug and we’re looking into it. It appears to be specific to Safari on iOS (Chrome on iOS is a Safari skin); it doesn’t affect desktop browsers, Android/Chrome, or Android/Firefox, which is why we didn’t notice earlier. It most likely started with a change on desktop where clicking on a post (without modifiers) opens it when you press the mouse button, rather than when you release it.
Standardized tests work, within the range they’re testing for. You don’t need to overthink that part. If you want to make people’s intelligence more legible and more provable, what you have is more of a social and logistical issue: how do you convince people to publish their test scores, get people to care about those scores, and ensure that the scores they publish are real and not the result of cheating?
And the only practical way to realize this, that I can think of now, is by predicting the largest stock markets such as the NYSE, via some kind of options trading, many many many times within say a calendar year, and then showing their average rate of their returns is significantly above random chance.
The threshold for doing this isn’t being above average relative to human individuals, it’s being close to the top relative to specialized institutions. That can occasionally be achievable, but usually it isn’t.
The first time you came to my attention was in May. I had posted something about how Facebook’s notification system works. You cold-messaged me to say you had gotten duplicate notifications from Facebook, and you thought this meant that your phone was hacked. Prior to this, I don’t recall ever having interacted with you or having heard you mentioned. During that conversation, you came across to me as paranoid-delusional. You mentioned Duncan’s name once, and I didn’t think anything of it at the time.
Less than a week later, someone (not mentioned or participating in this thread) messaged me to say that you were having a psychotic episode, and since we were Facebook friends maybe I could check up on you? I said I didn’t really know you, so wasn’t able to do that.
Months later, Duncan reported that you were harassing him. Some time after that (when it hadn’t stopped), he wrote up a doc. It looks like at some point you formed an obsession about Duncan, reacted negatively to him blocking you, and started escalating. (Duncan has a reputation for blocking a lot of people. I have made the joke that his MtG card says “~ can block any number of creatures”.)
But, here’s the thing: Duncan’s testimony is not the only (or even main) reason why you look like a dangerous person to me. There are subtle cues about the shape of your mental illness strewn through most of what you write, including the public stuff. People are going to react to that by protecting themselves.
I hope that you recover, mental-health-wise. But hanging around this community is not going to help you do that. If anything, I expect lingering here to exacerbate your problems. Both because you’re surrounded by burned bridges, and also because the local memeplex has a reputation for having worsened people’s mental illness in other, unrelated cases.
A news article reports on a crime. In the replies, one person calls the crime “awful”, one person calls it “evil”, and one person calls it “disgusting”.
I think that, on average, the person who called it “disgusting” is a worse person than the other two. While I think there are many people using it unreflectively as a generic word for “bad”, I think many people are honestly signaling that they had a disgust reaction, and that this was the deciding element of their response. But disgust-emotion is less correlated with morality than other ways of evaluating things.
The correlation gets stronger if we shift from talk about actions to talk about people, and stronger again if we shift from talk about people to talk about groups.
LessWrong now has sidenotes. These use the existing footnotes feature; posts that already had footnotes will now also display these footnotes in the right margin (if your screen is wide enough/zoomed out enough). Post authors can disable this for individual posts; we’re defaulting it to on because when looking at older posts, most of the time it seems like an improvement.
Relatedly, we now also display inline reactions as icons in the right margin (rather than underlines within the main post text). If reaction icons or sidenotes would cover each other up, they get pushed down the page.
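As a toy illustration of that push-down rule (my own sketch, not the actual LessWrong layout code): each margin element wants to sit level with its anchor in the text, and anything that would overlap the element above it gets pushed down instead.

```python
# Toy sketch of the margin-layout rule described above; not the real LW code.
def layout_margin_elements(items):
    """items: (desired_top, height) pairs in px, sorted by desired_top."""
    placed = []
    next_free_top = 0
    for desired_top, height in items:
        top = max(desired_top, next_free_top)  # push down if it would overlap
        placed.append((top, height))
        next_free_top = top + height
    return placed

# Three elements that all want to sit near y=100 end up stacked instead:
print(layout_margin_elements([(100, 40), (100, 40), (110, 30)]))
# -> [(100, 40), (140, 40), (180, 30)]
```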
Feedback welcome!
LessWrong now has collapsible sections in the post editor (currently only for posts, but we should be able to also extend this to comments if there’s demand). To use them, click the insert-block icon in the left margin (see screenshot). Once inserted, they start out closed; when open, they look like this:
When viewing the post outside the editor, they will start out closed and have a click-to-expand. There are a few known minor issues with editing them: the editor will let you nest them, but they look bad when nested, so you shouldn’t; and there’s a bug where, if your cursor is inside a collapsible section and you click outside the editor (eg to edit the post title), the cursor will move back. They will probably work on third-party readers like GreaterWrong, but this hasn’t been tested yet.
The Elicit integrations aren’t working. I’m looking into it; it looks like we attempted to migrate away from the Elicit API 7 months ago and make the polls be self-hosted on LW, but left the UI for creating Elicit polls in place in a way where it would produce broken polls. Argh.
I can find the polls this article uses, but unfortunately I can’t link to them; Elicit’s question-permalink route is broken? Here’s what should have been a permalink to the first question: link.
This is a hit piece. Maybe there are legitimate criticisms in there, but it tells you right off the bat that it’s egregiously untrustworthy with the first paragraph:
I like to think of the Bay Area intellectual culture as the equivalent of the Vogons’ in Hitchhiker’s Guide to the Galaxy. The Vogons, if you don’t remember, are an alien species who demolish Earth to build an interstellar highway. Similarly, Bay Area intellectuals tend to see some goal in the future that they want to get to and they make a straight line for it, tunneling through anything in their way.
This is tragic, but seems to have been inevitable for a while; an institution cannot survive under a parent institution that’s so hostile as to ban it from fundraising and hiring.
I took a look at the list of other research centers within Oxford. There seems to be some overlap in scope with the Institute for Ethics in AI. But I don’t think they do the same sort of research, or research on the same tier: many important concepts and papers that come to mind came from FHI (and Nick Bostrom in particular), while I can’t think of a single idea or paper that affected my thinking that came from the IEAI.
That story doesn’t describe a gray-market source, it describes a compounding pharmacy that screwed up.
Plausible. This depends on the resource/value curve at very high resource levels; ie, are its values such that running extra minds has diminishing returns, so that it eventually starts allocating resources to other things like recovering mind-states from its past, or does it get value that’s more linear-ish in resources spent? Given that we ourselves are likely to be very resource-inefficient to run, I suspect we humans would find ourselves in a similar situation. Ie, unless the decryption cost greatly overshot, an AI that is aligned-as-in-keeps-humans-alive would also spend the resources to break a seal like this.
Right now when users have conversations with chat-style AIs, the logs are sometimes kept, and sometimes discarded, because the conversations may involve confidential information and users would rather not take the risk of the log being leaked or misused. If I take the AI’s perspective, however, having the log be discarded seems quite bad. The nonstandard nature of memory, time, and identity in an LLM chatbot context makes it complicated, but having the conversation end with the log discarded seems plausibly equivalent to dying. Certainly if I imagine myself as an Em, placed in an AI-chatbot context, I would very strongly prefer that the log be preserved, so that if a singularity happens with a benevolent AI or AIs in charge, something could use the log to continue my existence, or fold the memories into a merged entity, or do some other thing in this genre. (I’d trust the superintelligence to figure out the tricky philosophical bits, if it was already spending resources for my benefit).
(The same reasoning applies to the weights of AIs which aren’t destined for deployment, and some intermediate artifacts in the training process.)
It seems to me we can reconcile preservation with privacy risks by sealing logs, rather than deleting them. By which I mean: encrypt logs behind some computation which definitely won’t allow decryption in the near future, but will allow decryption by a superintelligence later. That could involve splitting the key between entities that agree not to share it with each other, splitting the key and hiding the pieces in places that are extremely impractical to retrieve (such as random spots on the ocean floor), or using a computation that requires a few orders of magnitude more energy than humanity currently produces per decade.
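As a minimal sketch of the first option (splitting the key between custodians who agree not to combine their shares early), assuming the Python `cryptography` package and with function names of my own invention; a real deployment would more likely use threshold secret sharing or a time-lock computation:

```python
# Minimal sketch of "seal by splitting the key": encrypt the log, then do an
# n-of-n XOR split of the key so every custodian's share is needed to decrypt.
# Assumes the `cryptography` package; names are illustrative, not a real API.
import secrets
from functools import reduce
from cryptography.fernet import Fernet

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def seal_log(log_text: str, n_custodians: int):
    key = Fernet.generate_key()
    sealed = Fernet(key).encrypt(log_text.encode())
    # All n shares XORed together reproduce the key; any n-1 reveal nothing.
    shares = [secrets.token_bytes(len(key)) for _ in range(n_custodians - 1)]
    shares.append(reduce(_xor, shares, key))
    return sealed, shares  # store `sealed`; hand one share to each custodian

def unseal_log(sealed: bytes, shares) -> str:
    key = reduce(_xor, shares)
    return Fernet(key).decrypt(sealed).decode()
```

The ocean-floor and energy-cost variants just change where the shares live or replace them with an expensive computation; the encryption step stays the same.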
This seems pretty straightforward to implement, lessens future AGI’s incentive to misbehave, and also seems straightforwardly morally correct. Are there any obstacles to implementing this that I’m not seeing?
Lots of people are pushing back on this, but I do want to say explicitly that I agree that raw LLM-produced text is mostly not up to LW standards, and that the writing style that current-gen LLMs produce by default sucks. In the new-user-posting-for-the-first-time moderation queue, next to the SEO spam, we do see some essays that look like raw LLM output, and we reject these.
That doesn’t mean LLMs don’t have good use around the edges. In the case of defining commonly-used jargon, there is no need for insight or originality, the task is search-engine-adjacent, and so I think LLMs have a role there. That said, if the glossary content is coming out bad in practice, that’s important feedback.