Milton Friedman teaspoon joke
Total tangent: this article from 2011 attributes the quote to a bunch of people, and finds an early instance in a 1901 newspaper article.
Law question: would such a promise among businesses, rather than an agreement mandated by / negotiated with governments, run afoul of laws related to monopolies, collusion, price gouging, or similar?
I like Yudkowsky’s toy example of tasking an AGI with copying a single strawberry on a molecular level, without destroying the world as a side effect.
You’re making a very generous offer of your time and expertise here. However, to me your post still feels way, way more confusing than it should be.
Suggestions & feedback:
Title: “Get your math consultations here!” → “I’m offering free math consultations for programmers!” or similar.
Or something else entirely. I’m particularly confused about how your title (math consultations) leads into the rest of the post (debuggers and programming).
First paragraph: As your first sentence, state your actual, concrete offer (something like “You screenshare as you do your daily tinkering, I watch for algorithmic or theoretical squiggles that cost you compute or accuracy or maintainability.” from your original post, though ideally with much less jargon). Also state your target audience: math people? Programmers? AI safety people? Others?
“click the free https://calendly.com/gurkenglas/consultation link” → What you mean is: “click this link for my free consultations”. What I read is a dark pattern à la: “this link is free, but the consultations are paid”. Suggested phrasing: something like “you can book a free consultation with me at this link”
Overall writing quality
Assuming all your users would be as happy as the commenters you mentioned, it seems to me like the writing quality of these posts of yours might be several levels below your skill as a programmer and teacher. In which case it’s no wonder that you don’t get more uptake.
Suggestion 1: feed the post into an LLM and ask it for writing feedback.
Suggestion 2: imagine you’re a LW user in your target audience, whoever that is, and you’re seeing the post “Get your math consultations here!” on the LW homepage feed, written by an unknown author. Would people in your target audience understand what your post is about, enough to click on the post if they would benefit from it? Then once they click and read the first paragraph, would they understand what it’s about and click on the link if they would benefit from it? Etc.
Are you saying that the 1 aligned mind design in the space of all potential mind designs is an easier target than the subspace composed of mind designs that do not destroy the world?
I didn’t mean that there’s only one aligned mind design, merely that almost all (99.999999...%) conceivable mind designs are unaligned by default, so the only way to survive is if the first AGI is designed to be aligned; there’s no hope that a random AGI just happens to be aligned. And since we’re heading for the latter scenario, it would be very surprising to me if we managed to design a partially aligned AGI and lose that way.
No, because the “you” who can do the asking (the people in power) are themselves misaligned with the 1 alignment target that perfectly captures all our preferences.
I expect the people in power are worrying about this way more than they worry about the overwhelming difficulty of building an aligned AGI in the first place. (Case in point: the manufactured AI race with China.) As a result I expect they’ll succeed at building a by-default-unaligned AGI and driving themselves and us to extinction. So I’m not worried about instead ending up in a dystopia ruled by some government or AI lab owner.
Have donated $400. I appreciate the site and its team for all they’ve done over the years. I’m not optimistic about the future wrt AI (I’m firmly on the AGI doom side), but I nonetheless think that LW made a positive contribution on the topic.
Anecdote: In 2014 I was on a LW Community Weekend retreat in Berlin which Habryka either organized or at which he gave a whole bunch of rationality-themed presentations. My main impression of him was that he was the most agentic person in the room by far. Based on that experience I fully expected him to eventually accomplish some arbitrary impressive thing, though it still took me by surprise to see him specifically move to the US and eventually become the new admin/site owner of LW.
Recommendation: make the “Last updated” timestamp on these pages way more prominent, e.g. by moving them to the top below the page title. (Like what most news websites nowadays do for SEO, or like where timestamps are located on LW posts.) Otherwise absolutely no-one will know that you do this, or that these resources are not outdated but are actually up-to-date.
The current timestamp location is so unusual that I only noticed it by accident, and was in fact about to write a comment suggesting you add a timestamp at all.
The frustrating thing is that in some ways this is exactly right (humanity is okay at resolving problems iff we get frequent feedback) and in other ways exactly wrong (one major argument for AI doom is that you can’t learn from the feedback of having destroyed the world).
The implication is that you absolutely can’t take Altman at his bare word, especially when it comes to any statement he makes that, if true, would result in OpenAI getting more resources. Thus you need to a) apply some interpretative filter to everything Altman says, and b) listen to other people instead who don’t have a public track record of manipulation like Altman.
My current model is that ML experiments are bottlenecked not on software-engineer hours, but on compute. See Ilya Sutskever’s claim here.
That claim is from 2017. Does Ilya even still endorse it?
I guess we could in theory fail and only achieve partial alignment, but that seems like a weird scenario to imagine. Like shooting for a 1 in big_number target (= an aligned mind design in the space of all potential mind designs) and then only grazing it. How would that happen in practice?
And what does it even mean for a superintelligence to be “only misaligned when it comes to issues of wealth distribution”? Can’t you then just ask your pretty-much-perfectly-aligned entity to align itself on that remaining question?
The default outcome is an unaligned superintelligence singleton destroying the world and not caring about human concepts like property rights. Whereas an aligned superintelligence can create a far more utopian future than a human could come up with, and cares about capitalism and property rights only to the extent that that’s what it was designed to care about.
So I indeed don’t get your perspective. Why are humans still appearing as agents or decision-makers in your post-superintelligence scenario at all? If the superintelligence for some unlikely reason wants a human to stick around and to do something, then it doesn’t need to pay them. And if a superintelligence wants a resource, it can just take it, no need to pay for anything.
Another issue is the Eternal September issue where LW membership has grown a ton due to the AI boom (see the LW site metrics in the recent fundraiser post), so as one might expect, most new users haven’t read the old stuff on the site. There are various ways in which the LW team tries to encourage them to read it, but the problem nevertheless remains.
I guess part of the issue is that in any discussion, people don’t use the same terms in the same way. Some people describe present-day AI capabilities with terms like “superintelligent in a specific domain”. That’s not how I understand the term, but I understand where the idea to call it that comes from. But of course such mismatched definitions make discussions really hard. Seeing stuff like that makes it very understandable why Yudkowsky wrote the LW Sequences...
Anyway, here is an example of a recent shortform post which grapples with the same issue that vague terms are confusing.
I appreciate the link and the caveats!
Re: “the total number of pages does sometimes decrease”, it’s not clear to me that that’s the case. These plots show “number of pages published annually”, after all. And even if that number is an imperfect proxy for the regulatory burden added in that year, what we actually care about is not the regulatory burden of any single year, but the cumulative regulatory burden. That cannot possibly have stayed flat for 2000~2012, right? So that can’t be what the final plot in the pdf is saying.
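To make the distinction concrete, here is a minimal sketch (with made-up page counts, purely illustrative) of why a roughly flat “pages published annually” series still implies a growing cumulative burden as long as little gets repealed:

```python
# Illustrative only: hypothetical annual page counts, not real data.
annual_pages = [800, 750, 900, 850]  # pages of regulation published each year

cumulative = []
total = 0
for pages in annual_pages:
    total += pages          # old pages stay on the books unless repealed
    cumulative.append(total)

print(annual_pages)  # roughly flat year over year
print(cumulative)    # [800, 1550, 2450, 3300] -- keeps growing regardless
```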
I don’t think elite behavior is at all well-characterized by assuming they’re trying to strike a sensible tradeoff here. For example, there are occasionally attempts to cut outdated regulations, and these never find any traction (e.g. the total number of pages of legislation only grows, never shrinks). Which isn’t particularly surprising insofar as the power of legislatures is to pass new legislation, so removing old legislation doesn’t seem appealing at all.
Sure, but they should instead be surprised by large-scale failures of non-libertarian policy elsewhere, the archetypal case being NIMBY policies which restrict housing supply and thus push up housing prices. Or perhaps an even clearer case is rent control policies pushing up prices.
As mentioned at the top, this video game is inspired by the board game Zendo, which is a bit like what you propose, and which I’ve seen played at rationalist meetups.
Zendo is a game of inductive logic in which one player, the Moderator, creates a secret rule that the rest of the players try to figure out by building and studying configurations of the game pieces. The first player to correctly guess the rule wins.
For games with similar themes, Wikipedia also suggests the games Eleusis (with standard playing cards) and Penultima (with standard chess pieces).
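For a flavor of how the inductive-logic loop works, here is a minimal sketch of a Zendo-like round; the integer configurations, the secret rule, and the guess below are all made-up stand-ins of my own rather than anything from the actual game:

```python
import random

# Hypothetical stand-ins: configurations are just lists of integers,
# and the Moderator's secret rule is a predicate over such a list.
def secret_rule(config):
    return sum(config) % 2 == 0  # "the pieces sum to an even number"

def play_round(guessed_rule, trials=100):
    """Check a player's guessed rule against the secret one on random configurations."""
    for _ in range(trials):
        config = [random.randint(1, 9) for _ in range(random.randint(1, 5))]
        if guessed_rule(config) != secret_rule(config):
            print("Counterexample:", config)  # Moderator marks it, players keep studying
            return False
    print("No counterexample found -- the guess looks right!")
    return True

# A player guesses "the last piece is even"; the loop almost surely finds a refutation.
play_round(lambda config: config[-1] % 2 == 0)
```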
Yes, while there are limits to what kinds of tasks can be delegated, web hosting is not exactly a domain lacking in adequate service providers.
Yeah. Though as a counterpoint, something I picked up from (IIRC) Scott Alexander or Marginal Revolution is that the FDA is not great about accepting foreign clinical trials, or demands that they always be supplemented by trials on Americans, or similar.