Unrelated to this particular post: I’ve seen a couple of people mention that your recent ideas are somewhat scattered and unorganized, and in need of some unification. You’ve put out a lot of content here, but I think people would definitely appreciate some synthesis work, as well as direct engagement with established ideas about these subproblems as a way of grounding yours a bit more. “Sixteen main ideas” in particular probably needs synthesis or merging.
Link: The Cook and the Chef: Musk’s Secret Sauce—Wait But Why
To correct one thing here: the Bussard ramjet has drag effects. It can only get you to about 0.2c, which makes it pretty pointless to bother with if you have that kind of command over fusion power.
I would not call this rudimentary! This is excellent. I’ll be using this.
Didn’t someone also do this for each post in the sequences a while back?
There’s been quite a bit of talk about partitioning channels, and the #lesswrong sidechannels sort of handle it, but they’re nowhere near as good. I’m starting to have ideas for a Slack-style interface in a terminal… but that would be a large project I don’t have time for.
Alright, I’ll be a little clearer. I’m looking for someone’s mixed deck, covering multiple topics, and I’m looking at the structure of the cards: things like section length, amount of context, title choice, amount of topic overlap, and number of cards per large-scale concept.
I’m really not looking for a shared deck of easily transferable information like the NATO alphabet; I’m looking for how other people go about the process of creating cards for new knowledge.
I’m missing a big chunk of intuition about learning in general, and this is part of how I want to fix that. I also don’t expect people to really be able to answer my questions about it, and I don’t expect that I’ve nailed down every specification, which is why I wanted the example deck.
Edit: So I can’t just pull a deck off AnkiWeb, because I want the kind of decks nobody puts on AnkiWeb.
Is anyone willing to share an Anki deck with me? I’m trying to start using it. I’m running into a problem likely derived from having never, uh, learned how to learn. I look through a book or a paper or an article, and I find it informative, and I have no idea what parts of it I want to turn into cards. It just strikes me as generically informative. I think that learning this by example is going to be by far the easiest method.
Does anyone have, or know anyone with, a magnetic finger implant who can compare experiences? I’ve been considering the implant. If a magnetic ring isn’t much weaker, that would be a good alternative.
So, to my understanding, doing this in 2015 instead of 2018 is more or less exactly the sort of thing people mean when they talk about a large-scale necessity to “get there first”. This is what it looks like to push for the kind of first-mover advantage everyone knows MIRI needs in order to succeed.
It seems like a few people I’ve talked to missed that connection: they support the requirement for a first-mover advantage, and they support a MIRI-influenced value alignment research community, but then they perceive you as asking for more money than you need! Making an effort to remind people more explicitly why MIRI needs to grow quickly may be valuable. Link the effect of ‘fundraiser’ to the cause of ‘value learning first-mover’.
That’s a pretty large question. I’d love to, but I’m not sure where to begin, so I’ll start by describing my experience in broad strokes.
Whenever I do anything, I quickly acclimate to it. It’s very difficult to remember that things I know how to do aren’t trivial for other people. It’s way more complex than that… but I’ve been sitting on this text box for a few hours. So, ask a more detailed question?
This month (and a half), I dropped out of community college, raised money as investment in what I’ll do in the future, moved to Berkeley, got very involved in the rationalist community here, smashed a bunch of impostor syndrome, wrote a bunch of code, got into several extremely promising and potentially impactful projects, read several MIRI papers and kept being urged to involve myself with their research further.
I took several levels of agency.
Hi. I don’t post much, but if anyone who knows me can vouch for me here, I would appreciate it.
I have a bit of a Situation, and I would like some help. I’m fairly sure it will be positive utility, not just positive fuzzies. Doesn’t stop me feeling ridiculous for needing it. But if any of you can, I would appreciate donations, feedback, or anything else over here: http://www.gofundme.com/usc9j4
I’ve begun to notice discussion of AI risk in more and more places over the last year, and many of them reference Superintelligence. It doesn’t seem like confirmation bias or the Baader-Meinhof effect, not really; it’s quite an unexpected change. Have others encountered a similar broadening in the sorts of people they hear talking about this?
Typical Mind Fallacy. Allows people to actually cooperate for once. One of the things I’ve been thinking about is how one person’s fundamental mind structure is interpreted by another as an obvious status grab. I want humans to better approximate Aumann’s Agreement Theorem. Solve the coordination problem, solve everything.
Determining the language to use is a classic case of premature optimization. Whatever the case, it will have to be provably free of ambiguities, which leaves us with programming languages. In addition, in terms of the math of FAI, we’re still at the “is this Turing complete” stage of development, so it doesn’t really matter yet. I guess one consideration is that the algorithm design is going to take far more time and effort than the programming, and the program has essentially no room for bugs (Corrigibility is an effort to make it easier to test an AI without it resisting). So in that sense, it could be argued that the lower-level the language, the better.
Directly programming human values into an AI has always been the worst option, partly for the reason you give. In addition, the religious concept you describe breaks trivially when two different beings have different or conflicting utility functions, so acting as if they were the same leads to a bad outcome. A better option is to construct a scheme in which the smarter the AI gets, the better it approximates human values, using its own intelligence to determine them, as in coherent extrapolated volition.
I think I see the problem. Tell me what your response to this article is. Do you see messy self-modification in pursuit of goals, at the expense of a bit of epistemic rationality, as a valid option to take? Is Dark == Bad? In your post, you say that it is generally better not to believe falsehoods. My response is that things whose outcomes depend on what you expect to happen are the exception to that heuristic.
Life outcomes are determined in large part by a background you can’t change, but expecting to be able to change them will lead you to ignore fewer opportunities to get out of that situation. This post about luck is also relevant.
I can’t say much about the consequences of this, but it appears to me that both democracy and futarchy are efforts to more closely approximate something along the lines of a CEV for humanity. They have the same problems, in fact: how do you reconcile the mutually exclusive goals of the people involved?
In any case, that isn’t directly relevant, but linking futarchy with AI caused me to notice that. Perhaps that sort of optimization style, of getting at what we “truly want” once we’ve cleared up all the conflicting meta-levels of “want-to-want”, is something that the same sorts of people tend to promote.
Nitpick: BTC can be worth effectively less than $0 if you buy some and the price then drops. But in a Pascalian scenario, that’s a rounding error.
More generally, the difference between a Mugging and a Wager is that the Wager has a low opportunity cost for a low chance of a large positive outcome, while the Mugging is about avoiding a negative outcome. So unless you’ve bet all the money you have on Bitcoin, this maps much better to a Wager scenario than to a Mugging. It plays out in the common reasoning of “There’s a low chance of this becoming extremely valuable. I will buy a small amount corresponding to the EV of that chance, just in case”; there’s a rough sketch of that calculation at the end of this comment.
Edit: I may have misread, but just to make sure, you were making the gold comparison as a way to determine the scale of the mentioned large positive outcome, correct? And my jump to individual investing wasn’t a misinterpretation?
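To make that Wager-shaped reasoning concrete, here’s a minimal expected-value sketch. All the numbers (the probability, the payoff multiple, the stake) are hypothetical placeholders chosen for illustration, not claims about Bitcoin:

```python
# Toy Wager-style EV sketch -- every number here is a hypothetical placeholder.
p_big = 0.01            # assumed small chance the asset becomes extremely valuable
payoff_multiple = 500   # assumed payoff multiple in that unlikely scenario
stake = 100.0           # dollars risked (the "small amount")

# Rough model: lose the stake in the common case, multiply it in the rare one.
expected_upside = p_big * (stake * payoff_multiple) + (1 - p_big) * 0.0
print(f"stake: ${stake:.0f}, expected upside: ${expected_upside:.0f}")
# With these numbers the expected upside (~$500) exceeds the $100 at risk:
# low opportunity cost, low probability, large positive outcome -- a Wager, not a Mugging.
```

The point is only the shape of the trade-off; plug in your own probability and payoff estimates.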
The entire point of “politics is the mind-killer” is that no, even this place is not immune to tribalistic idea-warfare politics; the politics just get more complicated. And the stopgap solution, until we figure out a way around that tendency (which doesn’t appear to be reliably avoidable), is to sandbox the topic and keep it limited. You should have a high prior that a belief that you can be “strong” is the Dunning-Kruger effect talking.
There are a few people who could respond who are both heavily involved in CFAR and have been to Landmark. I don’t think Alyssa was intending for a response to be well-justified data, just an estimate, which there is enough information for.