Tenoke
He comes across as pretty unsympathetic and stubborn.
Did any of your views of him change?
I’m sympathetic to some of your arguments, but even if we accept that the current paradigm will lead us to an AI that is pretty similar to a human mind, I’m already not very optimistic that a scaled-up, random, almost-human mind is a great outcome even in the best case. I simply disagree where you say this:
>For example, humans are not perfectly robust. I claim that for any human, no matter how moral, there exist adversarial sensory inputs that would cause them to act badly. Such inputs might involve extreme pain, starvation, exhaustion, etc. I don’t think the mere existence of such inputs means that all humans are unaligned.
Humans aren’t that aligned at the extremes, and the extremes are what matter when we’re talking about the smartest entity making every important decision about everything.
Also, your general arguments that the current paradigms are not that bad are reasonable, but again, I think our situation is a lot closer to all or nothing. If we get pretty far with RLHF or whatever and scale the model up until it’s extremely smart, and thus eventually making every decision of consequence, then unless you got alignment nearly perfect, the chance that the remaining problematic parts screw us over seems uncomfortably high to me.
I can’t even get a good answer to “What’s the GiveWell of AI Safety?” so that I can quickly donate to a reputable, widely agreed-upon option without much thought; at best I get old lists of a ton of random small orgs and give up. I’m not very optimistic that ordinary, less convinced people who want to help are having an easier time.
It seems quite different. The main argument in that article is that climate change wouldn’t make the lives of readers’ children much worse or shorter, and that’s not the case for AI.
Do you have any evidence for this?
My prior is that other things are less effective, and you need evidence to show they are more effective, not vice versa.
Not all EAs are longtermists.
Of course. I’m saying it doesn’t even get to make that argument, which can sometimes muddy the waters enough to make some odd-seeming causes look at least plausibly effective.
I’m impressed by how modern EAs manage to spin any cause into supposedly being EA.
There’s just no way that things like this are remotely as effective as, say, GiveWell causes (it wouldn’t even meet a much lower bar), and it barely even has any longtermist points in its favor that could make me see why there’s at least a chance it’s worth it.
EA’s whole brand is massively diluted by all these causes, and I don’t think they are remotely as effective as the other places your money could go, nor that they help the general message.
It’s like people get into EA and realize it’s a good idea, but then they want to participate in the community and not just donate, so everyone tries to come up with new, clearly ineffective (compared to the alternatives) causes and spin them as EA.
While NVDA is naively the most obvious play (the vast majority of GPU-based AI systems use their chips), I fail to see why you’d expect it to outperform the market, at least in the medium term. Even if you don’t believe in the EMH, I assume you acknowledge things can be more or less priced in? Well, NVDA is such an obvious choice that all the main arguments for it do seem to be priced in, which has helped get it to a P/E ratio of 55.
I also don’t see OpenAI making a huge dent in MSFT’s numbers anytime soon. Almost all of MSFT’s price is going to be determined by the rest of their business. Quick googling suggests revenue of 3m for OpenAI, and 168b total for MSFT, for 2021. Even if OpenAI were already 100 times larger, I still wouldn’t see how a bet on MSFT justified by it alone makes sense. It seems like this was chosen just because OpenAI is popular, not out of any real analysis beyond that. Can you explain what I’m missing?
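To make the scale difference concrete, here’s the back-of-the-envelope arithmetic (a rough sketch; the revenue figures are just the ones from my quick googling above, not authoritative):

```python
# Back-of-the-envelope: OpenAI's revenue as a share of MSFT's.
openai_revenue = 3e6    # ~3m/year, rough figure from quick googling
msft_revenue = 168e9    # ~168b total for MSFT in 2021

print(f"now:     {openai_revenue / msft_revenue:.5%}")        # ~0.00179%
print(f"at 100x: {openai_revenue * 100 / msft_revenue:.3%}")  # ~0.179%
```

Even under the generous 100x assumption, OpenAI would still be a rounding error in MSFT’s top line.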
I do like your first 3 choices of TSM, Google, and Samsung (is that really much of an AI play, though?).
No, it’s the blockchain Terra (with Luna being its main token).
There is little reason to think that’s a big issue. A lot of data is semi-tagged, and some of the ML-generated data can be removed either that way or by being detected by newer models. In general, as long as the ‘good’ type of data is also increasing, model quality will keep increasing even if you have some extra noise.
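A minimal sketch of the two-stage filtering I have in mind (the field names and the detector here are hypothetical, just to illustrate the idea):

```python
# Hypothetical two-stage filter: drop documents already tagged as
# machine-generated, then drop ones a detector model flags as likely ML output.
def filter_corpus(documents, detector, threshold=0.9):
    kept = []
    for doc in documents:
        # Stage 1: use whatever metadata/tags the source already carries.
        if doc.get("source_tag") == "machine-generated":
            continue
        # Stage 2: a newer model scores how likely the text is ML output.
        if detector(doc["text"]) > threshold:
            continue
        kept.append(doc)
    return kept
```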
What’s the GiveWell/AMF of AI Safety? I’d like to donate occasionally. In the past I’ve only done so for MIRI, a few times. A quick googling fails to return anything useful in the top results, which is odd given how much seems to be written on LW/EA and other forums on the subject every week.
In Bulgaria (where Cyrillic was invented), writing in the Latin alphabet is common (especially before Cyrillic support was good) but frowned upon, as it is considered uneducated and ugly. The way we do it is to just replace each letter with the equivalent Latin letter one to one, and do whatever with the few that don’t fit (e.g. just use y for ъ, though some might use a; ч is just ch, etc.). So молоко is just moloko, водка is vodka, стол is stol, etc. This is also exactly how it works on my keyboard with the phonetic layout.
Everyone else who uses Cyrillic online seems to get it when you write like that, in my experience, though nowadays it’s rarer.
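A minimal sketch of that one-to-one mapping (the table is deliberately incomplete, just enough for the examples above; ъ → y and ч → ch follow the informal conventions I described):

```python
# Toy one-to-one Bulgarian Cyrillic -> Latin transliteration.
MAP = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "е": "e",
    "к": "k", "л": "l", "м": "m", "н": "n", "о": "o", "с": "s",
    "т": "t", "ч": "ch", "ъ": "y",
}

def translit(word):
    # Unmapped characters pass through unchanged.
    return "".join(MAP.get(ch, ch) for ch in word.lower())

print(translit("молоко"))  # moloko
print(translit("водка"))   # vodka
print(translit("стол"))    # stol
```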
I’ve been considering for years that I should write more, and save more of my messages and activities, purely to constrain the mindspace a future AI would search when recreating a version of me, one as approximate to my current self as a me from years ago is. As far as I can tell, this is fairly low effort, and the more information you have, the closer you can get.
I just don’t see an obvious refutation here: why would an advanced AI, optimizing to create the person most likely to have written/done/etc. all the things I have, end up with someone very different from me?
A lot of people take a lot of drugs at big events like Burning Man with little issue. In my observation, it’s typically overly frequent and/or targeted psychedelic use that causes such big changes, at least in people who start off fairly stable.
Why are so many resources being sunk into this specifically? I just don’t understand how it makes sense, what the motivation is, or how they arrived at the idea. Maybe there is a great explanation and thought process that I am missing.
From my point of view, there is little demand for it, and the main motivation might plausibly have been “we want to say we’ve published a book” rather than something that people want or need.
Having said that, I’d rather get an answer to my initial comment (why it makes sense to you/them) than have to give reasons why I don’t see how it makes sense.
Thanks for the reply. That seems like a sensible position.
It sounds like maybe you were less involved in this than some of the 7 (is that right?) other employees/admins, so I’m very curious to hear their take, too.
Printing costs are hardly the only or even the main issue, and I hadn’t even mentioned them. You are right, though: those costs make the insistence on publishing a book make even less sense.
I’m confused by this. Why would only voters be interested in the books?
Because I doubt there are all that many more people interested in these than the number of voters. Even at 1,000 it doesn’t seem like a book makes all that much sense. In fact, I still don’t get why turning them into a book is even being considered.
It seems like very few people voted overall if the average is “10-20” voters per post. I hope each of them is buying 50+ books, because otherwise I don’t see how the book part is remotely worth it.
The voting was broken in multiple ways. You could spend more points than allowed, but instead of a cut-off, your vote was just cast out entirely, due to the organizers’ mistake in allowing it in the first place.
The voting was broken in the way described in the post, too.
People didn’t understand how the voting worked (look at the few comments here), so they didn’t even manage to vote in the way that best satisfied their preferences. Both the system and the explanation seem to be at fault.
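For illustration, a minimal sketch of the less surprising behavior (the point budget and ballot shape here are hypothetical): instead of casting out an over-budget ballot, cap or scale it to fit.

```python
# Hypothetical ballot handling: scale an over-budget ballot down to fit
# the point budget instead of discarding it outright.
def normalize_ballot(votes, budget=500):
    """votes: dict mapping post_id -> points spent on that post."""
    spent = sum(abs(points) for points in votes.values())
    if spent <= budget:
        return votes
    scale = budget / spent
    return {post: round(points * scale) for post, points in votes.items()}
```

Whether you cap, scale, or reject should at least be stated up front; silently discarding the whole ballot is the worst of the options.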
I note that a lot of promotion went into this, including emails to non-active users, a lot of posts about it, and long extended reviews.
So, my question is: do the organizers think it was worth it? If yes, do you think it was worth enough to publish as a book? And if yes to both, what would failure have looked like?
He has been trying to do this for years and has failed. The first time I read about his attempts, years ago, I also assigned a high probability of success. Then two years passed and he hadn’t done it, then another two years...
You have to adjust your estimates based on your observations.
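As a toy version of the update I mean (all the numbers here are made up, assuming roughly independent two-year attempts):

```python
# Toy Bayesian update: start optimistic, then revise down after each
# observed two-year period in which he tried and failed.
p_capable = 0.8          # initial credence that he can pull it off
p_fail_if_capable = 0.3  # chance a capable person still stalls for 2 years
p_fail_if_not = 0.95     # chance of a 2-year stall if he can't do it

for period in range(1, 4):           # three observed failures
    num = p_capable * p_fail_if_capable
    den = num + (1 - p_capable) * p_fail_if_not
    p_capable = num / den            # Bayes' rule: P(capable | failure)
    print(f"after failure {period}: P(capable) = {p_capable:.2f}")
# prints ~0.56, ~0.29, ~0.11: each failure should drag the estimate down
```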
I own only ~5 physical books now (I prefer digital), and 2 of them are Thinking, Fast and Slow. Despite his not being on the site, I’ve always thought of him as something of a founding grandfather of LessWrong.