yes
That’s the idea behind the post, yeah. I am referring more to the general culture of the site, since it is relevant here.
I find it strange that our response to “politics is the mindkiller” has been less “how can we think more rationally about politics?” and more “let’s avoid politics”. If feasible, the former would pay off long-term.
Of course, a lot of more general ideas pertaining to rationality can be applied to politics too. But if politics is still the mindkiller, this may not be enough—more techniques may be needed to deal with the affective override that politics can cause.
Listeners are probably not assuming that the person they are listening to is being honest.
Seconded.
Interesting, thanks for the reply. I agree that it could develop superhuman ability in some domains, even if that ability doesn’t manifest in the model’s output, so that seems promising (although not very scalable). I haven’t read up on mesa optimizers yet.
I have very little knowledge of AI or the mechanics behind GPT, so this is more of a question than a criticism:
If a scaled-up GPT-N is trained on human-generated data, how would it ever become more intelligent than the people whose data it is trained on?
Or maybe good enough is the enemy of better. Regardless, the point’s been made.
Perfect is the enemy of good; good enough is also the enemy of good.
arxhy’s Shortform
In my case, I probably wouldn’t give my life for less than the lives of a billion strangers, so that ratio would have to be extremely high, to the point where it’s probably incalculable.
Why?
I am unfamiliar with the science here—what is the difference between a “reversed-effect stimulant” and a depressant?
We wax poetic about both because we like doing it. I don’t think that is affected by whether it’s a fluke of evolution.
[Question] Comparative Advantage Intuition
There are some acronyms that are already in common usage.
Suggestion: continue to use qualifiers, but encourage writing them as acronyms so that they take up less space. Same meaning, but quicker and less annoying.
“Wrong” as in “less likely to match reality.” Not very much is certain, but that doesn’t mean we are forbidden from talking about certainty.
It seems like you are interpreting a discussion that doesn’t effortlessly concede to your point of view as a discussion inherently biased against your point of view.
Pledging now to join if at least 8 do.
Same here.
I haven’t seen “Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More”. Why did it get universally negative votes?