Or disadvantage, because it makes it harder to make long-term plans and commitments?
It’s rare to see someone with the prerequisites for understanding the arguments (e.g. AIT and metamathematics) trying to push back on this.
My view is probably different from Cole’s, but it has struck me that the universe seems to have a richer mathematical structure than one might expect given a generic AIT-ish view (e.g. continuous space/time, quantum mechanics, diffeomorphism invariance/gauge invariance), so we should perhaps update toward the space of mathematical structures instantiating life/sentience being narrower than it initially appears. That is, if “generic” mathematical structures supported life/agency, we should expect to find ourselves in a generic universe; instead we seem to be in a richly structured universe, so this is an update that maybe we can only be in a rich/structured universe (or that life/agency is just much more likely to arise in such a universe). Taken to an extreme, perhaps it’s possible to derive a priori that the universe has to look like the Standard Model. (Of course, you could run the Standard Model on a Turing machine, so the statement would have to be about how the universe relates/appears to agents inhabiting it, not its ultimate ontology, which is inaccessible since any Turing-complete structure can simulate any other.)
They care if you have a PhD, they don’t care if you have researched something for 5 years in your own free time.
I don’t think this is right. If anything, the median LW user would be more likely to trust a random blogger who researched a topic on their own for 5 years than a PhD, assuming the blogger is good at presenting their ideas in a persuasive manner.
Marketing. It was odd enough for you to post about on LW!
Wearing a suit in an inappropriate context is like wearing a fedora. It says “I am socially clueless enough to do random inappropriate things”
This is far too broadly stated; the actual message people will take away from an unexpected suit is verrrrry context-dependent, depending on (among other things) who the suit-wearer is, who the people observing are, how the suit-wearer carries himself, the particular situation the suit is worn in, etc. etc. etc. Judging from the post, it sounds like those things create an overall favorable impression for lsusr? (It’s hard to tell from just a post of course, but still.)
But I still have a problem with the post’s tone because if you really internalized that “you” are the player, then your reaction to the informational content should be like “I’m a beyond-‘human’ uncontrollable force, BOOYEAH!!”, not “I’m a beyond-human uncontrollable force, ewww tentacles😣”
Goodness maximizing as undefined without an arbitrary choice of values
By “(non-socially-constructed) Goodness” I mean the goodness of a state of affairs as it actually seems to that particular person really-deep-down. Which can have both selfish—perhaps “arbitrary” from a certain perspective—and non-selfish components.
I changed my mind about this: I actually think “lovecraftian horror” might be somewhat better than “monkey” as a mental image, but maybe “(non-socially-constructed)-Goodness-Maximizing AGI” or “void from which things spontaneously arise” or “the voice of God” could be even better?
He doesn’t only talk about properties but also about what people actually are according to our best physical theories, namely continuous wavefunctions—of which there are only beth-1.
Sadly my perception is that there are some lesswrongers who reflexively downvote anything they perceive as “weird”, sometimes without thinking the content through very carefully—especially if it contradicts site orthodoxy in an unapologetic manner.
VC money. That disclaimer was misleading; they don’t have fees on any markets.
Polymarket pays for the gas fees themselves, users don’t have to pay any.
Liked the post btw!
Also
√1 = 2
√1 = ±1
The question is how we should extrapolate, and in particular if we should extrapolate faster than experts currently predict. You would need to show that Willow represents unusually fast progress relative to expert predictions. It’s not enough to say that it seems very impressive.
I don’t see how your first bullet point is much evidence for the second, unless you have reason to believe that the Willow chip has a level of performance much greater than experts predicted at this point in time.
I think the basic reason that it’s hard to make an interesting QCA using this definition is that it’s hard to make a reversible CA. Reversible cellular automata are typically made using block-partitioning or a second-order method. The (classical) laws of physics also seem to have a flavor more similar to these than to a GoL-style CA, in that they have independent position and velocity coordinates which each determine the time evolution of the other.
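To illustrate the second-order construction (a minimal sketch of my own, not taken from anywhere in the discussion): take any local rule f, not necessarily invertible, and define s[t+1] = f(s[t]) XOR s[t−1]. Because XOR is its own inverse, you can always recover s[t−1] from (s[t], s[t+1]), so the full dynamics are exactly reversible no matter what f is.

```python
import numpy as np

def f(state: np.ndarray) -> np.ndarray:
    # Any local rule works here, even a non-invertible one; this one XORs the two neighbors.
    return np.roll(state, 1) ^ np.roll(state, -1)

def step(prev: np.ndarray, curr: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    # (s[t-1], s[t]) -> (s[t], s[t+1]) with s[t+1] = f(s[t]) XOR s[t-1]
    return curr, f(curr) ^ prev

def unstep(curr: np.ndarray, nxt: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    # The exact inverse: (s[t], s[t+1]) -> (s[t-1], s[t]) since s[t-1] = f(s[t]) XOR s[t+1]
    return f(curr) ^ nxt, curr

rng = np.random.default_rng(0)
s0 = rng.integers(0, 2, 64)
s1 = rng.integers(0, 2, 64)
a, b = s0.copy(), s1.copy()
for _ in range(100):
    a, b = step(a, b)     # run forwards 100 steps
for _ in range(100):
    a, b = unstep(a, b)   # run backwards 100 steps
assert np.array_equal(a, s0) and np.array_equal(b, s1)  # history recovered exactly
```

The pair (s[t−1], s[t]) here is what plays the role of the independent position/velocity coordinates mentioned above: you need both halves of the state to run the system either forwards or backwards.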
Yeah I definitely agree you should start learning as young as possible. I think I would usually advise a young person starting out to learn general math/CS stuff and do AI safety on the side, since there’s way more high-quality knowledge in those fields. Although “just dive in to AI” seems to have worked out well for some people like Chris Olah, and timelines are plausibly pretty short so ¯\_(ツ)_/¯
People asked for a citation so here’s one: https://www.kellogg.northwestern.edu/faculty/jones-ben/htm/age%20and%20scientific%20genius.pdf
Although my belief was more based on anecdotal knowledge of the history of science. Looking up people at random: Einstein’s annus mirabilis was at 26; Cantor invented set theory at 29; Hamilton discovered Hamiltonian mechanics at 28; Newton invented calculus at 24. Hmmm, I guess this makes it seem more like early 20s to 30. Either way 25 is definitely in peak range, and 18 typically too young (although people have made great discoveries by 18, like Galois. But he likely would have been more productive later had he lived past 20).
I mean, I agree with this, but popularity has a better correlation with truth here compared with any other website—or more broadly, social group—that I know of. And actually, I think it’s probably not possible for a relatively open venue like this to be perfectly truth-seeking. To go further in that direction, I think you ultimately need some sort of institutional design to explicitly reward accuracy, like prediction markets. But the ways in which LW differs from pure truth-and-importance-seeking don’t strike me as entirely bad things either—posts which are inspiring or funny get upvoted more, for instance. I think it would be difficult to nucleate a community focused on truth-seeking without “emotional energy” of this sort.