https://en.wikipedia.org/wiki/Limits_of_computation
Great relevant wikipedia page
At the same time, it’s basically the only filtering criterion provided besides “software developer job.” Having worked a few different SWE jobs, I know that some company cultures people love are cultures I hate, and vice versa. I would point someone in completely different directions based on their response. Not because I think their multidimensional culture preferences will come across perfectly, but because the search space is so huge that it’s good to at least have an estimator for ordering which things to look into.
I don’t have strong preferences about what the company does. I mostly care about working with a team that has a good culture.
This is pretty subjective, and I would find it helpful to know what sort of culture you’re looking for.
so I have forwarded all of these domains to my home page
On my end this does not appear to be working.
Also, nice work.
Disambiguation is a great feature of language, but we can also opt instead to make things maximally ambiguous with my favorite unit system: CCC. All measurements expressed with only the letter C.
A sketch of a solution that doesn’t involve (traditional) world leaders could look like “Software engineers get together, agree that the field is super fucked, and start imposing stronger regulations and guidelines on software, like those traditional engineering disciplines use.” This is a way of lowering the alignment tax in the sense that, if software engineers all have a security mindset, or have to go through a security review, there is more process and knowledge around potential problems and a way of executing a technical solution at the last moment. However, this description is itself entirely political, not technical, yet could easily never reach the awareness of world leaders or the general populace.
My conclusion: let’s start the meme that Alignment (the technical problem) is fundamentally impossible (maybe it is? why think you can control something supposedly smarter than you?) and that you will definitely kill yourself if you get to the point where finding a solution to Alignment is what could keep you alive. Pull a Warhammer 40k: start banning machine learning, and for that matter maybe computers (above some level of performance) and software. This would put more humans in the loop for the same tasks we have now, which offers more opportunities to catch problems with the process than the status quo, where a human can program 30 lines of C++, have it LGTM’d by one other person at Google, and then have those lines of code used billions of times, per the input of two humans, ever.
(This meme hasn’t undergone sufficient evolution, feel free to attack with countermemes and supporting memes until it evolves into one powerful enough to take over, and delay the death of the world)
“MIRI walked down this road, a faithful scout, trying to solve future problems before they’re relevant. They’re smart, they’re resourceful, they made noise to get other people to look at the problem. They don’t see a solution in sight. If we don’t move now, the train will run us over. There is no technical solution to alignment, just political solutions—just like there’s no technical solution to nuclear war, just treaties and individuals like Petrov doing their best to avoid total annihilation.”
I think there’s an important distinction Valentine tries to make with respect to your fourth bullet (and if not, I will make it). You perhaps describe the right idea, but the wrong shape. The problem is more like “China and the US both have incentives to bring about AGI and don’t have incentives toward safety.” Yes, deflecting at the last second with some formula for safe AI would save you, but that’s as stupid as jumping away from a train at the last second. Move off the track hours ahead of time: broker a peace between countries to not make AGI.
Yes, there are those who are so terrified of Covid that they would advise practicing social distancing in the wake of nuclear Armageddon. This is an insight into that type of thinking. I do think keeping your mask on would be wise, but for obvious other reasons.
I saw this too and was very put off to find social distancing mentioned in a nuclear explosion survival guide; glad I’m not the only one who noticed. I doubt many would survive (myself included) without the aid of other humans in such an apocalyptic situation: you know, like a crowded bus out of the fallout zone that I would have to turn down in order to follow social distancing.
Ah, I forgot to emphasize that these were things to look into to get better. I don’t claim to know EY’s lineage. That said, how many people do you think are well versed in cryptography? If someone said, “I am one of very few people who is well versed in cryptography,” that doesn’t sound particularly wrong to me (if they are indeed well versed). I don’t know exactly how many people EY thinks are in this category with him, but the number of people versed enough in cryptography to, say, make their own novel and robust scheme is probably on the order of 1,000-10,000 worldwide. His phrasing would make sense to me for any fraction of the population lower than 1 in 1,000, and I think he’s probably referring to a category at or below 1 in 10,000. That said, I would like to emphasize that I don’t think cryptography is especially useful to this end; rather, it was mentioned above to bring up the security mindset.
Zen/mindfulness meditation generally has an emphasis on noticing concrete sensations. In particular, it might help you interject your attention at the proper level of abstraction to reroute concrete observations and sensations into your language. Also, with all of these examples, I do not claim that any individual one will be enough, but I do believe that experience with these things can help.
One fun way to learn concreteness is something I tried to exercise in this reply: use actual numbers. Fermi estimation is a skill that’s relatively easy to pick up and makes you exercise your ability to think concretely, using numbers you are aware of to predict numbers that you are not. The process of turning concrete observations into a concrete prediction is a pattern that I have found to produce concrete thoughts which get verbalized in concrete language. :)
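For what it’s worth, here is a toy sketch of the kind of Fermi estimate I have in mind (in Python); every input fraction is my own rough assumption, not a figure from anywhere else:

```python
# A toy Fermi estimate (rough assumptions throughout, purely illustrative):
# roughly how many people worldwide could design a novel, robust cryptographic scheme?
world_population = 8e9
working_age_fraction = 0.6        # rough share of people of working age
programmer_fraction = 0.005       # assume ~0.5% of working-age people write software
crypto_exposure_fraction = 0.01   # assume ~1% of those have serious cryptography exposure
expert_fraction = 0.05            # assume ~5% of those could design a robust scheme

experts = (world_population * working_age_fraction * programmer_fraction
           * crypto_exposure_fraction * expert_fraction)
print(f"~{experts:,.0f} people worldwide")  # ~12,000 under these assumptions,
                                            # the same ballpark as the 1,000-10,000 above
```

The individual inputs are crude, but multiplying a handful of known-ish quantities forces the estimate to be concrete rather than a vague “very few people.”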
Cryptography was mentioned in this post in a relevant manner, though I don’t have enough experience with it to advocate for it with certainty. Some lineages of physics (EY points to Feynman) try to evoke this, though its pervasiveness has decreased. You may have some luck with Zen. Generally speaking, I think if you look at the Sequences, the themes of physics, security mindset, and Zen are invoked for a reason.
Color blindness is a blind spot in color space.
I think you forgot to insert “Vaccinations graphs”
It is almost a fully general counterargument. It argues against all knowledge, but to different degrees. You can at least compare the references of symbols to finite calculations that you have already done within your own head, and then use Occam’s Razor.
I don’t accept “math” as a proper counterexample. Humans doing math aren’t always correct; how do you reason about when math is correct?
My argument is less about “finite humans cannot think about infinities perfectly accurately” and more, “your belief that humans can think about infinities at all is predicated upon the assumption (which can only be taken on faith) that the symbol you manipulate relates to reality and its infinities at all.”
By what means are you coming to your reasoning about infinite quantities? How do you know the quantities you are operating on are infinite at all?
I am confused how you got to the point of writing such a thoroughly detailed analysis of the application of the math of infinities to ethics while (from my perspective) strawmanning finitism by addressing only ultrafinitism. “Infinities aren’t a thing” is only a “dicey game” if the probability of finitism is less than 100% :). In particular, there’s an important distinction between being able to actually write down the “largest number + 1” versus referencing it as a symbol as we do, because the symbolic reference can, in the original frame, be encoded as a small number.
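As a concrete illustration of that last point (a sketch of my own, not from the post): the expression that picks out a huge number can itself be a tiny object.

```python
# The *reference* to a number can be small even when the number it denotes is
# enormous; this is the distinction drawn above (illustrative only).
expression = "10**(10**6) + 1"          # a 15-character string
value = 10**(10**6) + 1                 # the value that string denotes
print(len(expression))                  # 15 -- the symbol is a small object
print(value.bit_length())               # ~3,321,929 bits -- the referent is not
```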
Another easy way to dismiss the question of infinite ethics that I feel you overlooked: you can assign zero probability to the proposition that our choice of mathematical axioms is exactly correct about the nature of infinities (or even probabilities).
You’ll notice that both of these examples correspond to absolute certainty, and one may object that I am “not being open minded” or something like that for having infinitely trapped priors. However, I would remind readers that you cannot choose all of your beliefs and that, practically, understanding your own beliefs can be more important than changing (or being able to change) them. We can play word games regarding infinities, but will you put your life at stake? Or will your body reject the attempts of your confused mind when the situation and threats at hand are visible?
I would also like to directly claim, regardless of the truth of the aforementioned claims, that entities and actions beyond the cosmic horizon of our observable universe are forfeit from ethical consideration (and only once they are past that horizon). In particular, I dislike that your argument relies on cosmologists believing that the universe is infinite, when cosmologists will also definitely tell you that things beyond the cosmological horizon are outside of causal influence. Your appeal to logos, only to later reject it where it favors you, is inconsistent and unpalatable to me.
I am also generally under the impression that a post like this should be classified as a cognitohazard: I expect it to cause net harm, on the premise that it updates people in the direction of susceptibility to arguments of the nature of Pascal’s Wager.
I’m sorry if I’m coming off as harsh. In particular, I know from reading your posts that you generally contribute positively, and I have enjoyed much of your content. However, I am under the impression that this post is likely a net negative, directly conflicting with the proposition that we “help our species make it to a wise and empowered future,” because I think it contributes toward misleading our species. I have found (and obviously others may find otherwise) that, as far as I can tell, there is something ingrained in my experience of consciousness itself that assigns zero probability to our choice of axioms being literally, entirely correct (the map is not the territory). I also claim that, regardless of the supposed “actual truth” of the existence of infinities in ethics, a practical standpoint suggests you should definitely reject the idea: I believe holding any modicum of belief in it is more likely to lead you astray, and likely to perform worse even in the exceptional case that our range of causal influence is “actually infinite,” though clearly this is not something I can prove.
Typo in this sentence: “And probably I we would have had I not started working on The Machine.”
You can find it here. https://www.glowfic.com/replies/1824457#reply-1824457
I would describe it as having extremely minimal spoilers, as long as you read only that particular post and not the preceding or later ones. The majority of the spoiler potential is in knowing that the content of the story is even partially related, which you would already learn by reading this post. The remainder is some minor characterization.