This is potentially a naive question, but how well would the imaging deal with missing data? Say that 1% (or whatever the base rate is) of tissue samples would be destroyed during slicing or expansion; would we be able to interpolate those missing pieces somehow? Do we know any bounds on the error that would introduce in the dynamics later?
wolajacy
I strong-downvoted, because I think public protests are not a good way of pushing for change.
They are a symmetric weapon.
They lock you into certain positions. There is a lot of momentum in a social movement that is carried through such public displays, which makes it difficult to change or reverse your position (for example, if we learned that for reason X it is much better to speed up the development of AI, which I don’t think is that improbable a priori).
They promote a tribalistic, collective mindset. Protests like this are antithetical to the deep, 1-1 dialogue that LW stands for. I feel that the primary motivation for attending a protest is building camaraderie and letting out emotions, which has more downsides than upsides, especially long-term. It also supports an us-vs-them mentality.
Even if they change anyone’s mind, it is for the wrong reasons. Public protests by necessity have to dumb down the message to a point that fits on a poster. They lump people together to present a unified front, and by doing that, lose nuance and diversity of opinions. If anyone changes their mind, it is for reasons other than the argument’s merit.
They are an ineffective way of using resources. The marginal value of spending time at a protest is negative for most people with any background in AI safety. It is much better to think, read papers, write papers, do experiments, chat with people around you, attend research seminars, etc., than to picket on a street. Protests signal that you do not have anything more to offer than your presence.
There are some rare situations in which protests are a good choice, but mostly as an option of last resort. A possible counterpoint, that you are mostly advocating for awareness as opposed to specific points, is moot, since pretty much everyone is aware of the problem now: society as a whole, policymakers in particular, and people in AI research and alignment.
FYI, in the answer you linked to, there is another, much easier way of doing it (and it worked for me):
tl;dr:
Have the Android command line tools installed on a development machine, and USB debugging enabled on your device. The device does not need to be rooted.
adb forward tcp:9222 localabstract:chrome_devtools_remote
wget -O tabs.json http://localhost:9222/json/list
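Once you have `tabs.json`, pulling out the open tabs is straightforward. A minimal sketch in Python, assuming the DevTools `/json/list` response shape (a JSON array of objects carrying `"title"` and `"url"` keys; the `extract_tabs` name and the sample payload below are mine):

```python
import json

def extract_tabs(json_text):
    """Pull (title, url) pairs out of the DevTools tab list.

    Assumes a JSON array of objects with "title" and "url" keys;
    any extra keys in each object are ignored.
    """
    return [(tab.get("title", ""), tab.get("url", ""))
            for tab in json.loads(json_text)]

# Hypothetical sample of what the endpoint might return:
sample = '[{"title": "Example", "url": "https://example.com/", "type": "page"}]'
print(extract_tabs(sample))  # [('Example', 'https://example.com/')]
```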
Interesting point of view. I don’t think I agree with the sex-triggers section: it seems that applying it retroactively would predict that the internet and video games would have been banned by now (in many instances they are of course stigmatized, but nowhere near to the extent that would result in banning them).
Also, the essay does not touch on the most important piece of the equation, which is the immense upside of AGI; compare the metaphor about nuclear weapons spitting out gold, up until they get large enough. This means there is a huge incentive for private companies to unilaterally improve the tech, plus Moore’s law making compute cheaper every year. If you can get the AI to comprehend text a bit better (or do any other sort of “backend” task), this is very different from the production of child porn, growing weed, or killing people more effectively, which are very localized sources of profit. I think only human cloning comes close as an example, but still not quite (the gains are very uncertain and far off, it’s more difficult to hide the experiments, the technology is much more specialised, whilst compute is needed in every other part of the economy, and ‘doing AI’ is not as well-defined a category as ‘using human stem cells’).
Suppose you want to make a binary decision with a specified bias $p$. If, say, $p=1/8$ then you can throw a coin 3 times, and if you got, say, $HHH$, you take it as positive, else negative.
But if $p$ has a denominator that is not a power of 2 (say $p=1/1000$), or is a weird number, say $1/\pi$, then this method fails. There is another really beautiful method I learned some time ago, which simulates any biased coin with an expected number of throws of exactly 2! (I lost the source, unfortunately)
It works as follows: you throw the coin until the first time you get a head; say this happened on your $n$-th throw. Then you accept if and only if the $n$-th digit in the binary expansion of $p$ is 1. Since the first head lands on throw $n$ with probability $2^{-n}$, the acceptance probability is $\sum_n b_n 2^{-n} = p$, where $b_n$ is the $n$-th binary digit of $p$; and the expected number of coin throws is always 2.
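A minimal sketch of the procedure in Python. The digit extraction via `int(p * 2**n) % 2` is my own implementation detail; double-precision floats only carry ~50 binary digits of $p$, but the first head lands past digit 50 with probability about $2^{-50}$, so this almost never matters.

```python
import random

def biased_coin(p):
    """Return True with probability p, using only fair coin flips.

    Flip a fair coin until the first head; if that head lands on
    flip n (which happens with probability 2^-n), accept iff the
    n-th binary digit of p is 1.  Acceptance probability is then
    sum_n b_n * 2^-n = p.
    """
    n = 0
    while True:
        n += 1
        if random.random() < 0.5:  # this flip came up heads
            break
    # n-th digit after the binary point of p: floor(p * 2^n) mod 2
    return int(p * 2**n) % 2 == 1
```

The number of flips is geometric with success probability 1/2, so its expectation is exactly 2, regardless of $p$.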
This line of reasoning, of “AGI respecting human autonomy”, has the problem that our choices, undertaken freely (to whatever extent it is possible to say so), can be bad: not because of some external circumstances, but because of us being human. It’s like in The Great Divorce: given an omnipotent, omnibenevolent God, would a voluntary hell exist? This is to say: if you believe in respecting human autonomy, then how you live your life now very much matters, because you are now shaping your to-be-satisfied-for-eternity preferences.
Of course, the answer is that “AGI will figure this out somehow”. Which is equivalent to saying “I don’t know”. Which I think contradicts the argument “If all goes well, it literally doesn’t matter what you do; how you live is essentially up to you from that point on”.
The correct argument is, IMO: “there is a huge uncertainty, so you might as well live your life as you are now, but any other choice is pretty much equally defensible”.
I was trying to guess what the idea is before reading the post, and my first thought was: in a multi-player game, there is a problem where, say, two players are in a losing position and would like to resign (and go play something else), two other players are in a so-so position and want to possibly resign, and the final player is clearly winning and wants to continue. But there is no incentive to straight-up resign unilaterally, as then you have to sit and wait idly until the game finishes.
So, we introduce “fractional resignations”: we get something like [1, 1, 0.6, 0.6, 0.1], sum them, compare the total to a pre-agreed threshold (say, 3), and end the game if it passes this bar.
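The rule is simple enough to state as a one-liner (the `game_over` name is mine):

```python
def game_over(resignations, threshold):
    """Each player reports a resignation fraction in [0, 1]; the game
    ends once the total reaches the pre-agreed threshold."""
    return sum(resignations) >= threshold

print(game_over([1, 1, 0.6, 0.6, 0.1], 3))  # True (total is 3.3)
```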
Can you please link some of those YouTube channels you mentioned in the comment? I’d like to learn more about the topic; ideally, grasp the big ideas and what-I-don’t-know (coming from the pure math angle, so not much grounding in the natural sciences).
For reference, I found Introduction to Biology—The Secret of Life (an MIT course at edX) to be very helpful in this kind of exploration.
The argument is very unclear to me. What does “unbounded” mean? What does it mean to “retrocausally compress ‘self’”?
Are you postulating that:
- the notion of “an individual” does not make sense even in principle
- there exists something like “self”/“individual” in general, but we don’t know how to define it rigorously
- there exists something like “self”/“individual”, but specific individuals (people, in this case) are not able to precisely define ‘themselves’
- some fourth option?

(The second and third paragraphs are even less clear to me, so if they present separate lines of thought, maybe let’s start with the first one.)
Sorry to be blunt, but the whole post is made of unsubstantiated claims and dubious associations. I had a very difficult time going through it.
Among many, many good reasons not to play video games, the main one is that they create invisible stress and consume large amounts of brain energy that you could be using for work, school, or moments of inspiration.
You claim that, but don’t provide any evidence for this.
Don’t trust any source that says video games don’t stress you out; there are billions of dollars and vested interests at play.
Why should I trust you instead? As in: don’t trust any source that says X; there are billions of dollars and vested interests at play.
The takeaway is to pay attention to your mind and body, and not to any pundit who claims that video games are good for you. They are not. From a bayesian perspective, you are more likely to encounter a lying, bribed pundit, than you are to encounter someone who has done honest research that legitimately argues that video games will make your life better.
Again, “you’re more likely to encounter a lying, bribed pundit than you are to encounter someone who has done honest research that legitimately argues that X will make your life better”. You’re not presenting any such research, or even vaguely pointing in its direction.
You can insert whatever you want for X, and it will be just as convincing as your statements.
Then the post turns into (what looks to me like) a list of games that you personally find relaxing, with your prescriptions on how other people should play them, and then a shorter list of games you didn’t like, devoid of even a trace of argument besides the already-repeated “games that stress you are bad”.
Doesn’t anthropic bias impact the calculation, given that it takes into account not having seen a nuclear war before?
There is a great (free) online course called ‘NAND to Tetris’, which is built on this exact premise. Can’t recommend it enough: https://www.nand2tetris.org/
AFAIK, popular data science tools (Spark, Pandas, etc.) already use columnar formats for data serialization and network-based communication: https://en.wikipedia.org/wiki/Apache_Arrow
A similar idea for disk storage (which is again orders of magnitude slower, so in certain situations the gains might be even bigger): https://en.wikipedia.org/wiki/Apache_Parquet
Generally, if you’re doing big data, there are even more benefits from using this layout: data homogeneity means much better compression and possibilities for smarter encodings.
Random users installing random software gives you botnets.
This is only true in the case of insufficient security mechanisms. Virtualization/containerization (for example, the Docker model) would allow users to run independently installed applications safely.
Similarly, I guess that the motivation for a centralized store (apart from the financial motive of the store owner, Apple/Google) is to provide security through the process of vetting the apps. But again, if we had proper virtualization software, there would be no reason not to allow users to add unofficial repositories, maintained in a decentralized way.
Of course, virtualization/containerization done at the OS level is (currently) quite resource-intensive. But the alternative is even worse: with everything moved to the web, we are building (we have built...) an OS inside an OS! With all the problems that entails: this “new OS” really supports only one language, has an extremely limited set of protocols, and overall has nothing close to the full environment of a proper OS.
Summarizing: why would you advocate all this just to solve compatibility and safety problems (which, if I read your post correctly, are the reasons for moving apps to the web), instead of dealing with them properly, at the OS level?
I really like the thought behind the post! But your idea seems kind of… overengineered. For one, an important requirement for the packaging is that it should be easy to hold in your hand (e.g. when eating in a car, on a couch, or anywhere you can’t actually put it on a table).
Additionally, let’s say there are two sizes of chips: small and large. Small ones are cheap, so there’s no better way to package them than to throw some in a bag; it’d be too costly to package them in a more sophisticated way.
Large ones could have more complex packaging, but there’s the problem of closing the bag when there are still some leftovers. In the case of the usual bag, it’s as easy as folding the top: you get reasonable airtightness, etc. But in the case of a box, you’d have to make some closing mechanism, or shove it back in the bag (as in your pictures), which seems… complicated.
There are two ideas here. First, Pringles: just put them in a tube. Closing is not a problem, and it has the additional advantage of not crumbling the chips to pieces (which I’d say should be THE feature of boxes). The second idea is a bag that can be opened vertically as well as horizontally (Lay’s Stix implemented this some time ago, although I’m not sure about the US version). Then you get the best of both worlds: easy to hold and easy to close (open on top), OR easy to access and share (open on the side).
If you don’t have a given joint probability space, you implicitly construct it (for example, by saying the RVs are independent, you implicitly construct a product space). Generally, the fact that you sometimes talk about X living on one space (on its own) and at other times on another (jointly with some Y) doesn’t really matter, because in most situations probability theory is specifically about the properties of random variables that are independent of the underlying spaces (although sometimes it does matter).
In your example, by definition, P = Prob(X = 6ft AND Y = raining) = mu{t: X(t) = 6ft and Y(t) = raining}. You have to assume a joint probability space. For example, maybe they are independent, and then P = Prob(X = 6ft) * Prob(Y = raining); or maybe Y = (if X = 6ft then raining, else not raining), and then P = Prob(X = 6ft).
Answering the last question: If you deal with any random variable, formally you are specifying a probability space, and the variable is a measurable function on it. So, to say anything useful about a family of random variables, they all have to live on the same space (otherwise you can’t—for example—add them. It does not make sense to add functions defined on different spaces). This shared probability space can be very complicated by itself, even though the marginal distributions are the same—it encodes the (non-)independence among them (in case of independent variables, it’s just a product space with a product measure).
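As a toy illustration of “random variables are measurable functions on one shared space” (the construction below is my own, not from the discussion): take the underlying space to be pairs of fair coin flips, and define two variables on it.

```python
import itertools
from fractions import Fraction

# Underlying sample space: pairs of fair coin flips, uniform measure.
omega = list(itertools.product([0, 1], repeat=2))
mu = {w: Fraction(1, 4) for w in omega}

def prob(event):
    """Measure of the set of outcomes satisfying the event."""
    return sum(mu[w] for w in omega if event(w))

def X(w): return w[0]         # first flip
def Y(w): return w[0] ^ w[1]  # XOR of the two flips

# Joint probabilities are defined on the shared space.  Here X and Y
# happen to be independent, even though both depend on the first flip:
p_joint = prob(lambda w: X(w) == 1 and Y(w) == 1)
p_prod = prob(lambda w: X(w) == 1) * prob(lambda w: Y(w) == 1)
print(p_joint == p_prod)  # True
```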
I don’t have any good source except university textbooks, but:
The simplest proof I know of (in 3 lines or so) is to just compute characteristic functions.
In general, the theorem talks about weak convergence, i.e. convergence in distribution.
The sample mean converges to the expected value of the distribution it was drawn from almost surely (i.e. strong convergence). This is a different phenomenon from the CLT; it’s called the law of large numbers.
CLT applies to a family of random variables, not to distributions. The random variables in question do not have to be identically distributed, but do have to be independent (in particular, independence of a family of random variables is NOT the same as their pairwise independence).
The best intuition behind the CLT I know of: the Gaussian is the only distribution with finite variance where the sum of two independent copies has the same distribution (up to rescaling and shift) as each of them (i.e. it is a stable distribution). So, if you try to “solve” the recursive equation for the limit in the CLT, you’ll see that, if it exists, it has to be Gaussian. The theorem is actually about showing that the limit exists.
In general, as someone nicely put this: The importance of stable probability distributions is that they are “attractors” for properly normed sums of independent and identically distributed (iid) random variables.
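A quick numerical sketch of the characteristic-function argument mentioned above (the choice of uniform distribution is mine): for iid variables with characteristic function phi, the normalized sum has characteristic function phi(t/sqrt(n))^n, which should approach the standard Gaussian’s exp(-t^2/2) as n grows.

```python
import math

def phi_uniform(t):
    """Characteristic function of Uniform(-sqrt(3), sqrt(3)),
    which has mean 0 and variance 1: sin(sqrt(3)*t) / (sqrt(3)*t)."""
    a = math.sqrt(3) * t
    return math.sin(a) / a if a != 0 else 1.0

def phi_normalized_sum(t, n):
    """Char. function of (X_1 + ... + X_n) / sqrt(n), X_i iid uniform."""
    return phi_uniform(t / math.sqrt(n)) ** n

# Approaches the standard Gaussian char. function exp(-t^2 / 2):
for n in (1, 10, 10000):
    print(n, phi_normalized_sum(1.0, n))
```

Only the variance of the summands survives in the limit; that is why the limit is universal across distributions.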
On exactly the same phenomenon, but from a different perspective: C. S. Lewis in The Great Divorce explains the Christian Hell as a place people are stuck in because they choose to wallow in despair/grief/anger/victimhood, instead of just forgiving and letting go.
For example, he writes about a mother who lost her child, and is now stuck in anger at the child being unfairly treated by the world/God. The crucial fact is that she is indulging in that anger as a way of signalling her own self-righteousness, not for any productive purpose.
Quite interesting, how all these different worldviews converge on that one :)
Agreed. Advocacy seems to me to be ~very frequently tied to bad epistemics, for a variety of reasons. So what is missing to me in this writeup (and indeed, in most of the discussions about the issue): why does it make sense to make laypeople even more interested?
The status quo is that the relevant people (ML researchers at large, AI investors, governments, and international bodies like the UN) are already well aware of the safety problem. Institutions are set up, work is being done. What is there to be gained from involving the public to an even greater extent, poisoning and inevitably simplifying the discourse, and adding more hard-to-control momentum? I can imagine a few answers (not enough being done at present, fear of market forces eventually overwhelming governance, a “democratic mindset”), but none of those seem convincing in the face of the above.
To tie this to the environmental movement: wouldn’t it be much better for the world if it were an uninspiring issue? It seems to me that this would prevent the anti-nuclear movement from being solidified by the momentum, Extinction Rebellion from promoting degrowth, etc., and instead semi-sensible policies would get considered somewhere in the bureaucracies of the states.