this should show up as a completely dark sphere in the universe
Which, notably, we do see (https://en.m.wikipedia.org/wiki/Boötes_void), though voids like that don’t conflict with our models of how the universe would end up naturally.
100% this. Some optimists make money, some get scammed.
Reporting back two weeks later: my phone usage is down about 25%, but that’s within my usual variance. If there’s an effect, it’s small enough not to be immediately obvious, and I’d need more data to get anything resembling a low p-value.
Anecdotally, though, I’m quite liking having my phone on “almost-greyscale” (chromatic reading mode on my OnePlus phone). When I have to turn it off, the colours feel overwhelming. It also feels like it encourages me to focus on the real world, rather than staring at my phone in a public place.
Interesting. Complete greyscale sounds like a lot of hassle, but I’m going to try turning the contrast on my phone down to nearly zero and see if I notice any difference.
I’m curious about your reasons for making your monitors greyscale. What are the benefits of that for you?
If I’m not mistaken (and I’m not a biologist so I might be), alcohol mainly impacts the brain’s system 2, leaving system 1 relatively intact. That lines up well with this post.
If EfficientZero-9000 is using 10,000 times the energy of John von Neumann, and thinks 1,000 times faster, it’s actually 10 times less energy efficient.
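Spelling out the arithmetic with those hypothetical numbers, the energy cost per unit of thinking is the energy ratio divided by the speed ratio:

$$\frac{10{,}000 \ \text{(energy)}}{1{,}000 \ \text{(speed)}} = 10 \times \text{von Neumann's energy per unit of thought},$$

i.e. an order of magnitude worse energy efficiency than the brain.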
The point of this post is that there is some small amount of evidence that you can’t make a computer think significantly faster, or better, than a brain without potentially critical trade-offs.
I don’t agree with Eliezer here. I don’t think we have a deep enough understanding of consciousness to make confident predictions about what is and isn’t conscious beyond “most humans are probably conscious sometimes”.
The hypothesis that consciousness is an emergent property of certain algorithms is plausible, but only that.
If that turns out to be the case, then whether or not humans, GPT-3, or sufficiently large books are capable of consciousness depends on the details of the requirements of the algorithm.
If I’m not mistaken, that book is behaviourally equivalent to the original algorithm but is not the same algorithm: viewed from the outside, the two have different computational complexity. There are a number of ways of defining program equivalence, but equivalence is different from identity; “A is equivalent to B” doesn’t mean “A is B”.
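A toy illustration of equivalence without identity (a hypothetical example, nothing from the original post): the two functions below agree on every input in their shared domain, but one runs an algorithm while the other is a giant lookup table, and they have very different complexity and internal structure.

```python
# Two behaviourally equivalent "programs" that are clearly not the same algorithm.

def fib_algorithm(n: int) -> int:
    """Compute the n-th Fibonacci number iteratively: O(n) time, O(1) space."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The "book" version: every answer precomputed and stored. O(1) lookup,
# but the table had to be generated by some other process in the first place.
FIB_TABLE = {n: fib_algorithm(n) for n in range(1000)}

def fib_lookup(n: int) -> int:
    """Return the n-th Fibonacci number by table lookup (only defined for n < 1000)."""
    return FIB_TABLE[n]

# Behaviourally equivalent on the shared domain...
assert all(fib_algorithm(n) == fib_lookup(n) for n in range(1000))
# ...but equivalence is not identity: they are different computations.
```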
See also: the Chinese Room argument
While it’s important to bear in mind the possibility that you’re not as below average as you think, I don’t know your case so I will assume you’re correct in your assessment.
Perhaps give up on online dating. “Offline” dating is significantly more forgiving than online.
I think this touches on the issue of the definition of “truth”. A society designates something as “true” when the majority of people in that society believe it to be true.
Using the techniques outlined in this paper, we could regulate AIs so that they only tell us things we define as “true”. At the same time, a 16th century society using these same techniques would end up with an AI that tells them to use leeches to cure their fevers.
What is actually being regulated isn’t “truthfulness”, but “accepted by the majority-ness”.
This works well for things we’re very confident about (mathematical truths, basic observations), but begins to fall apart once we reach even slightly controversial topics. This is exacerbated by the fact that even seemingly simple issues are often actually quite controversial (astrology, flat earth, etc.).
This is where the “multiple regulatory bodies” part comes in. If we have a regulatory body that says “X, Y, and Z are true” and the AI passes their test, you know the AI will give you answers in line with that regulatory body’s beliefs.
There could be regulatory bodies covering the whole spectrum of human beliefs, giving you a precise measure of where any particular AI falls within that spectrum.
I wonder if this makes any testable predictions. It seems to be a plausible explanation for how some people are extremely good at some reflexive mental actions, but not the only one. It’s also plausible that some people are “wired” that way from birth, or that a single developmental event (or a small number of them) leads to their being that way (rather than years of involuntary practice).
I suppose if the hypothesis laid out in this post is true, we’d expect people to get significantly better at some of these “cup-stacking” skills within a few years of being in an environment that builds them. Perhaps it could be tested by seeing if people get significantly better at the “soft skills” required to succeed in an office after a few years of working in one.
Specialising days like that seems like a good idea at first glance, but I get the feeling I’d burn out on meetings pretty quick if all my week’s meetings were scheduled on one day. Being able to use a meeting as a break from concrete thinking to switch to more abstract thinking for a while is very refreshing.
IMO, this is pretty necessary in any shared space. My company does this twice a year for the office umbrella rack, fridge, and cupboard.
This highlights an interesting case where pure Bayesian reasoning fails. While the chance of it occurring randomly is very low (though it may rise when you consider how many chances it has to occur), it is trivial to construct. Furthermore, it potentially applies in any case where we have two possibilities, one of which continually becomes more probable while the other shrinks but persistently doesn’t disappear.
Suppose you are a police detective investigating a murder. There are two suspects: A and B. A doesn’t have an alibi, while B has a strong one (time stamped receipts from a shop on the other side of town). A belonging of A’s was found at the crime scene (which he claims was stolen). A has a motive: he had a grudge against the victim, while B was only an acquaintance.
A naive Bayesian (in both senses) would, with each observation, assign a higher and higher probability to A being the culprit. In the end, though, it turns out that B committed the crime to frame A: he chose a victim A had a grudge against, planted A’s belonging, and forged the receipts.
It’s worth noting that, assuming your priors are accurate, given enough evidence you *will* converge on the correct probabilities. Actually acquiring that much evidence in practice isn’t anywhere near guaranteed, however.
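A toy sketch of that scenario (all likelihoods invented for illustration): a framer deliberately manufactures exactly the evidence you’d expect if A were guilty, so the honest likelihoods under the two hypotheses are nearly identical and the posterior barely moves. The naive detective’s mistake is assigning much lower likelihoods to the framing hypothesis.

```python
# Toy model of the detective example above; every number is made up.
# Hypotheses: A is the culprit, or B framed A.
posterior = {"A did it": 0.5, "B framed A": 0.5}

# Each item: (observation, P(obs | A did it), P(obs | B framed A)).
# A careful framer manufactures the same observations, so the columns match;
# the naive error is putting numbers like 0.1 in the last column.
evidence = [
    ("A has no alibi",                0.7, 0.7),
    ("B has time-stamped receipts",   0.9, 0.9),  # forged receipts look real
    ("A's belonging at the scene",    0.6, 0.6),  # planted evidence looks real
    ("A had a grudge against victim", 0.8, 0.8),  # B chose the victim for this
]

for observation, p_given_a, p_given_b in evidence:
    unnormalised = {
        "A did it": posterior["A did it"] * p_given_a,
        "B framed A": posterior["B framed A"] * p_given_b,
    }
    total = sum(unnormalised.values())
    posterior = {h: p / total for h, p in unnormalised.items()}
    print(f"after {observation!r}: {posterior}")
```

With the honest (matched) likelihoods the posterior stays at 50/50; swap in 0.1s for the framing column and it converges rapidly on A.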
IMO, this is a better way of splitting up the argument that we should be funding AI safety research than the one presented in the OP. My only gripe is with point 2. Many would argue that it wouldn’t be really bad for a variety of reasons, such as that there are likely to be other ‘superintelligent AIs’ working in our favour. Alternatively, if the decision-making were only marginally better than a human’s, it wouldn’t be any worse than a small group of people working against humanity.
I think #1 is the most important here. I’m not a professional economist, so someone please correct me if I’m wrong.
My understanding is that TFP is calculated based on nominal GDP, rather than real GDP, meaning the same products and services getting cheaper doesn’t affect the growth statistic. Furthermore, although the formulation in the TFP paper has a term for “labor quality”, in practice that’s ignored because it’s very difficult to calculate, making the actual calculation roughly (GDP / hours worked). All this means that it’s pretty unsuitable as a measure of how well a technology like the Internet (or AI) improves productivity.
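For reference, the textbook growth-accounting setup (assuming a Cobb-Douglas production function, which is the usual simplification, and not necessarily the exact formulation in the paper referenced above) is:

$$Y = A\,K^{\alpha}(qL)^{1-\alpha} \quad\Longrightarrow\quad A = \frac{Y}{K^{\alpha}(qL)^{1-\alpha}},$$

where $Y$ is output, $K$ capital, $L$ hours worked, $q$ labour quality, and $A$ is TFP. Dropping the labour-quality term $q$ (and, in the crudest reading, the capital term) is what collapses this to something close to GDP per hour worked.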
TFP (and utilization-adjusted TFP even more so) is very useful for measuring the impact of policies, shifts in average working hours, etc. But the main thing it tells us about technology is that “technology hasn’t reduced average working hours”. If you use real GDP instead, you’ll see that exponential growth continues as expected.