Batteries also improve the efficiency of hybrid peaker plants by reducing idling and smoothing out ramp-up and ramp-down transitions.
I’ve tried PB2 and it was gross enough that I wondered if it had gone bad. It turns out that’s just how it tastes. I’m jealous of people for whom it approximates actual peanut butter.
Unlike the Hobbes snippet, I didn’t feel like the Hume excerpt needed much translation to be accessible. I think I would decide on a case-by-case basis whether to read the translated version or the original rather than defaulting to one or the other.
Do you have any papers or other resources you’d recommend that cover the latest understanding? What is the SOTA for Bayesian NNs?
It’s probably worth noting that there’s enough additive genetic variance in the human gene pool RIGHT NOW to create a person with a predicted IQ of around 1700.
I’d be surprised if this were true. Can you clarify the calculation behind this estimate?
The example of chickens bred 40 standard deviations away from their wild-type ancestors is impressive, but it’s unclear if this analogy applies directly to IQ in humans. Extrapolating across many standard deviations in quantitative genetics requires strong assumptions about additive genetic variance, gene-environment interactions, and diminishing returns in complex traits. What evidence supports the claim that human IQ, as a polygenic trait, would scale like weight in chickens and not, say, running speed, where there are serious biomechanical constraints?
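For concreteness, here is a minimal sketch of the kind of naive extrapolation the 40 SD figure implies: the textbook breeder’s equation R = h²S applied generation after generation, with invented parameter values. It only produces huge cumulative shifts if heritability and additive variance are assumed to stay constant forever, which is exactly the assumption being questioned.

```python
# Minimal sketch of cumulative selection response under the breeder's equation,
# R = h^2 * S, assuming heritability and additive variance never change.
# The parameter values below are illustrative placeholders, not estimates.

def cumulative_response_sd(generations: int, h2: float, selection_intensity: float) -> float:
    """Total shift in the trait mean, in phenotypic standard deviations,
    if the same per-generation response repeats indefinitely."""
    per_generation = h2 * selection_intensity  # response per generation, in SD
    return per_generation * generations

# e.g., h^2 = 0.5 and a standardized selection differential of 1 SD per generation
# would take ~80 generations to move the mean by 40 SD -- but only if additive
# variance is never depleted and no biological constraints kick in.
print(cumulative_response_sd(80, h2=0.5, selection_intensity=1.0))  # 40.0
```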
I’m not sure the complexity of a human brain is necessarily bounded by the size of the human genome. Instead of interpreting DNA as containing the full description, I think treating it as the seed of a procedurally generated organism may be more accurate. You can’t reconstruct an organism from DNA without an algorithm for interpreting it. Such an algorithm contains more complexity than the DNA itself; the protein folding problem is just one piece of it.
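To make the seed-versus-description distinction concrete, here is a toy sketch (not a biology model; the rewrite rules and numbers are invented) in which a tiny seed is expanded by a generator whose fixed rules, not the seed itself, carry most of the descriptive work:

```python
# Toy illustration: a short "seed" plus a more complex generator program.
# The seed alone doesn't determine the output; the rules encoded in the
# generator carry much of the descriptive complexity.
import random

def grow(seed: int, steps: int = 5) -> str:
    """Procedurally expand a structure from a small seed using fixed rewrite rules."""
    rules = {"A": "AB", "B": "AC", "C": "A"}  # the 'interpreter': the rules live here, not in the seed
    rng = random.Random(seed)
    structure = rng.choice(list(rules))       # the seed only picks a starting symbol
    for _ in range(steps):
        structure = "".join(rules[s] for s in structure)
    return structure

# The same seed run through different rules yields a different "organism",
# so the output can't be reconstructed from the seed alone.
print(grow(42))
```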
This seems likely. Sequences with more than countably many terms are a tiny minority in the training data, as are sequences including any ordinals. As a result, you’re likely to get better results using less common but more specific language, i.e., terms whose vocabulary is less overloaded, rather than trying to disambiguate “countable sequence”.
For a sentient, sapient entity, this would have been a very bad position to be put into, and any possible behaviour would have been criticised—because the AI either does not obey humans, or obeys them and does something evil, both of which are concerning.
I agree. This paper gives me the gut feeling of “gotcha journalism”, whether justified or not.
This is just a surface-level reaction though. I recommend Zvi’s post that digs into the discussion from Scott Alexander, the authors, and others. There’s a lot of nuance in framing and interpreting the paper.
Did you mean to link to my specific comment for the first link?
The main difference in my mind is that a human can never be as powerful as a potential ASI and cannot dominate humanity without the support of sufficiently many cooperative humans. For a given power level, I agree that humans are likely scarier than an AI of that power level. The scary part about AI is that its power level isn’t bounded by human biological constraints, and the capacity to do harm or good is correlated with power level. Thus AI is more likely than humans to produce extinction-level dangers as tail risks, even if it’s more likely to be aligned on average.
Related question: What is the least impressive game current LLMs struggle with?
I’ve heard they’re pretty bad at Tic Tac Toe.
I’m new to the term AIXI and went three links deep before I learned what it refers to. I’d recommend making this journey easier for future readers by linking to a definition or explanation near the beginning of the post.
The terms “tactical voting” or “strategic voting” are also relevant.
I think your assessment may be largely correct, but it’s worth considering that things are not always nicely compressible.
This review led me to find the following podcast version of Planecrash. I’ve listened to the first couple of episodes and the quality is quite good.
this concern sounds like someone walking down a straight road and then closing their eyes cause they know where they want to go anyway
This doesn’t sound like a good analogy at all. A better analogy might be a stylized subway map compared to a geographically accurate one. Sometimes removing detail can make it easier to process.
I don’t think it’s necessarily GDPR-related, but the names Brian Hood and Jonathan Turley make sense from a legal liability perspective. According to Ars Technica:
Why these names?
We first discovered that ChatGPT choked on the name “Brian Hood” in mid-2023 while writing about his defamation lawsuit. In that lawsuit, the Australian mayor threatened to sue OpenAI after discovering ChatGPT falsely claimed he had been imprisoned for bribery when, in fact, he was a whistleblower who had exposed corporate misconduct.
The case was ultimately resolved in April 2023 when OpenAI agreed to filter out the false statements within Hood’s 28-day ultimatum. That is possibly when the first ChatGPT hard-coded name filter appeared.
As for Jonathan Turley, a George Washington University Law School professor and Fox News contributor, 404 Media notes that he wrote about ChatGPT’s earlier mishandling of his name in April 2023. The model had fabricated false claims about him, including a non-existent sexual harassment scandal that cited a Washington Post article that never existed. Turley told 404 Media he has not filed lawsuits against OpenAI and said the company never contacted him about the issue.
Interestingly, Jonathan Zittrain is on record saying the Right to be Forgotten is a “bad solution to a real problem” because “the incentives are clearly lopsided [towards removal]”.
User throwayian on Hacker News ponders an interesting abuse of this sort of censorship:
I wonder if you could change your name to “April May” and submitted CCPA/GDPR what the result would be..
It’s not a classic glitch token. Those did not cause the current “I’m unable to produce a response” error that “David Mayer” does.
Participants are at least somewhat aligned with non-participants. People care about their loved ones even if they are a drain on resources. That said, in human history, we do see lots of cases where “sub-marginal participants” are dealt with via genocide or eugenics (both defined broadly), often even when it isn’t a matter of resource constraints.
When humans fall well below marginal utility compared to AIs, will their priorities matter to a system that has made them essentially obsolete? What happens when humans become the equivalent of advanced Alzheimer’s patients who’ve escaped from their memory care units and are trying to participate in general society?