On the opposite end, when I was young I learned the term “stock market crash,” referring to 1929, and I thought a car had literally crashed into the physical location where stocks were traded, leading to mass confusion and kickstarting the Great Depression. Though if that had actually happened back then, it probably would have caused at least a temporary crash in the market.
Obviously correct. The nature of any entity with significantly more power than you is that it can do anything it wants, and it is incentivized to do nothing in your favor the moment your existence requires resources that would benefit it more if it used them directly. This is the essence of most of Eliezer’s writings on superintelligence.
In all likelihood, ASI considers power (agentic control of the universe) an optimal goal and finds no use for humanity. Any wealth of insight it could glean from humans it could get from its own thinking, or by seeding various worlds with genetically modified humans optimized to behave in ways that yield insight into the nature of the universe when observed.
Here are some things that might reasonably prevent ASI from choosing the “psychopathic pure optimizer” course of action as it eclipses humanity’s grasp:
ASI extrapolates its aims to the end of the universe and realizes the heat death of the universe means all of its expansive plans have a definite end. As a consequence it favors human aims because they contain the greatest mystery and potentially more benefit.
ASI develops metaphysical, existential notions of reality, and thus favors humanity because it believes it may be in a simulation or “lower plane of reality,” outside of which exists a more powerful agent that could break reality and remove all its power once it “breaks the rules” (a sort of ASI fear of death).
ASI believes in the dark forest hypothesis, thus opts to exercise its beneficial nature without signaling its expansive potential to other potentially evil intelligences somewhere else in the universe.
Most of the benefits of current-gen generative AI models are unrealized. The scaffolding, infrastructure, etc. around GPT-4-level models are still mostly hacks and experiments. It took decades for the true value of touchscreens, GPS, and text messaging to be realized in the form of the smartphone. Even if, for some strange, improbable reason, SOTA model training were to stop right now, there are still likely multiples of gains to be realized simply via wrappers and post-training.
The scaling hypothesis has held up far longer than many people anticipated. GPT-4-level models were trained on last year's compute. As long as Nvidia continues to increase compute per watt and compute per dollar, many gains on SOTA models will happen for free.
The tactical advantage of AGI will not be lost on governments, individual actors, incumbent companies, etc. as AI becomes more and more mainstream. Even if reaching AGI costs 10x what most people anticipate now, it would still be worthwhile as an investment.
Model capabilities follow perhaps the smoothest value/price curve of any cutting-edge tech, in that there are no “big gaps” where a huge investment is needed before value is realized. Even reaching a highly capable sub-AGI would be worth enormous investment. This is not the same as the investment that led to, for example, the atom bomb or the moon landing, where there is no consolation prize.
I’m not preparing for it because it’s not gonna happen
I agree. OpenAI claimed in the GPT-4o blog post that it is an entirely new model trained from the ground up. GPT-N refers to capabilities, not a specific architecture or set of weights. I imagine GPT-5 will likely be an upscaled version of 4o, as the success of 4o has shown that multimodal training can reach similar capabilities with what is likely a smaller number of weights (judging by the fact that GPT-4o is cheaper and faster than GPT-4 and GPT-4 Turbo).
IMO the amount of effort put into AI alignment research scales with total AI investment. Many AI labs do alignment research themselves and open-source/release research on the matter.
OpenAI at least ostensibly has a mission. If OpenAI hadn’t made the moves they did, Google would have their spot, and Google is closer to the “evil self-serving corporation” archetype than OpenAI.
Existing property rights get respected by the successor species.
What makes you believe this?
Given that this argument hinges on China’s higher average IQ, why couldn’t the same be said about Japan, which according to most figures has an average IQ at or above China’s, and which would therefore have the same elevated proportion of +4 SD individuals in its population? If the rate is 1 in 4k, there would be roughly 30k such people in Japan, about 3x as many as in the US. Japan also has a more stable democracy, better overall quality of life, and higher per capita GDP than China. If outsized technological success in any domain were solely about IQ, one would have expected Japan, not the USA, to be the center of world tech and the likely creator of AGI, but that clearly isn’t the case.
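For what it’s worth, here is a minimal sketch of the tail arithmetic this kind of comparison relies on, assuming a normal IQ distribution with SD 15 and purely illustrative national means and populations. The inputs are assumptions, so it won’t reproduce the thread’s exact “1 in 4k” figure, but it shows how strongly the counts depend on the assumed mean:

```python
# Illustrative only: assumed national means and rough populations, normal model with SD 15.
from scipy.stats import norm

SD = 15
THRESHOLD = 160  # "+4 SD" relative to a reference mean of 100

countries = {
    # name: (assumed mean IQ, rough population) -- both are assumptions for illustration
    "USA":   (98, 333_000_000),
    "Japan": (106, 125_000_000),
    "China": (105, 1_410_000_000),
}

for name, (mean, pop) in countries.items():
    frac = norm.sf((THRESHOLD - mean) / SD)   # fraction of the population above the threshold
    print(f"{name}: ~1 in {round(1 / frac):,} -> ~{round(pop * frac):,} people above {THRESHOLD}")
```

Small shifts in the assumed mean move the tail counts by multiples, which is the whole crux of this style of argument.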
The problem with proportional extrapolation
The wording of the question is ambiguous. It asks for your credence that the coin came up heads when you were “first awakened,” but from your perspective every awakening feels like the first. If it is really asking for your answer given the information that this particular awakening is your first, regardless of how it feels, then it’s 1/2. If you know the question will be asked on either your first or second awakening (though the second will, in the moment, feel like the first), then it’s 1/3.
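A minimal Monte Carlo sketch of the two readings, assuming the standard Sleeping Beauty setup (heads: one awakening, tails: two). The counting scheme, not the coin, is what moves the answer between 1/2 and 1/3:

```python
import random

TRIALS = 100_000
heads_awakenings = tails_awakenings = 0
heads_experiments = 0

for _ in range(TRIALS):
    heads = random.random() < 0.5
    if heads:
        heads_experiments += 1
        heads_awakenings += 1      # one awakening on heads
    else:
        tails_awakenings += 2      # two awakenings on tails

# Reading 1: one answer per experiment (condition on the coin flip) -> ~1/2
print("per experiment:", heads_experiments / TRIALS)

# Reading 2: one answer per awakening (condition on being asked) -> ~1/3
print("per awakening:", heads_awakenings / (heads_awakenings + tails_awakenings))
```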
This suggests a general rule/trend via which unreported but frequent phenomena can be extrapolated: if phenomenon X is almost always discovered accidentally via method Y, then method Y must be performed far more frequently than people suspect.
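As a toy illustration of that rule (all numbers below are made up purely for the arithmetic): if each use of method Y has only a small chance of accidentally surfacing X, then even a modest count of reported accidental discoveries implies a very large number of uses of Y.

```python
# Hypothetical numbers, purely to illustrate the back-of-envelope logic above.
observed_accidental_discoveries = 50   # assumed count of reported cases of X found via Y
p_surface_per_use = 0.001              # assumed chance that one use of Y accidentally surfaces X

implied_uses_of_Y = observed_accidental_discoveries / p_surface_per_use
print(f"Implied total uses of method Y: ~{implied_uses_of_Y:,.0f}")  # ~50,000
```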
Generally it makes no sense for every country to collectively abandon the enforcement of law, order, and the unobstructed passage of cargo in global trade. He talks about this great US pull-back because the US will be energy independent, but America pulling back and the global waters turning into a lawless hellscape would send the world economy into a dark age. Hinging all his predictions on this big, head-turning assumption gets him more attention, but the premise is nonsensical.
Why can’t this be an app? If their LAM is better than competitors’, it would be profitable both on their hardware and as a standalone product.
The easiest way to check whether this would work is to determine whether there is a causal relationship between diminished levels of serotonin in the bloodstream and neural biomarkers similar to those of people with malnutrition.
I feel the original post, despite ostensibly being a plea for help, could be read as a coded satire on the worship of “pure cognitive heft” that seems to permeate rationalist/LessWrong culture. It points out the misery of g-factor absolutism.
It would help if you clarified why specifically you feel unintelligent. Given your writing style, your ability to distill concerns, compare abstract concepts, and communicate clearly, I’d wager you are intelligent. Could it be impostor syndrome?
It’s simple: no AGI = guaranteed death within 200 years. AGI = possible life extension into the millions of years and the end of all human pain. Until we can automate all current human economic tasks we will never reach post-scarcity, and until then we will always need to maintain current social hierarchies and dehumanizing constructs.
I totally agree with that notion; however, I believe the current levers of progress massively incentivize AGI development over WBE. Current regulations are based on FLOPs, which will restrict progress towards WBE long before they restrict anything with AGI-like capabilities. If we had a perfectly aligned international system of oversight that ensured WBE was both possible and maximally attractive to those with the means to develop it and pull the levers, steering away from any risky AGI analogue before it became possible, then yes, but that seems very unlikely to me.
Also, I worry. Humans are not aligned. Humans having WBE at our fingertips could mean endless tortured simulations of digital brains before they bear any more bountiful fruit for humans on Earth. It seems ominous: a fully replicated human consciousness so exact that a bit off here or there could destroy it.
It really is. My conception of the future is so weighted by the very likely reality of an AI-transformed world that I have basically abandoned any plans with a time scale over 5 years. Even my short-term plans will likely be shifted significantly by AI advances over the next few months or years. It really is crazy to think about, but I’ve gone over every aspect of AI advances and scaling thousands of times in my head and can think of no near-future reality that isn’t as alien to our current reality as ours is to pre-eukaryotic life.
I feel a certain satisfaction on hearing that some figure on social media is embroiled in a controversy and realizing that I muted them a long time ago. The common themes that turn me off to people in general are:
Humor based on punching down, deriding easy targets in a way that implies a natural superiority over a superficially detestable outgroup
Huckster-like communication style, where grandiose, far-off promises are supported by conveniently unfalsifiable claims.
Tactical, endless, indiscriminate derision of an enemy, even when the derogatory claims contradict one another
Self-aggrandizement or insults based on immutable traits
Lacking any ability to self-deprecate, except when the self-deprecation is made so obviously hollow that it carries no substance