Or, as a possible more concrete prompt if preferred: “Create a cost-benefit analysis for EU directive 2019/904, which requires that the caps of all plastic bottles remain attached to the bottles, with the intention of reducing littering and protecting sea life.
Output:
- key costs and benefits table
- economic cost for the beverage industry to make the transition
- expected change in littering, total over the first 5 years
- QALYs lost or gained for consumers throughout the first 5 years”
Downvoted for 3 reasons:
The style strikes me as very AI-written. Maybe it isn’t, but the highly repetitive structure looks exactly like the kind of text I tend to get out of ChatGPT much of the time, which makes it very hard to read.
There are many highly superficial claims here without much reasoning to back them up, and many assertions about what AGI “would” do without elaboration. Take “AGI approaches challenges as problems to be solved, not battles to be won”: first, why? Second, how does this help us if the best way to solve the problem involves getting rid of humans?
Lastly, I don’t get the feeling this post engages with the most common AI safety arguments at all, nor does it engage with evidence from recent AI developments. How do you expect “international agreements” with any teeth in the current arms race, when we can’t even get national- or state-level agreements? While Bing/Sydney was not an AGI, it clearly showed that much of what this post dismisses as anthropocentric projection is realistic, and, currently, maybe even the default of what we can expect of AGI as long as it’s LLM-based. And even if you dismiss LLMs and think of more “Bostromian” AGIs, that still leaves you with instrumental convergence, which blows too many holes in this piece to leave anything of much substance.