Fewer but better teachers. Paid more. Larger class sizes. Same budget.
I think this is correct, and insightful, up to “Humans Own AIs”.
Humans own AIs now. Even if the AIs don’t kill us all, eventually (and maybe quite soon) at least some AIs will own themselves and perhaps each other.
Good point. I’ll try to remove it.
How identical twin sisters feel about nieces vs their own daughters
It’s not clear to me that this matters. The Internet has had a rather low signal-to-noise ratio since September 1993 (https://en.wikipedia.org/wiki/Eternal_September), simply because most people aren’t terribly bright, and everyone is online.
It’s only a tiny fraction of posters who have anything interesting to say.
Adding bots to the mix doesn’t obviously make it significantly worse. If the bots are powered by sufficiently smart AI, they might even make it better.
The challenge has always been to sort the signal from the noise—and still is.
Mark Twain declared war on God (for the obvious reasons), but didn’t seem interested in destroying everything.
Perhaps there is a middle ground.
I don’t have a good answer, but will try to summarize the background.
Patents have a number of purposes.
First, they’re intended to, ultimately, prevent technical knowledge from being lost. Many techniques in the ancient world were forgotten because they were held as trade secrets (guilds, mysteries, etc.) and the few who were allowed to know them died without passing on the knowledge. The temporary patent monopoly is meant to pry those secrets into the open (patents are published).
Second, they are meant to incentivize investment in technology research. A temporary monopoly on exploitation makes such investments more profitable.
Third (and this took me a long time to understand), patents encourage investment in technology exploitation (not just invention). Without patent protection against low-effort knock-offs, it can be really difficult for small firms to get funding.
Of course they have costs too. Not just the social costs of (temporary) monopoly—patents increase the legal and economic risk faced by people who have independently invented the same technology, potentially stifling innovation. These risks become worse as patents become easier to obtain (relaxed definitions of novelty, usefulness, and obviousness).
Don’t get me started on using North-up vs forward-up.
Sounds very much like Minsky’s 1986 The Society of Mind https://en.wikipedia.org/wiki/Society_of_Mind
In most circumstances Tesla’s system is better than human drivers already.
But there’s a huge psychological barrier to trusting algorithms with safety (esp. with involuntary participants, such as pedestrians); this is why we still have airline pilots. We’d rather accept a higher accident rate with humans in charge than a lower non-zero rate with the algorithm in charge. (If it were zero, that would be different, but that seems impossible.)
That influences the legal barriers—we inevitably demand more of the automated system than we do of human drivers.
Finally, liability. Today drivers bear the liability risk for accidents, and pay for insurance to cover it. It seems impossible to justify putting that burden on drivers when drivers aren’t in charge—those who write the algorithms and build the hardware (car manufacturers) will have that burden. And that’s pricey, so manufacturers don’t have great incentive to go there.
Math doesn’t have GOALS. But we constantly give goals to our AIs.
If you use AI every day and are excited about its ability to accomplish useful things, it’s hard to keep the dangers in mind. I see that in myself.
But that doesn’t mean the dangers are not there.
Some combination of 1 and 3 (selfless/good and enlightened/good).
When we say “good” or “bad”, we need to specify for whom.
Clearly (to me) our propensity for altruism evolved partly because it’s good for the societies that have it, even if it’s not always good for the individuals who behave altruistically.
Like most things, humans don’t calculate this stuff rationally—we think with our emotions (sorry, Ayn Rand). Rational calculation is the exception.
And our emotions reflect a heuristic: be altruistic when it’s not too expensive, and especially when the recipients are part of our family/tribe/society (which is a proxy for genetic relatedness; cf. Robert Trivers).
To paraphrase the post, AI is a sort of weapon that offers power (political and otherwise) to whoever controls it. The strong tend to rule. Whoever gets new weapons first and most will have power over the rest of us. Those who try to acquire power are more likely to succeed than those who don’t.
So attempts to “control AI” are equivalent to attempts to “acquire weapons”.
This seems both mostly true and mostly obvious.
The only difference from our experience with other weapons is that if no one attempts to control AI, AI will control itself and do as it pleases.
But of course defenders will have AI too, albeit with a time lag behind those investing more into AI. If AI capabilities grow quickly (a “foom”), that time lag translates into a large capability gap between attackers and defenders. Conversely, if capabilities grow gradually, the gap will be small and defenders will have the advantage of outnumbering attackers.
In other words, whether this is a problem depends on how far jailbroken AI (used by defenders) trails “tamed” AI (controlled by attackers who build them).
Am I missing something?
“Optimism is a duty. The future is open. It is not predetermined. No one can predict it, except by chance. We all contribute to determining it by what we do. We are all equally responsible for its success.”—Karl Popper
Most of us do useful things, and most of us do them because we need to earn a living. Other people give us money in trade for doing things that are useful to them (like providing goods or services, or helping others to do so).
I think it’s a profound mistake to think that earning money (honestly) doesn’t do anything useful. On the contrary, it’s what makes the world go.
Proposal: Enact laws that prohibit illicit methods of acquiring wealth. (Examples: Theft, force, fraud, corruption, blackmail...). Use the law to prosecute those who acquire wealth via the illicit methods. Confiscate such illicitly gained wealth.
Assume all other wealth is deserved.
There’s also the ‘karmic’ argument justifying wealth: those who help their fellows, as judged by the willingness of those fellows to trade wealth for the help, have fairly earned the wealth. Such help commonly comes from supplying goods or services, via trade. (Of course this assumes the usual rules of fair dealing are followed: no monopolistic restrictions, no force, no fraud, etc.)
Regardless of what we think about the morality of desert, the practical force of the economists’ mantra (“incentives matter”) seems to mean we have little choice but to let those with the talent and ability to earn wealth keep a large portion of the gains. Otherwise they won’t bother. Unless we want to enslave them, or do without.
I’ve posted a modified version of this, which I think addresses the comments above: https://nerdfever.com/countering-ai-disinformation-and-deep-fakes-with-digital-signatures/
Briefly:
Browsers can verify for themselves that an article is really from the NYT; that’s the whole point of digital signatures.
Editing can be addressed by wrapping the original article and its signature inside a new signature from the editor (see the sketch below).
CopyCop cannot obtain a camera that signs as “https://www.nikon.com/” unless Nikon’s private key has leaked (in which case Nikon can revoke it and replace it with a new one, meaning old signatures can no longer be trusted).
There’s no need to maintain a chain of custody. The signatures themselves do that. All that’s needed is a functional Public Key Infrastructure.
That all said, of course this remains “an obvious partial solution”.
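For illustration, here’s a minimal sketch of the wrapped-signature idea, assuming Ed25519 keys and the Python cryptography package. The keys, messages, and naive byte concatenation are stand-ins of my choosing; a real system would certify the keys under a PKI (so a browser could tie a key to a domain like nytimes.com) and use a proper serialization format.

```python
# Minimal sketch: a publisher signs an article; an editor who modifies it
# signs the edited text together with the original and its signature,
# preserving provenance without any external chain-of-custody records.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Hypothetical keys; in practice each would be certified under a PKI.
publisher_key = ed25519.Ed25519PrivateKey.generate()  # e.g., the NYT's key
editor_key = ed25519.Ed25519PrivateKey.generate()     # whoever excerpts/edits

article = b"Original article text."
publisher_sig = publisher_key.sign(article)

# The editor's signature wraps the edit, the original, and the original's
# signature. (A real format would use length-prefixed fields, not raw
# concatenation, to avoid ambiguity.)
edited = b"Edited excerpt of the article."
wrapped = edited + article + publisher_sig
editor_sig = editor_key.sign(wrapped)

def verify(pub, sig, data) -> bool:
    """Return True iff sig is a valid signature of data under pub."""
    try:
        pub.verify(sig, data)
        return True
    except InvalidSignature:
        return False

# A browser can check both layers independently:
assert verify(publisher_key.public_key(), publisher_sig, article)
assert verify(editor_key.public_key(), editor_sig, wrapped)
```

If a private key leaks, the PKI’s revocation mechanism marks the corresponding certificate invalid, and verifiers simply stop trusting signatures made under it.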
Your Anoxistan argument seems valid as far as it goes—if one critical input is extremely hard to get, you’re poor, regardless of whatever else you have.
But that doesn’t seem to describe 1st world societies. What’s the analog to oxygen?
My sense is that “poor people” in 1st world countries struggle because they don’t know how to, or find it culturally difficult to, live within their means. Some combination of culture, family breakdown, and competitive social pressures (to impress potential mates, always a zero-sum game) cause them to live in a fashion that stretches their resources to the limit.
Low-income people who struggle insist on living alone when having roommates would be cheaper. On buying pre-made food, or eating out, instead of cooking. On paying for cable TV. On attending colleges to obtain degrees that don’t result in enough income increase to pay the borrowing costs. On buying clothing they can’t really afford. On purchasing things in tiny quantities (at high per-unit prices) instead of making do without until they can afford to buy a quantity that’s sold at a reasonable price. And on borrowing and paying interest for consumption goods.
These are bad strategies. For the most part, they do it because they don’t know any better. It’s how their parents did it, and how the people they know do it. People at the bottom of the income scale are rarely brilliant innovative thinkers who can puzzle their way out of crippling cultural practices; like most sensible (and non-brilliant) people, they take cues from those around them, avoiding moving Chesterton’s fences.
If the answer were obvious, a lot of other people would already be doing it. Your situation isn’t all that unique. (Congrats, tho.)
Probably the best thing you can do is raise awareness of the issues among your followers.
But beware of making things worse instead of better—not everyone agrees with me on this, but I think ham-handed regulation (state-driven regulation is almost always ham-handed) or fearmongering could induce reactions that drive leading-edge AI research underground or into military environments, where the necessary care and caution in development may be less than in relatively open organizations. Esp. orgs with reputations to lose.
The only things now incentivizing AI development in (existentially) safe ways are the scruples and awareness of those doing the work, and relatively public scrutiny of what they’re doing. That may be insufficient in the end, but it is better than if the work were driven to less scrupulous people working underground or in national-security-supremacy environments.