On how governance institutional designs cannot restrain corporate, criminal and political groups of humans who gain increasing power from training and exploiting increasingly more capable – eventually power-seeking – machine learning architectures.
As elaborating on the human-to-human stage of the previous post.
Excerpts compiled below:
> How can governments and companies cooperate to reduce misuse risks from AI and equitably distribute its benefits?
that people and governments simply do not have any way to understand and mitigate the significant capability advantage held by the companies that develop and deploy AI.
that the governance challenges perceived by many/most regular American people to be the most likely to impact individual people “soon” (i.e., within the next decade), and which were also considered by them to be “the highest priority, most significant/important issues”, etc, in regards to AI, machine learning, and enhanced use of technology in general, were:
> 1. “Preventing AI-assisted surveillance from violating privacy and civil liberties”.
> 2. “Preventing AI from being used to spread fake and harmful content online”.
> 3. “Preventing AI cyber attacks against governments, companies, organizations, and individuals”.
> 4. “Protecting data privacy”.
that the issues listed by the common people as being of ‘primary concern’ are actually less likely to be personally noticeable than the use of AI tools by criminals, by businesses engaged in predatory practices, by cultural/cult leaders (operating in their own interests), by politicians with predominantly private interests, etc, to do the kind of sense making (perception) and action taking (expression) necessary to create better honeypots; to create more, deeper, and more complex entanglements; and to implement more effective and efficient extraction and extortion of resources – at higher frequency, intensity, and consequentiality, at larger scale, more quickly, more invisibly/covertly, etc – in more and more ways that are more and more difficult to avoid, prevent, mitigate, and heal/restore from, for larger and more varied fractions of the population.
where/moreover, as these tools become more and more widespread, effective, etc, that more and more people will be using them, and/or will find that they have to use them in order to remain competitive with their neighbors (i.e., as per any other/similar multi-polar trap scenario), such that the prevalence and variety of such traps, risks, harms, costs, etc, increases everywhere, systemically – with universal extraction occurring to such an extent, in so many ways, for so many resource kinds, with so many degrees of resource motion that are overall globally unconscious, that the net effect is eventual inexorable system/civilization/cultural collapse.
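The multi-polar trap named above can be sketched as a simple symmetric game. This is an illustrative model only, not from the source, and the payoff numbers are assumptions: each actor chooses whether to adopt the extractive AI tooling, adoption is the individually rational choice regardless of what the neighbor does, yet mutual adoption leaves everyone worse off than mutual restraint.

```python
# A minimal sketch (not from the source) of the multi-polar trap described
# above, modeled as a symmetric two-player game. Payoff numbers are
# illustrative assumptions only.

# payoffs[my_choice][their_choice] -> my payoff
payoffs = {
    "restrain": {"restrain": 3, "adopt": 0},  # restrained vs. tooled-up neighbor: exploited
    "adopt":    {"restrain": 4, "adopt": 1},  # mutual adoption: worse than mutual restraint
}

def best_response(their_choice):
    """Return whichever choice maximizes my payoff, given the neighbor's choice."""
    return max(payoffs, key=lambda mine: payoffs[mine][their_choice])

# Adoption dominates: it is the best response to either neighbor choice...
assert best_response("restrain") == "adopt"
assert best_response("adopt") == "adopt"
# ...yet the resulting equilibrium (adopt, adopt) pays everyone less
# than mutual restraint would.
assert payoffs["adopt"]["adopt"] < payoffs["restrain"]["restrain"]
```

The point of the sketch is only the dominance structure: no individual actor can unilaterally escape, which is why the text argues the trap must be addressed at the level of governance architecture rather than individual choice.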
that the overall effect of introducing AI/machine learning is that it ends up being used for more effective social pathology, as evidenced in the increasing occurrence of sophisticated bank fraud, stock market manipulation, back-room dealing, government bailouts, etc.
that most people – including people in government – do not actually realize/understand the most likely risks/hazards/costs associated with widespread AI use/deployment.
> Can existing governments be used to prevent or regulate the use of AI and/or other machine learning tech by predatory people in predatory ways?
as in: Can governments make certain harmful, risky, or socially costly activities illegal, and also effectively enforce those new laws?
as to actually/effectively protect individuals/groups from the predatory actions of other AI/machine/weapon-empowered individuals/groups, in ways that favor:
- making the right outcome (as individually and socially beneficial) much more likely than the wrong/harmful outcome.
- early detection of risks, harms, costs, law violations, etc.
- the effective, complete, proactive mitigation of such risks/harms/costs, etc.
- the restoration and healing of harm, reparation of cost, etc, as needed to restore actual holistic wholeness of individuals, families, communities, cultures, etc.
^– where in general, no; not with the governance structures/methodologies currently in place.
that only much more effective, actually good governance structures will have any hope of mitigating the real risks/costs/harms of any substantial new technology based on complexity itself (i.e., examples such as AI, machine learning, biotech, pharmaceuticals, and all intersections and variants of these).
where, in any contest between people savvy with AI use (and with the rate of change of that technology and its use) and the likely naivety of people in government attempting to regulate that AI and its use; and where extensive, very well funded industry lobbyists are all (much) more knowledgeable and skillful, and are moreover themselves empowered with the use of the tech itself, so as to either influence the policy makers, or to be/become the policy makers themselves, and thus to serve their own interests (rather than the interests of the actual public good); that even someone who actually has the public interest in mind, and who somehow manages, by complete accident, to find themselves at a government post, will for sure have too many things, of way too much complexity, concurrently occurring, for such an ostensive government regulator to have or provide the sufficient attention and understanding that would actually be needed to regulate the AI and machine learning industry, and/or its applications and/or uses, in anything at all approaching an effective and actually risk-mitigating manner – even when considering acute problems only, leaving aside the complete non-address of long term problems; as consistent with nearly all historical precedent.
that artificial intelligence systems are tools in the hands of psychopaths – people who, in their complete incapacity to feel the pain of others, are characteristically and unusually unable to relate to such feelings, or to the feeling/meaning/value in/of, or in association with, other humans at all, as conscious, alive, and worthwhile beings with value, meaning, and agency, a will and sovereignty of their own.
where psychopaths have tendencies aligned with the nature of the machines themselves; that this near perfect mating of solely-personal-benefit agency with the soulless yet adaptable, responsive nature of the machine intelligence process makes for a significantly enhanced psychopath with new superpowers.
where leaders (either in business or governance, though more typically in business) have learned to ‘do whatever it takes’ to climb the social ladder (on the backs of whomever) and to ‘win regardless of whatever cost’ (to others, and maybe to their (future) selves); that such machine learning tools become indispensable to the operations of the business itself, enabling increased efficiency of extraction across all networks of capability.
as combining Metcalfe’s law with network commerce to build the ultimate parasitic system – intimately hostile to all humans, and possibly to all of life – through the will and agency of the humans who elect to use such tools, and/or who moreover may be required to use such tools so as to effectively continue to compete with (their illusion of) (the capabilities of) “the other guy”.
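The scaling intuition behind the Metcalfe's-law reference can be made concrete. This is a minimal sketch, not from the source: the number of potential pairwise connections among n network participants grows as n(n−1)/2, roughly the square of n, so an extractive actor positioned across a network sees its reachable entanglement surface grow quadratically while each individual participant's own view grows only linearly.

```python
# A minimal sketch (not from the source) of Metcalfe's-law scaling:
# potential pairwise connections in a network of n participants.

def pairwise_links(n: int) -> int:
    """Number of potential pairwise connections among n participants: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_links(n))
# a 100x larger network has roughly 10,000x the potential connections,
# which is the asymmetry the parasitic-system argument above leans on
```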
that new governance (and economic) architectures will be needed – ones that are inherently anti-psychopathic and anti-corruptible – to be anywhere near capable of dealing with situations like this.
Institutions Cannot Restrain Dark-Triad AI Exploitation
Link post
→ Read link to Forrest Landry’s blog for more.
Note: Text is laid out in his precise research note-taking format.