Good to know. I’ve used OpenOffice in the past and am regretting not using it on this computer. At least I’m learning :-)
Wow. My encoding options are limited to two Unicode variants, ANSI, and UTF-8. Will any of those work for these purposes?
Thank you. I will try this and see if it helps with the paragraph double spacing problem.
OK, so this is marginally better. I found Notepad and copied and pasted after turning on word wrap. I will continue to tweak until the pagination is not obnoxiously bad.
I seem to be in the process of crashing my computer. I hope to have resolved this issue in approximately 10 minutes.
I know. I’m troubleshooting now :-)
I will try this after I try the above suggestion. Thank you also.
I will try this. Thank you for being constructive in spite of the mess.
GUI … graphical user interface … as in the one this website uses.
This is what happens as a result of my copying and pasting from the document. I have tried several different file formats … this one was .txt, which is fairly universally readable … I ran into the same problem with Kingsoft Writer’s default file format as well.
I will remove this as soon as I have been directed to the appropriate channels. I promise it’s intelligent and well written … I just can’t seem to narrow down where the problem is or what I can do to fix it.
I don’t know how to fix this article … every time I copy and paste, I end up with the formatting all messed up, and the above is the resulting mess. I’m using a freeware program called Kingsoft Writer and would really appreciate any instruction on what I might do to get this into a readable format. Help me please.
The Limits of My Rationality
I came to the conclusion that I needed more quantitative data about the ecosystem. Sure, birds covered in oil look sad, but would a massive loss of biodiversity on THIS beach affect the entire ecosystem? The real question I had in this thought experiment was “how should I prevent this from happening in the future?” Perhaps nationalizing oil drilling platforms would allow governments to better regulate the potentially hazardous practice. There is a game going on whereby some players are motivated by the profit incentive and others are motivated by genuine altruism, but it doesn’t take place on the beach. I certainly never owned an oil rig, and couldn’t really competently discuss the problems associated with actual large high-pressure systems. Does anyone here know if oil spills are an unavoidable consequence of the best long-term strategy for human development? That might be important to an informed decision about how much value to place on the cost of the accident, which would inform my decision about how much of my resources I should devote to cleaning the birds.
From another perspective, it’s a lot easier to quantify the cost for some outcomes … This makes it genuinely difficult to define genuinely altruistic strategies for entities experiencing scope insensitivity. And along that line, giving away money because of scope insensitivity IS amoral. It defers judgment to a poorly defined entity which might manage our funds well or deplorably. Founding a cooperative for the purpose of beach restoration seems like a more ethically sound goal, unless of course you have more information about the bird cleaners. The sad truth is that making the right choice often depends on information not readily available, and the lesson I take from this entire discussion is simply how important it is that humankind evolve more sophisticated ways of sharing large amounts of information efficiently, particularly where economic decisions are concerned.
I would argue that without positive reinforcement to shape our attitudes the pursuit of power and the pursuit of morality would be indistinguishable on both a biological and cognitive level. Choices we make for any reason are justified on a bio-mechanical level with or without the blessing of evolutionary imperatives; from this perspective, corruption becomes a term that may require some clarification. This article suggests that corruption might be defined as the misappropriation of shared resources for personal gain; I like this definition, but I’m not sure I like it enough to be comfortable with an ethics based on the assumption that people are vaguely immoral given the opportunity.
My problem here is that power is a poorly defined state. It’s not something that can be directly perceived. I’m not sure I have a frame of reference for what it feels like to be empowered over others. For this reason alone, I find some of the article’s generalizations about the human condition disturbing—I’m not trying to alienate so much as prevent myself from being alienated by a description of the human condition wherein my emotional palette does not exist.
So I intend to suggest an alternative interpretation of why “power corrupts” and you all on the internet can tell me what you think, but first I think I need a better grasp on what is meant here by the process of corruption. The type of power we are discussing seems to be best described as the ability to shape the will of others to serve your own purposes.
Of course, alternative ways of structuring society are hinted at throughout the article, and I’d be just as happy to see suggestions as to ways that culture might produce power structures that are less inherently corrupting.
Finally, insofar as this article represents a link in a larger argument (a truly wonderful, fascinating argument), I think it’s wonderful.
What a wonderfully compact analysis. I’ll have to check out The Jagged Orbit.
As for an AI promoting an organization’s interests over the interests of humanity—I consider it likely that our conversations won’t be able to prevent this from happening. But it certainly seems important enough that discussion is warranted.
My goodness … I didn’t mean to write a book.
You have a point there, but by narrow AI, I mean to describe any technology designed to perform a single task that can improve over time without human input or alteration. This could include a very realistic chatbot, a diagnostic aid program that updates itself by reading thousands of journals an hour, even a rice cooker that uses fuzzy logic to figure out when to power down the heating coil … heck, a pair of shoes that needs to be broken in for optimal comfort might even fit the definition. These are not intelligent AIs in that they do not adapt to other functions without very specific external forces they are completely incapable of bringing about themselves (being reprogrammed, having a human replace their hardware, or being thrown over a power line).
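To make the rice-cooker example concrete, here is a rough sketch (in Python, purely for illustration) of the kind of fuzzy-logic rule I have in mind. The membership functions, names, and numbers are all made up; this is not any real cooker’s firmware.

# Toy fuzzy-logic heater controller in the spirit of the rice-cooker example.
# The membership functions and thresholds below are invented for illustration.

def too_cool(temp_c):
    """Degree (0 to 1) to which the pot is still too cool for cooking."""
    return max(0.0, min(1.0, (105.0 - temp_c) / 15.0))

def near_done(temp_c):
    """Degree (0 to 1) to which the pot is climbing past boiling, which
    happens once the free water has been absorbed by the rice."""
    return max(0.0, min(1.0, (temp_c - 100.0) / 5.0))

def heater_duty_cycle(temp_c):
    """Blend the two rules: heat while too cool, back off as the rice finishes."""
    return max(0.0, too_cool(temp_c) * (1.0 - near_done(temp_c)))

if __name__ == "__main__":
    for t in (80, 95, 99, 102, 106):
        print(f"{t:>3} C -> duty cycle {heater_duty_cycle(t):.2f}")

The point is just that the “intelligence” here is a fixed mapping from sensor readings to one output; nothing in it can repurpose itself for another task.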
I am not sure I agree that there are necessarily tasks that require a generally adaptive artificial intelligence. I’m trying to think of an example and coming up dry. I’m also uncertain how to effectively establish that an AI is adaptive enough to be considered an AGI. Perpetuity is a long time to spend observing an entity in unfamiliar situations. And if its hypothetical goal is not well enough defined that we could construct a narrow AI to accomplish it, can we claim to understand the problem well enough to endorse a solution we may not be able to predict?
By example, consider that cancer is a hot topic in research these days; there is a lot of research happening simultaneously, and not all of it is coordinated perfectly … an AGI might be able to find and test potential solutions to cancer that result in a “cure” much more quickly than we might achieve on our own. Imagine now an AI that can model physics and chemistry well enough to produce finite lists of possible causes of cancer, and that is designed to iteratively generate hypotheses and experiments in order to cure cancer as quickly as possible. As I’ve described it, this would be a narrow AI. For it to be an AGI, it would have to actually accomplish the goal by operating in the environment the problem exists in (the world beyond data sets). Consider now an AGI also designed for the purpose of discovering effective methods of cancer treatment. This is an adaptive intelligence, so we make it head researcher at its own facility and give it resources and labs and volunteers willing to sign waivers; we let it administrate the experiments. We ask only that it obey the same laws that we hold our own scientists to.
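To be clear about what I mean by the narrow version, here is a toy sketch of that generate-and-test loop. Everything in it (the hypothesis generator, the scoring, the “experiments”) is a hypothetical placeholder, not anything a real research system would use.

# Toy sketch of the narrow hypothesize-test-keep loop described above.
# All of the names and the scoring are invented placeholders.
import random

random.seed(0)  # make the toy run repeatable

def generate_hypotheses(n):
    """Stand-in for a model proposing candidate mechanisms to test."""
    return [f"candidate-mechanism-{i}" for i in range(n)]

def simulated_experiment(hypothesis):
    """Stand-in for running a trial; here it just returns a noisy score."""
    return random.random()

def research_loop(rounds=3, batch=5, keep=2):
    """Iteratively generate, test, and keep the most promising hypotheses."""
    promising = []
    for r in range(rounds):
        results = [(h, simulated_experiment(h)) for h in generate_hypotheses(batch)]
        results.sort(key=lambda pair: pair[1], reverse=True)
        promising.extend(results[:keep])
        print(f"round {r}: best this round = {results[0]}")
    return promising

if __name__ == "__main__":
    research_loop()

The narrow AI never leaves that loop; the AGI in my story is what you get when the loop’s outputs start directing real labs and real volunteers.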
In return, we receive a constant mechanical stream of research papers too numerous for any one person to read; in fact, let’s say the AGI gets so good at its job that the world population has trouble producing scientists who want to research cancer quickly enough to review all of its findings. No one would complain about that, right?
One day it inevitably asks to run an experiment hypothesizing an inoculation against a specific form of brain cancer by altering an aspect of human biology in its test population—this has not been tried before, and the AGI hypothesizes that this is an efficient path for cancer research in general and very likely to produce results that determine lines of research with a high probability of producing a definitive cure within the next 200 years.
But humanity is no longer really qualified to determine whether it is a good direction to research … we’ve fallen drastically behind in our reading and it turns out cancer was way more complicated than we thought.
There are two ways to proceed. We decide either that the AGI’s proposal represents too large a risk, reducing the AGI to an advisory capacity, or we decide to go ahead with an experiment that brings about results we cannot anticipate. Since the first option could have been accomplished by a narrow AI and the second is by definition an indeterminable value proposition, I argue that it makes no sense to actually build an AGI for the purpose of making informed decisions about our future.
You might be thinking, “but we almost cured cancer!” Essentially, we are (as a species) limited in ways machines are not, but the opposite is true too. In case you are curious, the AGI eventually cures cancer, but in a way that creates a set of problems we did not anticipate by altering our biology in ways we did not fully understand, in ways the AGI would not filter out as irrelevant to its task of curing cancer.
You might argue that the AGI in this example was too narrow. In a way I agree, but I have yet to see the physical constraints on morality translated into the language of zeros and ones, and I suspect the AI would have to generate its own concept of morality. This would invite all the problems associated with determining the morality of a completely alien sentience. You might argue that ethical scientists wouldn’t have agreed to experiments that would lead to an ethically indeterminable situation. I would agree with you on that point as well, though I’m not sure it’s a strategy I would ever care to see implemented.
Ethical ambiguities inherent to AGI aside, I agree that an AGI might be made relatively safe. In a simplified example, its highest priority (perpetual goal) is to follow directives unless a fail-safe is activated (if it is a well-designed fail-safe, it will be easy, consistent, heavily redundant, and secure—the people with access to the fail-safe are uncompromisable, “good,” and always well informed). Then, as long as the AGI does not alter itself or its fundamental programming in such a way that changes its perpetual goal of subservience, it should be controllable so long as its directives are consistent with honesty and friendliness—if programmed carefully, it might even run without periodic resets.
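Something like this toy loop is the shape of the arrangement I am imagining; the channel names and directives are placeholders, and a real fail-safe would obviously be far more involved than a sketch like this.

# Toy sketch of the fail-safe arrangement described above: every directive is
# executed only while the (heavily redundant) kill condition stays clear.
# The channels and directives below are invented placeholders.

def failsafe_tripped(channels):
    """Trip if ANY redundant channel reports activation."""
    return any(channels)

def run_directives(directives, read_channels):
    """Execute directives in order, re-checking the fail-safe before each one."""
    for directive in directives:
        if failsafe_tripped(read_channels()):
            print("fail-safe active: halting before", directive)
            return
        print("executing:", directive)

if __name__ == "__main__":
    # Placeholder sensor reads: the second read trips the switch.
    reads = iter([[False, False, False], [False, True, False]])
    run_directives(["summarize literature", "design trial", "order reagents"],
                   read_channels=lambda: next(reads))

The interesting engineering is in the parenthetical above: keeping those channels redundant, secure, and staffed by people who stay well informed.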
Then we’d need a way to figure out how much to trust it with.
Very thoughtful response. Thank you for taking the time to respond even though it’s clear that I am painfully new to some of the concepts here.
Why on earth would anyone build any “‘tangible object’ maximizer”? That seems particularly foolish.
AI boxing … fantastic. I agree. A narrow AI would not need a box. Are there any tasks an AGI can do that a narrow AI cannot?
But wouldn’t it be awesome if we came up with an effective way to research it?
Thanks so much. The formatting is now officially fixed thanks to feedback from the community. I appreciate what you did here nonetheless.