Of course we haven’t discovered anything dangerously unfriendly...
Or anything that can’t be boxed. Remind me how AIs are supposed to get out of boxes?
Since many humans are difficult to box, I would have to disagree with you there.
And, obviously, not all humans are Friendly.
An intelligent, charismatic psychopath seems like they would fit both your criteria. And, of course, there is no shortage of them. We can only be thankful they are too rare relative to equivalent semi-Friendly intelligences, and too incompetent, to have done more damage than all the deaths and so on.
Most humans are easy to box, since they can be contained in prisons.
How likely is an AI to be psychopathic, if it is not designed to be psychopathic?
Of course we have, it’s called AIXI. Do I need to download a Monte Carlo implementation from Github and run it on a university server with environmental access to the entire machine and show logs of the damn thing misbehaving itself to convince you?
AIs can be causally boxed, just like anything else. That is, as long as the agent’s environment absolutely follows causal rules without any exception that would leak information about the outside world into the environment, the agent will never infer the existence of a world outside its “box”.
But then it’s also not much use for anything besides Pac-Man.
FWIW, I think that would make for a pretty interesting post.
And now I think I know what I might do for a hobby during exams month and summer vacation. Last I looked at the source-code, I’d just have to write some data structures describing environment-observations (let’s say… of the current working directory of a Unix filesystem) and potential actions (let’s say… Unix system calls) in order to get the experiment up and running. Then it would just be a matter of rewarding the agent instance for any behavior I happen to find interesting, and watching what happens.
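(Purely to illustrate the shape of that setup, here is a minimal Python sketch with made-up names: the observation is a directory listing, the actions are a tiny fixed menu of file operations, and the reward is whatever the experimenter cares to define. A real wrapper around a downloaded MC-AIXI implementation would additionally have to encode these as the percept and action streams that particular codebase expects; nothing below is taken from one.)

    import os
    import random

    # Hypothetical environment sketch: observations and actions for an agent
    # living in the current working directory. Names are invented for this
    # example, not drawn from any existing AIXI approximation.

    ACTIONS = ["list", "make_file", "delete_file", "change_dir_up"]

    def observe():
        """Observation: the sorted names of entries in the current working directory."""
        return sorted(os.listdir("."))

    def act(action):
        """Apply one action to the environment; unrecognized actions are no-ops."""
        if action == "make_file":
            open("scratch_%06d.tmp" % random.randrange(10**6), "w").close()
        elif action == "delete_file":
            tmps = [n for n in os.listdir(".") if n.endswith(".tmp")]
            if tmps:
                os.remove(random.choice(tmps))
        elif action == "change_dir_up":
            os.chdir("..")

    def reward(observation):
        """Placeholder: reward whatever behaviour the experimenter finds interesting."""
        return float(len(observation))

    # One interaction step, with a random choice standing in for the agent's policy.
    a = random.choice(ACTIONS)
    act(a)
    print(a, reward(observe()))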
Initial prediction: since I won’t have a clearly-developed reward criterion and the agent won’t have huge exponential sums of CPU cycles at its disposal, not much will happen.
However, I do strongly believe that the agent will not suddenly develop a moral sense out of nowhere.
No. But it will be eminently boxable. In fact, if you’re not nuts, you’ll be running it in a box.
I think you’ll have serious trouble getting an AIXI approximation to do much of anything interesting, let alone misbehave. The computational costs are too high.
Given how slow and dumb it is, I have a hard time seeing an approximation to AIXI as a threat to anyone, except maybe itself.
True, but that’s an issue of raw compute-power, rather than some innate Friendliness of the algorithm.
It would still be useful to have an example of innate unfriendliness, rather than “it doesn’t really run or do anything”.
Not just raw compute-power. An approximation to AIXI is likely to drop a rock on itself just to see what happens long before it figures out enough to be dangerous.
Dangerous as in, capable of destroying human lives? Probably not. Dangerous as in, likely to cause some minor property damage, maybe overwrite some files someone cared about? It should reach that level.
Is that … possible?
Is it possible to run an AIXI approximation as root on a machine somewhere and give it the tools to shoot itself in the foot? Sure. Will it actually end up shooting itself in the foot? I don’t know. I can’t think of any theoretical reasons why it wouldn’t, but there are practical obstacles: a modern computer architecture is a lot more complicated than anything I’ve seen an AIXI approximation working on, and there are some barriers to breaking one by thrashing around randomly.
It’d probably be easier to demonstrate if it was working at the core level rather than the filesystem level.
Huh. I was under the impression it would require far too much computing power to approximate AIXI well enough that it would do anything interesting. Thanks!
This can easily be done, and be done safely, since you could give an AIXI root access to a virtualised machine.
I’m still waiting for evidence that it would do something destructive in the pursuit of a goal that is not obviously destructive.
That would be the AIXI that is uncomputable?
And don’t AIs get out of boxes by talking their way out, round here?
It’s incomputable because the Solomonoff prior is, but you can approximate it—to arbitrary precision if you’ve got the processing power, though that’s a big “if”—with statistical methods. Searching Github for the Monte Carlo approximations of AIXI that eli_sennesh mentioned turned up at least a dozen or so before I got bored.
Most of them seem to operate on tightly bounded problems, intelligently enough. I haven’t tried running one with fewer constraints (maybe eli has?), but I’d expect it to scribble over anything it could get its little paws on.
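(A toy sketch of the idea being argued over here, mine and much simpler than anything a real AIXI approximation does: the “programs” are just “repeat this bit-pattern forever”, weighted by a 2^-length prior, and the Monte Carlo shortcut replaces the exact weighted sum over all hypotheses with a random subset of them.)

    import itertools
    import random

    # Toy Solomonoff-style mixture: every bit-pattern up to length 12 is a
    # "program" that repeats forever, with prior weight 2^-len(pattern).
    # We compare the exact mixture prediction for the next bit with an
    # estimate computed from a random subset of hypotheses.

    def hypotheses(max_len=12):
        for n in range(1, max_len + 1):
            for bits in itertools.product("01", repeat=n):
                yield "".join(bits), 2.0 ** -n      # (pattern, prior weight)

    def consistent(pattern, history):
        """True if repeating `pattern` forever reproduces every bit of `history`."""
        return all(pattern[i % len(pattern)] == b for i, b in enumerate(history))

    def prob_next_is_one(history, hyps):
        """P(next bit = '1' | history) under the weighted mixture of hypotheses."""
        w_one = w_total = 0.0
        for pattern, w in hyps:
            if consistent(pattern, history):
                w_total += w
                if pattern[len(history) % len(pattern)] == "1":
                    w_one += w
        return w_one / w_total if w_total else 0.5  # no surviving hypothesis: punt

    history = "010101010"
    all_hyps = list(hypotheses())
    subset = random.sample(all_hyps, len(all_hyps) // 50)
    print("exact mixture:      ", prob_next_is_one(history, all_hyps))
    print("random-subset guess:", prob_next_is_one(history, subset))

On most runs the small random subset misses the short, heavily weighted patterns entirely and the estimate falls back to 0.5, which is a crude picture of why naive sampling over predictors scales so badly.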
But people do run these things that aren’t actually AIXIs, and they haven’t actually taken over the world, so they aren’t actually dangerous.
So there is no actually dangerous actual AI.
...it’s not dangerous until it actually tries to take over the world?
I can think of plenty of ways in which an AI can be dangerous without taking that step.
Then you had better tell people not to download and run AIXI approximations.
Any form of AI, not just AIXI approximations. Connect it up to a car, and it can be dangerous in, at minimum, all of the ways that a human driver can be dangerous. Connect it up to a plane, and it can be dangerous in, at minimum, all the ways that a human pilot can be dangerous. Connect it up to any sort of heavy equipment and it can be dangerous in, at minimum, all the ways that a human operator can be dangerous. (And not merely a trained human; an untrained, drunk, or actively malicious human can be dangerous in any of those roles).
I don’t think that any of these forms of danger is sufficient to actively stop AI research, but they should be considered for any practical applications.
This is the kind of danger XiXiDu talks about... just failure to function... not the kind EY talks about, which is highly competent execution of unfriendly goals. The two are orthogonal.
The difference between one and the other is just a matter of processing power and training data.
Sir Lancelot: Look, my liege!
[trumpets play a fanfare as the camera cuts briefly to the sight of a majestic castle]
King Arthur: [in awe] Camelot!
Sir Galahad: [in awe] Camelot!
Sir Lancelot: [in awe] Camelot!
Patsy: [derisively] It’s only a model!
King Arthur: Shh!
:-D
Do you even know what “monte carlo” means? It means it tries to build a predictor of environment by trying random programs. Even very stupid evolutionary methods do better.
Once you throw away this whole ‘can and will try absolutely anything’ and enter the domain of practical software, you’ll also enter the domain where the programmer is specifying what the AI thinks about and how. The immediate practical problem of “uncontrollable” (but easy to describe) AI is that it is too slow by a ridiculous factor.
Private_messaging, can you explain why you open up with such a hostile question at eli? Why the implied insult? Is that the custom here? I am new, should I learn to do this?
For example, I could have opened with your same question, because Monte Carlo methods are very different from what you describe (I happened to be a mathematical physicist back in the day). Let me quote an actual definition:
Monte Carlo Method: A problem solving technique used to approximate the probability of certain outcomes by running multiple trial runs, called simulations, using random variables.
A classic very very simple example is a program that approximates the value of ‘pi’ thusly:
Estimate pi by dropping $total_hits random points into a square with corners at −1,-1 and 1,1
(then count how many are inside radius one circle centered on origin)
    (loop here for as many runs as you like) {
        define variables $x, $y, $hits_inside_radius = 0, $radius = 1.0, $total_hits = 0, $pi_approx;
        input $total_hits for this run;
        seed random function 'rand';
        for (0..$total_hits-1) do {
            $x = rand(-1,1);
            $y = rand(-1,1);
            $hits_inside_radius++ if ($x*$x + $y*$y <= 1.0);
        }
        $pi_approx = 4 * $hits_inside_radius / $total_hits;
        add $pi_approx and $total_hits to a nice output data vector or whatever
        output data for this particular run
    }
    print nice report
    exit();
OK, this is a nice toy Monte Carlo program for a specific problem. Real world applications typically have thousands of variables and explore things like strange attractors in high dimensional spaces, or particle physics models, or financial programs, etc. etc. It’s a very powerful methodology and very well known.
In what way is this little program an instance of throwing a lot of random programs at the problem of approximating ‘pi’? What would your very stupid evolutionary program to solve this problem more efficiently be? I would bet you a million dollars to a thousand (if I had a million) that my program would win a race against a very stupid evolutionary program that you write to estimate pi to six digits accurately. Eli and Eliezer can judge the race, how is that?
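(For reference, a runnable Python version of essentially the same estimator; a minimal sketch whose names simply mirror the pseudocode above.)

    import random

    def estimate_pi(total_hits, seed=None):
        """Monte Carlo estimate of pi: sample points uniformly in the square
        [-1, 1] x [-1, 1] and count the fraction landing inside the unit circle."""
        rng = random.Random(seed)
        hits_inside_radius = 0
        for _ in range(total_hits):
            x = rng.uniform(-1.0, 1.0)
            y = rng.uniform(-1.0, 1.0)
            if x * x + y * y <= 1.0:
                hits_inside_radius += 1
        # circle area / square area = pi/4, so scale the hit fraction by 4
        return 4.0 * hits_inside_radius / total_hits

    for n in (1000, 100000, 10000000):
        print(n, estimate_pi(n, seed=0))

The statistical error shrinks like 1/sqrt(total_hits), so getting six correct digits this way takes very roughly 10^13 points; slow, but it is a method, not random guessing.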
I am sorry if you feel hurt by my making fun of your ignorance of Monte Carlo methods, but I am trying to get in the swing of the culture here and reflect your cultural norms by copying your mode of interaction with Eli, that is, bullying on the basis of presumed superior knowledge.
If this is not pleasant for you I will desist; I assume it is some sort of ritual you enjoy, consensual on Eli’s part and, by inference, yours: that you are either enjoying this public humiliation masochistically, or hoping people will give you aversive conditioning when you publicly display stupidity, ignorance, discourtesy and so on. If I have violated your consent then I plead that I am from a future where this is considered acceptable when a person advertises that they do it to others. Also, I am a baby eater and human ways are strange to me.
OK. Now some serious advice:
If you find that you have just typed “Do you even know what X is?” and then given a little condescending mini lecture about X, please check that you yourself actually know what X is before you post. I am about to check Wikipedia before I post in case I’m having a brain cloud, and I promise that I will annotate any corrections I need to make after I check; everything up to HERE was done before the check. (Off half-recalled stuff from grad school a quarter century ago...)
OK, Wikipedia’s article is much better than mine. But I don’t need to change anything, so I won’t.
P.S. It’s OK to look like an idiot in public; it’s a core skill of rationalists to be able to tolerate this sort of embarrassment, but another core skill is actually learning something if you find out that you were wrong. Did you go to Wikipedia or other sources? Do you know anything about Monte Carlo Methods now? Would you like to say something nice about them here?
P.P.S. Would you like to say something nice about eli_sennesh, since he actually turns out to have had more accurate information than you did when you publicly insulted his state of knowledge? If you two are old pals with a joking relationship, no apology is needed to him, but maybe an apology for lazily posting false information that could have misled naive readers with no knowledge of Monte Carlo methods?
P.P.P.S. I am curious, is the psychological pleasure of viciously putting someone else down as ignorant in front of their peers worth the presumed cost of misinforming your rationalist community about the nature of an important scientific and mathematical tool? I confess I feel a little pleasure in twisting the knife here, this is pretty new to me. Should I adopt your style of intellectual bullying as a matter of course? I could read all your posts and viciously hold up your mistakes to the community, would you enjoy that?
I’m well aware of what Monte Carlo methods are (I work in computer graphics where those are used a lot), I’m also aware of what AIXI does.
Furthermore eli (and the “robots are going to kill everyone” group—if you’re new you don’t even know why they’re bringing up monte-carlo AIXI in the first place) are being hostile to TheAncientGeek.
edit: to clarify, Monte-Carlo AIXI is most assuredly not an AI which is inventing and applying some clever Monte Carlo methods to predict the environment. No, it’s estimating the sum over all predictors of environment with a random subset of predictors of environment (which doesn’t work all too well, and that’s why hooking it up to the internet is not going to result in anything interesting happening, contrary to what has been ignorantly asserted all over this site). I should’ve phrased it differently, perhaps—like “Do you even know what “monte carlo” means as applied to AIXI?”.
It is completely irrelevant how human-invented Monte-Carlo solutions behave, when the subject is hooking up AIXI to a server.
edit2: to borrow from your example:
“Of course we haven’t discovered anything dangerously good at finding pi...”
“Of course we have, it’s called area of the circle. Do I need to download a Monte Carlo implementation from Github and run it… ”
“Do you even know what “monte carlo” means? It means it tries random points and checks if they’re in a circle. Even very stupid geometric methods do better.”
You appear to have posted this as a reply to the wrong comment. Also, you need to indent code 4 spaces and escape underscores in text mode with a \_.
On the topic, I don’t mind if you post tirades against people posting false information (I personally flipped the bozo bit on private_messaging a long time ago). But you should probably keep it short. A few paragraphs would be more effective than two pages. And there’s no need for lengthy apologies.
Yes, I am sorry for the mistakes, not sure if I can rectify them. I see now about protecting special characters, I will try to comply.
I am sorry, I have some impairments and it is hard to make everything come out right.
Thank you for your help.
As a data point, I skipped more_wrong’s comment when I first saw it (partly) because of its length, and only changed my mind because paper-machine & Lumifer made it sound interesting.
“Good, I can feel your anger. … Strike me down with all of your hatred and your journey towards the dark side will be complete!”
It’s so… *sniff*… beautiful~
Once you enter the domain of practical software you’ve entered the domain of Narrow AI, where the algorithm designer has not merely specified a goal but a method as well, thus getting us out of dangerous territory entirely.
On rereading this I feel I should vote myself down if I knew how, it seems a little over the top.
Let me post about my emotional state since this is a rationality discussion and if we can’t deconstruct our emotional impulses and understand them we are pretty doomed to remaining irrational.
I got quite emotional when I saw a post that seemed like intellectual bullying followed by self-congratulation; I am very sensitive to this type of bullying, more so when it is directed at others than at myself, since, due to freakish test scores and so on as a child, I feel fairly secure about my intellectual abilities, but I know how bad people feel when others consider them stupid. I have a reaction to leap to the defense of the victim; however, I put this down to a local custom of friendly ribbing or something and tried not to jump on it.
Then I saw that private_messaging seemed to be posing as an authority on Monte Carlo methods while spreading false information about them, either out of ignorance (very likely) or malice. Normally ignorance would have elicited a sympathy reaction from me and a very gentle explanation of the mistake, but in the context of having just seen private_messaging attack eli_sennesh for his supposed ignorance of Monte Carlo methods, I flew into a sort of berserker sardonic mode, i.e. “If private_messaging thinks that people who post about Monte Carlo methods while not knowing what they are should be mocked in public, I am happy to play by their rules!” And that led to the result you see, a savage mocking.
I do not regret doing it, because the comment with the attack on eli_sennesh and the calumnies against Monte Carlo still seems to me to have been in flagrant violation of rationalist ethics: in particular, presenting himself as, if not an expert, at least someone with the moral authority to diss someone else for their ignorance on an important topic, and then following that with false and misleading information about MC methods. This seemed like an action with a strongly negative utility to the community, because it could potentially lead many readers to ignore the extremely useful Monte Carlo methodology.
If I posed as an authority and went around telling people Bayesian inference was a bad methodology that was basically just “a lot of random guesses” and that “even a very stupid evolutionary program” would do better at assessing probabilities, should I be allowed to get away scot-free? I think not. If I did something like that I would actually hope for chastisement or correction from the community, to help me learn better.
Also it seemed like it might make readers think badly of those who rely heavily on Monte Carlo Methods. “Oh those idiots, using those stupid methods, why don’t they switch to evolutionary algorithms”. I’m not a big MC user but I have many friends who are, and all of them seem like nice, intelligent, rational individuals.
So I went off a little heavily on private_messaging, who I am sure is a good person at heart.
Now, I acted emotionally there, but my hope is that in the Big Searles Room that constitutes our room, I managed to pass a message that (through no virtue of my own) might ultimately improve the course of our discourse.
I apologize to anyone who got emotionally hurt by my tirade.
I have not the slightest idea what happened, but your revised response seems extraordinarily mature for an internet comment, so yeah.
You appear to have posted this as a reply to the wrong comment. Also, you need to escape underscores with a \_.
On the topic, I don’t mind if you post tirades against people posting false information (I personally flipped the bozo bit on private_messaging a long time ago). But you should probably keep it short. A few paragraphs would be more effective than two pages. And there’s no need for lengthy apologies.
To think of the good an E-Prime-style ban on “is” could do here...
How is an AIXI to infer that it is in a box, when it cannot conceive its own existence?
How is it supposed to talk its way out when it cannot talk?
For AI to be dangerous in the way MIRI supposes, it seems to need to have the characteristics of more than one kind of machine: the eloquence of a Strong AI Turing Test passer combined with an AIXI’s relentless pursuit of an arbitrary goal.
These different models need to be shown to be compatible; calling them both AI is not enough.