I only have time for a short reply:
(1) I’d rephrase the above to say that computer security is among the two most important things one can study with regard to this alleged threat.
(2) The other important thing is law. Law is the “offensive approach to the problem of security” in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium, and tested empirically against the real world of real agents with a real diversity of values every day. It’s not something you can ever come close to competing with by a philosophy invented from scratch.
(3) I stand by my comment that “AGI” and “friendliness” are hopelessly anthropomorphic, infeasible, and/or vague.
(4) Computer “goals” are only usefully studied against actual algorithms, or clearly defined mathematical classes of algorithms, not vague and imaginary concepts. Perhaps you can make some progress by for example advancing the study of postconditions, which seem to be the closest analog to goals in the software engineering world. One can imagine a world where postconditions are always checked, for example, and other software ignores the output of software that has violated one of its postconditions.
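For concreteness, a minimal sketch of that checked-postcondition regime in Python; the decorator and function names here are hypothetical illustrations, not an existing library:

```python
import functools

def postcondition(check):
    """Hypothetical decorator: verify a function's postcondition before
    releasing its result, so downstream code never consumes output from
    a computation that violated its contract."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            if not check(result, *args, **kwargs):
                # In the imagined regime, other software would treat this
                # function's output as untrusted from here on.
                raise AssertionError(f"{fn.__name__} violated its postcondition")
            return result
        return wrapper
    return decorator

# Postcondition: the output is the sorted permutation of the input.
@postcondition(lambda result, xs: sorted(xs) == result)
def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

print(merge_sort([3, 1, 2]))  # [1, 2, 3]; a buggy sort would raise instead
```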
The other important thing is law. Law is the “offensive approach to the problem of security” in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium, and tested empirically against the real world of real agents with a real diversity of values every day. It’s not something you can ever come close to competing with by a philosophy invented from scratch.
As a lawyer, I strongly suspect this statement is false. As you seem to be using the term, Law is society’s organizational rules about how and when to implement coercive violence. In the abstract, this is powerful, but concretely, this power is implemented by individuals. Some of them (e.g., police officers) care relatively little about the abstract issues—in other words, they aren’t careful about the issues that are relevant to AI.
Further, law is filled with backdoors—they are called legislators. In the United States, Congress can make almost any judicially announced rule irrelevant by passing a statute. If you call that process “Law,” then you aren’t talking about the institution that draws on “the work of millions of smart people” over time.
Finally, individual lawyers’ day-to-day work has almost no relationship to the parts of Law that you are suggesting are relevant to AI. Worse for your point, lawyers don’t even engage with the policy issues of law with any frequency. For example, a lawyer litigating contracts might never engage, in her entire career, with the question of which promises should be enforced.
In short, your paragraph about law is misdirected and misleading.
Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium,
That seems pretty harsh! The Bureau of Labor Statistics reports 728,000 lawyers in the U.S., a notably attorney-heavy society within the developed world. The SMPY study of kids with 1 in 10,000 cognitive test scores found (see page 722) only a small minority studying law. The 90th percentile IQ for “legal occupations” in this chart is a little over 130. Historically populations were much lower, nutrition was worse, legal education or authority was only available to a small minority, and the Flynn Effect had not occurred. Not to mention that law is disproportionately made by politicians who are selected for charisma and other factors in addition to intelligence.
and tested empirically against the real world of real agents with a real diversity of values every day. It’s not something you can ever come close to competing with by a philosophy invented from scratch.
It’s hard to know what to make of this.
Perhaps that the legal system is good at creating incentives that closely align the interests of those it governs with the social good, and that this will work on new types of being without much dependence on their decisionmaking processes?
Contracts and basic property rights certainly do seem to help produce wealth. On the other hand, financial regulation is regularly adjusted to try to nullify new innovation by financiers that poses systemic risks or exploits government guarantees, but the financial industry still frequently outmaneuvers the legal system. And of course the legal system depends on the loyalty of the security forces for enforcement, and makes use of ideological agreement among the citizenry that various things are right or wrong.
Restraining those who are much weaker is easier than restraining those who are strong. A more powerful analogy would be civilian control over military and security forces. There do seem to have been big advances in civilian control over the military in the developed countries (fewer coups, etc), but they seem to reflect changes in ideology and technology more than law.
If it is easy to enforce laws on new AGI systems, then the situation seems fairly tractable, even for AGI systems with across-the-board superhuman performance which take action based on alien and inhumane cost functions. But it doesn’t seem guaranteed that it will be easy to enforce such laws on smart AGIs, or that the trajectory of development will be “all narrow AI, all the time,” given the great economic value of human generality.
There’s a 0.0001 prior for a 1-in-10,000 intelligence level. It’s a low prior; you need a genius detector with an incredibly low false positive rate before most of your ‘geniuses’ are actually smart. Very well defined problems with a clear ‘solved’ condition (such as multiple novel mathematical proofs, or a novel algorithmic solution to a hard problem that others are trying to solve) would maybe suffice, but ‘he seems smart’ certainly would not. This also goes for IQ tests themselves—while a genius would have a high IQ score, a high-scoring person would most likely be someone somewhat smart slipping through the crack between what the IQ test measures and what intelligence is (case examples: Chris Langan, Keith Raniere, and other high-IQ ‘geniuses’ we would never suspect of being particularly smart if not for IQ tests).
Weak and/or subjective evidence of intelligence, especially given a lack of statistical independence between pieces of evidence, should not get your estimate of anyone’s intelligence very high.
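To make the false-positive-rate point concrete, a sketch of the Bayesian arithmetic (the 99% detector figures are assumptions for illustration):

```python
def p_genius_given_flagged(prior=1e-4, sensitivity=0.99, false_positive_rate=0.01):
    """P(genius | detector flags genius), by Bayes' theorem."""
    p_flagged = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_flagged

# A detector that is right 99% of the time in both directions still
# yields ~99% false 'geniuses' at a 1-in-10,000 prior:
print(p_genius_given_flagged())                          # ~0.0098
# The false positive rate has to fall to the order of the prior itself
# before most flagged people are the real thing:
print(p_genius_given_flagged(false_positive_rate=1e-4))  # ~0.50
```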
This is rather tangential, but I’m curious, out of those who score 1 in 10000 on a standard IQ test, what percentage is actually at least, say, 1 in 5000 in actual intelligence? Do you have a citation or personal estimate?
Depends what you call “actual intelligence” as distinct from what IQ tests measure. private_messaging talks a lot in terms of observable real-world achievements, so presumably is thinking of something along those lines.
The easiest interpretation to measure would be a regression toward the mean effect. Putting a lower bound on the IQ scores in your sample means that you have a relevant fraction of people who tested higher than their average test score. I suspect that at the high end, IQ tests have few enough questions scored incorrectly that noise can let some < 1 in 5000 IQ test takers into your 1 in 10000 cutoff.
I also didn’t note the other problem: 1 in 10,000 is around IQ=155; the ceiling of most standardized (validated and normed) intelligence tests is around 1 in 1000 (IQ~=149). Tests above this tend to be constructed by people who consider themselves in this range, to see who can join their high IQ society and not substantially for any other purpose.
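A quick way to put rough numbers on this noise effect is to simulate the classical test-theory model directly; the 0.9 reliability is an assumed figure, and joint normality of ability and error is itself an assumption:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 10_000_000
reliability = 0.9  # assumed; implies corr(score, ability) = sqrt(0.9) ~ 0.95

true = rng.standard_normal(n)
score = np.sqrt(reliability) * true + np.sqrt(1 - reliability) * rng.standard_normal(n)

cutoff = norm.isf(1e-4)     # a "1 in 10,000" score, ~3.72 sd
level = norm.isf(1 / 5000)  # the 1-in-5,000 true-ability level, ~3.54 sd

selected = score > cutoff
below = (true[selected] < level).mean()
# With these assumptions, roughly a quarter of top scorers fall below:
print(f"{selected.sum()} scored 1 in 10,000; {below:.0%} are below 1-in-5,000 ability")
```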
Would depend on how you evaluate actual intelligence. An IQ test, at the high range, measures reliability in solving simple problems (combined with, maybe, environmental exposure similar to the test maker’s when it comes to progressive matrices and other ‘continue the sequence’ cases—the predictions made by Solomonoff induction depend on the machine and prior exposure, too). As an extreme example, consider an intelligence test of very many very simple and straightforward logical questions. It will correlate with IQ, but at the high range it will clearly measure something different from intelligence. All the intelligent individuals will score highly on that test, but so will a lot of people who are simply very good at simple questions.
A thought experiment: picture a classroom of mind uploads, set half of their procedural skills to read-only, and teach them an algebra class. Same IQ, utterly different outcome.
I would expect that if actual intelligence correlates with IQ at 0.9 (a VERY generous assumption), the IQ could easily become non-predictive at as low as the 99th percentile without creating any contradiction with the observed general correlation. edit: that would make about one out of 50 people with an IQ of one in 10,000 (or one in 1,000, or one in 10,000,000 for that matter) be intelligent at the level of 1 in 5,000. That seems kind of low, but then, we mostly don’t hear of the high IQ people just for IQ alone. edit: and the high IQ organizations like Mensa and the like are hopelessly unremarkable, rather than some ultra powerful groups of super-intelligences.
In any case, the point is that the higher the percentile, the more confident you must be that there is no common failure mode between parts of your test.
edit: and for the record, my IQ is 148 as measured on a (crappy) test in English, which is not my native tongue. I also got very high percentile ratings in a programming contest, and I used to be good at chess. I have no need to rationalize anything here. I feel that a lot of this sheepish, innumerate assumption that you can infer one-in-10,000 level performance from a test, in the absence of failure modes of which you are far less certain than 99.99%, comes simply from signalling—arguing against the applicability of IQ tests at implausibly high percentiles lets idiots claim that you must be stupid. When you want to select one-in-10,000 level performance in running 100 meters, you can’t do it by measuring performance at a standing jump.
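To make the arithmetic behind the ‘one out of 50’ figure above explicit: if a 1-in-10,000 score tells you nothing beyond ‘top 1% of ability’, then the chance of being at the 1-in-5,000 level is (1/5,000)/(1/100) = 1/50. For contrast, a sketch of what a jointly normal model with correlation 0.9 would give instead—tail normality being exactly what the comment above disputes:

```python
import numpy as np
from scipy.stats import norm

r = 0.9                 # the assumed corr(true ability, IQ score)
a = norm.isf(1e-4)      # 1-in-10,000 score cutoff, ~3.72 sd
b = norm.isf(1 / 5000)  # 1-in-5,000 ability level, ~3.54 sd

# P(ability > b | score > a): average P(ability > b | score = s) over the
# tail distribution of scores, using ability | score=s ~ N(r*s, 1 - r^2).
s = np.linspace(a, a + 8, 8001)
w = norm.pdf(s) / norm.sf(a)                    # tail density of the score
p = norm.sf((b - r * s) / np.sqrt(1 - r ** 2))  # P(ability > b | score = s)
print(np.sum(w * p) * (s[1] - s[0]))            # ~0.5 under joint normality

# versus "the score only tells you 'top 1%'":
print((1 / 5000) / (1 / 100))                   # 0.02, i.e. 1 in 50
```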
There are longitudinal studies showing that people with 99.99th percentile performance on cognitive tests have substantially better performance (on patents, income, tenure at top universities) than those at the 99.9th or 99th percentiles. More here.
and the high IQ organizations like Mensa and the like are hopelessly unremarkable, rather than some ultra powerful groups of super-intelligences.
Mensa is less selective than elite colleges or workplaces for intelligence, and much less selective for other things like conscientiousness, height, social ability, family wealth, etc. Far more very high IQ people are in top academic departments, Wall Street, and Silicon Valley than in high-IQ societies more selective than Mensa. So high-IQ societies are a very unrepresentative sample, selected to be less awesome in non-IQ dimensions.
There are longitudinal studies showing that people with 99.99th percentile performance on cognitive tests
It uses tests other than IQ tests, right? I do not dispute that a cognitive test can be made which would have the required reliability for detecting the 99.99th percentile. The IQ tests, however, are full of ‘continue a short sequence’ items that are quite dubious even in principle. It is fundamentally difficult to measure up into the 99.99th percentile; you need a highly reliable measurement apparatus, carefully constructed in precisely the ways in which IQ tests are not. Extreme rarities like one in 10,000 should not be thrown around lightly.
Mensa is less selective than elite colleges or workplaces for intelligence
There are other societies. None of them are very selective for intelligence either, though, because they all rely on dubious tests.
and much less selective for other things like conscientiousness, height, social ability, family wealth, etc.
I would say that this makes those other places an unrepresentative sample of the “high IQ” individuals. Even if those individuals who pass highly selective requirements on something else rarely enter Mensa, they are rare (tautology on highly selective), and their relative under-representation in Mensa doesn’t sway Mensa’s averages.
edit: for example consider the Nobel Prize winners. They all have high IQs but there is considerable spread and the IQ doesn’t seem to correlate well with the estimate of “how many others worked on this and did not succeed”.
Note: I am using “IQ” in the narrow sense of “what IQ tests measure”, not as shorthand for intelligence. Intelligence has a capacity-to-learn component which IQ tests do not measure, but which tests of mathematical aptitude (with hard problems) or verbal aptitude do.
note 2: I do not believe that the correlation entirely disappears for IQ tests past the 99th percentile. My argument is that for the typical IQ tests it well could. It’s just that the further up you get, the smaller the fraction of the excellence that is actually being measured.
Administering SATs to younger children, to raise the ceiling.
I would say that this makes those other places an unrepresentative sample of the “high IQ” individuals.
Well, Mensa has ~0 selectivity beyond the IQ threshold, and is a substitute good for other social networks, leaving it with the dregs. “Much more” is poor phrasing here; they’re not rejecting 90%. If you look at the linked papers you’ll see that a good majority of those at the 1 in 10,000 level on those childhood tests wind up with elite university/alumni or professional networks with better than Mensa IQ distributions.
Administering SATs to younger children, to raise the ceiling.
Ghmmm. I’m sure this measures plenty of highly useful personal qualities that correlate with income. E.g. rate of learning. Or inclination to pursue intellectual work.
Well, Mensa has ~0 selectivity beyond the IQ threshold, and is a substitute good for other social networks, leaving it with the dregs. “Much more” is poor phrasing here; they’re not rejecting 90%. If you look at the linked papers you’ll see that a good majority of those at the 1 in 10,000 level on those childhood tests wind up with elite university/alumni or professional networks with better than Mensa IQ distributions.
Well, yes. I think we agree on all substantial points here but disagree on the interpretation of my post. I referred specifically to “IQ tests”, not to the SAT, as lacking the rigour required for establishing 1-in-10,000 performance with any confidence, to support my point that e.g. ‘that guy seems smart’ shouldn’t possibly result in an estimate of 1 in 10,000—and neither could anything that relies on a rather subjective estimate of the difficulty of accomplishments in settings where you can’t e.g. reliably estimate from the number of other people who try and don’t succeed.
I referred specifically to “IQ tests”, not to the SAT, as lacking the rigour required for establishing 1-in-10,000 performance with any confidence, to support my point that e.g. ‘that guy seems smart’ shouldn’t possibly result in an estimate of 1 in 10,000
Note that these studies use the same tests (childhood SAT) that Eliezer excelled on (quite a lot higher than the 1 in 10,000 level), and that I was taking into account in my estimation.
Sources?
Also,
a: while that’d be fairly impressive, keep in mind that if it is quite a lot higher than 1 in 10,000 then my prior for it is quite a lot lower than 0.0001, with only minor updates up for ‘seeming clever’, and my prior for someone being a psychopath/liar is 0.01, with updates up for talking other people into giving you money.
b: not having something else likewise concrete to show off (e.g. contest results of some kind and the like) will at most make me up-estimate him into the bin with someone like Keith Raniere or Chris Langan (those did well on the SAT too), which is already the bin that he’s significantly in. Especially as he had been interested in programming, and programming is an area where you can literally make a LOT of money in just a couple of years while gaining experience and much better cred than a childhood SAT—but also an area that heavily tasks the general ability to think right and deal with huge amounts of learned information. My impression is that he’s a spoiled ‘math prodigy’ who didn’t really study anything beyond fairly elementary math, and my impression is that it’s his own impression too, except he thinks he can do advanced math with little effort using some intuition, while I’m pretty damn skeptical of such stuff unless well tested.
and programming is an area where you can literally make a LOT of money in just a couple of years while gaining experience and much better cred than a childhood SAT
I don’t think the childhood SAT gives that much “cred” for real-world efficacy, and I don’t conflate intelligence with “everything good a person can be.” Obviously, Eliezer is below average in the combination of conscientiousness, conformity, and so forth that causes most smart people to do more schooling. So I would expect lower performance on any given task than from a typical person of his level of intelligence. But it’s not that surprising that he would, say, continue popular blogging with significant influence on a sizable audience, rather than stop that (which he values for its effects) to work as a Google engineer to sock away a typical salary, or to do a software startup (which the stats show is pretty uncertain even for those with VC backing and previous successful startups).
‘math prodigy’ who didn’t really study anything beyond fairly elementary math, and my impression is that it’s his own impression
I agree on not having deep math knowledge, and this being reason to be skeptical of making very unusual progress in AI or FAI. However, while his math scores were high, “math prodigy” isn’t quite right, since his verbal scores were even higher. There are real differences in what you expect to happen depending on the “top skill.” In the SMPY data such people often take up professions like science (or science fiction) writer (or philosopher) that use the verbal skills too, even when they have higher raw math performance than others who go on to become hard science professors. It’s pretty mundane when such a person leans towards being a blogger rather than an engineer, especially when they are doing pretty well as the former. Eliezer has said that if not worried about x-risk he would want to become a science fiction writer, as opposed to a scientist.
Hey, Raniere was smart enough to get his own cult going.
Or old enough and disillusioned enough not to fight the cultist’s desire to admire someone.
Especially as he had been interested in programming, and programming is an area where you can literally make a LOT of money in just a couple of years while gaining experience and much better cred than a childhood SAT.
What salary level is good enough evidence for you to consider someone clever?
Notice that your criteria for impressive cleverness excludes practically every graduate student—the vast majority make next to nothing, have few “concrete” things to show off, etc.
My impression is that he’s a spoiled ‘math prodigy’ who didn’t really study anything beyond fairly elementary math, and my impression is that it’s his own impression too, except he thinks he can do advanced math with little effort using some intuition, while I’m pretty damn skeptical of such stuff unless well tested.
Except the interview you quoted says none of that.
JB: I can think of lots of big questions at this point, and I’ll try to get to some of those, but first I can’t resist asking: why do you want to study math?
EY: A sense of inadequacy.
[...]
[EY:] Even so, I was a spoiled math prodigy as a child—one who was merely amazingly good at math for someone his age, instead of competing with other math prodigies and training to beat them. My sometime coworker Marcello (he works with me over the summer and attends Stanford at other times) is a non-spoiled math prodigy who trained to compete in math competitions and I have literally seen him prove a result in 30 seconds that I failed to prove in an hour.
This is substantially different from EY currently being a math prodigy.
[EY:] I’ve come to accept that to some extent [Marcello and I] have different and complementary abilities—now and then he’ll go into a complicated blaze of derivations and I’ll look at his final result and say “That’s not right” and maybe half the time it will actually be wrong.
In other words, he’s no better than random chance, which is vastly different from “[thinking] he can do advanced math with little effort using some intuition.” By the same logic, you’d accept P=NP trivially.
[EY:] I’ve come to accept that to some extent [Marcello and I] have different and complementary abilities—now and then he’ll go into a complicated blaze of derivations and I’ll look at his final result and say “That’s not right” and maybe half the time it will actually be wrong.
In other words, he’s no better than random chance, which is vastly different from “[thinking] he can do advanced math with little effort using some intuition.” By the same logic, you’d accept P=NP trivially.
I don’t understand. The base rate for Marcello being right is greater than 0.5.
Maybe EY meant that, on the occasions that Eliezer objected to the final result, he was correct to object half the time. So if Eliezer objected to just 1% of the derivations, on that 1% our confidence in the result of the black box would suddenly drop down to 50% from 99.5% or whatever.
Yes, but that’s not “no better than random chance.”
Sure. I was suggesting a way in which an objection which is itself only 50% correct could be useful, contra Dmytry.
Oh, right. The point remains that even a perfect Oracle isn’t an efficient source of math proofs.
You do not understand how basic probability works. I recommend An Intuitive Explanation of Bayes’ Theorem.
[EY:] I’ve come to accept that to some extent [Marcello and I] have different and complementary abilities—now and then he’ll go into a complicated blaze of derivations and I’ll look at his final result and say “That’s not right” and maybe half the time it will actually be wrong.
In other words, he’s no better than random chance, which is vastly different from “[thinking] he can do advanced math with little effort using some intuition.” By the same logic, you’d accept P=NP trivially.
If a device gives a correct diagnosis 999,999 times out of 1,000,000 and is applied to a population that has about 1 in 1,000,000 chance of being positive then a positive diagnosis by the device has approximately 50% chance of being correct. That doesn’t make it “no better than random chance”. It makes it amazingly good.
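Spelling out that arithmetic: the prior is 10⁻⁶ and the likelihood ratio of a positive diagnosis is 0.999999 / 0.000001 ≈ 10⁶, so
P(actually positive | positive diagnosis) = (10⁻⁶ × 0.999999) / (10⁻⁶ × 0.999999 + (1 − 10⁻⁶) × 10⁻⁶) = 1/2,
since the true-positive and false-positive terms are exactly equal. A positive report multiplies the odds by a factor of about a million—the opposite of “no better than random chance.”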
Notice that your criteria for impressive cleverness excludes practically every graduate student—the vast majority make next to nothing, have few “concrete” things to show off, etc.
It’s not a criterion for cleverness; it is a criterion for evidence when the prior is 0.0001 (for 1 in 10,000). One can be clever at the one-in-7-billion level and never have done anything of interest, but I can’t detect such a person as clever at a one-in-10,000 level with any confidence without seriously strong evidence.
This is substantially different from EY currently being a math prodigy.
I meant, a childhood math prodigy.
In other words, he’s no better than random chance
If Marcello failed one time out of ten and Eliezer detected it half of the time, that would be better than chance. Without knowing Marcello’s failure rate (or how the failures are detected, besides being pointed out by EY), one can’t say whether it is better than chance or not.
The Bureau of Labor Statistics reports 728,000 lawyers in the U.S
I would have thought it obvious that I was talking about lawyers who have been developing law for at least a millennium, not merely currently living lawyers in one particular country. Oh well.
Since my posts seem to be being read so carelessly, I will no longer be posting on this thread. I highly recommend folks who want to learn more about where I’m coming from to visit my blog, Unenumerated. Also, to learn more about the evolutionary emergence of ethical and legal rules, I highly recommend Hayek—Fatal Conceit makes a good starting point.
I would have thought it obvious that I was talking about lawyers who have been developing law for at least a millennium, not merely currently living lawyers in one particular country. Oh well.
Since my posts seem to be being read so carelessly, I will no longer be posting on this thread.
A careful reading of my own comment would have revealed my references to the US as only one heavily lawyered society (useful for an upper bound on lawyer density, and representing a large portion of the developed world and legal population), and to the low population of past centuries (which make them of lesser importance for a population estimate), indicating that I was talking about the total over time and space (above some threshold of intelligence) as well.
I was presenting figures as the start of an estimate of the long-term lawyer population, and to indicate that to get “millions” one could not pick a high percentile within the population of lawyers—problematic given the intelligence of even 90th-percentile attorneys.
Is it really so hard to believe that there have been more than a million highly intelligent judges and influential lawyers since the Magna Carta was issued? (In my mind, the reference is to English Common Law—Civil Law works differently enough that counting participants is much harder).
As I said, I don’t think this proves what nickLW asserts follows from it, but I think the statement “More than a million fairly intelligent individuals have put in substantial amounts of work to make the legal system capable of solving social problems decently well” is true, if mostly irrelevant to AI.
Limiting to the common law tradition makes it even more dubious. Today, the population of England and Wales is around 60 million. Wikipedia says:
On the number of solicitors (barristers are much less numerous):
The number of solicitors qualified to work in England and Wales has rocketed over the past 30 years, according to new figures from the Law Society. The number holding certificates—which excludes retired lawyers and those no longer following a legal career—are at nearly 118,000, up 36% on ten years ago.
Or this:
There were 2,500 barristers and 32,000 solicitors in England and Wales in the early 1970s. Now there are 15,000 barristers and 115,000 solicitors.
And further in the past the overall population was much smaller, as well as poorer and with fewer lawyers (who were less educated, and more impaired by lead, micronutrient deficiencies, etc):
1315 – Between 4 and 6 million.[3]
1350 – 3 million or less.[4]
1541 – 2,774,000 [note 1][5]
1601 – 4,110,000 [5]
1651 – 5,228,000 [5]
1701 – 5,058,000 [5]
1751 – 5,772,000 [5]
1801 – 8,308,000 at the time of the first census. Census officials estimated at the time that there had been an increase of 77% in the preceding 100 years. In each county women were in the majority.[6] Wrigley and Schofield estimate 8,664,000 based on birth and death records.[5]
1811 – 9,496,000
“More than a million fairly intelligent individuals have put in substantial amounts of work
If we count litigating for particular clients on humdrum matters (the great majority of cases) in all legal systems everywhere, I would agree with this.
“have put in substantial amounts of work to make the legal system capable of solving social problems decently well”
It seems almost all the work is not directed at that task, or is duplicative, or is specialized to particular situations in ways that obsolesce. I didn’t apply much of this filter in the initial comment, but it seems pretty intense too.
Ok, you’ve convinced me that millions is an overestimate.
Summing the top 60% of judges, top 10% of practicing lawyers, and the top 10% of legal thinkers who were not practicing lawyers—since 1215, that’s more than 100,000 people. What other intellectual enterprise has had that commitment for that period of time? The military has more people total, but far fewer deep thinkers. Religious institutions, maybe? I’d need to think harder about how to appropriately play reference class tennis—the whole Catholic Church is not a fair comparison because it covers more people than the common law.
Stepping back for a moment, I still think your particular criticism of nickLW’s point is misplaced. Assuming that he’s referencing the intellectual heft and success of the common law tradition, he’s right that there’s a fair amount of heft there, regardless of his overestimate of the raw numbers.
The existence of that heft doesn’t prove what he suggests, but your argument seems to be assaulting the strongest part of his argument by asserting that there has not been a relatively enormous intellectual investment in developing the common law tradition. There has been a very large investment, and the investment has created a powerful institution.
I agree that the common law is a pretty effective legal system, reflecting the work of smart people adjudicating particular cases, and feedback over time (from competition between courts, reversals, reactions to and enforcement difficulties with judgments, and so forth). I would recommend it over civil law for a charter city importing a legal system.
But that’s no reason to exaggerate the underlying mechanisms and virtues. I also think that there is an active tendency in some circles to overhype those virtues, as they are tied to ideological disputes. [Edited to remove political label.]
but your argument seems to be assaulting the strongest part of his argument
Perhaps a strong individual claim, but I didn’t see it clearly connected to a conclusion.
Perhaps a strong individual claim, but I didn’t see it clearly connected to a conclusion.
I agree with you that it isn’t connected at all with his conclusions. Therefore, challenging it doesn’t challenge his conclusion. Nitpicking something that you think is irrelevant to the opposing side’s conclusion in a debate is logically rude.
And why should one pick a high percentile, exactly, if the priors for high percentiles are proportionally low and strong evidence is absent? What’s wrong with assuming ‘somewhat above median’, i.e. close to the 50th percentile? Why is that even really harsh?
Extreme standardized testing (after adjusting for regression to the mean), successful writer (by hits, readers, reviews; even vocabulary, which is fairly strongly associated with intelligence in large statistical samples), impressing top philosophers with his decision theory work, impressing very smart and influential people (e.g. Peter Thiel) in real-time conversation.
Why is that even really harsh?
It would be harsh to a graduate student from a top hard science program or law school. The median attorney in the US today, let alone over the world and history, is just not at that level.
impressing top philosophers with his decision theory work,
The TDT paper from 2012 reads like a popularization of something, not like a normal science paper on some formalized theory. I don’t think impressing ‘top philosophers’ is impressive.
It would be harsh to a graduate student from a top hard science program or law school.
Or to a writer who gets royalties larger than a typical lawyer’s salary. Or to a smart and influential person, e.g. Peter Thiel.
But a blogger who successfully talked a small-ish percentage of the people he could reach into giving him money for work on AI? That’s hardly the evidence to sway a 0.0001 prior. I do concede, though, that the median lawyer might be unable to do that (but I dunno—only a small percentage would be self-deluded or bad enough to try). The world is full of pseudoscientists, cranks, and hustlers who manage this, and more, and who do not seem to be particularly bright.
Wei, you and others here interested in my opinions on this topic would benefit from understanding more about where I’m coming from, which you can mainly do by reading my old essays (especially the three philosophy essays I’ve just linked to on Unenumerated). It’s a very different world view than the typical “Less Wrong” worldview: based far more on accumulated knowledge and far less on superficial hyper-rationality. You can ask any questions that you have of me there, as I don’t typically hang out here. As for your questions on this topic:
(1) There is insufficient evidence to distinguish it from an arbitrarily low probability.
(2) To state a probability would be an exercise in false precision, but at least it’s a clearly stated goal that one can start gathering evidence for and against.
(3) It depends on how clearly and formally the goal is stated, including the design of observations and/or experiments that can be done to accurately (not just precisely) measure progress towards and attainment or non-attainment of that goal.
As for what I’m currently working on, my blog Unenumerated is a good indication of my publicly accessible work. Also feel free to ask any follow-up questions or comments you have stemming from this thread there.
I’ve actually already read those essays (which I really enjoyed, BTW), but still often cannot see how you’ve arrived at your conclusions on the topics we’ve been talking about recently.
For the rest of your comment, you seem to have misunderstood my grandparent comment. I was asking you to respond to my arguments on each of the threads we were discussing, not just to tell me how you would answer each of my questions. (I was using the questions to refer to our discussions, not literally asking them. Sorry if I didn’t make that clear.)
It’s not something you can ever come close to competing with by a philosophy invented from scratch.
I don’t understand what you mean by this. Are you saying something like if a society was ever taken over by a Friendly AI, it would fail to compete against one ruled by law, in either a military or economic sense? Or do you mean “compete” in the sense of providing the most social good. Or something else?
I stand by my comment that “AGI” and “friendliness” are hopelessly anthropomorphic, infeasible, and/or vague.
I disagree with “hopelessly” “anthropomorphic” and “vague”, but “infeasible” I may very well agree with, if you mean something like it’s highly unlikely that a human team would succeed in creating a Friendly AGI before it’s too late to make a difference and without creating unacceptable risk, which is why I advocate more indirect methods of achieving it.
Computer “goals” are only usefully studied against actual algorithms, or clearly defined mathemetical classes of algorithms, not vague and imaginary concepts.
People are trying to design such algorithms, things like practical approximations to AIXI, or better alternatives to AIXI. Are you saying they should refrain from using the word “goals” until they have actually come up with concrete designs, or what? (Again I don’t advocate people trying to directly build AGIs, Friendly or otherwise, but your objection doesn’t seem to make sense.)
It’s not something you can ever come close to competing with by a philosophy invented from scratch.
I don’t understand what you mean by this.
A sufficient cause for Nick to claim this would be that he believed that no human-conceivable AI design would be able to incorporate by any means, including by reasoning from first principles or even by reference, anything functionally equivalent to the results of all the various dynamics of updating that have (for instance) made present legal systems as (relatively) robust (against currently engineerable methods of exploitation) as they are.
This seems somewhat strange to you, because you believe humans can conceive of AI designs that could reason some things from first principles (given observations of the world that the reasoning needed to be relevant to, plus reasonably anticipatable advantages of computing power over single humans) or incorporate results by reference.
One possible reason he might believe this would be that he believed that, whenever a human reasons about history or evolved institutions, there are something like two distinct levels of a computational complexity hierarchy at work, and that the powers of the greater level (history and the evolution of institutions) are completely inacessible to the powers of the lesser level (the human). (The machines representing the two levels in this case might be “the mental states accessible to a single armchair philosophy community”, or, alternatively, “fledgling AI which, per a priori economic intuition, has no advantage over a few philosophers”, versus “the physical states accessible in human history”.)
This belief of his might be charged with a sort of independent half-intuitive aversion to making the sorts of (frequently catastrophic) mistakes that are routinely made by people who think they can metaphorically breach this complexity barrier. One effect of such an aversion would be that he would intuitively anticipate that he would always be, at least in expected value, wrong to agree with such people, no matter what arguments they could turn out to have. That is, it wouldn’t increase his expected rightness to check to see if they were right about some proposed procedure to get around the complexity barrier, because, intuitively, the prior probability that they were wrong, the conditional probability that they would still be wrong despite being persuasive by any conventional threshold, and the wrongness of the cost that had empirically been inflicted on the world by mistakes of that sort, would all be so high. (I took his reference to Hayek’s Fatal Conceit, and the general indirect and implicitly argued emotional dynamic of this interaction, to be confirmation of this intuitive aversion.) By describing this effect explicitly, I don’t mean to completely psychologize here, or make a status move by objectification. Intuitions like the one I’m attributing can (and very much should!), of course, be raised to the level of verbally presented propositions, and argued for explicitly.
(For what it’s worth, the most direct counter to the complexity argument expressed this way is: “with enough effort it is almost certainly possible, even from this side of the barrier, to formalize how to set into motion entities that would be on the other side of the barrier”. To cover the pragmatics of the argument, one would also need to add: “and agreeing that this amount of effort is possible can even be safe, so long as everyone who heard of your agreement was sufficiently strongly motivated not to attempt shortcuts”.)
Another, possibly overlapping reason would have to do with the meta level that people around here normally imagine approaching AI safety problems from—that being, “don’t even bother trying to invent all the required philosophy yourself; instead do your best to try to formalize how to mechanically refer to the process that generated, and could continue to generate, something equivalent to the necessary philosophy, so as to make that process happen better or at least to maximally stay out of its way” (“even if this formalization turns out to be very hard to do, as the alternatives are even worse”). That meta level might be one that he doesn’t really think of as even being possible. One possible reason for this would be that he weren’t aware that anyone actually ever meant to refer to a meta level that high, so that he never developed a separate concept for it. Perhaps when he first encountered e.g. Eliezer’s account of the AI safety philosophy/engineering problem, the concept he came away with was based on a filled-in assumption about the default mistake that Eliezer must have made and the consequent meta level at which Eliezer meant to propose that the problem should be attacked, and that meta level was far too low for success to be conceivable, and he didn’t afterwards ever spontaneously find any reason to suppose you or Eliezer might not have made that mistake. Another possible reason would be that he disbelieved, on the above-mentioned a priori grounds, that the proposed meta level was possible at all. (Or, at least, that it could ever be safe to believe that it were possible, given the horrors perpetrated and threatened by other people who were comparably confident in their reasons for believing similar things.)
I only have time for a short reply:
(1) I’d rephrase the above to say that computer security is among the two most important things one can study with regard to this alleged threat.
(2) The other important thing is law. Law is the “offensive approach to the problem of security” in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart or smarter than Yudkoswky over more than a millenium, and tested empirically against the real world of real agents with a real diversity of values every day. It’s not something you can ever come close to competing with by a philosophy invented from scratch.
(3) I stand by my comment that “AGI” and “friendliness” are hopelessly anthropomorphic, infeasible, and/or vague.
(4) Computer “goals” are only usefully studied against actual algorithms, or clearly defined mathemetical classes of algorithms, not vague and imaginary concepts. Perhaps you can make some progress by for example advancing the study of postconditions, which seem to be the closest analog to goals in the software engineering world. One can imagine a world where postconditions are always checked, for example, and other software ignores the output of software that has violated one of its postconditions.
As a lawyer, I strongly suspect this statement is false. As you seem to be referring to the term, Law is society’s organizational rules about how and when to implement coercive violence. In the abstract, this is powerful, but concretely, this power is implemented by individuals. Some of them (i.e. police officers), care relatively little about the abstract issues—in other words, they aren’t careful about the issues that are relevant to AI.
Further, law is filled with backdoors—they are called legislators. In the United States, Congress can make almost any judicially announced rule irrelevant by passing a statute. If you call that process “Law,” then you aren’t talking about the institution that draws on “the work of millions of smart people” over time.
Finally, individual lawyers’ day-to-day work has almost no relationship to the parts of Law that you are suggesting is relevant to AI. Worse for your point, lawyers don’t even engage with the policy issues of law with any frequency. For example, a lawyer litigating contracts might never engage with what promises should be enforced in her entire career.
In short, your paragraph about law is misdirected and misleading.
That seems pretty harsh! The Bureau of Labor Statistics reports 728,000 lawyers in the U.S., a notably attorney-heavy society within the developed world. The SMPY study of kids with 1 in 10,000 cognitive test scores found (see page 722) only a small minority studying law. The 90th percentile IQ for “legal occupations” in this chart is a little over 130. Historically populations were much lower, nutrition was worse, legal education or authority was only available to a small minority, and the Flynn Effect had not occurred. Not to mention that law is disproportionately made by politicians who are selected for charisma and other factors in addition to intelligence.
It’s hard to know what to make of this.
Perhaps that the legal system is good at creating incentives that closely align the interests of those it governs with the social good, and that this will work on new types of being without much dependence on their decisionmaking processes?
Contracts and basic property rights certainly do seem to help produce wealth. On the other hand, financial regulation is regularly adjusted to try to nullify new innovation by financiers that poses systemic risks or exploits government guarantees, but the financial industry still frequently outmaneuvers the legal system. And of course the legal system depends on the loyalty of the security forces for enforcement, and makes use of ideological agreement among the citizenry that various things are right or wrong.
Restraining those who are much weaker is easier than restraining those who are strong. A more powerful analogy would be civilian control over military and security forces. There do seem to have been big advances in civilian control over the military in the developed countries (fewer coups, etc), but they seem to reflect changes in ideology and technology more than law.
If it is easy to enforce laws on new AGI systems, then the situation seems fairly tractable, even for AGI systems with across-the-board superhuman performance which take action based on alien and inhumane cost functions. But it doesn’t seem guaranteed that it will be easy to enforce such laws on smart AGIs, or that the trajectory of development will be “all narrow AI, all the time,” given the great economic value of human generality.
There’s 0.0001 prior for 1 in 10000 intelligence level. It’s a low prior, you need a genius detector with an incredibly low false positive rate before most of your ‘geniuses’ are actually smart. A very well defined problems with very clear ‘solved’ condition (such as multiple novel mathematical proofs or novel algorithmic solution to hard problem that others try to solve) would maybe suffice, but ‘he seems smart’ certainly would not. This also goes for IQ tests themselves—while a genius would have high IQ score, high IQ scored person would most likely be someone somewhat smart slipping through the crack between what IQ test measures and what intelligence is (case example, Chris Langan, or Keith Raniere, or other high IQ ‘geniuses’ we would never suspect of being particularly smart if not for IQ tests).
Weak and/or subjective evidence of intelligence, especially given lack of statistical independence of evidence, should not get your estimate of intelligence of anyone very high.
This is rather tangential, but I’m curious, out of those who score 1 in 10000 on a standard IQ test, what percentage is actually at least, say, 1 in 5000 in actual intelligence? Do you have a citation or personal estimate?
Depends what you call “actual intelligence” as distinct from what IQ tests measure. private_messaging talks a lot in terms of observable real-world achievements, so presumably is thinking of something along those lines.
The easiest interpretation to measure would be a regression toward the mean effect. Putting a lower bound on the IQ scores in your sample means that you have a relevant fraction of people who tested higher than their average test score. I suspect that at the high end, IQ tests have few enough questions scored incorrectly that noise can let some < 1 in 5000 IQ test takers into your 1 in 10000 cutoff.
I also didn’t note the other problem: 1 in 10,000 is around IQ=155; the ceiling of most standardized (validated and normed) intelligence tests is around 1 in 1000 (IQ~=149). Tests above this tend to be constructed by people who consider themselves in this range, to see who can join their high IQ society and not substantially for any other purpose.
Would depend to how you evaluate actual intelligence. IQ test, at high range, measures reliability in solving simple problems (combined with, maybe, environmental exposure similarity to test maker when it comes to progressive matrices and other ‘continue sequence’ cases—the predictions by Solomonoff induction depend to machine and prior exposure, too). As an extreme example consider an intelligence test of very many very simple and straightforward logical questions. It will correlate with IQ but at the high range it will clearly measure something different from intelligence. All the intelligent individuals will score highly on that test, but so will a lot of people who are simply very good at simple questions.
A thought experiment: picture a class room of mind uploads, set for a half the procedural skills to read only, and teach them the algebra class. Same IQ, utterly different outcome.
I would expect that if the actual intelligence correlates with IQ to the factor of 0.9 (VERY generous assumption), the IQ could easily become non-predictive at as low as 99th percentile without creating any contradiction with the observed general correlation. edit: that would make about one out of 50 people with IQ of one in 10 000 (or one in 1000 or 1 in 1000 0000 for that matter) be intelligent at level of 1 in 5 000. That seems kind of low, but then, we mostly don’t hear of the high IQ people just for IQ alone. edit: and the high IQ organizations like Mensa and the like are hopelessly unremarkable, rather than some ultra powerful groups of super-intelligences.
In any case the point is that the higher is the percentile the more confident you must be that you have no common failure mode between parts of your test.
edit: and for the record my IQ is 148 as measured on a (crappy) test in English which is not my native tongue. I also got very high percentile ratings in programming contest, and I used to be good at chess. I have no need to rationalize something here. I feel that a lot of this sheepish innumerate assumption that you can infer one in 10 000 level performance from a test in absence of failure modes of which you are far less certain than 99.99% , comes simply from signalling—to argue against applicability of IQ test in the implausibly high percentiles lets idiots claim that you must be stupid. When you want to select one in 10 000 level of performance in running 100 meters you can’t do it by measuring performance at a standing jump.
There are longitudinal studies showing that people with 99.99th percentile performance on cognitive tests have substantially better performance (on patents, income, tenure at top universities) than those at the 99.9th or 99th percentiles. More here.
Mensa is less selective than elite colleges or workplaces for intelligence, and much less selective for other things like conscientiousness, height, social ability, family wealth, etc. Far more very high IQ people are in top academic departments, Wall Street, and Silicon Valley than in high-IQ societies more selective than Mensa. So high-IQ societies are a very unrepresentative sample, selected to be less awesome in non-IQ dimensions.
Uses other tests than IQ test, right? I do not dispute that a cognitive test can be made which would have the required reliability for detecting the 99.99th percentile. The IQ tests, however, are full of ‘continue a short sequence’ tests that are quite dubious even in principle. It is fundamentally difficult to measure up into 99.99th percentile, you need a highly reliable measurement apparatus, carefully constructed in precisely the way in which IQ tests are not. Extreme rarities like one in 10 000 should not be thrown around lightly.
There are other societies. They all are not very selective for intelligence either, though, because they all rely on dubious tests.
I would say that this makes those other places be an unrepresentative sample of the “high IQ” individuals. Even if those individuals who pass highly selective requirements on something else rarely enter mensa, they are rare (tautology on highly selective) and their relative under representation in mensa doesn’t sway mensa’s averages.
edit: for example consider the Nobel Prize winners. They all have high IQs but there is considerable spread and the IQ doesn’t seem to correlate well with the estimate of “how many others worked on this and did not succeed”.
Note: I am using “IQ” in the narrow sense of “what IQ tests measure”, not as shorthand for intelligence. The intelligence has the capacity to learn component which IQ tests do not measure but tests of mathematical aptitude (with hard problems) or verbal aptitude do.
note2: I do not believe that the correlation entirely disappears even for IQ tests past 99th percentile. My argument is that for the typical IQ tests it well could. It’s just that the further you get up the smaller fraction of the excellence is actually being measured.
Administering SATs to younger children, to raise the ceiling.
Well Mensa is ~0 selectivity beyond the IQ threshold, and is a substitute good for other social networks, leaving it with the dregs. “Much more” is poor phrasing here, they’re not rejecting 90%. If you look at the linked papers you’ll see that a good majority of those at the 1 in 10,000 level on those childhood tests wind up with elite university/alumni or professional networks with better than Mensa IQ distributions.
Ghmmm. I’m sure this measures a plenty of highly useful personal qualities that correlate with income. E.g. rate of learning. Or inclination to pursue intellectual work.
Well, yes. I think we agree on all substantial points here but disagree on interpretation of my post. I referred specifically to “IQ tests” not to SAT, as lacking the rigour required for establishing 1 in 10 000 performance with any confidence, to balance on my point that e.g. ‘that guy seems smart’ shouldn’t possibly result in estimate of 1 in 10 000 , and neither could anything that relies on rather subjective estimate of the difficulty of the accomplishments in the settings where you can’t e.g. reliably estimate from number of other people who try and don’t succeed.
Note that these studies use the same tests (childhood SAT) that Eliezer excelled on (quite a lot higher than the 1 in 10,000 level), and that I was taking into account in my estimation.
Sources?
Also,
a: while that’d be fairly impressive, keep in mind that if it is quite a lot higher than 1 in 10 000 then my prior for it is quite a lot lower than 0.0001 with only minor updates up for ‘seeming clever’ , and my prior for someone being a psychopath/liar is 0.01, with updates up for talking other people into giving you money.
b: not having something else likewise concrete to show off (e.g. contest results of some kind and the like) will at most make me up-estimate him to bin with someone like Keith Raniere or Chris Langan (those did SAT well too), which is already the bin that he’s significantly in. Especially as he had been interested in programming, and the programming is the area where you can literally make a LOT of money in just a couple years while gaining the experience and gaining much better cred than childhood SAT. But also an area that heavily tasks general ability to think right and deal with huge amounts of learned information. My impression is that he’s a spoiled ‘math prodigy’ who didn’t really study anything beyond fairly elementary math, and my impression is that it’s his own impression except he thinks he can do advanced math with little effort using some intuition while i’m pretty damn skeptical of such stuff unless well tested.
I don’t think the childhood SAT gives that much “cred” for real-world efficacy, and I don’t conflate intelligence with “everything good a person can be.” Obviously, Eliezer is below average in the combination of conscientiousness, conformity, and so forth that causes most smart people to do more schooling. So I would expect lower performance on any given task than from a typical person of his level of intelligence. But it’s not that surprising that he would, say, continue popular blogging with significant influence on a sizable audience, rather than stop that (which he values for its effects) to work as a Google engineer to sack away a typical salary, or to do a software startup (which the stats show is pretty uncertain even for those with VC backing and previous successful startups).
I agree on not having deep math knowledge, and this being reason to be skeptical of making very unusual progress in AI or FAI. However while his math scores were high, “math prodigy” isn’t quite right, since his verbal scores were even higher. There are real differences in what you expect to happen depending on the “top skill.” In the SMPY data such people often take up professions like science (or science fiction) writer (or philosopher) that use the verbal skills too, even when they have higher raw math performance than others who go to on to become hard science professors. It’s pretty mundane when such a person leans towards being a blogger rather than an engineer, especially when they are doing pretty well as the former. Eliezer has said that if not worried about x-risk he would want to become a science fiction writer, as opposed to a scientist.
Hey, Raniere was smart enough to get his own cult going.
Or old enough and disillusioned enough not to fight the cultist’s desire to admire someone.
What salary level is good enough evidence for you to consider someone clever?
Notice that your criteria for impressive cleverness excludes practically every graduate student—the vast majority make next to nothing, have few “concrete” things to show off, etc.
Except the interview you quoted says none of that.
[...]
This is substantially different from EY currently being a math prodigy.
In other words, he’s no better than random chance, which is vastly different from “[thinking] he can do advanced math with little effort using some intuition.” By the same logic, you’d accept P=NP trivially.
I don’t understand. The base rate for Marcello being right is greater than 0.5.
Maybe EY meant that, on the occasions that Eliezer objected to the final result, he was correct to object half the time. So if Eliezer objected to just 1% of the derivations, on that 1% our confidence in the result of the black box would suddenly drop down to 50% from 99.5% or whatever.
Yes, but that’s not “no better than random chance.”
Sure. I was suggesting a way in which an objection which is itself only 50% correct could be useful, contra Dmytry.
Oh, right. The point remains that even a perfect Oracle isn’t an efficient source of math proofs.
You do not understand how basic probability works. I recommend An Intuitive Explanation of Bayes’ Theorem.
If a device gives a correct diagnosis 999,999 times out of 1,000,000 and is applied to a population that has about 1 in 1,000,000 chance of being positive then a positive diagnosis by the device has approximately 50% chance of being correct. That doesn’t make it “no better than random chance”. It makes it amazingly good.
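Spelling out the arithmetic with the figures above (prevalence and error rate both one in a million):

\[
P(\text{positive} \mid +) = \frac{(1 - 10^{-6}) \cdot 10^{-6}}{(1 - 10^{-6}) \cdot 10^{-6} + 10^{-6} \cdot (1 - 10^{-6})} = 0.5,
\]

i.e. exactly 50%, even though the test multiplies the odds of being positive by roughly a factor of a million.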
It’s not a criterion for cleverness, it is a criterion for evidence when the prior is 0.0001 (for 1 in 10,000). One can be clever at the one-in-7-billion level and never have done anything of interest, but I can’t detect such a person as clever at the one-in-10,000 level with any confidence without seriously strong evidence.
I meant, a childhood math prodigy.
If Marcello failed one time out of ten and Eliezer detected it half of the time, that would be better than chance. Without knowing Marcello’s failure rate (or how the failures were detected besides being pointed out by EY), one can’t say whether it is better than chance or not.
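To make that concrete under assumed numbers (the 10% failure rate is from the sentence above; the 5% false-alarm rate is purely illustrative, since the actual rate is the unknown in question):

\[
P(\text{fail} \mid \text{objection}) = \frac{0.1 \times 0.5}{0.1 \times 0.5 + 0.9 \times 0.05} = \frac{0.05}{0.095} \approx 0.53,
\]

a roughly five-fold update on the 10% base rate. More generally, an objection carries information exactly when the false-alarm rate on correct derivations is lower than the 50% detection rate on failures.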
I would have thought it obvious that I was talking about lawyers who have been developing law for at least a millennium, not merely currently living lawyers in one particular country. Oh well.
Since my posts seem to be read so carelessly, I will no longer be posting on this thread. I highly recommend folks who want to learn more about where I’m coming from to visit my blog, Unenumerated. Also, to learn more about the evolutionary emergence of ethical and legal rules, I highly recommend Hayek: Fatal Conceit makes a good starting point.
A careful reading of my own comment would have revealed that I referred to the US as only one heavily lawyered society (useful for an upper bound on lawyer density, and representing a large portion of the developed world and its legal population), and to the low population of past centuries (which makes them matter less for a population estimate), indicating that I was talking about the total over time and space (above some threshold of intelligence) as well.
I was presenting figures as the start of an estimate of the long-term lawyer population, and to indicate that one cannot get to “millions” by picking only a high percentile within the population of lawyers, which is a problem given the intelligence of even 90th-percentile attorneys.
Is it really so hard to believe that there have been more than a million highly intelligent judges and influential lawyers since the Magna Carta was issued? (In my mind, the reference is to English Common Law—Civil Law works differently enough that counting participants is much harder).
As I said, I don’t think this proves what nickLW asserts follows from it, but I think the statement “More than a million fairly intelligent individuals have put in substantial amounts of work to make the legal system capable of solving social problems decently well” is true, if mostly irrelevant to AI.
Limiting to the common law tradition makes it even more dubious. Today, the population of England and Wales is around 60 million. Wikipedia says:
On the number of solicitors (barristers are much less numerous):
Or this:
And further in the past the overall population was much smaller, as well as poorer and with fewer lawyers (who were less educated, and more impaired by lead, micronutrient deficiencies, etc):
1315 – between 4 and 6 million [3]
1350 – 3 million or less [4]
1541 – 2,774,000 [5]
1601 – 4,110,000 [5]
1651 – 5,228,000 [5]
1701 – 5,058,000 [5]
1751 – 5,772,000 [5]
1801 – 8,308,000 at the time of the first census; census officials then estimated an increase of 77% over the preceding 100 years, and in each county women were in the majority [6]. Wrigley and Schofield estimate 8,664,000 based on birth and death records [5]
1811 – 9,496,000
If we count litigating for particular clients on humdrum matters (the great majority of cases) in all legal systems everywhere, I would agree with this.
It seems almost all the work is either not directed at that task, or duplicative, or specialized to particular situations in ways that become obsolete. I didn’t apply much of this filter in the initial comment, but it seems to cut pretty deeply too.
Ok, you’ve convinced me that millions is an overestimate.
Summing the top 60% of judges, top 10% of practicing lawyers, and the top 10% of legal thinkers who were not practicing lawyers—since 1215, that’s more than 100,000 people. What other intellectual enterprise has that commitment for that period of time? The military has more people total, but far fewer deep thinkers. Religious institutions, maybe? I’d need to think harder about how to appropriately play reference class tennis—the whole Catholic Church is not a fair comparison because it covers more people than the common law.
Stepping back for a moment, I still think your particular criticism of nickLW’s point is misplaced. Assuming that he’s referencing the intellectual heft and success of the common law tradition, he’s right that there’s a fair amount of heft there, regardless of his overestimate of the raw numbers.
The existence of that heft doesn’t prove what he suggests, but your argument seems to be assaulting the strongest part of his argument by asserting that there has not been a relatively enormous intellectual investment in developing the common law tradition. There has been a very large investment, and the investment has created a powerful institution.
I agree that the common law is a pretty effective legal system, reflecting the work of smart people adjudicating particular cases, and feedback over time (from competition between courts, reversals, reactions to and enforcement difficulties with judgments, and so forth). I would recommend it over civil law for a charter city importing a legal system.
But that’s no reason to exaggerate the underlying mechanisms and virtues. I also think that there is an active tendency in some circles to overhype those virtues, as they are tied to ideological disputes. [Edited to remove political label.]
Perhaps a strong individual claim, but I didn’t see it clearly connected to a conclusion.
I agree with you that it isn’t connected at all with his conclusions. Therefore, challenging it doesn’t challenge his conclusion. Nitpicking something that you think is irrelevant to the opposing side’s conclusion in a debate is logically rude.
And why should one pick a high percentile, exactly, if the priors for high percentiles are proportionally low and strong evidence is absent? What’s wrong with assuming ‘somewhat above median’, i.e. close to the 50th percentile? Why is that even really harsh?
Extreme standardized testing (after adjusting for regression to the mean), successful writer (by hits, readers, reviews; even vocabulary, which is fairly strongly associated with intelligence in large statistical samples), impressing top philosophers with his decision theory work, impressing very smart and influential people (e.g. Peter Thiel) in real-time conversation.
It would be harsh to a graduate student from a top hard science program or law school. The median attorney figure in the US today, let alone over the world and history, is just not that high.
The TDT paper from 2012 reads like a popularization of something, not like a normal scientific paper on a formalized theory. I don’t think impressing ‘top philosophers’ is impressive.
Or to a writer who gets royalties larger than a typical lawyer’s salary. Or to a smart and influential person, e.g. Peter Thiel.
But a blogger who successfully talked a small-ish percentage of the people he could reach into giving him money for work on AI? That’s hardly evidence to sway a 0.0001 prior. I do concede that a median lawyer might be unable to do that (though I don’t know—only a small percentage would be self-deluded or unscrupulous enough to try). The world is full of pseudoscientists, cranks, and hustlers who manage this, and more, and who do not seem to be particularly bright.
Nick, do you see a fault in how I’ve been carrying on our discussions as well? Because you’ve also left several of our threads dangling, including:
How likely is it that an AGI will be created before all of its potential economic niches have been filled by more specialized algorithms?
How much hope is there for “security against malware as strong as we can achieve for symmetric key cryptography”?
Does “hopelessly anthropomorphic and vague” really apply to “goals”?
(Of course it’s understandable if you’re just too busy. If that’s the case, what kind of projects are you working on these days?)
Wei, you and others here interested in my opinions on this topic would benefit from understanding more about where I’m coming from, which you can mainly do by reading my old essays (especially the three philosophy essays I’ve just linked to on Unenumerated). It’s a very different worldview from the typical “Less Wrong” worldview: based far more on accumulated knowledge and far less on superficial hyper-rationality. You can ask any questions that you have of me there, as I don’t typically hang out here. As for your questions on this topic:
(1) There is insufficient evidence to distinguish it from an arbitrarily low probability.
(2) To state a probability would be an exercise in false precision, but at least it’s a clearly stated goal that one can start gathering evidence for and against.
(3) It depends on how clearly and formally the goal is stated, including the design of observations and/or experiments that can be done to accurately (not just precisely) measure progress towards and attainment or non-attainment of that goal.
As for what I’m currently working on, my blog Unenumerated is a good indication of my publicly accessible work. Also feel free to ask any follow-up questions or comments you have stemming from this thread there.
I’ve actually already read those essays (which I really enjoyed, BTW), but still often cannot see how you’ve arrived at your conclusions on the topics we’ve been talking about recently.
For the rest of your comment, you seem to have misunderstood my grandparent comment. I was asking you to respond to my arguments on each of the threads we were discussing, not just to tell me how you would answer each of my questions. (I was using the questions to refer to our discussions, not literally asking them. Sorry if I didn’t make that clear.)
I don’t understand what you mean by this. Are you saying something like if a society was ever taken over by a Friendly AI, it would fail to compete against one ruled by law, in either a military or economic sense? Or do you mean “compete” in the sense of providing the most social good? Or something else?
I disagree with “hopelessly” “anthropomorphic” and “vague”, but “infeasible” I may very well agree with, if you mean something like it’s highly unlikely that a human team would succeed in creating a Friendly AGI before it’s too late to make a difference and without creating unacceptable risk, which is why I advocate more indirect methods of achieving it.
People are trying to design such algorithms, things like practical approximations to AIXI, or better alternatives to AIXI. Are you saying they should refrain from using the word “goals” until they have actually come up with concrete designs, or what? (Again I don’t advocate people trying to directly build AGIs, Friendly or otherwise, but your objection doesn’t seem to make sense.)
A sufficient cause for Nick to claim this would be that he believed no human-conceivable AI design could incorporate, by any means (including by reasoning from first principles, or even by reference), anything functionally equivalent to the results of all the various dynamics of updating that have, for instance, made present legal systems as (relatively) robust against currently engineerable methods of exploitation as they are.
This seems somewhat strange to you, because you believe humans can conceive of AI designs that could reason some things out from first principles (given observations of the world that the reasoning needs to be relevant to, plus reasonably anticipatable advantages in computing power over a single human) or incorporate results by reference.
One possible reason he might believe this would be that he believed that, whenever a human reasons about history or evolved institutions, there are something like two distinct levels of a computational complexity hierarchy at work, and that the powers of the greater level (history and the evolution of institutions) are completely inaccessible to the powers of the lesser level (the human). (The machines representing the two levels in this case might be “the mental states accessible to a single armchair philosophy community”, or, alternatively, “a fledgling AI which, per a priori economic intuition, has no advantage over a few philosophers”, versus “the physical states accessible in human history”.)
This belief of his might be charged with a sort of independent half-intuitive aversion to making the sorts of (frequently catastrophic) mistakes that are routinely made by people who think they can metaphorically breach this complexity barrier. One effect of such an aversion would be that he would intuitively anticipate that he would always be, at least in expected value, wrong to agree with such people, no matter what arguments they could turn out to have.

That is, it wouldn’t increase his expected rightness to check to see if they were right about some proposed procedure to get around the complexity barrier, because, intuitively, the prior probability that they were wrong, the conditional probability that they would still be wrong despite being persuasive by any conventional threshold, and the wrongness of the cost that had empirically been inflicted on the world by mistakes of that sort, would all be so high. (I took his reference to Hayek’s Fatal Conceit, and the general indirect and implicitly argued emotional dynamic of this interaction, to be confirmation of this intuitive aversion.)

By describing this effect explicitly, I don’t mean to completely psychologize here, or make a status move by objectification. Intuitions like the one I’m attributing can (and very much should!), of course, be raised to the level of verbally presented propositions, and argued for explicitly.
(For what it’s worth, the most direct counter to the complexity argument expressed this way is: “with enough effort it is almost certainly possible, even from this side of the barrier, to formalize how to set into motion entities that would be on the other side of the barrier”. To cover the pragmatics of the argument, one would also need to add: “and agreeing that this amount of effort is possible can even be safe, so long as everyone who heard of your agreement was sufficiently strongly motivated not to attempt shortcuts”.)
Another, possibly overlapping reason would have to do with the meta level from which people around here normally imagine approaching AI safety problems. That level is: “don’t even bother trying to invent all the required philosophy yourself; instead do your best to formalize how to mechanically refer to the process that generated, and could continue to generate, something equivalent to the necessary philosophy, so as to make that process happen better or at least to stay out of its way as much as possible” (“even if this formalization turns out to be very hard to do, as the alternatives are even worse”).

That meta level might be one that he doesn’t really think of as even being possible. One possible reason would be that he wasn’t aware anyone actually meant to refer to a meta level that high, so that he never developed a separate concept for it. Perhaps when he first encountered e.g. Eliezer’s account of the AI safety philosophy/engineering problem, the concept he came away with was based on a filled-in assumption about the default mistake Eliezer must have made, and the consequent meta level at which Eliezer meant to propose attacking the problem; that meta level was far too low for success to be conceivable, and he never afterwards found any reason to suppose you or Eliezer might not have made that mistake. Another possible reason would be that he disbelieved, on the above-mentioned a priori grounds, that the proposed meta level was possible at all (or, at least, that it could ever be safe to believe it possible, given the horrors perpetrated and threatened by other people who were comparably confident in their reasons for believing similar things).