Book review: On the Edge: The Art of Risking Everything, by Nate Silver.
Nate Silver’s latest work straddles the line between journalistic
inquiry and subject matter expertise.
“On the Edge” offers a valuable lens through which to understand
analytical risk-takers.
The River versus The Village
Silver divides the interesting parts of the world into two tribes.
On his side, we have “The River”—a collection of eccentrics typified
by Silicon Valley entrepreneurs and professional gamblers, who tend to
be analytical, abstract, decoupling, competitive, critical,
independent-minded (contrarian), and risk-tolerant.
On the other, “The Village”—the east coast progressive
establishment, including politicians, journalists, and the more
politicized corners of academia.
Like most tribal divides, there’s some arbitrariness in which otherwise
unrelated beliefs end up correlated. So I don’t recommend
trying to find a more rigorous explanation of the tribes than what I’ve
described here.
Here are two anecdotes that Silver offers to illustrate the divide:
In the lead-up to the 2016 US election, Silver gave Trump a 29% chance
of winning, while prediction markets hovered around 17%, and many
pundits went even lower. When Trump won, the Village turned on Silver
for his “bad” forecast. Meanwhile, the River thanked him for helping
them profit by betting against those who underestimated Trump’s
chances.
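To spell out the arithmetic of that profit, here’s a minimal sketch, assuming a standard binary prediction-market contract and ignoring fees; the 29% and 17% figures are from the anecdote, everything else is illustrative.

```python
# EV of buying a binary "Trump wins" contract at the market-implied price,
# evaluated under Silver's 29% forecast. Illustrative numbers only; real
# prediction markets add fees, spreads, and position limits.

def ev_per_dollar(p_true: float, price: float) -> float:
    """EV of $1 spent on contracts that each pay $1 if the event happens."""
    contracts = 1.0 / price          # $1 buys 1/price contracts
    return p_true * contracts - 1.0  # collect $1 per contract on a win

print(f"{ev_per_dollar(p_true=0.29, price=0.17):+.2f}")  # -> +0.71 per $1 staked
```

If Silver’s number was closer to the truth than the market’s, each dollar bet on Trump had an expected profit of about 71 cents. The second anecdote is a poker hand, with Tom Dwan deciding whether to call a possible bluff: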
Wesley had to be bluffing 25 percent of the time to make Dwan’s call
correct; his read on Wesley’s mindset was tentative, but maybe that
was enough to get him from 20 percent to 24. … maybe Wesley’s
physical mannerisms—like how he put his chips in quickly … got
Dwan from 24 percent to 29. … If this kind of thought process seems
alien to you—well, sorry, but your application to the River has been
declined.
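The 25 percent figure is standard pot-odds arithmetic. Here’s a minimal sketch; the hand’s actual stakes aren’t given above, so the half-pot bet size is a hypothetical chosen purely to reproduce that threshold.

```python
# Facing a bet B on top of a pot P, calling wins P + B when the bettor is
# bluffing and loses B otherwise, so the call breaks even when
# p*(P + B) = (1 - p)*B, i.e. p = B / (P + 2*B). Stake sizes here are
# hypothetical.

def breakeven_bluff_freq(pot: float, bet: float) -> float:
    """Minimum bluff frequency at which calling becomes profitable."""
    return bet / (pot + 2 * bet)

def call_ev(pot: float, bet: float, p_bluff: float) -> float:
    """Expected value of calling, in the same units as pot and bet."""
    return p_bluff * (pot + bet) - (1 - p_bluff) * bet

pot, bet = 2.0, 1.0  # a half-pot bet gives the 25% break-even point
print(f"break-even bluff frequency: {breakeven_bluff_freq(pot, bet):.0%}")
for p in (0.20, 0.24, 0.29):  # Dwan's successive estimates in the quote
    print(f"p_bluff = {p:.0%}: EV of calling = {call_ev(pot, bet, p):+.2f}")
```

Each upward revision of the bluff estimate moves the call from clearly losing (-0.20) to marginally losing (-0.04) to profitable (+0.16), which is why those small reads were worth chasing.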
Silver is concerned about increasingly polarized attitudes toward risk:
you have Musk at one extreme and people who haven’t left their
apartment since COVID at the other one. The Village and the River are
growing farther apart.
13 Habits of Highly Successful Risk-Takers
The book lists 13 habits associated with the River. I hoped these would
improve on Tetlock’s ten commandments for superforecasters. Some of
Silver’s habits do provide better forecasting advice, while
others function more as litmus tests for River membership. Silver
understands the psychological challenges better than Tetlock does. Here
are a few:
Strategic Empathy:
But I’m not talking about coming across an injured puppy and having
it tug at your heartstrings. Instead, I’m speaking about adversarial
situations like poker—or war.
I.e. accurately modeling what’s going on in an opponent’s mind.
Strategic empathy isn’t how I’d phrase what I’m doing on the stock
market, where I’m rarely able to identify who I’m trading against. But
it’s fairly easy to generalize Silver’s advice so that it does
coincide with an important habit of mine: always wonder why a competent
person would take the other side of a trade that I’m making.
This attitude represents an important feature of the River: we in this
tribe aim to respect our adversaries, often because we’ve sought
out fields where we can’t win using other approaches.
This may not be the ideal form of empathy, but it’s pretty effective at
preventing Riverians from treating others as less than human. The
Village may aim to generate more love than does the River, but it also
generates more hate (e.g. of people who use the wrong pronouns).
Abhor mediocrity: take a raise-or-fold attitude toward life.
I should push myself a bit in this direction. But I feel that erring on
the side of caution (being a
nit in poker parlance)
is preferable to becoming the next Sam Bankman-Fried.
Allocate attention carefully. This is one of the most essential
habits. Maybe one third of my stock market errors are due to missing
some important factor because I’m distracted by some slightly less
important evidence. E.g. in February 2020 I failed to ask how disruptive
COVID would be. In normal times, my edge typically comes from
reading lots of earnings reports, effectively looking for a few needles
in a haystack. I got a little too obsessed with comparing individual
companies, and left too little spare attention for broader questions.
Successful risk-takers are not driven by money.
But poker players are distinct for two reasons. First, they’re so
fiercely competitive that money mostly serves as a way to keep score.
… Second, gambling for such high stakes requires a certain
desensitization to them.
Artificial Intelligence
Silver acknowledges the significance of AI, criticizing the Village for
ignoring what he calls the scientific consensus on AI risk.
While “consensus” might be too strong a word—there’s expert
disagreement about the likelihood and nature of AI going rogue, and
whether AI replacing humanity would be bad—expert opinion here differs
markedly from reactions to any historical innovation. I’m disturbed by
the extent to which highly competent people disagree about key
forecasts.
Silver has listened fairly carefully to Eliezer Yudkowsky’s pessimistic
views about AI, and has decided that Eliezer is a bit too much of a
hedgehog. Silver encourages us to adopt a range of more uncertain models
of AI outcomes, such as views suggested by Ajeya
Cotra.
One of Silver’s comments that seems wrong is his claim that
participants in the Existential Risk Persuasion
Tournament
(XPT) who disagreed about AI risk “really didn’t get along” (I’m
unclear whether that’s Silver’s misinterpretation or Tetlock’s).
Silver excels at describing the relevant differences in beliefs about
AI, introducing a “technological Richter scale”. Skeptics say AI is
over-hyped, and will be at most an 8 this century on Silver’s scale
(i.e. the most important invention since the internet), whereas AI
worriers say it will be more like a 10 (the biggest event since humans
became the dominant species). Approximately nobody thinks AI over the
next few decades will be between 8.5 and 9.5 on this scale.
Yet I got along pretty well with the forecasters in that tournament with
whom I disagreed. Everyone who argued about AI seemed sane, and somewhat
competent. It felt like we were all doing our best, given the limited
time that we devoted to this issue, to impartially arrive at the best
forecast. We disagreed only about a small number of (very important)
factual claims.
I have many ways of modeling AI that suggest it will be above 9.5 in a
decade or so, but most of them are hard to convincingly articulate. E.g.
I have an intuitive measure of the rate at which AI has been changing
from very special-purpose a decade ago, to quite general-purpose today.
I could put numbers on that, but skeptics would suspect that I’m
picking those numbers to fit my desired conclusion. I’ve got lots of
little pieces of evidence, mostly from interacting with AIs, but it
takes a large number of those observations to add up to strong evidence.
I was frustrated at how those forecasters allocated their attention. But
part of being a good forecaster involves resisting many attempts to
influence what evidence we should look at.
Many, but not all, parts of the debates over AI feel this way to me. It
reminds me of what Silver reports about competing VCs being friends with
each other.
Utilitarianism and Effective Altruism
Silver takes Effective Altruism (EA) seriously enough to provide a
thoughtful explanation of why he doesn’t consider himself an EA, in
spite of agreeing with most of the reasoning behind EA.
He focuses much of his criticism on the utilitarian aspects of EA.
Silver and I are both uncomfortable with utilitarianism’s
impartiality rule, i.e. the assertion that all people are equally
valuable, even those in a distant galaxy millions of years in the
future. I’m unsure how much Silver’s reasons overlap with mine.
Many people will agree to something like impartiality among a small
enough group. That doesn’t mean they’ve agreed to accept impartiality
as a universal principle. Nor do I see much of an argument that they
ought to do so.
The relevant
book
by Peter Singer asks us to observe that people sometimes can’t explain
why they treat nearby people as more deserving of help than strangers on
a distant continent, then asserts, based mainly on intuition, that
we’ve got a moral obligation to reject that unequal treatment.
Silver proposes an inversion of Singer’s drowning child parable:
Think about the ten people in the world that are most important to you
on a personal basis. They can be children, parents, siblings, friends,
lovers, mentors—whomever you want. Suppose I offer to humanely
euthanize these ten people. In exchange, eleven random people from
around the world will be saved. Is it moral to kill the ten people to
save the eleven?
I’m too selfish to accept the utilitarian answer to this parable.
There’s a big difference between being altruistic with 10% of my
income, and being altruistic with all the decisions in my life. EAs
don’t agree on a principled answer to how altruistic I should be, and
usually settle for pragmatic answers.
EAs push for extending some sort of impartiality globally and to some
future generations, without having much of a consensus on how far to
extend that (to insects? to our AI descendants centuries in the
future?). I care a bit about people in other galaxies millions of years
in the future, but I’m not willing to value them the same as I value
the people in my life.
Silver portrays the EA movement as being more utilitarian than it is in
practice. A few hardcore utilitarians such as Peter Singer were
influential in starting the EA movement. Those utilitarians have mostly
stuck to philosophical writing. The decisions about where to donate and
what charities to create have been dominated more by people who reject
pure utilitarianism, and lean somewhat toward the moral
parliament
approach to ethics that Silver prefers. It seems like the main (only?)
difference between Silver’s rejection of EA and my habit of usually
classifying myself as an EA is that we focus on different wings of the
movement.
SBF
Silver is in an unusually good position to evaluate Sam Bankman-Fried
(SBF), being an expert at handling risk, and having interviewed SBF at
key times. Silver does a good job of focusing on the most important
facts of SBF’s character.
Silver detected no advance warning that SBF was committing crimes, but
did notice some signs that FTX might collapse:
SBF was quite specifically insistent that people ought to be willing
to risk having their lives end in ruin.
One warning that EAs neglected was SBF’s decision to spend millions of
dollars on Carrick Flynn’s congressional candidacy. Silver blames SBF
for causing Flynn’s defeat, by thoughtlessly spending money in a way
that looked weird and annoying. It wasn’t obvious to me at the time
what went wrong there, but Silver knows a good deal more about elections
than I do, so he’s likely correct.
Silver tries to shoehorn SBF into categories of cognizant versus
negligent, and proficient versus deficient. Thankfully, he avoids such
simple categorization when evaluating SBF’s altruism (it seems clear to
me that SBF had some altruistic motives for being vegan, and also had
some less noble instincts, maybe Trump-like megalomania?).
Silver classifies SBF as cognizant and deficient. That seems too
simplistic.
My read is that SBF was mostly quite competent, and mostly knew what he
was doing. But he was wildly inconsistent about those abilities, in a
way that suggests his initial successes led to extreme overconfidence.
He reminds me in some ways of Trump, who showed remarkable skill at
finding a nearly impossible path to victory in 2016, while also showing
a remarkable lack of skill at handling his 2020 defeat, and poor skills
at handling his criminal trials. Both Trump and SBF seem selectively
delusional, particularly about the possibility that they might make
mistakes.
The VC World
The best VC firms have a 50% success rate at picking investments, as
measured by how often they at least break even. In order to justify the
high risks they take, they mostly depend on the 10x+ returns that they
get 10% of the time. The average VC firm has a much lower success rate.
What’s it like to be a bit less than the best VC?
It’s hard for most people to make bets that they know will usually
fail. Many of the distinctive traits of Silicon Valley—from the
increasing openness of psychedelic drug use, to the tolerance for
difficult founders, to the tendency of VCs to pontificate on political
issues—reflect a lack of fear of looking stupid.
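To make the earlier numbers concrete, here’s a toy portfolio model. The 10% chance of a 10x return and the 50% break-even rate come from the paragraph above; the remaining outcome buckets, and the numbers for the average firm, are assumptions I’ve made up for illustration.

```python
# Toy model of VC fund economics: the expected multiple is dominated by
# rare large outcomes, not by the hit rate on break-even investments.
# Outcome buckets other than the 10%-at-10x figure are invented.

def expected_multiple(outcomes: list[tuple[float, float]]) -> float:
    """Expected return multiple from (probability, multiple) pairs."""
    return sum(p * m for p, m in outcomes)

best_firm = [
    (0.10, 10.0),  # ~10% of picks return 10x+ (the book's figure)
    (0.40, 1.5),   # rest of the ~50% successes: modest wins (assumed)
    (0.50, 0.2),   # the other half lose most of the money (assumed)
]
average_firm = [
    (0.05, 10.0),  # half as many big hits (assumed)
    (0.30, 1.5),
    (0.65, 0.2),
]

print(f"best firm:    {expected_multiple(best_firm):.2f}x per investment")
print(f"average firm: {expected_multiple(average_firm):.2f}x per investment")
```

Halving the frequency of big hits drops the expected multiple from about 1.7x to about 1.1x per investment, before fees; the rare outliers, not the break-even rate, are what separate the best firms from the rest.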
The startup world selects for founders who had a moderately comfortable
childhood. But it selects against the super-rich. Founders need to take
big risks, and to feel a strong need for further success. Above a certain
level (upper middle class?), additional childhood wealth makes a person
less willing to risk years of hard work to get ahead.
Silver presents some evidence that VCs discriminate against minorities,
especially black women founders. I find it hard to tell how strong this
effect is. There are certainly some important times when VCs decide not
to invest on the grounds that other VCs are unlikely to invest in that
startup. It sometimes takes only a mild expectation that other VCs will
stereotype the founders for a VC to reject a startup, on the grounds that
it will fail for lack of funding.
But that only applies to startups that depend on multiple rounds of
funding. Aren’t there some valuable startups that can be adequately
funded by a single VC? If so, any VC who can spot the neglected startups
can succeed by investing in them. That would tend to limit the
discrimination. I’m confident that such startups existed in the 90s.
I’m less sure what the current situation is.
Silver implies that something is wrong with my hypothesis, because it’s
surprisingly rare for a new VC firm to displace existing ones. Silver
presents good reasons to expect that VC firms will be self-perpetuating,
due to startups preferring deals with prestigious firms. But that only
works if less prestigious VC firms can’t exploit the mistakes of the
prestigious ones.
That leaves me feeling confused as to how many valuable startups fail to
get funded.
Concluding Thoughts
Silver ends by proposing to replace the French national motto with a
motto that’s more appropriate for an age of AI: Agency, Plurality, and
Reciprocity. That feels kind of good, yet doesn’t express my goals
clearly enough that I want to adopt it as my motto.
The book mostly describes my tribe, although I want to somewhat downplay
the risk-tolerant aspects of it. I put up with stock market risks,
because doing so has helped with my financial security. I’m preparing
for high-risk decisions about AI, because I don’t see how to avoid
them.
I normally do some research to fact-check books like this. In this case,
I can instead confirm, partly from direct experience, that the book is at least
95% correct. The places where he seems to have some facts wrong involve
things where I have insider-type information that I wouldn’t expect an
author to uncover with merely a year’s worth of research.
I’m glad to have a book that I can usefully point to when explaining my
worldview.