Commentary (there will be a lot of “to me”s because I have been a bystander to this exchange so far):
I think this post misunderstands Holden’s point, because it looks like it’s still talking about agents. Tool AI, to me, is a decision support system: I tell Google Maps where I will start from and where I want to end up, and it generates a route using its algorithm. Similarly, I could tell Dr. Watson my medical data, and it would supply a diagnosis and a treatment plan that has a high score under the utility function I provide.
In neither case are the skills of “looking at the equations and determining real-world consequences” that necessary. There are no dark secrets lurking in the soul of A*. Indeed, that might be the heart of the issue: tool AI might be those situations where you can make a network that represents the world, identify two nodes, and call your optimization algorithm of choice to determine the best actions to choose to attempt to make it from the start node to the end node.
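To make that picture concrete, here is a minimal sketch of what I have in mind by a route-planning tool: a toy road network, two nodes, and a stock A* search whose output is printed for a human to inspect rather than acted on. (The graph, coordinates, and costs are invented for illustration; this is not anything Google Maps actually does.)

```python
import heapq
import math

# World model: node -> list of (neighbor, edge_cost). Coordinates are used only
# for the straight-line heuristic; edge costs are chosen to be at least the
# straight-line distance so the heuristic stays admissible.
GRAPH = {
    "home":     [("junction", 2.0), ("park", 5.0)],
    "junction": [("home", 2.0), ("office", 4.0)],
    "park":     [("home", 5.0), ("office", 4.5)],
    "office":   [("junction", 4.0), ("park", 4.5)],
}
COORDS = {"home": (0, 0), "junction": (2, 0), "park": (0, 3), "office": (4, 3)}

def straight_line(a, b):
    (x1, y1), (x2, y2) = COORDS[a], COORDS[b]
    return math.hypot(x1 - x2, y1 - y2)

def a_star(start, goal):
    """Return (route, cost) for the cheapest path from start to goal, or None."""
    frontier = [(straight_line(start, goal), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for neighbor, edge_cost in GRAPH[node]:
            new_cost = cost + edge_cost
            if new_cost < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_cost
                priority = new_cost + straight_line(neighbor, goal)
                heapq.heappush(frontier, (priority, new_cost, neighbor, path + [neighbor]))
    return None

# The "tool" step: the route is displayed for a human to accept or ignore;
# nothing in this program acts on the world.
print(a_star("home", "office"))  # (['home', 'junction', 'office'], 6.0)
```

Nothing in that loop is goal-directed about the outside world; it only scores paths inside its own model and hands the best one back.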
Reducing the world to a network is really hard. Determining preferences between outcomes is hard. But Tool AI looks to me like saying “well, the whole world is really too much. I’m just going to deal with planning routes, which is a simple world that I can understand,” where the FAI tools aren’t that relevant. The network might be out of line with reality, the optimization algorithm might be buggy or clumsy, but the horror stories that keep FAI researchers up at night seem impossible because of the inherently limited scope, and the ability to do dry runs and simulations until the AI’s model of reality is trusted enough to give it control.
Now, this requires that AI only be used for things like planning where to put products on shelves, not planning corporate strategy- but if you work from the current stuff up rather than from the God algorithm down, it doesn’t look like corporate strategy will be on the table until AI is developed to the point where it could be trusted with that. If someone gave me a black box that spit out plans based on English input, then I wouldn’t trust it and I imagine you wouldn’t either- but I don’t think that’s what we’re looking at, and I don’t know if planning for that scenario is valuable.
It seems to me that SI has discussed Holden’s Tool AI idea- when it made the distinction between AI and AGI. Holden seems to me to be asking “well, if AGI is such a tough problem, why even do it?”.
Jaan: so GMAGI would—effectively—still be a narrow AI that’s designed to augment human capabilities in particularly strategic domains, while not being able to perform tasks such as programming. also, importantly, such GMAGI would not be able to make non-statistical (ie, individual) predictions about the behaviour of human beings, since it is unable to predict their actions in domains where it is inferior.
Holden explicitly said that he was talking about AGI in his dialogue with Jaan Tallinn:
Holden: [...] I don’t think of the GMAGI I’m describing as necessarily narrow—just as being such that assigning it to improve its own prediction algorithm is less productive than assigning it directly to figuring out the questions the programmer wants (like “how do I develop superweapons”). There are many ways this could be the case.
Jaan: [...] i stand corrected re the GMAGI definition—from now on let’s assume that it is a full blown AGI in the sense that it can perform every intellectual task better than the best of human teams, including programming itself.
It’s not clear to me that everyone involved has the same understanding of AGI, unless in the next statement Holden agrees with the sense that Jaan uses.
I think you’re arguing about Karnofsky’s intention, but it seems clear (to me :) that he is proposing something much more general than a strategy of pursuing the best narrow AIs—see the “Here’s how I picture the Google Maps AGI” code snippet Eliezer is working from.
In any case, taking your interpretation as your proposal, I don’t think anyone is disagreeing with the value of building good narrow AIs where we can, the issue is that the world might be economically driven towards AGI, and someone needs to do the safety research, which is essentially the SI mission.
I agree the code snippet is relevant, but it looks like pseudocode for the “optimization algorithm of choice” part- the question is what dataset and sets of alternatives you’re calling it over. Is it a narrow environment where we can be reasonably confident that the model of reality is close to reality, and the model of our objective is close to our objective? Or is it a broad environment where we can’t be confident about the fidelity of our models of reality or our objectives without calling in FAI experts to evaluate the approach and find obvious holes?
Similarly, is it an environment where the optimization algorithm needs to take into account other agents and model them, or one in which the algorithm can just come up with a plan without worrying about how that plan will alter the wider world?
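One way to picture that split, as a sketch only (shelf_model, world_model, and the objective functions below are invented stand-ins, not anyone’s actual system): the search routine stays the same few lines whether the environment is narrow or broad, and all the risk-relevant content lives in the model, the candidate set, and the objective it is called over.

```python
# Sketch: the "optimization algorithm of choice" is the boring part; the
# dataset, candidate set, and objective are where fidelity matters.
# All names below (shelf_model, world_model, etc.) are hypothetical.
def best_alternative(alternatives, model, objective):
    """Generic optimizer: score each candidate under the model, return the best."""
    return max(alternatives, key=lambda plan: objective(model.predict(plan)))

# Narrow use: a shelf-layout model whose predictions we can check against reality.
# plan = best_alternative(shelf_layouts, shelf_model, expected_revenue)

# Broad use: the same search code, but now the model has to anticipate competitors,
# regulators, and its own influence on them. The algorithm didn't get more dangerous;
# the thing it's being called over did.
# plan = best_alternative(corporate_strategies, world_model, shareholder_value)
```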
It seems like explaining the difference between narrow AI and AGI and giving a clearer sense of what subcomponents make a decision support system dangerous might work well for SI. Right now, the dominant feature of UFAI as SI describes it is that it’s an agent with a utility function- and so the natural response to SI’s description is “well, get rid of the agency.” That’s a useful response only if it constricts the space of possible AIs we could build- and I think it does, by limiting us to narrow AIs. Spelling out the benefits and costs to various AI designs and components will both help bring other people to SI’s level of understanding and point out holes in SI’s assumptions and arguments.
That’s a useful response only if it constricts the space of possible AIs we could build- and I think it does, by limiting us to narrow AIs
I agree with you that that is a position one might take in response to the UFAI risks, but it seems from reading Karnofsky that he thinks some Oracle/“Tool” AI (quite general) is safe if you get rid of that darned explicit utility function. Eliezer is trying to disabuse him of the notion. If your understanding of Karnofsky is different, mine is more like Eliezer’s. In any case this is probably moot, since Karnofsky is very likely to respond one way or another, given that this has turned into a public debate.
it seems from reading Karnofsky that he thinks some Oracle/“Tool” AI (quite general) is safe if you get rid of that darned explicit utility function.
I think agency and utility functions are separate, here, and it looks like agency is the part that should be worrisome. I haven’t thought about that long enough to state that definitively, though.
Eliezer is trying to disabuse him of the notion.
Right, but it looks like he’s doing that by moving from where Eliezer is towards where Holden is, whereas I would rather see him move from where Holden is to where Eliezer is. Much of point 2, for example, discusses how hard AGI is, which, to me, suggests we should worry less about it, because it is unlikely to be implemented successfully, and any AIs we do see will be narrow, in which case AGI thinking isn’t that relevant.
My approach would have been along the lines of: start off with a safe AI, add wrinkles until its safety is no longer clear, and then discuss the value of FAI researchers.
For example, we might imagine a narrow AI that takes in labor stats data, econ models, psych models, and psych data and advises schoolchildren on what subjects to study and what careers to pursue. Providing a GoogleLifeMap to one person doesn’t seem very dangerous- but what about when it’s ubiquitous? Then there will be a number of tradeoffs that need to be weighed against each other and it’s not at all clear that the AI will get them right. (If the AI tells too many people to become doctors, the economic value of being a doctor will decrease- and so the AI has to decide who of a set of potential doctors to guide towards being a doctor. How will it select between people?)
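Here is a toy version of that feedback loop, with every name and number invented, showing how advice that is harmless for one user can undercut itself once everyone gets it:

```python
# Toy illustration of the ubiquity problem (all names and numbers invented):
# a career advisor that scores careers on today's wages, handed to everyone.
wages = {"doctor": 100, "teacher": 60, "plumber": 70}
capacity = {"doctor": 2, "teacher": 5, "plumber": 4}  # openings before wages fall

def naive_advice(wages):
    # The tool's objective: recommend whatever pays best right now.
    return max(wages, key=wages.get)

def advise_population(n_students, wages):
    # Every student gets the same top pick; the tool ignores its own impact.
    return [naive_advice(wages) for _ in range(n_students)]

def market_response(picks, wages, capacity):
    # Crude stand-in for the economy: oversubscribed careers lose value.
    return {career: wages[career] * (0.9 ** max(0, picks.count(career) - capacity[career]))
            for career in wages}

picks = advise_population(10, wages)
print(picks.count("doctor"))                              # 10: everyone is sent the same way
print(market_response(picks, wages, capacity)["doctor"])  # ~43, well below the 100 it assumed
```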
In addition to providing advice to people, it can aggregate the advice it has provided, translate it into economic terms, and hand it off to some independent economy-modeling service which is (from GoogleLifeMap’s perspective) a black box. Economic predictions about the costs and benefits of various careers are compiled, and eventually become GoogleLifeMap’s new dataset. Possibly it has more than one dataset, and presents career recommendations from each of them in parallel: “According to dataset A, you should spend nine hours a week all through high school sculpting with clay, but never show the results to anyone outside your immediate family, and study toward becoming a doctor of dental surgery; according to dataset B, you should work in foodservice for five years and two months, take out a thirty million dollar life insurance policy, and then move to a bunker in southern Arizona.”
Let’s be a bit more specific—that is one important point of the article, that as soon as the “Tool AI” definition becomes more specific, the problems start to appear.
We don’t want just a system that finds a route between points A and B. We have Google Maps already. When we speak about AGI, we want a system that can answer “any question”. (Not literally any question, but a wide range of possible question types.) So we don’t need an algorithm to find the shortest way between A and B; we need an algorithm to answer “any question” (or to admit that it cannot find an answer), and of course to answer that question correctly.
So could you be just a bit more specific about the algorithm that provides a correct answer to any question? (“I don’t know” is also a correct answer, if the system does not know.) Because that is the moment when the problems become visible.
Don’t talk about what the Tool AI doesn’t do; say what it does. And with high probability there will be a problem. Of course, until you say exactly what the Tool AI will do, I can’t tell you exactly how that problem will happen.
This is relevant:
Marcus Hutter is a rare exception who specified his AGI in such unambiguous mathematical terms that he actually succeeded at realizing, after some discussion with SIAI personnel, that AIXI would kill off its users and seize control of its reward button.
Please note that AIXI with outputs connected only to a monitor seems like an instance of the Tool AI.
Please note that AIXI with outputs connected only to a monitor seems like an instance of the Tool AI.
As I read Holden, and on my proposed way of making “agent” precise, this would be an agent rather than a tool. The crucial thing is that this version of AIXI selects actions on the basis of how well they serve certain goals without user approval. If you had a variation on AIXI that identified the action that would maximize a utility function and displayed the action to a user (where the method of display was not done in an open-ended goal-directed way), that would count as a tool.
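In code, that way of carving things up might look like the following sketch; it is not AIXI’s actual expectimax over environments, just the display-versus-execute distinction, and every function name is a placeholder:

```python
# Same search, same utility function; the agent/tool difference is only in
# what happens to the argmax. All functions passed in are placeholders.
def choose_action(actions, expected_utility):
    return max(actions, key=expected_utility)

def agent_step(actions, expected_utility, send_to_actuators):
    # Agent: selects and *executes* the top-scoring action without user approval.
    send_to_actuators(choose_action(actions, expected_utility))

def tool_step(actions, expected_utility, display):
    # Tool: computes the same top-scoring action but only *displays* it,
    # leaving execution to the user.
    display(choose_action(actions, expected_utility))
```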
Let’s be a bit more specific—that is one important point of the article, that as soon as the “Tool AI” definition becomes more specific, the problems start to appear.
Sure, but part of my point is that there are multiple options for a Tool AI definition. The one I prefer is narrow AIs that can answer particular questions well- and so to answer any question, you need a Tool that decides which Tools to call on the question, each of those Tools, and then a Tool that selects which answers to present to the user.
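A cartoon of that layered arrangement, with every component name invented and each narrow Tool treated as a black box:

```python
# Sketch of the "Tool that calls Tools" picture; every component here is
# hypothetical. A router picks which narrow Tools look relevant, each Tool
# answers within its own domain, and a final selector decides what to show.
def answer(question, router, tools, select):
    relevant = router(question)                                # decide which Tools to call
    candidates = [tools[name](question) for name in relevant]  # each narrow Tool answers
    return select(question, candidates)                        # choose what to present to the user
```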
What would be awesome is if we could write an AI that would write those Tools itself. But that requires general intelligence, because it needs to understand the questions in order to write the Tools. (This is what the Oracle in a box looks like to me.) But that’s also really difficult and dangerous, for reasons that we don’t need to go over again. Notice that Holden’s claim- that his Tools don’t need to gather data because they’ve already been supplied with a dataset- couldn’t be a reasonable limitation for an Oracle in a box (unless it’s a really big box).
I think the discussion would be improved by making more distinctions like that, and trying to identify the risk and reward of particular features. That would be demonstrating what FAI thinkers are good at.
I don’t think the distinction is supposed to be merely the distinction between Narrow AI and AGI. The “tool AI” oracle is still supposed to be a general AI that can solve many varied sorts of problems, especially important problems like existential risk.
And it doesn’t make sense to “propose” Narrow AI—we have plenty of that already, and nobody around here seems to be proposing that we stop that.
I don’t think the distinction is supposed to be merely the distinction between Narrow AI and AGI. The “tool AI” oracle is still supposed to be a general AI that can solve many varied sorts of problems, especially important problems like existential risk.
I think this depends on the development path. A situation in which a team writes a piece of code that can solve any problem is very different from a situation in which thousands of teams write thousands of programs that interface together, with a number of humans interspersed throughout the mix, each of which is a narrow AI designed to solve some subset of the problem. The first seems incredibly dangerous (but also incredibly hard); the second seems like the sort of thing that will be difficult to implement if its reach exceeds its grasp. FAI style thinkers are still useful in the second scenario- but they’re no longer the core component. The first seems like the future according to EY, the second like the future according to Hanson, and the second would be able to help solve many varied sorts of problems, especially important problems like existential risk.
This really gets at the heart of what intuitively struck me as wrong (read: “confused me”) in Eliezer’s reply. Both Eliezer and Holden engage with the example “Google Maps AGI”; I’m not sure what the difference is—if any—between “Google Maps AGI” and the sort of search/decision-support algorithms that Google Maps and other GPS systems currently use. The algorithm Holden describes and the neat A* algorithm Eliezer presents seem to do exactly what the GPS on my phone already does. If the Tool AI we’re discussing is different from current GPS systems, then what is the difference? Near as I understand it, AGI is intelligent across different domains in the same way a human is, while Tool AI (= narrow AI?) is the sort of simple-domain search algorithm we see in GPS. Am I missing something here?
But if what Holden is talking about by Tool AI is just this sort of simple(r), non-reflective search algorithm, then I understand why he thinks this is significantly less risky; GPS-style Tool AI only gets me lost when it screws up, instead of killing the whole human species. Sure, this tool is imperfect: sometimes it doesn’t match my utility function, and returns a route that leads me into traffic, or would take too long, or whatever; sometimes it doesn’t correctly model what’s actually going on, and thinks I’m on the wrong street. Even still, gradually building increasingly agentful Tool AIs—ones that take more of the optimization process away from the human user—seems like it would be much safer than just swinging for the fences right away.
So I think that Vaniver is right when he says that the heart of Holden’s Tool AI point is “Well, if AGI is such a tough problem, why even do it?”
This being said, I still think that Eliezer’s reply succeeds. I think his most important point is the one about specialization: AGI and Tool AI demand domain expertise to evaluate arguments about safety, and the best way to cultivate that expertise is with an organization that specializes in FAI-grade programmers. The analogy with the sort of optimal-charity work Holden specializes in was particularly weighty.
I see Eliezer’s response to Holden’s challenge—“why do AGI at all?”—as: “Because you need FAI-grade skills to know if you need to do AGI or not.” If AGI is an existential threat, and you need FAI-grade skills to know how to deal with that threat, then you need FAI-grade programmers.
(Though, I don’t know if “The world needs FAI-grade programmers, even if we just want to do Tool AI right now” carries through to “Invest in SIAI as a charity,” which is what Holden is ultimately interested in.)
This being said, I still think that Eliezer’s reply succeeds.
There are a number of different messages being conveyed here. I agree that it looks like a success for at least one of them, but I’m worried about others.
I see Eliezer’s response to Holden’s challenge—“why do AGI at all?”—as: “Because you need FAI-grade skills to know if you need to do AGI or not.”
I agree with you that that is Eliezer’s strongest point. I am worried that it takes five thousand words to get across: that is a worry about clarity and concision, but Holden is the one to ask about what his central point was, and so my worry shouldn’t be stronger than my model of Holden.
Though, I don’t know if “The world needs FAI-grade programmers, even if we just want to do Tool AI right now” carries through to “Invest in SIAI as a charity,” which is what Holden is ultimately interested in.
Agreed- and it looks like that agrees with Holden’s ultimate recommendation, of “SI should probably be funded at some level, but its current level seems too high.”