In what sense is this information warfare or even misleading? The second sentence of the blog post says: “AlphaStar was ranked above 99.8% of active players,” which seems quite clear. They seem to have done a pretty good job of making that comparison as fair as you could expect. What do they say or even imply which is highly misleading?
Perhaps they say “Grandmaster level,” and it’s possible that this gives a misleading impression to people who don’t know what that term means in StarCraft? Though I think chess grandmaster also means roughly “better than 99.8% of ladder players,” and the competitive player pools have similar size. So while it might be misleading in the sense that chess has a larger player pool, a smaller fraction of whom are competitive, it seems fairly straightforward.
Sorry, let me clarify: I was specifically reacting to the OP’s characterization of “throw in the towel while making it look like they were claiming victory.” Now, if that characterization is wrong, then my comment becomes either irrelevant (if you construe it as a conditional whose antecedent turned out to be false: “If DeepMind decided to throw in the towel while making it look like …, then is that good news or bad news”) or itself misleading (if you construe it as me affirming and propagating the misapprehension that DeepMind is propagating misapprehensions—and if you think I’m guilty of that, then you should probably downvote me and the OP so that the Less Wrong karma system isn’t complicit with the propagation of misapprehensions).
I agree that the “Grandmaster-level”/“ranked above 99.8% of active players” claims are accurate. But I also think it’s desirable for intellectuals to aspire to high standards of intent to inform, for which accuracy of claims is necessary but not sufficient, due to the perils of selective reporting.
Imagine that, if you spoke to the researchers in confidence (or after getting them drunk), they would agree with the OP’s commentary that “AlphaStar doesn’t really do the ‘strategy’ part of real-time strategy [...] because there’s no representation of causal thinking.” (This is a hypothetical situation to illustrate the thing I’m trying to say about transparency norms; maybe there is some crushing counterargument to the OP that I’m not aware of because I’m not a specialist in this area.) If that were the case, why not put that in the blog post in similarly blunt language, if it’s information that readers would consider relevant? If the answer to that question is, “That would be contrary to the incentives; why would anyone ‘diss’ their own research like that?” … well, the background situation that makes that reply seem normative is what I’m trying to point at with the “information warfare” metaphor: it’s harder to figure out what’s going on with AI in a world in which the relevant actors are rewarded and selected for reporting impressive-seeming capability results subject to the constraint of not making any false statements, than a world in which actors are directly optimizing for making people more informed about what’s going on with AI.