The group of people who disagree with you and will earnestly go through all the arguments is small.
It is also really small for, e.g., a perpetual motion device constructed from gears, weights, and levers: very few people would even look at the blueprint. It is a bad strategy to dismiss a critique on the grounds that the critic did not read the whole thing. Meta-level considerations sometimes suffice.
Sensible priors for p(our survival is at risk | the technically unaccomplished are the ones most aware of the risk) and p(the technically unaccomplished are the ones most aware of the risk | our survival is at risk) are very, very low. Meanwhile, p(the technically unaccomplished are the ones most aware of the risk | our survival is not actually at risk) is rather high: it's commonly the case that someone is scared of something. p(high technical ability) is low to start with, p(highest technical ability) is very, very low, and p(high technical ability | no technical achievement) is lower still, especially given reasonable awareness that technical achievement is instrumental to being taken seriously. p(ability to deceive oneself) is not very low, p(ability to deceive both oneself and others) is not very low, there is a well-known tendency to overspend on safety (see the TSA), the notion of a living machine killing its creator is very old, and there are plenty of movies to that effect. In the absence of some achievement that is highly unlikely to be an evaluation error, the probability that you guys matter is very low. That is part of what Holden was saying. His strongest point, that you are not performing to the relevant standards, means that even if he buys into AI danger or the importance of FAI, he would not recommend donating to you.
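To make the structure of that argument explicit, here is a sketch in odds form (writing D for "our survival is at risk" and U for "the technically unaccomplished are the ones most aware of the risk"; the labels are mine, introduced for illustration):

\[ \frac{p(D \mid U)}{p(\neg D \mid U)} = \frac{p(U \mid D)}{p(U \mid \neg D)} \cdot \frac{p(D)}{p(\neg D)} \]

Since p(U | D) is very low while p(U | not-D) is rather high, the likelihood ratio is well below 1, so observing U should lower, not raise, the odds that the risk is real, relative to whatever prior you started with.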