Yes, this.
A lot of the people talking about AI Alignment and similar topics have never touched, or even read, a line of code implementing part of an ML system. Yes, this fits the usual “don’t burn the timeline” mantra, but it also means that much of what they say doesn’t make sense, because they don’t know what they are talking about. The “white noise” created as a result is good neither for AI nor for AI Alignment research.
I wonder if some (a lot?) of the people on this forum don’t suffer from what I would call the sausage-maker problem. Being too close to the actual, practical design and engineering of these systems, and knowing too much about how they are made, they cannot fully appreciate their potential for humanlike characteristics, including consciousness, independent volition, etc., just as the sausage maker cannot fully appreciate the indisputable deliciousness of sausages, or the lawmaker the inherent righteousness of the law. I even thought of writing a post like that, just to see how many downvotes it would get…
Well, at least I followed the guidelines and made a prediction regarding downvotes. That my model of how this forum works has therefore been established, certainly and without a doubt.
Also, I personally think there is something intellectually lazy about downvoting without bothering to express, in a sentence or two, the nature of the disagreement, but that’s admittedly more of a personal judgment.
(So my prediction here is: if I were to engage one of these no-justification downvoters in an ad rem debate, I would find him or her intellectually lacking. I’m not sure it’s a testable hypothesis in practice, but it sure would be interesting if it were.)
I find the common downvote-instead-of-arguing mentality frustrating and immature. If I don’t have the energy for a counterargument, I simply don’t react at all. Downvoting by itself is intellectually worthless booing; as feedback it’s worse than useless.
Strong upvote!