Damn. And I tried the strategy “what if I try to predict it only off the text, without looking at csv” :D
Дмитрий Зеленский
Why DEX though? Like, conceptually it’s absolutely unpredictable, this is one of the most useful scores in most TTRPGs.
Yeah, there seems to be a lot of personal preference involved. Removing cell borders is obnoxious and inconvenient; the table below hurts to look at. The table above has borders a tad too thick, but removing them is a cure that, for me personally, is worse than the disease.
In real life, Reality goes off and does something else instead, and the Future does not look in that much detail like the futurists predicted
Half-joking, unless the futurist in question is H. G. Wells. I think there was a quote showing that he effectively predicted the pixelization of early images, along with many similar small-scale details of the early twenty-first century (although, of course, survivorship bias in the choice of details probably influences both my memory and the retelling I rely on).
Independently,
(in principle it could be figured out by human neuroscientists working without AI, but it’s a bit late for that now)
What? Why? There is no AI as of now; LLMs definitely do not count. I think it is still quite possible that neuroscience will make its breakthrough on its own, without the help of any non-human mind (again, dressing up the final article doesn't count; we're talking about the general insights and analysis here).
To begin with, there is a level of abstraction at which the minds of all four of you are the same, yet different from various nonhuman minds.
I am actually not even sure about that. Your “identify the standard cognitive architecture of this entity’s species” presupposes existence thereof—in a sufficiently specified way to then build its utopia and to derive that identification correctly in all four cases.
But, more importantly, I would say that this algorithm does not derive my CEV in any useful sense.
I like this text but I find your take on Fermi paradox wholly unrealistic.
Let's even assume, for the sake of the argument, that both P(life) and P(sapience|life) are bigger than 1/googol (though why?), so your hunch about how many planets originally evolve sapient aliens is broadly correct. A very substantial part of alternative histories of the last century (I wanted to say "most", but most, of course, consists of uninteresting differences such as whether a random human puts the right shoe or the left shoe on first) result in humanity dead or thrown into possibly-irrecoverable barbarism. The default outcome for aliens that have evolved is to fail their version of the Berlin crisis, or the Cuban Missile Crisis, or whatever other near-total-destruction situation we've had even without AI (not necessarily with nuclear weapons, mind you; say, what if instead of the pretty-harmless-in-comparison COVID we got a sterilizing virus on the loose that attacks the genitalia instead of the olfactory nerves? Since its method of proliferation does not depend on the host's ability to procreate, you could imagine it sterilizing the population of the planet). And then you tack on the fact that you also predict a very high chance of AGI ruin; so most of the hypothetical aliens that survived the kind of hurdles humanity somehow survived (again, with possibly totally different specifics) are replaced by misaligned AGI, throwing a huge hurdle into the cosmopolitan result you predict: meeting a paperclip-maximiser built by ant-people is more likely than meeting the ant-people themselves, given your background beliefs.
Banning gain-of-function research would be a mistake. What would be recklessly foolish is incentivising governments to decide which avenues of research are recklessly foolish. The fact that governments haven't prohibited it in a bout of panic (not even China, which otherwise did a lot of panicky things) is a testament to their abilities, not to an inability to react to warning shots.
The expected value of that is infinitesimal, both in general and for x-risk reduction in particular. People who prefer political reasoning (so, the supermajority) will not trust it; people who don't think COVID was an important thing except in how people reacted to it (like me) won't care; and most people who both find COVID important (or a sign of anything important) and actually prefer logical reasoning have already given it a lot of thought and found that the bottleneck is data that China will not release any time soon.
I think the concept that all peoples throughout history would come into near agreement about what is good if they just reflected on it long enough is unrealistic.
Yes. Exactly. You don’t even need to go through time, place and culture on modern-day Earth are sufficient. While I cannot know my CEV (for if I knew, I would be there already), I predict with high confidence that my CEV, my wife’s CEV, Biden’s CEV and Putin’s CEV are four quite different CEVs, even if they all include as a consequence “the planet existing as long as the CEV’s bearer and the beings the CEV’s bearer cares about are on it”.
And in part because it’s socially hard to believe, as a regulator, that you should keep telling everyone “no”, or that almost everything on offer is radically insufficient, when you yourself don’t concretely know what insights and theoretical understanding we’re missing.
That's not true. We can also end up with a regulator that adopts a blanket "prohibit everything" stance. See IRBs in America, for instance: they have made getting approval for medical experiments practically insurmountable.
I think at this point we should just ask @johnswentworth which one of us understood him correctly. As far as I see, we measure a distance between vectors, not between individual parameters, and that’s why this thing fails.
Erm, I think you're mixing up comparing parameters with comparing the results of applying some function to the parameters. These are not the same, and it's the latter that become incomparable.
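If I'm reading the disagreement right, it can be made concrete with a toy sketch (my own illustration, not the exact setup from the thread): two networks whose parameter vectors are far apart can compute the identical function, so distance between parameter vectors tells you little about distance between the functions they implement.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hidden-layer weights
w2 = rng.normal(size=4)        # output weights

def net(x, W1, w2):
    """A tiny one-hidden-layer network."""
    return w2 @ np.tanh(W1 @ x)

# Permute the hidden units: the computed function is unchanged...
perm = [2, 0, 3, 1]
W1p, w2p = W1[perm], w2[perm]

x = rng.normal(size=3)
print(net(x, W1, w2), net(x, W1p, w2p))  # identical outputs

# ...but the flattened parameter vectors are far apart.
theta  = np.concatenate([W1.ravel(),  w2])
thetap = np.concatenate([W1p.ravel(), w2p])
print(np.linalg.norm(theta - thetap))    # nonzero distance
```

So whether the comparison is meaningful depends on whether you compare the raw vectors or the functions they parameterize.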
(Also, would your algorithm derive that ln(4+3i)=ln(5) since |4+3i|=|5|? I really don’t expect the “since we measure distances” trick to work, but if it does work, it should also work on this example.)
If you allow complex numbers, the "greater than/less than" comparison breaks down: there is no ordering of the complex numbers compatible with their arithmetic.
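Both points are easy to check numerically; a quick Python sketch, just to make the parenthetical concrete:

```python
import cmath
import math

z = 4 + 3j
print(abs(z))        # 5.0 — the same modulus as |5|
print(cmath.log(z))  # ≈ 1.6094 + 0.6435j
print(math.log(5))   # ≈ 1.6094

# Equal moduli do not give equal logarithms: the imaginary part
# (the argument of z) survives.
print(cmath.log(z) == math.log(5))  # False

# And the usual ordering simply is not defined on complex numbers:
try:
    _ = (4 + 3j) < 5
except TypeError as e:
    print(e)
```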
Huh. In linguistics that's known as a "functional model" vs. a "structural model" (Mel'čuk's terminology): whether you treat linguistic ability as a black box or try to model how it works in the brain (Mel'čuk opts for the former, as a response to Chomsky's precommitment to the latter). This neatly explains why structural models are preferable.
Correlation is not causation. And I am rather certain that few people believe a genuine causal link is present here (even though, in my opinion, it is rather likely to be present).
One, where have you seen a foot-long shoe? That would be, what, European size 48 or 49? This naming has always seemed curious to me: the unit is just… noticeably longer than actual feet.
Two, the metric system's main advantage is easy scalability. Switching from liter to deciliter to centiliter to milliliter is far easier than jumping between gallons, pints, and whatever else is in there. That is the main point, not the particular constant you multiply by (i.e., a system with an inch, a dekainch, and so on would be about as good).
Three, I really see no problem in saying things like "36 centimeters" to describe an object's length. I know that my hand is ~17 centimeters, and I use it as a measuring tool in emergencies, but I always convert back before doing any kind of serious reasoning; I never actually count in "two hands and a phalanx".
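The scalability point can be sketched in a few lines (the unit tables below are my own illustration; the US figures are the standard litre equivalents): metric prefixes are uniform powers of ten, so rescaling is a decimal shift, while US customary volume steps are irregular factors you have to memorize.

```python
# Unit sizes expressed in litres.
metric = {"L": 1.0, "dL": 0.1, "cL": 0.01, "mL": 0.001}
us     = {"gallon": 3.785411784, "quart": 0.946352946,
          "pint": 0.473176473, "cup": 0.2365882365}

def convert(value, src, dst, table):
    """Convert between units listed in the same table."""
    return value * table[src] / table[dst]

print(convert(2, "L", "mL", metric))     # a decimal shift: 2 L = 2000 mL
print(convert(1, "gallon", "pint", us))  # 1 gallon = 8 pints, via 4 and 2
```

With metric, every neighboring step is a factor of 10; with the customary chain the factors are 4, 2, 2, and so on, which is exactly the memorization cost being argued about.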
However, just in case, you only covered my first suggestion, not both.
Well, that at least is an experiment one could set up. Reaction time should be a reasonably appropriate measure of "harder" (perhaps error rate too, but on many tasks the error rate is trivially low). But this requires determining how "using a function" is detected; you'd need, at the very least, clear cases for each function.
This raises an important problem. Though I have to admit my gut reaction is "neurotypicals are being weird again" :)