So, should the restrictions on gambling be based on feedback loop length? Should sports betting be broadly legal when it concerns events far enough in the future?
current inference scaling methods tend to be tied to CoT and the like, which are quite transparent
Aschenbrenner in Situational Awareness predicts illegible chains of thought are going to prevail because they are more efficient. I know of one developer claiming to do this, but I guess there must be many.
Relatedly, I have a vague understanding of how product safety certification works in the EU, and there are multiple private companies doing the certification in every member state.
Half-informed take on “the SNPs explain a small part of the genetic variance”: maybe the regression methods are bad?
Not sure if I missed something because I read quickly, but: all these are purely correlational studies, without causal inference, right?
OpenAI is recklessly scaling AI. Besides accelerating “progress” toward mass extinction, it causes increasing harms. Many communities are now speaking up. In my circles only, I count seven new books critiquing AI corps. It’s what happens when you scrape everyone’s personal data to train inscrutable models (computed by polluting data centers) used to cheaply automate out professionals and spread disinformation and deepfakes.
Could you justify that it causes increasing harms? My intuition is that OpenAI is currently net-positive without taking into account future risks. It’s just an intuition, however; I have not spent time thinking about it and writing down numbers.
(I agree it’s net-negative overall.)
Ok, that. China seems less interventionist, and to use more soft power. The US is more willing to go to war. But is that because the US is more powerful than China, or because Chinese culture is intrinsically more peaceful? If China made the killer robots first, would they say “MUA-HA-HA actually we always wanted to shoot people for no good reason like in yankee movies! Go and kill!”
Since politics is a default-no on LessWrong, I’ll try to muddy the waters by making a distracting, unserious figurative narration.
Americans maybe have more of a culture of “if I die in a shooting conflict, I die honorably, guns for everyone”. Instead, China is more about harmony & homogeneity: “The CCP is proud to announce that in 2025 the Harmonious Agreement Quinquennial Plan has concluded successfully; all disagreements are no more, and everyone is officially friends”. When the Chinese send Uighurs to the adult equivalent of school, Americans freak out: “What? Mandated school? Without the option of shooting back?”
My doubt is mostly contingent on not having first-hand experience of China, while I do have it of the US. I really don’t trust narratives from outside. In particular, I don’t trust narratives from Americans right now! My own impression of the US changed substantially by going there in person, and I am even from an allied country with broad US cultural influence.
[Alert: political content]
About the US vs. China argument: has any proponent made a case that the Americans are the good guys here?
My vague perspective, as someone neither in China nor in the US, is that the US is overall more violent and reckless than China. My personal cultural preference is for the US, but when I think about the future of humanity, I try to set aside what I like for myself.
So far the US is screaming “US or China!” while being the one creating the problem all along. It could be true that if China developed AGI it would be worse, but that should be argued.
I bet there is some more serious, non-selfish analysis of why China developing AGI is worse than the US developing AGI; I have just never encountered it, and I would be glad if someone surfaced it to me.
I agree it’s not a flaw in the grand scheme of things. It’s a flaw for the purpose of using it as a consensus to defer to in reasoning.
I start with a very low prior of AGI doom (for the purpose of this discussion, assume I defer to consensus).
You link to a prediction market (Manifold’s “Will AI wipe out humanity before the year 2100”, currently at 13%).
Problems I see with using it for this question, in random order:
It ends in 2100, so the incentive is effectively about what people will believe a few years from now, not about the question itself. It is a Keynesian beauty contest. (Better than nothing.)
Even with the stated question, you can collect your winnings only if it resolves NO (if it resolves YES there is no one left to pay out to), so it is strategically correct to bet NO regardless of your belief.
It is dynamically inconsistent, if you think that humans have power over the outcome and that such markets influence what humans do about it. Illustrative story: “The market says P(doom)=1%, ok I can relax and not work on AI safety” ⇒ everyone says that ⇒ the market says P(doom)=99% because no AI safety work gets done ⇒ “AAAAH SOMEONE DO SOMETHING” ⇒ market P(doom)=1% ⇒ … (a toy simulation of this loop is sketched below).
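To make the dynamic-inconsistency point concrete, here is a toy simulation of the loop above. Everything in it is made up for illustration (the response curves, the numbers); it is not an estimate of anything, just the story rendered as code:

```python
# Toy model of the feedback loop described above. All numbers and functional
# forms are invented purely for illustration.

def effort_from_price(p):
    """People work on AI safety roughly in proportion to how worried the market looks."""
    return p  # effort level in [0, 1], taken equal to the market's P(doom)

def doom_from_effort(e):
    """Hypothetical outcome: no effort -> near-certain doom, full effort -> near-zero doom."""
    return 1.0 - e

price = 0.01  # the market starts relaxed: "P(doom) = 1%"
for step in range(6):
    effort = effort_from_price(price)  # the market price sets the effort level
    price = doom_from_effort(effort)   # the effort level sets what the market "should" say next
    print(f"step {step}: effort = {effort:.2f}, new price = {price:.2f}")

# The price bounces between 0.01 and 0.99 forever: with these made-up response
# curves, no value (other than exactly 0.5) is simultaneously what the market
# says and what then happens once people react to what it says.
```

The specific numbers do not matter; the point is only that a market whose price feeds back into the outcome it is pricing need not have a stable, self-consistent quote.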
This type of issue is a huge effective blocker for people with my level of skills. I find myself excited to write actual code that does the things, but the thought of having to set everything up to get to that point fills me with dread: I just know that the AI is going to get something stupid wrong, everything’s going to be screwed up, it’s going to take hours to figure out, and so on, and maybe I’ll just work on something else. Sigh. At some point I need to power through.
Reminds me of this 2009 kalzumeus quote:
I want to quote a real customer of mine, who captures the B2C mindset about installing software very eloquently: “Before I download yet another program to my poor old computer, could you let me know if I can…” Painful experience has taught this woman that downloading software to her computer is a risky activity. Your website, in addition to making this process totally painless, needs to establish to her up-front the benefits of using your software and the safety of doing so. (Communicating safety could be an entire article in itself.)
Ah, sorry for being so cursory.
A common trope about mathematicians vs. other math users is that mathematicians are paranoid, persnickety truth-seekers: they want everything to be exactly correct, down to every detail. Thus engineers and physicists often perceive mathematicians as a sort of fact-checker caste.
As you say, in some sense mathematicians deal with made-up stuff and engineers with real stuff. But from the engineer’s point of view, they deal with mathematicians when writing math, not when screwing bolts, and so perceive mathematicians as “the annoying people who want everything to be perfectly correct”.
Example: I write “E[E[X|Y]] = E[X]” in a paper, and the mathematician pops up complaining “What’s the measure space? Is it sigma-finite? You have to declare whether your random variables are square-integrable. Are X and Y measurable in the same space?”, and my reply would be “come on, we know it’s true, I don’t care about writing it properly”. So to me and many people in STEM your analogy has the opposite vibe, which defeats the purpose of an analogy.
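For the record, here is a minimal precise version of the identity being invoked (the law of total expectation, a.k.a. the tower property), with the kind of conditions the imaginary mathematician is fishing for, as I understand them:

```latex
% Law of total expectation (tower property), minimal statement.
% Let (\Omega, \mathcal{F}, P) be a probability space and let X, Y be random
% variables defined on it, with X integrable, i.e. E[|X|] < \infty.
% Writing E[X \mid Y] := E[X \mid \sigma(Y)], we have
\[
  E\bigl[\, E[X \mid Y] \,\bigr] = E[X].
\]
% Integrability of X is the substantive hypothesis; square-integrability is
% only needed to view E[X \mid Y] as an orthogonal projection in L^2, and
% sigma-finiteness is automatic since P is a probability measure.
```

So, of the questions in the example, only integrability and the two variables living on the same probability space actually bite; the rest are automatic for probability measures.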
The analogy with mathematicians is very stretched.
Include, in the cue to each note, a hint as to its content, besides just the ordinal pointer. A one-letter abbreviation, standardised throughout the work, may work well, e.g.:
“c” for citation supporting the marked claim
“d” for a definition of the marked term
“f” for further, niche information extending the marked section
“t” for a pedantic detail or technicality modifying the marked clause
Commit to using notes for only one purpose — say, only definitions, or only citations. State this commitment to the reader.
These don’t look like good solutions to me. Just a first impression.
I don’t make eye contact while speaking, but I do stare at people while silent. Were there people like me? Did they manage to reverse this? The way it feels inside is that I can’t think about someone’s face and about what I am saying at the same time; too many things to keep track of.
Is champerty legal in California?
I take this as a fun occasion to lose some of my karma in a silly way, to remind myself that LessWrong karma is not important.
A very interesting problem is measuring something like general intelligence. I’m not going to delve deeply into this topic but simply want to draw attention to an idea that is often implied, though rarely expressed, in the framing of such a problem: the assumption that an “intelligence level,” whatever it may be, corresponds to some inherent properties of a person and can be measured through their manifestations. Moreover, we often talk about measurements with a precision of a few percentage points, which suggests that, in theory, the measurement should be based on very stable indicators.
What’s fascinating is that this assumption receives very little scrutiny, while in cases where we talk about “mechanical” parameters of the human body (such as physical performance), we know that such parameters, aside from a person’s potential, heavily depend on numerous external factors and on what that person has been doing over the past couple of weeks.
Do you know about the g-factor?
Still 20$/usd
I somewhat disagree with Tenobrus’ commentary about Wolfram.
I watched the full podcast, and my impression was that Wolfram puts on a “scientific hat”, of which he is well aware, which comes with a certain ritual and method for looking at new things and learning them. Wolfram is doing the ritual of understanding what Yudkowsky says, which involves picking at the details of everything.
Wolfram often recognizes that maybe he feels like agreeing with something, but “scientifically” he has a duty to pick it apart. I think this has to be understood as a learning process rather than as a state of belief.