R&Ds human systems http://aboutmako.makopool.com
mako yass
I think unpacking that kind of feeling is valuable, but yeah, it seems like you’ve been assuming we use decision theory to make decisions, when we actually use it as an upper-bound model: to derive principles of decisionmaking that may be more specific to human decisionmaking, to anticipate the behavior of idealized agents, or, as with the distinction between CDT and FDT, as an allegory for toxic consequentialism in humans.
I’m aware of a study that found that the human brain clearly responds to changes in the direction of the earth’s magnetic field (IIRC, the test chamber isolated the participant from the earth’s field, generated its own, then moved it, while measuring their brain in some way), despite no human ever having been known to consciously perceive the magnetic field or to have the abilities of a compass.
So, presumably, compass abilities could be taught through a neurofeedback training exercise.
I don’t think anyone’s tried to do this (searching “neurofeedback magnetoreception” finds no results).
But I guess the big mystery is why humans don’t already have this ability.
A relevant FAQ entry: AI development might go underground
I think I disagree here:
By tracking GPU sales, we can detect large-scale AI development. Since frontier model GPU clusters require immense amounts of energy and custom buildings, the physical infrastructure required to train a large model is hard to hide.
This will change; it’s only the case for frontier development. I also think we’re probably in the hardware overhang. I don’t think there is anything inherently difficult to hide about AI; that’s likely just a fact about the present iteration of AI.
But I’d be very open to more arguments on this. I guess… I’m convinced there’s a decent chance that an international treaty would be enforceable and that China and France would sign onto it if the US was interested, but the risk of secret development continuing is high enough for me that it doesn’t seem good on net.
Personally, because I don’t believe the policy in the organization’s name is viable or helpful.
As to why I don’t think it’s viable, it would require the Trump-Vance administration to organise a strong global treaty to stop developing a technology that is currently the US’s only clear economic lead over the rest of the world.
If you attempted a pause, I think it wouldn’t work very well, and it would rupture and leave the world in a worse place: Some AI research is already happening in a defence context. This is easy to ignore while defence isn’t the frontier. The current apparent absence of frontier AI research in a military context is miraculous, strange, and fragile. If you pause in the private context (which is probably all anyone could do), defence AI will become the frontier in about three years, and after that I don’t think any further pause is possible, because it would require a treaty against secret military technology R&D. Military secrecy is pretty strong right now. Hundreds of billions of dollars yearly are known to be spent on mostly secret military R&D, and probably more is actually spent.
(To be interested in a real pause, you have to be interested in secret military R&D. So I am interested in that, and my position right now is that it’s got hands you can’t imagine.)

To put it another way, after thinking about what pausing would mean, it dawned on me that pausing means moving AI underground, and from what I can tell that would make it much harder to do safety research or to approach the development of AI with a humanitarian perspective. It seems to me like the movement has already ossified around a slogan that makes no sense in light of the complex and profane reality that we live in, which is par for the course when it comes to protest activism movements.
I notice they have a “Why do you protest” section in their FAQ. I hadn’t heard of these studies before:
Protests can and often will positively influence public opinion, voting behavior, corporate behavior and policy.
There is no evidence for a “backfire” effect unless the protest is violent. Our protests are peaceful and non-violent.
Check out this amazing article for more insights on why protesting works
Regardless, I still think there’s room to make protests cooler and more fun and less alienating, and when I mentioned this to them they seemed very open to it.
Yeah, I’d seen this. The fact that Grok was ever consistently saying this kind of thing is evidence, though not proof, that they may actually have a culture of generally not distorting its reasoning: they could have introduced propaganda policies at training time, and it seems like they haven’t done that; instead they decided to just insert some pretty specific prompts that, I’d guess, were probably going to be temporary.
It’s real bad, but it’s not bad enough for me to shoot yet.
There is evidence, literal written evidence, of Musk trying to censor Grok from saying bad things about him
I’d like to see this
I wonder if maybe these readers found the story at that time as a result of first being bronies, and I wonder if bronies still think of themselves as a persecuted class.
IIRC, aisafety.info is primarily maintained by Rob Miles, so it should be good: https://aisafety.info/how-can-i-help
I’m certain that better resources will arrive but I do have a page for people asking this question on my site, the “what should we do” section. I don’t think these are particularly great recommendations (I keep changing them) but it has something for everyone.
These are not concepts of utility that I’ve ever seen anyone explicitly espouse, especially not here, the place to which it was posted.
The people who think of utility in the way the article is critiquing don’t know what utility actually is; presenting a critique of this tangible utility as a critique of utility in general takes the target audience further away from understanding what utility is.
A utility function is a property of a system rather than a physical thing (like, e.g., voltage, or inertia, or entropy). Not being a simple physical substance doesn’t make it fictional.
It’s extremely non-fictional. A human’s utility function encompasses literally everything they care about, i.e., everything they’re willing to kill for.
It seems to be impossible for a human to fully articulate exactly what the human utility function is, but that’s just a peculiarity of humans rather than a universal characteristic of utility functions. Other agents could have very simple utility functions, and humans are likely to grow to be able to definitively know their utility function at some point in the next century.
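To make that concrete, here’s a minimal sketch (my own illustration, in Python, with made-up outcome names and numbers) of the sense in which a utility function is a property of a system rather than an ingredient of it: the function below is just a description of what the agent’s choices are consistent with, and an agent’s utility function really can be this simple.

```python
# Illustrative sketch only: a utility function as a ranking over outcomes
# that an agent's choices are consistent with, not a substance inside it.

def utility(outcome: dict) -> float:
    # A deliberately simple utility function: this hypothetical agent
    # only cares about how much iron it ends up with.
    return outcome["iron"]

def choose(options: list[dict]) -> dict:
    # The agent's behaviour: take whichever available outcome it ranks highest.
    # The "utility function" is just the pattern this behaviour conforms to.
    return max(options, key=utility)

options = [
    {"iron": 3, "gold": 100},
    {"iron": 7, "gold": 0},
]
print(choose(options))  # -> {'iron': 7, 'gold': 0}
```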
Contemplating an argument that free response rarely gets more accurate results for questions like this, because listing the most common answers as checkboxes helps respondents remember all of the answers that are true of them.
I’d be surprised if LLM use for therapy or summarization is that low in real life, and I’d expect people just forgot to mention those use cases. Hope they’ll be in the option list this year.
Hmm, I wonder if a lot of trends are drastically underestimated because surveyors are getting essentially false statistics from the “Other” gutter.
Apparently Anthropic could, in theory, have released Claude 1 before ChatGPT came out? https://www.youtube.com/live/esCSpbDPJik?si=gLJ4d5ZSKTxXsRVm&t=335
I think the situation would be very different if they had.
Were OpenAI also, in theory, able to release sooner than they did, though?
The assumption that being totally dead/being aerosolised/being decayed vacuum can’t be a future experience is unprovable. Panpsychism should be our null hypothesis[1], and there never has and never can be any direct measurement of consciousness that could take us away from the null hypothesis.
Which is to say, I believe it’s possible to be dead.
[1] The negation, that there’s something special about humans that makes them eligible to experience, is clearly held up by a conflation of having experiences with reporting experiences, together with the fact that humans are the only things that report anything.
I have preferences about how things are after I stop existing. Mostly about other people, whom I love and, at times, want there to be more of.
I am not an epicurean, and I am somewhat skeptical of the reality of epicureans.
It seems like you’re assuming a value system where the ratio of positive to negative experience matters but where the ratio of positive to null (dead timelines) experiences doesn’t matter. I don’t think that’s the right way to salvage the human utility function, personally.
Okay? I said they’re behind in high-precision machine tooling, not machine tooling in general. That was the point of the video.
Admittedly, I’m not sure what the significance of this is. To make the fastest missiles I’m sure you’d need the best machine tools, but maybe you don’t need the fastest missiles if you can make twice as many. Manufacturing automation is much harder if there’s random error in the positions of things, but whether we’re dealing with that amount of error, I’m not sure.
I’d guess low-grade machine tools also probably require high-grade machine tools to make.
I briefly glanced at Wikipedia and there seemed to be two articles supporting it. This one might be the one I’m referring to (if not, it’s a bonus), and this one seems to suggest that conscious perception has been trained.