R&Ds human systems http://aboutmako.makopool.com
mako yass
I’ll change a line early on in the manual to “Objects aren’t common, currently. It’s just corpses for now, which are explained on the desire cards they’re relevant to and don’t matter otherwise”. Would that address it? (The card is A Terrible Hunger, which also needs to be changed to “a terrible hunger.\n4 points for every corpse in your possession at the end (killing always leaves a corpse; corpses can be carried; when agents are in the same land as a corpse, they can move it along with them as they move)”.)
What’s this in response to?
Latter. Unsure where to slot this into the manual. And I’m also kind of unsatisfied with this approach. I think it’s important that players value something beyond their own survival, but also it’s weird that they don’t intrinsically value their survival at all. I could add a rule that survival is +4 points for each agent, but I think not having that could also be funny? Like players pledging their flesh to cannibal players by the end of the game and having to navigate the trust problems of that? So I’d want to play a while before deciding.
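To make the arithmetic concrete, here’s a minimal sketch of the two end-game scoring variants. The function and parameter names (final_score, corpses_held, surviving_agents, survival_rule) are mine, not from the manual, and it assumes the scoring player holds the “a terrible hunger” desire:

```python
def final_score(corpses_held: int, surviving_agents: int,
                survival_rule: bool = False) -> int:
    """Sketch of end-game scoring for a player holding "a terrible hunger".

    corpses_held: corpses in the player's possession at the end of the game
    surviving_agents: the player's agents still alive at the end
    survival_rule: whether the proposed +4-per-surviving-agent rule is in play
    """
    score = 4 * corpses_held           # 4 points for every corpse held
    if survival_rule:
        score += 4 * surviving_agents  # optional intrinsic value on survival
    return score

# Without the survival rule, surviving agents are worth nothing directly,
# which is what makes pledging your own flesh to a cannibal player viable.
print(final_score(corpses_held=2, surviving_agents=3))                     # 8
print(final_score(corpses_held=2, surviving_agents=3, survival_rule=True)) # 20
```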
I think unpacking that kind of feeling is valuable, but yeah, it seems like you’ve been assuming we use decision theory to make decisions, when we actually use it as an upper-bound model: to derive principles of decisionmaking that may be more specific to humans, to anticipate the behavior of idealized agents, or (in the distinction between CDT and FDT) as an allegory for toxic consequentialism in humans.
I’m aware of a study that found that the human brain clearly responds to changes in direction of the earth’s magnetic field (iirc, the test chamber isolated the participant from the earth’s field then generated its own, then moved it, while measuring their brain in some way) despite no human having ever been known to consciously perceive the magnetic field/have the abilities of a compass.
So, presumably, compass abilities could be taught through a neurofeedback training exercise.
I don’t think anyone’s tried to do this (“neurofeedback magnetoreception” finds no results)
But I guess the big mystery is why humans don’t already have this.
A relevant FAQ entry: AI development might go underground
I think I disagree here:
By tracking GPU sales, we can detect large-scale AI development. Since frontier model GPU clusters require immense amounts of energy and custom buildings, the physical infrastructure required to train a large model is hard to hide.
This will change, and it’s only the case for frontier development anyway. I also think we’re probably in a hardware overhang. I don’t think there is anything inherently difficult to hide about AI; that’s likely just a fact about the present iteration of AI.
But I’d be very open to more arguments on this. I guess… I’m convinced there’s a decent chance that an international treaty would be enforceable and that China and France would sign onto it if the US was interested, but the risk of secret development continuing is high enough for me that it doesn’t seem good on net.
Personally, because I don’t believe the policy in the organization’s name is viable or helpful.
As to why I don’t think it’s viable, it would require the Trump-Vance administration to organise a strong global treaty to stop developing a technology that is currently the US’s only clear economic lead over the rest of the world.
If you attempted a pause, I think it wouldn’t work very well, and it would rupture and leave the world in a worse place: some AI research is already happening in a defence context. This is easy to ignore while defence isn’t the frontier. The current apparent absence of frontier AI research in a military context is miraculous, strange, and fragile. If you pause in the private context (which is probably all anyone could do), defence AI will become the frontier in about three years, and after that I don’t think any further pause is possible, because it would require a treaty against secret military technology R&D. Military secrecy is pretty strong right now: hundreds of billions yearly are known to be spent on mostly secret military R&D, and probably more is actually spent.
(To be interested in a real pause, you have to be interested in secret military R&D. So I am interested in that, and my position right now is that it’s got hands you can’t imagine.)

To put it another way: after thinking about what pausing would mean, it dawned on me that pausing means moving AI underground, and from what I can tell that would make it much harder to do safety research or to approach the development of AI with a humanitarian perspective. It seems to me like the movement has already ossified a slogan that makes no sense in light of the complex and profane reality that we live in, which is par for the course when it comes to protest activism movements.
I notice they have a “Why do you protest” section in their FAQ. I hadn’t heard of these studies before:
Protests can and often will positively influence public opinion, voting behavior, corporate behavior and policy.
There is no evidence for a “backfire” effect unless the protest is violent. Our protests are peaceful and non-violent.
Check out this amazing article for more insights on why protesting works
Regardless, I still think there’s room to make protests cooler and more fun and less alienating, and when I mentioned this to them they seemed very open to it.
Yeah, I’d seen this. The fact that Grok was ever consistently saying this kind of thing is evidence, though not proof, that they may actually have a culture of generally not distorting its reasoning. They could have introduced propaganda policies at training time; it seems like they haven’t done that, and instead decided to just insert some pretty specific prompts that, I’d guess, were probably going to be temporary.
It’s real bad, but it’s not bad enough for me to shoot yet.
There is evidence, literal written evidence, of Musk trying to censor Grok from saying bad things about him
I’d like to see this
I wonder if maybe these readers found the story at that time as a result of first being bronies, and I wonder if bronies still think of themselves as a persecuted class.
IIRC, aisafety.info is primarily maintained by Rob Miles, so it should be good: https://aisafety.info/how-can-i-help
I’m certain that better resources will arrive, but I do have a page for people asking this question on my site: the “what should we do” section. I don’t think these are particularly great recommendations (I keep changing them), but it has something for everyone.
These are not concepts of utility that I’ve ever seen anyone explicitly espouse, especially not here, the place to which it was posted.
The people who think of utility in the way the article is critiquing don’t know what utility actually is, and presenting a critique of this tangible utility as a critique of utility in general takes the target audience further away from understanding what utility is.
A utility function is a property of a system (like, e.g., voltage, or inertia, or entropy) rather than a physical thing. Not being a simple physical substance doesn’t make it fictional.
It’s extremely non-fictional. A human’s utility function encompasses literally everything they care about, i.e., everything they’re willing to kill for.
It seems to be impossible for a human to fully articulate exactly what the human utility function is, but that’s just a peculiarity of humans rather than a universal characteristic of utility functions. Other agents could have very simple utility functions, and humans are likely to grow able to definitively know their own utility function at some point in the next century.
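For reference, the formalization I have in mind here is just the standard decision-theoretic one, sketched in generic notation (the symbols u, O, A, B are mine, not anything quoted from the article): a utility function is a real-valued ranking over outcomes.

```latex
\documentclass{article}
\usepackage{amssymb} % for \mathbb
\begin{document}
% A utility function assigns a real number to each outcome,
% and preferring A to B just means assigning A a higher number:
\[
  u : \mathcal{O} \to \mathbb{R}, \qquad
  A \succeq B \iff u(A) \ge u(B)
\]
% Nothing in this definition requires u to be a physical object, to be
% simple, or to be something the agent can explicitly write down.
\end{document}
```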
Contemplating an argument that free response rarely gets more accurate results for questions like this, because listing the most common answers as checkboxes helps respondents remember all of the answers that are true for them.
I’d be surprised if LLM use for therapy or summarization is that low irl, and I’d expect people just forgot to mention those use cases. Hope they’ll be in the option list this year.
Hmm, I wonder if a lot of trends are drastically underestimated because surveyors are getting essentially false statistics from the “Other” gutter.
Apparently Anthropic, in theory, could have released Claude 1 before ChatGPT came out? https://www.youtube.com/live/esCSpbDPJik?si=gLJ4d5ZSKTxXsRVm&t=335
I think the situation would be very different if they had.
Were OpenAI also, in theory, able to release sooner than they did, though?
The assumption that being totally dead/being aerosolised/being decayed vacuum can’t be a future experience is unprovable. Panpsychism should be our null hypothesis[1], and there never has been and never can be any direct measurement of consciousness that could take us away from the null hypothesis.
Which is to say, I believe it’s possible to be dead.
1. ^ The negation, that there’s something special about humans that makes them eligible to experience, is clearly held up by a conflation of having experiences with reporting experiences, combined with the fact that humans are the only things that report anything.
I have preferences about how things are after I stop existing. Mostly about other people, whom I love and, at times, want there to be more of.
I am not an Epicurean, and I am somewhat skeptical of the reality of Epicureans.
For the US to undertake such a shift, it would help if you could convince them they’d do better in a secret race than an open one. There are indications that this may be possible, and there are indications that it may be impossible.
I’m listening to an Ecosystemics Futures podcast episode, which, to characterize… it’s a podcast where the host has to keep asking guests whether the things they’re saying are classified or not, just in case she has to scrub it. At one point, while talking with a couple of other people who know a lot about government secrets about situations where excessive secrecy may be doing a lot of harm, Lue Elizondo does assert, quoting Chris Mellon: “We won the Cold War against the Soviet Union not because we were better at keeping secrets; we won the Cold War because we knew how to move information and secrets more efficiently across the government than the Russians.” I can believe the same thing could potentially be said about China too; censorship cultures don’t seem to be good for ensuring availability of information, so that might be a useful claim if you ever want to convince the US to undertake this.
Right now, though, Vance has asserted straight out, many times, that working in the open is where the US’s advantage is. That’s probably not true at all; working in the open is how you give your advantage away, or at least make it ephemeral, but that’s the sentiment you’re going to be up against over the next four years.