R&Ds human systems http://aboutmako.makopool.com
mako yass
Jellychip seems like a necessary tutorial game. I sense comedy in the fact that everyone’s allowed to keep secrets and intuitively will try to do something with secrecy despite it being totally wrongheaded. Like the only real difficulty of the game is reaching the decision to throw away your secrecy.
Escaping the island is the best outcome for you. Surviving is the second best outcome. Dying is the worst outcome.
You don’t mention how good or bad they are relative to each other though :) An agent cannot make decisions under uncertainty without knowing that.
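To make that concrete (a toy calculation; the utility symbols and numbers are mine, not the game’s): suppose staying put guarantees survival, while an escape attempt succeeds with probability $p$ and otherwise kills you. The attempt is only worth making when

$$p \cdot U(\text{escape}) + (1 - p) \cdot U(\text{die}) > U(\text{survive}),$$

i.e. when $p > \frac{U(\text{survive}) - U(\text{die})}{U(\text{escape}) - U(\text{die})}$, so the risk a player should accept depends entirely on how far apart the three outcomes sit, which the quoted ranking never pins down.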
I usually try to avoid having to explain this to players by either making it a score game or making the outcomes binary. But the draw towards having more than two outcomes is enticing. I guess in a roleplaying scenario, the question of just how good each ending is for your character is something players would like to decide for themselves. And as long as people are buying into the theme well enough, it doesn’t need to be made explicit; in fact, not making it explicit makes it clearer that player utilities aren’t comparable, and that makes it easier for people to get into the cohabitive mindset.
So now I’m imagining a game where different factions have completely different outcomes. None of them are conquest or death. They’re all weird stuff like “found my mother’s secret garden” or “fulfilled a promise to a dead friend” or “experienced flight”.
the hook
I generally think of hookness as “oh, this game tests a skill that I really want to have, and I feel myself getting better at it as I engage with the game, so I’ll deepen my engagement”.
There’s another component of it that I’m having difficulty with, which is “I feel like I will not be rejected if I ask friends to play this with me” (well, I think I could get anyone to play it once; the second time is the difficult one). And for me I see this quality in very few board games, and to get there you need to be better than the best board games out there, because you’re competing with them, so that’s becoming very difficult. But since cohabitive games rule, that should be possible for us.
And on that, I glimpsed something recently that I haven’t quite unpacked. There’s a certain something about the way Efka talks about Arcs here … he admitted that it wasn’t necessarily all fun. It was an ordeal. And just visually, the game looks like a serious undertaking. Something you’d look brave for sitting in front of. It also looks kind of fascinating. Like it would draw people in. He presents it with the same kind of energy as one would present the findings of a major government conspiracy investigation, or the melting of the clathrates. It does not matter whether you want to play this game, you have to, there’s no decision to be made as to whether to play it or not, it’s here, it fills the room.
And we really could bring an energy like that, because I think there are some really grim findings along the path to cohabitive enlightenment. But I’m wary of leaning into that, because I think cohabitive enlightenment is also the true name of peace. Arcs is apparently controversial. I do not want cohabitive games to be controversial.
(Plus a certain degree of mathematician crankery: his page on Google Image Search, and how it disproves AI.)
I’m starting to wonder if a lot/all of the people who are very cynical about the feasibility of ASI have some crank belief or other like that. Plenty of people have private religion, for instance. And sometimes that religion informs their decisions, but they never tell anyone the real reasons underlying these decisions, because they know they could never justify them. They instead say a load of other stuff they made up to support the decisions, which never quite adds up to a coherent position because they’re leaving something load-bearing out.
I don’t think the “intelligence consistently leads to self-annihilation” hypothesis is possible. At least a few times, it would instead amount to robust self-preservation.
Well... I guess I think it boils down to the dark forest hypothesis. The question is whether your volume of space is likely to contain a certain number of berserkers, and the number wouldn’t have to be large for them to suppress the whole thing.
I’ve always felt the logic of berserker extortion doesn’t work, but occasionally you’d get a species that just earnestly wants the forest to be dark and isn’t very troubled by their own extinction, no extortion logic required. This would be extremely rare, but the question is, how rare.
Light-speed migrations with no borders mean homogeneous ecosystems, which can be very constrained things.
In our ecosystems, we get pockets of experimentation. There are whole islands where the birds were allowed to be impractical aesthetes (Indonesia) or flightless blobs (New Zealand). In the field-animal world, islands don’t exist; pockets of experimentation like this might not occur anywhere in the observable universe.
If general intelligence for field-animals costs a lot and has no immediate advantages (consistently takes, say, a thousand years of ornament status before it becomes profitable), then it wouldn’t get to arise. Could that be the case?
We could back-define “ploitation” as “getting Shapley-paid”.
Yeah. But if you give up on reasoning about/approximating Solomonoff, then where do you get your priors? Do you have a better approach?
Buried somewhere in most contemporary Bayesians’ thinking is the Solomonoff prior (the prior that the most likely observations are those that have short generating machine encodings). Do we have a standard symbol for the Solomonoff prior? Claude suggests that M is the most common, but M is more often used as a distribution function, or perhaps K for Kolmogorov? (which I like because it can also be thought to stand for “knowledgebase”, although really it doesn’t represent knowledge, it pretty much represents something prior to knowledge)
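For reference, one standard way of writing it (using $M$ for the universal prior, a monotone universal machine $U$, and $\ell(p)$ for the length of a program $p$):

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},$$

where the sum runs over minimal programs whose output begins with $x$; an observation string is a priori exactly as probable as the total weight of the short programs that generate it.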
I’d just define exploitation to be precisely the opposite of Shapley bargaining: situations where a person is not being compensated in proportion to their bargaining power.
This definition encompasses any situation where a person has grievances and it makes sense for them to complain about them and take a stand, or where striking could reasonably be expected to lead to a stable bargaining equilibrium with higher net utility (not all strikes fall into this category).
This definition also doesn’t fully capture the common sense meaning of exploitation, but I don’t think a useful concept can.
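For concreteness, here’s a minimal sketch in Python of the Shapley payoff this is comparing people’s compensation against, using an invented three-player value function (the players, numbers, and function names are all made up for illustration):

```python
from itertools import permutations

# Toy coalition game: v maps each coalition (a frozenset of players) to the
# value it can generate on its own. All numbers are invented for illustration.
v = {
    frozenset(): 0,
    frozenset("a"): 10, frozenset("b"): 0, frozenset("c"): 0,
    frozenset("ab"): 30, frozenset("ac"): 30, frozenset("bc"): 5,
    frozenset("abc"): 40,
}

def shapley(players, v):
    """Average each player's marginal contribution over every join order."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            grown = coalition | {p}
            totals[p] += v[grown] - v[coalition]
            coalition = grown
    return {p: t / len(orders) for p, t in totals.items()}

print(shapley("abc", v))  # -> {'a': 25.0, 'b': 7.5, 'c': 7.5}
```

On this reading, if the full coalition’s 40 units get split so that someone receives far less than their Shapley share, that shortfall is what the definition above calls exploitation.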
As a consumer I would probably only pay about $250 for the Unitree B2-W wheeled robot dog, because my only use for it is that I want to ride it like a skateboard, and I’m not sure it can do even that.
I see two major non-consumer applications: street-to-door delivery (it can handle stairs and curbs), and war (it can carry heavy things, e.g. a gun, over long distances and uneven terrain).
So, Unitree… do they receive any subsidies?
Okay, if send rate gives you a reason to think it’s spam. Presumably you can set up a system that lets you invade the messages of new accounts sending large numbers of messages, one that doesn’t require you to cross the bright line of doing raw queries.
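A sketch of the kind of rule I mean, in Python, with entirely made-up field names and thresholds (this isn’t any particular platform’s schema, just an illustration of flagging on send-rate metadata rather than message contents):

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: float              # time since registration
    messages_sent_24h: int       # outbound messages in the last day
    distinct_recipients_24h: int

def flag_for_review(acct: Account) -> bool:
    """Flag new accounts with an unusually high send rate for human review.
    Message contents are never touched unless an account trips this rule."""
    if acct.age_days > 30:
        return False  # established accounts are left to ordinary moderation
    # Placeholder thresholds; in practice you'd tune them against base rates.
    return acct.messages_sent_24h > 100 or acct.distinct_recipients_24h > 50
```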
Any point that you can sloganize and wave around on a picket sign is not the true point, but that’s not because the point is fundamentally inarticulable, it just requires more than one picket sign to locate it. Perhaps ten could do it.
The human struggle to find purpose is a problem of incidentally very weak integration or dialog between reason and the rest of the brain, and self-delusional but mostly adaptive masking of one’s purpose for political positioning. I doubt there’s anything fundamentally intractable about it. If we can get the machines to want to carry our purposes, I think they’ll figure it out just fine.
Also… you can get philosophical about it, but the reality is, there are happy people; their purpose is clear to them: to create a beautiful life for themselves and their loved ones. The people you see at NeurIPS are more likely to be the kind of hungry, high-achieving professionals who are not happy in that way, and perhaps don’t want to be. So maybe you’re diagnosing a legitimately enduring collective issue (the sorts of humans who end up on top tend to be the ones who are capable of divorcing their actions from a direct sense of purpose, or the types of people who are pathologically busy and who lose sight of the point of it all or never have the chance to cultivate a sense for it in the first place). It may not be human nature, but it could be humanity nature. Sure.
But that’s still a problem that can be solved by having more intelligence. If you can find a way to manufacture more intelligence per human than the human baseline, that’s going to be a pretty good approach to it.
Conditions where a collective loss is no worse than an individual loss. A faction who’s on the way to losing will be perfectly willing to risk coal extinction, and may even threaten to cross the threshold deliberately to extort other players.
Do people ever talk about dragons and dinosaurs in the same contexts? If so you’re creating ambiguities. If not (and I’m having difficulty thinking of any such contexts) then it’s not going to create many ambiguities so it’s harder to object.
I think I’ve been calling it “salvaging”. To salvage a concept/word allows us to keep using it mostly the same, and to assign familiar and intuitive symbols to our terms, while intensely annoying people with the fact that our definition is different from the normal one and thus constantly creates confusion.
I’m sure it’s running through a lot of interpretation, but it has to. He’s dealing with people who don’t know or aren’t open about (unclear which) the consequences of their own policies.
According to Wikipedia, the Biefeld–Brown effect was just ionic drift: https://en.wikipedia.org/wiki/Biefeld–Brown_effect#Disputes_surrounding_electrogravity_and_ion_wind
I’m not sure what Wikipedia will have to say about Charles Buhler, if his work goes anywhere, but it’ll probably turn out to be more of the same.
I just wish I knew how to make this scalable (like, how do you do this on the internet?) or work even when you don’t know the example person that well. If you have ideas, let me know!
Immediate thoughts (not actionable): VR socialisation and vibe-recognising AIs (models trained to predict conversation duration and recurring meetings), though VR won’t be good enough for socialisation until like 2027. VR because it’s easier to persistently record, though Apple has made great efforts to set precedents that will make that difficult, especially if you want to use eye-tracking data; they’ve also developed trusted-compute stuff that might make it possible to use the data in privacy-preserving ways.
Better thoughts: just a twitterlike that has semi-private contexts. Twitter is already like this for a lot of people; it’s good for finding the people you enjoy talking to. The problem with Twitter is that a lot of people, especially the healthiest ones, hold back their best material, or don’t post at all, because they don’t want whatever crap they say when they’re just hanging out to be public and on the record forever. Simply add semi-private contexts. I will do this at some point. Iceshrimp probably will too. Mastodon might even do it. X might do it. Spritely definitely will, but they might be in the oven for a bit. Bluesky might never, though, because radical openness is a bit baked into the protocol currently, which is based, but not ideal for all applications.
Wow. Marc Andreessen says he had meetings in DC where he was told to stop raising AI startups because the field was going to be closed up in a similar way to defence tech: a small number of organisations with close government ties. He said to them, “you can’t restrict access to math, it’s already out there”, and he says they said “during the Cold War we classified entire areas of physics, and took them out of the research community, and entire branches of physics basically went dark and didn’t proceed, and if we decide we need to, we’re going to do the same thing to the math underneath AI”.
So, 1: This confirms my suspicion that OpenAI leadership have also been told this. If they’re telling Andreessen, they will have told Altman.
And for me that makes a lot of sense of the behavior of OpenAI, a de-emphasizing of the realities of getting to human-level, a closing of the dialog, comically long timelines, shrugging off responsibilities, and a number of leaders giving up and moving on. There are a whole lot of obvious reasons they wouldn’t want to tell the public that this is a thing, and I’d agree with some of those reasons.
2: Vanishing areas of physics? A Perplexity search suggests that may be referring to nuclear science, radar, lasers, and some semiconductors. But they said “entire areas of physics”. Does any of that sound like entire areas of physics? To me that phrase is strongly reminiscent of certain stories I’ve heard (possibly overexcited ones): physics that, let’s say, could be used to make much faster missiles, missiles so fast that it’s not obvious they could be intercepted even using missiles of the same kind. A technology that we’d prefer to consign to secrecy rather than use, and then later have to defend ourselves against once our adversaries develop their own. A black ball. If it is that, if that secret exists, that’s very interesting for many reasons, primarily due to the success of the secrecy, and the extent to which it could very conceivably stay secret basically forever. And that makes me wonder about what might happen with some other things.
Do you believe there’s a god who’ll reward you for adhering to this kind of view-from-nowhere morality? If not, why believe in it?