mako yass (social system designer) http://aboutmako.makopool.com
According to Wikipedia, the Biefeld–Brown effect was just ionic drift: https://en.wikipedia.org/wiki/Biefeld–Brown_effect#Disputes_surrounding_electrogravity_and_ion_wind
I’m not sure what Wikipedia will have to say about Charles Buhler, if his work goes anywhere, but it’ll probably turn out to be more of the same.
I just wish I knew how to make this scalable (like, how do you do this on the internet?) or work even when you don’t know the example person that well. If you have ideas, let me know!
Immediate thoughts (not actionable): VR socialisation and vibe-recognising AIs (models trained to predict conversation duration and recurring meetings). (But VR won’t be good enough for socialisation until like 2027.) VR because it’s easier to persistently record, though Apple has made great efforts to set precedents that will make that difficult, especially if you want to use eye-tracking data. They’ve also developed trusted-compute stuff that might make it possible to use the data in privacy-preserving ways.
Better thoughts: just a twitterlike that has semi-private contexts. Twitter is already like this for a lot of people; it’s good for finding the people you enjoy talking to. The problem with Twitter is that a lot of people, especially the healthiest ones, hold back their best material, or don’t post at all, because they don’t want whatever crap they say when they’re just hanging out to be public and on the record forever. Simply add semi-private contexts. I will do this at some point. Iceshrimp probably will too. Mastodon might even do it. X might do it. Spritely definitely will, but they might be in the oven for a bit. Bluesky might never, though, because radical openness is a bit baked into the protocol currently, which is based, but not ideal for all applications.
Wow. Marc Andreessen says he had meetings in DC where he was told to stop funding AI startups because the field was going to be closed up in a similar way to defence tech: a small number of organisations with close government ties. He said to them, ‘you can’t restrict access to math, it’s already out there’, and he says they said “during the cold war we classified entire areas of physics, and took them out of the research community, and entire branches of physics basically went dark and didn’t proceed, and if we decide we need to, we’re going to do the same thing to the math underneath AI”.
So, 1: This confirms my suspicion that OpenAI leadership have also been told this. If they’re telling Andreessen, they will have told Altman.
And for me that makes a lot of sense of the behavior of OpenAI, a de-emphasizing of the realities of getting to human-level, a closing of the dialog, comically long timelines, shrugging off responsibilities, and a number of leaders giving up and moving on. There are a whole lot of obvious reasons they wouldn’t want to tell the public that this is a thing, and I’d agree with some of those reasons.
2: Vanishing areas of physics? A Perplexity search suggests that may be referring to nuclear science, radar, lasers, and some semiconductors. But they said “entire areas of physics”. Does any of that sound like entire areas of physics? To me that phrase is strongly reminiscent of certain stories I’ve heard (possibly overexcited ones): physics that, let’s say, could be used to make much faster missiles, missiles so fast that it’s not obvious that they could be intercepted even using missiles of the same kind. A technology that we’d prefer to consign to secrecy rather than use, and then later have to defend ourselves against once our adversaries develop their own. A black ball. If it is that, if that secret exists, that’s very interesting for many reasons, primarily due to the success of the secrecy, and the extent to which it could very conceivably stay secret basically forever. And that makes me wonder about what might happen with some other things.
https://x.com/elonmusk/status/1868302204370854026?s=19 O_O
But, government dialog confirmed.
All novel information:
The medical examiner’s office determined the manner of death to be suicide and police officials this week said there is “currently, no evidence of foul play.”
Balaji’s death comes three months after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT
The Mercury News [the writers of this article] and seven sister news outlets are among several newspapers, including the New York Times, to sue OpenAI in the past year.
The practice, he told the Times, ran afoul of the country’s “fair use” laws governing how people can use previously published work. In late October, he posted an analysis on his personal website arguing that point.
In a Nov. 18 letter filed in federal court, attorneys for The New York Times named Balaji as someone who had “unique and relevant documents” that would support their case against OpenAI. He was among at least 12 people — many of them past or present OpenAI employees — the newspaper had named in court filings as having material helpful to their case, ahead of depositions.
OpenAI has staunchly refuted those claims, stressing that all of its work remains legal under “fair use” laws.
I found that I lost track of the flow in the bullet points.
I’m aware that that’s quite normal; I do it sometimes too. I also doubt it’s an innate limit, and I think to some extent this is a playful attempt to make people more aware of it. It would be really cool if people could become better at remembering the context of what they’re reading. Context-collapse is, like, the main problem in online dialog today.
I guess game designers never stop generating challenges that they think will be fun, even when writing. Sometimes a challenge is frustrating, and sometimes it’s fun, and after looking at a lot of ‘difficult’ video games I think it turns out that, surprisingly often, whether it ends up being fun or frustrating is not totally in the designer’s control; it’s up to the player. Are they engaging deeply, or do they need a nap? Do they just want to be coddled all the way through?
(Looking back… to what extent were Portal and the renaissance it brought to puzzle games actually a raising of the principle “you must coddle the player all the way through, make every step in the difficulty shallow, while making them feel like they’re doing it all on their own”? To what extent do writers also do this (a large extent!), and how should we feel about that?
I don’t think games have to secretly coddle people. I guess it’s just something that a good designer needs to be capable of, a way of demonstrating mastery, but there are other approaches. E.g.: demonstrating easy difficulty gradations in tutorials, then letting the player choose their difficulty level from then on.) (Yes, ironic given the subject.)
Trying to figure out what it would mean to approach something cooperatively and not cohabitively @_@
I feel like it would always be some kind of trick. The non-cohabitive cooperator invites us to never mind about building real accountability mechanisms, “we can just be good :)” they say. They invite us to act against our incentives, and whether they will act against theirs in return will remain to be seen.
Let’s say it will be cooperative because cooperation is also cohabitive in this situation haha.
Overall Cohabitive Games so Far sprawls a bit in a couple of places, particularly where bullet points create an unordered list.
I don’t think that’s a good criticism; those sections are well labelled, and the reader is able to skip them if they’re not going to be interested in the contents. In contrast, your article lacks that kind of structure, meandering for 11 paragraphs defining concepts that basically everyone already has installed before dropping the definition of cohabitive game in a paragraph that looks just like any of the others. I’d prefer if you’d opened with the definition; it doesn’t really require a preamble. But labelling the Background and Definition sections would also resolve this.
I think we should probably write another post in the future that’s better than either. I’m not really satisfied with my definition. It clearly didn’t totally work, given how many people posted games that are not cohabitive, but that could have just been unavoidable for various reasons, some quite tricky to resolve.
but this post has a link to a website that has a link to a .zip file with the rules.
The rules of P1 (now OW.1) aren’t in a zip file; they’re just a web page: https://dreamshrine.org/OW.1/manual.html I guess I’ll add that to the article.
Right now the game is rough enough around the edges I think it doesn’t quite get there for me.
This is why I didn’t dwell on the rules in much depth. OW.1 was always intended as a fairly minimal (but also quite open-ended) example.
I think there’s a decent chance this post inspires someone to develop methods for honing a highly neglected facet of collective rationality. The methods might not end up being a game. Games are exercises but most practical learning exercises aren’t as intuitively engaging or strategically deep as a game. I think the article holds value regardless just for having pointed out that there is this important, neglected skill.
Despite LW’s interest in practical rationality and the community around it, I don’t think there’s been any discussion of this social skill of acknowledging difference and efficiently converging towards ideal compromises. Past discussion of negotiation has often settled for rough Schelling equilibria: arbitrary, often ossified resolutions. People will and should go to war against (excessively) arbitrary equilibria (and in the information age, they should start to expect more agile, intentional coordination processes). After Unifying Bargaining, I’d say we know, now, that we can probably do a bit better than arbitrary.
For instance, in the case of Abram’s example of national borders: the borders of a territory need not be arbitrary historical features. Under higher negotiation efficiencies, the borders correspond directly to our shared understanding of who can defend which areas and how willing they are to do it. Under even higher negotiation efficiencies, borders become an anachronism at their fringes, and the uses of land are negotiated dynamically depending on who needs to do what, and when.
To most laypeople, today, the notion of a “perfect and correct compromise” will feel like an oxymoron or a social impossibility. At this point, I think I know a perfect compromise when I see it, and I don’t think that sense requires an abnormal cultivation of character. I don’t know if I’ve seen anyone who seemed impossible to look in the eye and negotiate with, given a reasonable amount of time and support. Humans, and especially human organisations, have leaky, transparent cognitions, so I believe it’s possible in general for a human to tell whether another human is acting according to what they see as a fair, good faith compromise, and I believe all it would take to normalise and awaken that in the wider world is mutual common knowledge of what the dance looks like and how to get better at it.
Do you have similar concerns about humanoid robotics, then?
At least half of that reluctance is due to concerns about how nanotech will affect the risks associated with AI. Having powerful nanotech around when AI becomes more competent than humans will make it somewhat easier for AIs to take control of the world.
Doesn’t progress in nanotech now empower humans far more than it empowers ASI, which was already going to figure it out without us?
Broadly, any increase in human industrial capacity pre-ASI hardens the world against ASI and brings us closer to having a bargaining position when it arrives. E.g., once we have the capacity to put cheap genomic pathogen screeners everywhere, it becomes harder for it to infect us with anything novel without getting caught.
Indicating them as a suspect when the leak is discovered.
Generally, the set of people who actually read posts worthy of being marked is in a sense small; people know each other. If you had a process for distributing the work, it would be possible to figure out who’s probably doing it.
It would take a lot of energy, but it’s energy that probably should be cultivated anyway: the work of knowing each other and staying aligned.
You can’t see the post body without declaring intent to read.
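A minimal sketch of how that could work (my own naming and structure, purely illustrative, not an existing implementation): the post body is only served after an intent-to-read declaration, so if a marked post ever leaks, the suspect set is exactly the set of declared readers.

```python
from datetime import datetime, timezone


class MarkedPost:
    """A post whose body is only served after a declared intent to read, so leaks are traceable."""

    def __init__(self, post_id: str, body: str):
        self.post_id = post_id
        self._body = body
        self.declared_readers: dict[str, datetime] = {}  # user -> when they declared intent

    def declare_intent(self, user: str) -> None:
        # Declaring intent is the only way to gain access to the body.
        self.declared_readers.setdefault(user, datetime.now(timezone.utc))

    def read(self, user: str) -> str:
        if user not in self.declared_readers:
            raise PermissionError("declare intent to read before requesting the body")
        return self._body

    def suspects(self) -> list[str]:
        # If the contents leak, only the declared readers could have leaked them.
        return sorted(self.declared_readers)
```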
I don’t think the part that talks can be called the shadow. If you mean you think I lack introspective access to the intuition driving those words, come out and say it, and then we’ll see if that’s true. If you mean that this mask is extraordinarily shadowish in vibe for confessing to things that masks usually flee, yes, probably; I’m fairly sure that’s a necessity for alignment.
Intended for use in vacuum. I guess if it’s more of a cylinder than a ring this wouldn’t always be faster than an elevator system though.
I guess since it sounds like they’re going to be about a km long and 20 stories deep there’ll be enough room for a nice running track with minimal upspin/downspin sections.
Relatedly, iirc, this effect would be more noticeable in smaller spinners than in larger ones? Which is one reason people might disprefer smaller ones. Would it be a significant difference? I’m not sure, but if so, jogging would be a bit difficult: either it would quickly become too easy (and then dangerous, once the levitation kicks in) when you’re running down-spin, or it would become exhausting when you’re running up-spin.
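A rough sketch of why the effect should be stronger in smaller spinners (my own illustrative numbers, not from anyone’s design): for someone running tangentially at speed $v$ along the floor of a ring of radius $r$ with rim speed $v_{\text{rim}} = \omega r$, the felt gravity is

$$a_{\text{eff}} = \frac{(v_{\text{rim}} \pm v)^2}{r} \approx g\left(1 \pm \frac{2v}{v_{\text{rim}}}\right), \qquad g = \omega^2 r,\ v \ll v_{\text{rim}}.$$

Since $v_{\text{rim}} = \sqrt{gr}$, the fractional weight change is about $\pm 2v/\sqrt{gr}$, which is larger in smaller rings: jogging at 3 m/s in a 100 m radius ring at 1 g shifts your weight by roughly ±20%, versus about ±6% at 1000 m radius.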
A space where people can’t (or won’t) jog isn’t ideal for human health.
issue: material transport
You can become weightless in a ring station by running really fast against the spin of the ring.
More practically, by climbing down and out into a despinner on the side of the ring. After being “launched” from the despinner, you would find yourself hovering stationary next to the ring. The torque exerted on the ring by the despinner will be recovered when you enter a respinner on whichever part of the ring you want to reenter.
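For scale on the “really fast” part (my own illustrative numbers): to be weightless you have to cancel the rim speed entirely, so the required running speed is

$$v_{\text{run}} = v_{\text{rim}} = \sqrt{gr},$$

which is about 50 m/s for a 250 m radius ring at 1 g; only in a very small ring (radius around 10 m, so around 10 m/s) would it be within sprinting range.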
In my disambiguations of the really mysterious aspect of consciousness (indexical prior), I haven’t found any support for a concept of continuity. (You could say that continuity over time is likely, given that causal entanglement seems to have something to do with the domain of the indexical prior, but I’m not sure we really have a reason to think we can ever observe anything about the indexical prior.)
It’s just part of the human survival drive; it has very little to do with the metaphysics of consciousness. To understand the extent to which humans really care about it, you need to know human desires in a direct and holistic way that we don’t really practice here. Human desire is a big messy state machine that changes shape as a person grows. Some of the changes that the desires permit and encourage include situationally appropriate gradual reductions in complexity.
A continuity minder doesn’t need to define their self in terms of any particular quality; they define themselves as continuity with a history of small alterations. They are completely unbothered by the paradox of the Ship of Theseus.
It’s rare that I meet a continuity minder and cataclysmic identity change accepter who is also a patternist. But they do exist.
But I’ve met plenty of people who do not fear cataclysmic change. I sometimes wonder if we’re all that way, really. Most of us just never have the opportunity to gradually transition into a hedonium blob, so I think we don’t really know whether we’d do it or not. The road to the blob nature may turn out to be paved with acceptable changes.
Disidentifying the consciousness from the body/shadow/subconscious it belongs to and is responsible for coordinating and speaking for, like many of the things some meditators do, wouldn’t be received well by the shadow, and I’d expect it to result in decreased introspective access and control. So, psychonauts be warned.
I’m sure it’s running through a lot of interpretation, but it has to. He’s dealing with people who don’t know or aren’t open about (unclear which) the consequences of their own policies.