That said, you can hide it in your user-settings.
This solves my problem, thank you. Also it does look just like the screenshot, no problems other than what I brought up when you click on it.
This might just be me, but I really hate the floating action button on LW. It’s an eyesore on what is otherwise a very clean website. The floating action button was designed to “Represent the primary action on a screen” and draw the user’s attention to itself. It does a great job at it, but since “ask us anything, or share your feedback” is not the primary thing you’d want to do, it’s distracting.
Not only does it do that, but it also gives the impression that this is another cobbled-together Meteor app, so my brain instantly associates it with all the other crappy Meteor apps.
The other thing is that when you click on it, it doesn’t fit in with the rest of the site theme. LW has this great black-grey-green color scheme, but if you click on the FAB, you are greeted with a yellow waving hand, and when you close it, you get this ugly red (1) in the corner of your screen.
It’s also kind of pointless, since the devs and mods on this website are all very responsive and seem to be aware of everything that gets posted.
I could understand it at the start of LW 2.0 when everything was still on fire, but does anyone use it now?
/rant
I bet this is a side effect of having a large pool of bounded rational agents that all need to communicate with each other, but not necessarily frequently. When two agents only interact briefly, neither agent has enough data to work out the “meaning” of the other’s words. Each word could mean too many different things. So you can probably show that under the right circumstances, it’s beneficial for agents in a pool to have a protocol that maps speech-acts to inferences the other party should make about reality (amongst other things, such as other actions). For instance, if all agents have shared interests, but only interact briefly with limited bandwidth, both agents have an incentive to implement either side of the protocol. Furthermore, it makes sense for this protocol to be standardized, because the more standard the protocol, the less bandwidth and fewer resources the agents need to spend working out the quirks of each other’s protocols.
This is my model of what languages are.
Now that you have a well-defined map from speech-acts to inferences, the notion of lying becomes meaningful. Lying is just using speech-acts and the current protocol to shift another agent’s map of reality in a direction that does not correspond to your own map of reality.
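As a toy sketch of this model (all names and the example mapping are illustrative, not from anything real): a shared protocol is just a lookup from speech-acts to inferences, and a lie is a speech-act whose protocol-mapped inference diverges from the speaker’s own belief.

```python
# Toy model: a shared "protocol" maps speech-acts to the inference the
# listener should make about reality. Illustrative example only.
PROTOCOL = {
    "berries are safe": "safe",
    "berries are poisonous": "poisonous",
}

def infer(speech_act):
    """Listener side of the protocol: map a speech-act to a belief."""
    return PROTOCOL[speech_act]

def is_lie(speaker_belief, speech_act):
    """A lie: a speech-act whose protocol-mapped inference shifts the
    listener's map away from the speaker's own map of reality."""
    return infer(speech_act) != speaker_belief

print(is_lie("poisonous", "berries are safe"))  # True: listener's map diverges
print(is_lie("safe", "berries are safe"))       # False: maps agree
```

The point of the sketch is just that “lying” is only definable relative to the shared protocol; with no agreed-upon mapping, there is nothing for the speech-act to misrepresent.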
I personally think that something more akin to minimum utilitarianism is more in line with my intuitions. That is, to a first-order approximation, define utility as (soft)min(U(a), U(b), U(c), U(d), …) where a, b, c, d, … are the sentients in the universe. This utility function mostly captures my intuitions as long as we have reasonable control over everyone’s outcomes, utilities are comparable, and the number of people involved isn’t too crazy.
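One common way to realize a “softmin” is the negative log-sum-exp with a temperature parameter (an assumption on my part; the comment doesn’t pin down a specific smoothing):

```python
import math

def softmin(utilities, temperature=1.0):
    """Smooth approximation of min(): -T * log(sum(exp(-u/T))).
    As temperature -> 0 this approaches the hard minimum; larger
    temperatures blend in the other utilities more."""
    t = temperature
    return -t * math.log(sum(math.exp(-u / t) for u in utilities))

us = [3.0, 5.0, 10.0]
print(softmin(us, temperature=0.1))  # approximately 3.0, the hard min
print(softmin(us, temperature=5.0))  # noticeably below 3.0: smoother blend
```

Note that this softmin is always a lower bound on the hard minimum, and unlike the hard minimum it is differentiable, so small improvements to the better-off still register slightly.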
Money makes the world turn and it enables research, be it academic or independent. I would just focus on getting a bunch of that. Send out 10x to 20x more resumes than you already have, expand your horizons to the entire planet, and put serious effort into prepping for interviews.
You could also try getting a position at CHAI or some other org that supports AI alignment PhDs, but it’s my impression that those centres are currently funding constrained and already have a big list of very high quality applicants, so your presence or absence might not make that much of a difference.
Other than that, you could also just talk directly with the people working on alignment. Send them emails, and ask them about their opinion on what kind of experiments they’d like to know the result of but don’t have time to run. Then turn those experiments into papers. Once you’ve gotten a taste for it, you can go and do your own thing.
I’d put my money on lowered barriers to entry on the internet and eternal September effects as the primary driver of this. In my experience the people I interact with IRL haven’t really gotten any stupider. People can still code or solve business problems just as well as they used to. The massive spike in stupidity seems to have occurred mostly on the internet.
I think this is because of 2 effects that reinforce each other in a vicious cycle.
Barriers to entry on the internet have been reduced. A long time ago you needed technical know-how to even operate a computer; then things got easier, but you still needed a PC, and spending any amount of time on the internet was still the domain of nerds. Now anyone with a mobile phone can jump on Twitter and participate.
Social media platforms are evolving to promote ever-dumber means of communication. If they don’t, they’re outcompeted by the ones that do. For example, compare a screenshot of the Reddit UI back when it started vs. now. As another example, the forums of old made it fairly easy to write essays going back and forth arguing with people. Then you’d have things like Facebook, where you can still have a discussion, but it’s more difficult. Now you have TikTok and Instagram, where the highest form of discourse comes down to a tie between a girl dancing with small text popups and an unusually verbose sign meme. You can forget about rational discussion entirely.
So I hypothesize that you end up with this death spiral, where technology lowers barriers to entry, causing people who would otherwise have been too dumb to participate effectively to join, causing social media companies to further modify their platforms to appeal to the lowest common denominator, causing more idiots to join… and so on and so forth. To top it off, I’ve found myself and other people I would call “smart” disconnecting from the larger public internet. So you end up with evaporative cooling on top of all the other aforementioned effects.
The end result is what you see today. I’m sure the process is continuing, but I long ago checked out of the greater public internet and started hanging out in the cozyweb or outside.
At its core, this is the main argument why the Solomonoff prior is malign: a lot of the programs will contain agents with preferences, these agents will seek to influence the Solomonoff prior, and they will be able to do so effectively.
Am I the only one who sees this much less as a statement that the Solomonoff prior is malign, and much more a statement that reality itself is malign? I think that the proper reaction is not to use a different prior, but to build agents that are robust to the possibility that we live in a simulation run by influence seeking malign agents so that they don’t end up like this.
Hmm, at this point it might be just a difference of personalities, but to me what you’re saying sounds like “if you don’t eat, you can’t get food poisoning”. “Dual identity” doesn’t work for me; I feel that social connections are meaningless if I can’t be upfront about myself.
That’s probably a good part of it. I have no problem hiding a good chunk of my thoughts and views from people I don’t completely trust, and for most practical intents and purposes I’m quite a bit more “myself” online than IRL.
But in any case there will be many subnetworks in the network. Even if everyone adopts the “village” model, there will be many such villages.
I think that’s easier said than done, and that a great effort needs to be made to deal with the effects that come with having redundancy amongst villages/networks. Off the top of my head, you need to ward against having one of the communities implode after their best members leave for another.
Likewise, even if you do keep redundancy in rationalist communities, you need to ensure that there’s a mechanism that prevents them from seeing each other as out-groups, or attacking each other when they do. This is especially important since one group viewing the other as its out-group, but not vice versa, can lead to the group with the larger in-group getting exploited.
So first of all, I think the dynamics surrounding offense are tripartite. You have the party who said something offensive, the party who gets offended, and the party who judges the others involved based on the remark. Furthermore, the reason why simulacra=bad in general is because the underlying truth is irrelevant. Without extra social machinery, there’s no way to distinguish between valid criticism and slander. Offense and slander are both symmetric weapons.
This might be another difference of personalities...you can try to come up with a different set of norms that solves the problem. But that can’t be Crocker’s rules, at least it can’t be only Crocker’s rules.
I think that’s a big part of it. Especially IRL, I’ve taken quite a few steps over the course of years to mitigate the trust issues you bring up in the first place, and I rely on social circles with norms that mitigate the downsides of Crocker’s rules. A good combination of integrity+documentation+choice of allies makes it difficult for others to legitimately criticize me. To an extent, I try to make my actions align with the values of the people I associate myself with, I keep good records of what I do, and I check that the people I need either put effort into forming accurate beliefs or won’t judge me regardless of how they see me. Then when criticism is levelled against me and/or my group, I can usually challenge it by encouraging relevant third parties to look more closely at the underlying reality, usually by directly arguing against what was stated. That way I can ward off a lot of criticism without compromising as much on truth-seeking, provided there isn’t a sea change in the values of my peers. This has the added benefit that it allows me and my peers to hold each other accountable for taking actions that promote each other’s values.
The other thing I’m doing that is both far easier to pull off and way more effective, is just to be anonymous. When the judging party can’t retaliate because they don’t know you IRL and the people calling the shots on the site respect privacy and have very permissive posting norms, who cares what people say about you? You can take and dish out all the criticism you want and the only consequence is that you’ll need to sort through the crap to find the constructive/actionable/accurate stuff. (Although crap criticism can easily be a serious problem in and of itself.)
I’m breaking this into a separate thread since I think it’s a separate topic.
Second, specifically regarding Crocker’s rules, I’m not a fan of them at all. I think that you can be honest and tactful at the same time, and it’s reasonable to expect the same from other people.
So I disagree. Obviously you can’t impose Crocker’s rules on others, but I find it much easier and far less mentally taxing to communicate with people I don’t expect to get offended. Likewise, I’ve gained a great deal of benefit from people very straightforwardly and bluntly calling me out when I’m dropping the ball, and I don’t think they would have bothered otherwise, since there was no obvious way to be tactful about it. I also think that there are individuals out there who are both smart and easily offended, and with those individuals tact isn’t really an option, as they can transparently see what you’re trying to say and will take issue with it anyways.
I can see the value of “getting offended” when everyone is sorta operating on simulacra level 3 and factual statements are actually group policy bids. However, when it comes to forming accurate beliefs, “getting offended” strikes me as counterproductive, and I do my best to operate in a mode where I don’t do it, which is basically Crocker’s rules.
First, when Jacob wrote “join the tribe”, I don’t think ey had anything as specific as a rationalist village in mind? Your model fits the bill as well, IMO. So what you’re saying here doesn’t seem like an argument against my objection to Zack’s objection to Jacob.
So my objection definitely applies much more to a village than less tightly bound communities, and Jacob could have been referring to anything along that spectrum. But I brought it up because you said:
Moreover, the relationships between them shouldn’t be purely impersonal and intellectual. Any group endeavour benefits from emotional connections and mutual support.
This is where the objection begins to apply. The more interdependent the group becomes, the more susceptible it is to the issues I brought up. I don’t think it’s a big deal in an online community, especially with pseudonyms, but I think we need to be careful when you get to more IRL communities. With a village, treating it like an experiment is a good first step, but I’d definitely be in the group that wouldn’t join unless explicit thought had been put into dealing with my objections, or the village had been running successfully for long enough that I became convinced I was wrong.
Third, sure, social and economic dependencies can create problems, but what about your social and economic dependencies on non-rationalists? I do agree that dilution is a real danger (if not necessarily an insurmountable one).
So in this case individual rationalists can still be undermined by their social networks, but there are a few reasons this is a more robust model. 1) You can have a dual identity. In my case, most of the people I interact with don’t know what a rationalist is; I either introduce someone to the ideas here without referencing this place, or I introduce them to this place after I’ve vetted them. This makes it harder for social networks to put pressure on you or undermine you. 2) A group failure of rationality is far less likely to occur when doing so requires affecting social networks in New York, SF, Singapore, Northern Canada, Russia, etc., than when you just need to influence a single social network.
IMO, F*** or F!#@, I feel like it has more impact that way. Since it means you went out of your way to censor yourself, and it’s not just a verbal habit, as would be the case with either fuck or a euphemism.
So full disclosure, I’m on the outskirts of the rationality community looking inwards. My view of the situation is mostly filtered through what I’ve picked up online rather than in person.
With that said, in my mind the alternative is to keep the community more digital, or something that you go to meetups for, and to take advantage of society’s existing infrastructure for social support and other things. This is not to say we shouldn’t have strong norms; the comment box I’m typing this in is reminding me of many of those norms right now. But the overall effect is that rationalists end up more diffuse, with less in common other than the shared desire for whatever it is we happen to be optimizing for. This is in contrast to building something more like a rationalist community/village, where we create stronger interpersonal bonds and rely on each other for support.
The reason I say this is because, as I understood it, the rationalist community (at least the truth-seeking side) came out of a generally online culture, where disagreement is (relatively) cheap, and individuals in the group don’t have much obvious leverage over one another. That environment seems to have been really good for allowing people to explore and exchange weird ideas, and to follow logic and reason wherever they happen to go. It also allows people to more easily “tell it like it is”.
When you create a situation where a group of rats become interdependent socially or economically, most of what I’ve read and seen indicates that you can gain quite a bit in terms of quality of life and group effectiveness, but I feel it also opens up the door to the kind of “catastrophic social failure” I’d mentioned earlier. Doubly so if the community starts to build up social or economic capital that other agents who don’t share the same goals might be interested in.
Sure, tribes also carry dangers such as death spirals and other toxic dynamics. But the solution isn’t disbanding the tribe, that’s throwing away the baby with the bathwater.
I think we need to be really careful with this, and the dangers of becoming a “tribe” shouldn’t be understated w.r.t. our goals. In a community focused on promoting explicit reason, it becomes far more difficult to tell apart those who are carrying out social cognition from those who are actually carrying out explicit reasoning, since the object-level beliefs and justifications of those doing social cognition and those using explicit reason will be almost identical. Likewise, it becomes much easier to slip back into the social-cognition mode of thought while still telling yourself that you’re still reasoning.
IMO, if we don’t take additional precautions, this makes us really vulnerable to the dynamics described here. Doubly so the second we begin to rack up any kind of power, influence, or status. Initially everything looks good and everyone around you seems to be making their way along The Path™. But slowly you build up a mass of people who all agree with you on the object level but who acquired their conclusions and justifications by following social cues. Once the group reaches critical mass, you might get into a disagreement with a high-status individual or group, and instead of using reason and letting the chips fall where they may, standard human tribal coordination mechanisms are used to strip you of your power and status. Then you’re expelled from the tribe. From there, whatever mission the tribe had is quickly lost to the usual status games.
Personally, I haven’t seen much discussion of mechanisms for preventing this and other failure modes, so I’m skeptical of associating myself or supporting any IRL “rationalist community/village”.
Another option not discussed is to control who your message reaches in the first place, and in what medium. I’ll claim, without proof or citation, that social media sites like twitter are cesspits that are effectively engineered to prevent constructive conversation and to exploit emotions to keep people on the website. Given that, a choice that can mitigate these kind of situations is to not engage with these social media platforms in the first place. Post your messages on a blog under your own control or a social media platform that isn’t designed to hijack your reward circuitry.
I think you’re missing an option, though. You can specifically disavow and oppose the malicious actions/actors, and point out that they are not part of your cause, and are actively hurting it. No censorship, just clarity that this hurts you and the cause. Depending on your knowledge of the perpetrators and the crimes, backing this up by turning them or actively thwarting them may be in scope as well.
There is a practical issue with this solution in the era of modern social media. Suppose you have malicious actors who go on to act in your name, but you never would have associated yourself with them under normal circumstances because they don’t represent your values. If you tell them to stand down or condemn them, then you’ve associated yourself with them, and that condemnation can be used against you.
Another alternative is to use a 440 nm light source and a frequency-doubling crystal (https://phoseon.com/wp-content/uploads/2019/04/Stable-high-efficiency-low-cost-UV-C-laser-light-source-for-HPLC.pdf), although the efficiency is questionable. There are also other variations based on frequency quadrupling: https://opg.optica.org/oe/fulltext.cfm?uri=oe-29-26-42485&id=465709.
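The arithmetic behind this is simple: second-harmonic generation doubles the optical frequency, which halves the wavelength, so 440 nm becomes 220 nm, inside the UV-C band (roughly 100–280 nm). A quick check:

```python
# Second-harmonic generation doubles frequency, halving the wavelength.
def frequency_doubled(wavelength_nm):
    return wavelength_nm / 2

def is_uvc(wavelength_nm):
    """UV-C band is roughly 100-280 nm."""
    return 100 <= wavelength_nm <= 280

print(frequency_doubled(440))        # 220.0 nm
print(is_uvc(frequency_doubled(440)))  # True

# Frequency quadrupling is two doubling stages, e.g. starting from an
# 880 nm (near-infrared) source:
print(frequency_doubled(frequency_doubled(880)))  # 220.0 nm
```

This is just the wavelength bookkeeping; the hard part in practice is the conversion efficiency of the nonlinear crystal, which is why the comment flags efficiency as questionable.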