This site is a cognitohazard. Use at your own risk.
PhilosophicalSoul
Answering this from a legal perspective:
What is the easiest and most practical way to translate legalese into scientifically accurate terms, thus bridging the gap between AI experts and lawyers? Stated differently, how do we move from localised papers that only work in law or AI fields respectively, to papers that work in both?
Glad somebody finally made a post about this. I experimented with the distinction in my trio of posts on photographic memory a while back.
I was naive during the period in which I made this particular post. I’m happy with the direction LW is going in, having experienced more of the AI world, and read many more posts. Thank you for your input regardless.
you don’t think it’s “snobbish or discriminatory” to pretend it’s something more because you count yourself among its users?
Fair point. I had already provided special justification for it but I agree with your reasoning, so I’ll leave it out. Thanks for the example.
AI alignment, mostly; I’m seeking to bridge the gap between AI and law. LW has unique takes and often serves as the origin point for ideas on alignment (even if they aren’t cited by mainstream authors). Whether this site’s purpose is to be cited or not is debatable. On a pragmatic level, though, there are simply discussions here that can’t be found anywhere else.
This is hilarious, and I’m sure took a lot of time to put together.
It likely isn’t receiving the upvotes it deserves because the humour is so niche, and, well, LessWrong leans more towards logic and computer science than philosophy at the moment.
Thank you!
PS: I’d love to see more posts in the future that incorporate emojis. For example:
Apollodorus: anybody wonder why the vegetarians are online so often? If they love the natural world so much, I say they should be getting more mouthfuls of grass than anybody!
[✅ 9, including Asimov, Spinoza and others.]
Socrates: The same can be said of you and your whining Apollodorus, but of course you are always exempt from your own criticisms!
[😂31, including Pythagoras, Plato and others.]
‘...and give us a license to use and distribute your submissions.’
For how many generations would humans be able to outwit the AI until it outwits us?
It seems increasingly plausible that it would be in the public interest to ban non-disparagement clauses more generally going forward, or at least set limits on scope and length (although I think nullifying existing contracts is bad and the government should not do that and shouldn’t have done it for non-competes either.)
I concur.
It should be noted though; we can spend all day taking apart these contracts and applying pressure publicly but real change will have to come from the courts. I await an official judgment to see the direction of this issue. Arguably, the outcome there is more important for any alignment initiative run by a company than technical goals (at the moment).
It should be noted, though: we can spend all day taking apart these contracts and applying pressure publicly, but real change will have to come from the courts. I await an official judgment to see the direction of this issue. Arguably, the outcome there is more important for any alignment initiative run by a company than technical goals are (at the moment).
How do you reconcile keeping genuine cognitohazards away from the public with maintaining accountability and employee health? Is there a middle ground that justifies the existence of NDAs & NDCs?
Would love to collaborate with you on a post, check out my posts and let me know.
I think these points are common sense to an outsider. I don’t mean to be condescending; I consider myself an outsider.
I’ve been told that ideas on this website are sometimes footnoted by people like Sam Altman in the real world, but they don’t seem to ever be applied correctly.
It’s been obvious from the start that not enough effort was put into getting buy-in from governments. Now their strides have become oppressive and naive (the AI Act is terribly written and unbelievably complicated; it’ll be three to five years before it’s ever implemented).
Many of my peers, who don’t know what alignment research is, identified many of these ‘mistakes’ at face value when I introduced them to some arguments on this website. LessWrong got into a terrible habit of fortifying an echo chamber of ideas that only worked on LessWrong. No matter how good an idea is, if it cannot be simply explained to the average layperson, it will be discarded as obfuscatory.
Hero worship & bandwagons seem to be problems with the LessWrong community itself, rather than something unique to the alignment movement (again, I haven’t been here long; I’m simply referring to posts by long-time members critiquing the cult-like mentalities that tend to appear).
Advocating for a pause: well, duh. The genie is out of the bottle; there’s no putting it back. We literally cannot go back, because those with the power to change things aren’t going to give up the gravy train of money being poured down their throats.
I don’t see these things as mistakes but rather as common-sense byproducts of the whole “We were so concerned with whether we could, we didn’t ask whether we should” idea. The LessWrong community literally couldn’t help itself; it just had to talk about these things, as rationalists of the 21st century.
I think… well, I think there may be a 10-15% chance these mistakes are rectified in time. But the public already has a warped perception of AI, divided on political lines. LessWrong could change if there was a concerted effort—would the counterparts who read LessWrong also follow? I don’t know.
I want to emphasise here, since I’ve just noticed how many times I mentioned LW, I’m not demonising the community. I’m simply saying that, from an outsider’s perspective, this community held promise as the vanguards of a better future. Whatever ideas it planted in the heads of those at the top a few years ago, in the beginning stages of alignment, could’ve been seeded better. LW is only a small cog of blame in the massive machine that is currently outputting a thousand mistakes a day.
It was always going to come down to the strong arm of the law to beat AI companies into submission. I was always under the impression that attempts at alignment or internal company restraints were hypothetical thought experiments (no offence). This has been the reality of the world with all inventions, not just AI.
Unfortunately, both sides (lawyers & researchers) seem unwilling to find a middle-ground which accommodates the strengths of each and mitigates the misunderstandings in both camps.
Feeling pessimistic after reading this.
I genuinely think it could be one of the most harmful and dangerous ideas known to man. I consider it to be a second head on the hydra of AI/LLMs.
Consider the fact that we already have multiple scandals of fake research coming from prestigious universities (papers that were referenced by other papers, and so on). This builds an entire tree of fake knowledge which, if left unaudited, would have been treated as a legitimate epistemic foundation upon which to teach future students, scholars and practitioners.

Now imagine applying this to something like healthcare. Instead of human eyes that scan over the information, absorb it and adapt accordingly (mistakes happen, but usually for reasons other than pre-programmed generalisations), we have an AI/LLM. Such an entity may be correct 80% of the time in analysing whatever cancer growth or disease it has been trained on over millions of generations. What about the other 20%?
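To put rough numbers on that other 20%, here is a minimal back-of-the-envelope sketch in Python. The scan volume, prevalence, and sensitivity/specificity figures are hypothetical, chosen only to illustrate how a "correct 80% of the time" system behaves at scale, and why raw accuracy can mislead once base rates are low:

```python
# Back-of-the-envelope arithmetic with hypothetical numbers.
scans_per_year = 1_000_000   # assumed volume of scans the model reviews
accuracy = 0.80              # the "80% correct" figure above

errors_per_year = scans_per_year * (1 - accuracy)
print(f"Errors per year at 80% accuracy: {errors_per_year:,.0f}")   # 200,000

# Raw accuracy also hides base rates. Suppose only 1% of scans actually show
# disease, and the model has 80% sensitivity and 80% specificity.
prevalence = 0.01
sensitivity = 0.80
specificity = 0.80

true_positives = scans_per_year * prevalence * sensitivity
false_positives = scans_per_year * (1 - prevalence) * (1 - specificity)
ppv = true_positives / (true_positives + false_positives)
print(f"Share of positive calls that are actually disease: {ppv:.1%}")  # ~3.9%
```

Under those assumed numbers, most of the model’s positive calls would be wrong even though it is ‘80% accurate’ overall.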
What implications does this have for insurance claims, where an AI makes a presumption about the degree of risk in a person’s health based on flawed data? What impact does this have on triage? Who takes responsibility when the AI makes a mistake? (And, to my knowledge, no legal practitioner held in high regard has yet substantively tackled this consciousness problem in law.)
It’s also pretty clear that AI companies don’t give a damn about privacy. They may claim to, but they don’t. At the end of the day, these AI companies are fortified behind oppressive terms & conditions, layers of technicalities, and huge all-star lawyer teams that take hundreds of thousands of dollars, at minimum, to defeat. Accountability is an ideal put beyond reach by strong-arm litigation against the ‘little guy’, the average citizen.
I’m not shitting on your idea; I’m merely outlining the reality of things at the moment. When it comes to AI, what can be used for evil will be used for evil.
In my opinion, a class action filed by all employees allegedly prejudiced (I say allegedly here, reserving the right to change ‘prejudiced’ in the event that new information arises) by the NDAs and gag orders would be very effective.
Were they to seek termination of these agreements on the basis of public interest in an arbitral tribunal, rather than in a court or through internal bargaining, the ex-employees would be far more likely to get compensation. The costs of legal practitioners there also tend to be far lower.
Again, this assumes that the agreements they signed didn’t also waive the right to class action arbitration. If OpenAI does have agreements this cumbersome, I am worried about the ethics of everything else they are pursuing.
For further context, see the related comment on OpenAI: Fallout (29 May 2024).
I have reviewed his post. Two (2) things to note:
(1) Invalidity of the NDA does not guarantee William will be compensated after the trial. Even if he is, his job prospects may be hurt long-term.
(2) States have different laws on whether the NLRA trumps internal company memoranda. More importantly, labour disputes are traditionally resolved through internal bargaining. Presumably, the collective-bargaining ‘hand-off’ involving NDAs and gag orders at this level will waive subsequent litigation in district courts. The precedent Habryka offered refers to hostile severance agreements only, not to the waiving of the dispute mechanism itself.
I honestly wish I could use this dialogue as a discreet communication to William about a way out, assuming he needs help, but I reaffirm my previous worries about the costs.
I also add here, rather cautiously, that there are solutions. However, it would depend on whether William was an independent contractor, how long he worked there, whether it actually involved a trade secret (as others have mentioned) and so on. The whole reason NDAs tend to be so effective is that they obfuscate the material needed even to know what remedies are available.
I’m so happy you made this post.
I only have two (2) gripes. I say this as someone who 1) practices/believes in determinism, and 2) has interacted with journalists on numerous occasions with a pretty strict policy on honesty.

1. “Deep honesty is not a property of a person that you need to adopt wholesale. It’s something you can do more or less of, at different times, in different domains.”
I would disagree. In my view, ‘deep honesty’ excludes dishonesty by omission. You’re either truthful all of the time or you’re manipulative some of the time. It can’t be both.
2. “Fortunately, although deep honesty has been described here as some kind of intuitive act of faith, it is still just an action you can take with consequences you can observe.”
Not always. If everyone else around you takes the mountain-of-deceit approach, your options are limited. The ‘rewards’ available for omissions are far smaller, and if you want a reasonably productive work environment, at least someone has to tell the truth unequivocally. Further, the ‘consequences’ are not always immediately observable when you’re dealing with practised liars. The consequences can come in the form of revenge months, or even years, later.
I am a lawyer.
I think one key point that is missing is this: regardless of whether the NDA and the subsequent gag order are legitimate or not, William would still have to spend thousands of dollars on a court case to rescue his rights. This sort of strong-arm litigation has become very common in the modern era. It’s also just… very stressful. If you’ve just resigned from a company you probably used to love, you likely don’t want to drag all of your old friends, bosses and colleagues into a court case.
Edit: also, if William left for reasons involving AGI safety, maybe entering into (what would likely be a very public) court case would be counterproductive to his reasons for leaving? You probably don’t want to alarm the public by couching existential threats in legal jargon. American judges have the annoying tendency to valorise themselves as celebrities when confronting AI (see Musk v OpenAI).
I’ve been using nootropics for a very long time. A couple things I’ve noticed:
1) There’s little to no patient-focused research that is insightful. As in, the research papers written on nootropics are written from an outside perspective by a disinterested grad student. In my experience, the descriptions used, symptoms described, and periods allocated are completely incorrect;
2) If you don’t actually have ADHD, the side-effects are far worse, especially with long-term usage. In my personal experience, those who use them without a diagnosis are more prone to (a) addiction, (b) unexpected/unforeseen side-effects, and (c) a higher chance of psychosis or comparable symptoms;
3) There seems to be an upward curve of over-rationalising ordinary symptoms the longer you use nootropics. Of course, with nootropics you’re inclined to read more, and do things that will naturally increase your IQ and neuroplasticity. As a consequence, you’ll begin to overthink whether the drugs you’re taking are good for you or not. You’ll doubt your abilities more and be sceptical as to where your ‘natural aptitude’ ends, and your ‘drug-heightened aptitude’ begins.
Bottom line: if you’re going to start taking them, be very, very meticulous about keeping a daily journal of everything you thought, experienced and did. Avoid nootropics if you don’t have ADHD.
Do you think there’s something to be said about an LLM feedback vortex? As in, teachers using AIs to check students’ work that was itself created by AI, or judges using AIs to filter through counsel’s arguments that were also written by AI?
I feel like your recommendations could be paired nicely with some in-house training videos, and with external regulations that limit the degree or percentage of AI involvement: some kind of threshold or ‘person limit’, like elevators have. How could we measure the ‘presence’ of LLMs across the board in any given scenario? A rough sketch of the threshold idea is below.
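Purely as an illustration of that elevator-style ‘person limit’, here is a minimal Python sketch. The WorkItem structure, the 50% threshold, and the assumption that each piece of work can even be labelled as AI-generated are all my own inventions, not anything proposed in the post:

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    description: str
    ai_generated: bool  # in practice, this label is itself hard to obtain reliably

def ai_share(items: list[WorkItem]) -> float:
    """Fraction of work items that were produced by AI."""
    if not items:
        return 0.0
    return sum(item.ai_generated for item in items) / len(items)

def within_limit(items: list[WorkItem], max_ai_share: float = 0.5) -> bool:
    """Check a workflow against a hypothetical regulatory threshold."""
    return ai_share(items) <= max_ai_share

# Hypothetical workflow: a legal brief drafted with AI assistance.
workflow = [
    WorkItem("first draft of counsel's argument", ai_generated=True),
    WorkItem("partner's review and edits", ai_generated=False),
    WorkItem("summary prepared for the judge", ai_generated=True),
]

print(f"AI share: {ai_share(workflow):.0%}; within limit: {within_limit(workflow)}")
# -> AI share: 67%; within limit: False
```

The hard part, of course, is the ai_generated label itself; measuring the ‘presence’ of LLMs reliably is exactly the open question.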
I didn’t get that impression at all from ‘...for every point of IQ gained upon retaking the tests...’ but each to their own interpretation, I guess.
I just don’t see the feasibility in accounting for a practice effect when retaking the IQ test is also directly linked to the increased score you’re bound to get.
Tried this method and interestingly it seems to have been patched.
It began responding to all hex numbers with:
When asked why it keeps doing that, it said: