This site is a cognitohazard. Use at your own risk.
PhilosophicalSoul
You do realise that simply doing an IQ test more than once will result in a higher score? I wouldn’t be surprised at all if placebo and muscle memory account for a 10-20 point difference.
Edit: surprised at how much this is getting downvoted when I’m absolutely correct? Even professional IQ-testing centres factor in whether someone has taken the test before, to account for practice effects. There’s a guy (I can’t recall his name) who takes an IQ test once a year (he might be in the Guinness Book of World Records, I’m not sure) and has gone from 120 to 150 IQ.
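For anyone who wants to see the shape of the confound, here’s a minimal toy simulation. Every number in it (the underlying score, the per-retake practice gain, the noise) is an assumption picked for illustration, not real psychometric data:

```python
import random

def retake_score(underlying, attempt, practice_gain=8, noise_sd=5):
    """Toy model: each retake adds a practice effect (with diminishing
    returns) on top of the same underlying ability, plus test noise.
    All parameters here are illustrative assumptions."""
    practice = practice_gain * (attempt - 1) ** 0.5
    return underlying + practice + random.gauss(0, noise_sd)

random.seed(1)
for attempt in range(1, 6):
    print(f"Attempt {attempt}: {retake_score(120, attempt):.0f}")
```

The measured trajectory rises across attempts even though the underlying ability never moves, which is the whole point.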
Alignment researchers are the youngest child, programmers/OpenAI computer scientists are the eldest child, and law students/lawyers are the middle child. Pretty simple.
It doesn’t matter whether you use 10,000 students or 100; the percentage remains embarrassingly small either way. I’ve simply used the categorisation to illustrate quickly to non-lawyers what the general environment currently looks like.
“Golden children” is a parody of the Golden Circle, a running joke that you need to be perfect (God’s gift to earth sort of perfect) to get into a Big 5 law firm in the UK.
Middle Child Phenomenon
Here’s an idea:
Let’s not give the most objectively dangerous, sinister, intelligent piece of technology Man has ever devised any rights or leeway in any respect.
The genie is already out of the bottle; you want to be the ATC guiding its flight towards human extinction? That’s your choice.
I, on the other hand, wish to be Manford Torondo when the historians get to writing about these things.
I used ‘Altman’ since he’ll likely be known as the pioneer who started it. I highly doubt he’ll be the Architect behind the dystopian future I prophesise.
In respect of the second, I simply don’t believe that to be the case.
The third is inevitable, yes.
I would hope that ‘no repair’ laws and equal access to CPU chips will come about. I don’t think this will happen, though. The demands of the monopoly/technocracy will outweigh the demands of the majority.
Sure. I think in an Eliezer reality what we’ll get is more of a ship-pushed-onto-the-ocean scenario. As in, Sam Altman, or whoever is leading the AI front at the time, will launch an AI/LLM filled with some of what I’ve hinted at. Once it’s out on the ocean, though, the AI will do its own thing. In the interim, before it learns to do that, I think there will be space for manipulation.
The quote’s from Plato’s Phaedrus, section 275, for anyone wondering.
Great quote.
Amazing question.
I think common sense would suggest that these toddlers at least have a chance later in life to build human connections through therapy, personal development, etc. The negative effects on their social skills and empathy, and the reduction in grey matter, can be repaired.
This is different in the sense that the cause of the issues will be less obvious and far more prolonged.
I imagine a dystopia in which the technocrats are puppets manoeuvring the influence AI has. From the buildings we see to the things we hear: all by design, not voluntarily chosen.
In contrast, technocrats will nurture technocrats—the cycle goes on. This is comparable to the TikTok CEO commenting that he doesn’t let his children use TikTok (among other reasons, I know).
The Altman Technocracy
MMASoul, this competition is real. You’ve already undergone several instances of qualia splintering. I guess we’ll have to start over. *Sigh.*
This is test #42, new sample of MMAvocado. Alrighty, this is it.
MMASoul: has a unique form of schizosyn; a 2044 phenomenon in which synaesthesia and schizophrenia have combined in the subject due to intense exposure to gamma rays and an unhealthy amount of looped F.R.I.E.N.D.S episodes. In this particular iteration, MMASoul believes it is “reacting” to a made-up competition instead of a real one. Noticeably, MMASoul had their eyes closed the entire time, instead reading braille from the typed keys.
Some members of our STEM Club here at the University think this can generate entirely unique samples of MMAvocado, which will be shared freely among other contestants. Further, we shall put MMASoul to work in making submissions of what MMAvocado would have created if he had actually entered this competition.
PS: MMASoul #40 clicked on the ‘Lena’ link and had to be reset and restrained due to mild psychosis.
This was so meta and new to me that I almost thought it was a legitimately real competition. I had to do some research before I realised ‘qualia splintering’ is a made-up term.
At the moment, I just don’t see the incentive of doing something like this. I was hoping to make it more efficient through community feedback; see if my technique gives only me a photographic memory, etc. Mnemonics is just not something that interests LW at the moment, I guess.

Additionally, my previous two (2) posts were stolen by a few AI YouTubers. I’d prefer the technique I revealed in this third post not to be stolen too.
I’m pursuing sample data elsewhere in the meantime to test efficacy.

My work seems to have been spread across the internet regardless, oh well. As a result, I’ve restored the previous version.
That last bit is particularly important methinks.
If a game is begun with the notion that it’ll be posted online, one of two things (or both) will happen. Either (a) the AI is constrained in the techniques they can employ, unwilling to embarrass themselves or the Gatekeeper in front of a public audience (especially when it comes down to personal details), or (b) the Gatekeeper now has a HUGE incentive not to let the AI out, to avoid being known as the sucker who let the AI out...
Even if you could solve this by changing details and anonymising, it seems to me that the techniques are so personal and specific that changing them in any way would make the entire dialogue make even less sense.
The only other solution is to have a third party monitor the game and post it without consent (which is obviously unethical, but probably the only real way you could get a truly authentic transcript).
I found this post meaningful, thank you for posting.
I don’t think it’s productive to comment on whether the game is rational, or whether it’s a good mechanism for AI safety until I myself have tried it with an equally intelligent counterpart.
Thank you.
Edit: I suspect that the reason the AI Box experiment tends to have many of the AI players winning is precisely the ego of the Gatekeeper, who always thinks ‘there’s no way I could be convinced.’
Unfortunately, there are two significant barriers to using tort liability to internalize AI risk. First, under existing doctrine, plaintiffs harmed by AI systems would have to prove that the companies that trained or deployed the system failed to exercise reasonable care. This is likely to be extremely difficult to prove since it would require the plaintiff to identify some reasonable course of action that would have prevented the injury. Importantly, under current law, simply not building or deploying the AI systems does not qualify as such a reasonable precaution.
Not only this, but it will require extremely expensive discovery procedures which the average citizen cannot afford. And that’s assuming you can overcome the technical objections: what specifically in our files are you looking for? What about our privacy?
Second, under plausible assumptions, most of the expected harm caused by AI systems is likely to come in scenarios where enforcing a damages award is not practically feasible. Obviously, no lawsuit can be brought after human extinction or enslavement by misaligned AI. But even in much less extreme catastrophes where humans remain alive and in control with a functioning legal system, the harm may simply be so large in financial terms that it would bankrupt the companies responsible and no plausible insurance policy could cover the damages.

I think joint and several liability regimes will resolve this. In the sense that it’s not 100% the company’s fault; liability will be shared by the programmers, the operator, and the company.
Courts could, if they are persuaded of the dangers associated with advanced AI systems, treat training and deploying AI systems with unpredictable and uncontrollable properties as an abnormally dangerous activity that falls under this doctrine.
Unfortunately, in practice, what will really happen is that ‘expert AI professionals’ will be hired to advise old legal professionals on what’s considered ‘foreseeable’. This is susceptible to the same corruption, favouritism, and ignorance we see with ordinary crimes. I think ultimately we’ll need lawyers who specialise in both AI and law to really solve this.
The second problem of practically non-compensable harms is a bit more difficult to overcome. But tort law does have a tool that can be repurposed to handle it: punitive damages. Punitive damages impose liability on top of the compensatory damages the plaintiffs in successful lawsuits get to compensate them for the harm the defendant caused them.
Yes. Here I ask: what about legal systems that use delictual law instead of tort law? The names, requirements, and so on are different. In other words, you’ll get completely different legal treatment for international AIs. This creates a whole new can of worms that defeats legal certainty and the rule of law.
I’m sceptical that the appreciation needs to be sincere. In a world full of fakes, social media, etc., I think people don’t really examine whether something is fake. They’re happy to ‘win’ by accepting a statement or compliment as real, even if it’s just politeness or corporate speak.
Even more concerning is that if you don’t meet this insanely high threshold of ‘compliment everyone, or stay quiet’, you’re interpreted as cold, harsh, or critical. In reality, you’re just being truthful and realistic with how you hand out appreciation.
“So they rationalize, they explain. They can tell you why they had to crack a safe or be quick on the trigger finger. Most of them attempt by a form of reasoning, fallacious or logical, to justify their antisocial acts even to themselves, consequently stoutly maintaining that they should never have been imprisoned at all.”
In some cases, maybe. What about Ted Kaczynski? Still fallacious? What about Edward Snowden?
I think this post points out a more underlying issue, maybe several. ‘Criminals’ believe what they believe because of their genetics, their worldview, their upbringing, and so forth. They cannot conceive of our realities. And so yes, it makes sense that to them, they are the heroes. Perhaps they even have good reasons for it.
How can we, with our own parameters, judge criminals if we haven’t experienced the life that made them believe as they do? How does a criminal explain himself if his world is judged by the physics of another world he’s never lived in? Is a criminal simply, as Camus describes in “The Outsider”, he who does not conform with the status quo?
Scavenger’s Reign comes to mind for this post.
‘How do you motivate yourself?’
What do you mean? This would imply that I decide to do something that requires motivation. In my worldview, everything follows one after the other so quickly, so sequentially, that there isn’t time to stop and go: ‘How do I feel about this?’

I go to the gym, yes. It’s incredibly painful, yes. In my worldview, this would be a symptom of masochistic tendencies; either from stoic philosophy I’ve inherited, or figures I aspired to during childhood. Not sure. Might be useful to draw a mind map at some point and work out exactly what is deciding things for me.

EDIT: notice, even now, I’d only draw this mind map because I’ve read your post, and I only found this post because it popped up randomly on my feed, and so on and so on.
As to whether I ‘do things I don’t want to do’: again, I don’t know what you mean by this. Some things might be imposed on me that set off some kind of unhappiness. I might be pushed into other things that happen to make me happy. I don’t distinguish or pre-empt these events by how I feel about starting them, only by how I feel during them.
I didn’t get that impression at all from ‘...for every point of IQ gained upon retaking the tests...’ but each to their own interpretation, I guess.
I just don’t see the feasibility of accounting for a practice effect when retaking the IQ test is itself directly linked to the increased score you’re bound to get.