Yes, the 80s would have been when the Iraq-Iran war happened, which I mention, so I don't quite understand the link to the wiki. Maybe you're emphasizing the "Very likely never existed" part, but that was more about the claims used to justify the invasion: that Iraq had large and dangerous stockpiles on hand at the time. None were found, so either they didn't really exist then or they were well hidden or could be quickly dismantled without a trace. I suspect the former is more likely, though there is a lot of empty space and finding needles in haystacks is hard.
I'm one of those here who remember seeing the news loop of the plane hitting the tower and the building collapsing.
It was a defining moment and I think it fixed in everyone's mind the required response (other than it not being quite clear who the target should be), just as Pearl Harbor did with WWII.
To some extent the "lies about WMD" is a bit of a misdirection/political spin. It is true that no stockpiles of biological or chemical weapons, and no nuclear weapons development, were ever found. Very likely never existed. But we can say the same about Iran today. At one point we could have said the same about North Korea. We do know that Iraq developed, stockpiled and used chemical (and I think some biological) weapons in the Iraq-Iran war. We also know they had a nuclear program (which I don't think was monitored by the UN until after the war, when Iraq was forced to dismantle it). So I think concerns about WMD were quite legitimate even if they did not quite reflect the current state of reality.
And yes, Kerry nearly won, and the whole "where are the WMD" line (including reusing the old Wendy's commercial line "Where's the beef?") was frequently mentioned after the war and in the campaigns.
I'm not entirely sure what to make of the potential for the emergence of AI-related policy stances following any AI moment of realization, as it were (something hits the AI fan). On that I might be more inclined to point to Higgs's Crisis and Leviathan line of thinking. Yes, we'll see certain policy positions elevated, but how beneficial that ultimately proves, I'm less sure.
Perhaps I'm off on this, but I wanted to just ask: how much of that term being a contradiction is driven by the lack of a good underlying model of something? I'll use the house temperature example here.
We have a very good model (by assumption here) of energy transmission from the exterior to the interior. We have a controller (thermostat) that only measures external features, say outside temperature and maybe light, plus a time duration for when and how long to run either the air conditioner or the heating. With a good model (and probably a good initialization at installation), that regulator seems like it would do a good job of hitting the defined internal temperature target without needing to monitor the internal temperature.
That certainly fails the GRT on the grounds that it's a lot more complicated than it needs to be, since measuring the internal temperature and then activating the AC or heating is pretty simple and straightforward. But having a good model seems to fit well with the Shannon entropy aspects: the model itself tells us what is needed about the internal temperature, so monitoring/sampling that variable is not necessary.
Is the complicated regulation not an open-loop control case? Or is it still something of a contradiction in terms in your view?
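Here's roughly the setup I have in mind, as a minimal sketch; the heat-transfer model, coefficients, and numbers are all my own made-up assumptions, not anything from the post:

```python
import random

K = 0.1          # assumed heat-transfer coefficient (exterior -> interior), made up
TARGET = 21.0    # desired indoor temperature, deg C
HVAC_POWER = 0.5 # max degrees the HVAC can move the interior per step

def open_loop_control(outside_temp: float, inside_estimate: float) -> float:
    """Model-based regulator: never measures the inside temperature.
    It predicts the drift from the outside reading and counteracts it."""
    predicted_drift = K * (outside_temp - inside_estimate)
    correction = TARGET - (inside_estimate + predicted_drift)
    return max(-HVAC_POWER, min(HVAC_POWER, correction))

def closed_loop_control(inside_temp: float) -> float:
    """Plain thermostat: measures the inside temperature directly."""
    error = TARGET - inside_temp
    return max(-HVAC_POWER, min(HVAC_POWER, error))

# Toy comparison: when the model (K) matches reality, the open-loop regulator
# tracks the target about as well as the thermostat without ever sampling
# the internal temperature.
inside_open = inside_closed = estimate = 18.0
for _ in range(48):
    outside = 5.0 + 10.0 * random.random()
    u_open = open_loop_control(outside, estimate)
    u_closed = closed_loop_control(inside_closed)
    inside_open += K * (outside - inside_open) + u_open
    estimate += K * (outside - estimate) + u_open   # belief updated by the model only
    inside_closed += K * (outside - inside_closed) + u_closed

print(f"open-loop inside temp:   {inside_open:.2f}")
print(f"closed-loop inside temp: {inside_closed:.2f}")
```

With an accurate model the open-loop version holds the target without ever sampling the internal temperature, which is the case I'm asking about; once the model is wrong, of course, only the thermostat keeps working.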
It’s an interesting post and on some levels seems both correct and, to me at least, somewhat common sense.
Still I have a small tingle in the back of my head asking "is this magic really from intelligence or something else?" Or perhaps intelligence (perhaps not all that exceptional) and something else. It seems like in a number of the cases we're presented a somewhat narrow frame of the situation. If the magic is not highly correlated with intelligence, or better, largely a function of it, I wonder exactly how meaningful this is regarding ASI.
Perhaps this is a bit of a mirror image, but being accepted, even just being myself, actually included not opening up about some things, so feeling safe meant not being vulnerable.
I'm not sure I'm understanding your claim here. I don't think you're suggesting an "if you build it they will come" type result from being in a dominant position. But I also don't quite get why being in a dominant position in one space really impacts (other than through deep pockets) the success of doing things others are not doing, or of leading a group in doing things others are not leading that group to do.
Probably not the type of reference you were thinking about regarding reprogramming and its impact on aging issues, but I suspect it's in the area you were thinking of. I'm pretty sure it's been mentioned here on LW before in other posts/comments. Interesting idea, but implementation is problematic to say the least; really hoping someone can figure it out.
Since a lot of this is way beyond my skill set and knowledge: when you're looking at the dynamic interplay aspect, is that purely internal to the cell or do you also look at the extra-cellular "communications"? If so, are you familiar with the Conboys' plasma dilution experiments?
Yes, I agree. But actions that make people dislike you less may have no impact on people liking you, or may have a positive impact (increase your likeability). How does one go about choosing which element from the action set to put in your model to produce the results you present?
I suppose a better way to put my take here is that you're presenting a limited/narrow model, but the rhetoric implies some type of general model. I think that is a bit dangerous for people trying both to understand their social anxiety and to do something about it if they are looking to change (which seems a bit implied by the use of anxiety rather than just saying introverted).
I like the general thrust of the post here, but I'm not sure I agree with some of the assumptions and the conclusions that seem to follow from them. I think you're correct that there are some real asymmetries that get lost when confusing avoiding dislike with getting more liked. But I think that comes from disliked and liked not being different ends of a common spectrum. I suspect they are somewhat orthogonal to one another.
So the distribution you draw is imposed on a projection of the dislike vector and the like vector. That can allow a shift from the insecure to the secure distribution that produces no change in the like numbers in the tail, or, arguably, no change in the dislike numbers along with an increase in the like numbers.
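To make the orthogonality point concrete, here is a minimal sketch; the distributions and numbers are entirely made up by me, not taken from your post:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Treat "like" and "dislike" as independent (orthogonal) dimensions rather
# than opposite ends of one spectrum; all parameters are illustrative only.
like_insecure = rng.normal(loc=0.0, scale=1.0, size=n)
dislike_insecure = rng.normal(loc=0.5, scale=1.0, size=n)

# Moving from the "insecure" to the "secure" distribution shifts only the
# dislike dimension; the like dimension, and hence its tail, is untouched.
like_secure = like_insecure.copy()
dislike_secure = dislike_insecure - 0.5

tail = lambda x, t: float(np.mean(x > t))
print("strongly liked, insecure:   ", tail(like_insecure, 2.0))
print("strongly liked, secure:     ", tail(like_secure, 2.0))
print("strongly disliked, insecure:", tail(dislike_insecure, 2.0))
print("strongly disliked, secure:  ", tail(dislike_secure, 2.0))
```

On this toy version, the secure shift lowers the strong-dislike tail while leaving the strong-like tail exactly where it was, which is the asymmetry I think the single-spectrum picture collapses.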
I would add best literature survey piece for the given subject area as another type of reference material people should submit.
My sympathies to you and your family. This is one of life's sadnesses that we all experience at some point. While I don't think this will help ease anything (what could I say to do so?), I have to say that when my father died it was too soon, or felt that way. He lived a long life, died from aging (at nearly 102) and lasted only a few days after the pain got so bad that morphine was regularly administered by the hospice nurse. It was time, and expected. When it happened it was still too soon.
It was easier for me when my mother died about 6 months later; I did not have the same sense of loss, of being lost, of wishing there was more I had done, of regrets regarding the relationship, and more. And in her case there was even a bit of your experience of an initial mis-assessment (in our case by a sibling) that was the final straw, with my mother dying a couple of weeks later.
I hope your father is taking it well (that sounds bad because I know it's nothing but pain, but I cannot think of any better way to say it) and has other reasons to view life as worth living, rather than feeling that same deep "too soon" loss and a lack of interest in what life offers.
Perhaps a more general way to approach the question would be: can one identify the existing comparative advantages related to the wizard-power task to be performed?
If you only want parts which are easy to clearly describe, you can hire someone to CAD them up… or you could describe them to an LLM and have it emit files that you can convert to your CAD of choice.
I don't know if this is still the case, but 15 years ago something very similar existed with CAD/CAM production. The CAD design could be passed through a processor to generate the G-code (the instructions the CNC machine reads and follows), and then some machinist would review and "fix" the tool paths or their order to make the code more efficient for production.
The inference goes beyond the definition and claims, additionally, that in a case like this diagram, the agent will only take actions that “follow the arrows,” changing the world state from less-preferred to more-preferred at each step.
Hmm. I'm not sure about that inference, but that might be due to the absence of the specific argument you're making that inference from.
The preference arrow, to me, is simply pointing in a direction. The path between the less-preferred state and the more-preferred state is pretty much a black box at this point. So I think the inference is more like: if agent 86 stays in state A, either there is a lack of resources or the transition cost to get from A to C is higher than the difference in utility/value between the two states.
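To put that a bit more mechanically, here is a toy sketch of the inference as I read it; the function and numbers are my own framing, not the post's:

```python
# Toy condition for following a preference arrow from A to C: the agent needs
# both the resources to pay the transition cost and a utility gain that
# exceeds that cost. (Names and numbers are illustrative only.)
def will_move(utility_a: float, utility_c: float,
              transition_cost: float, resources: float) -> bool:
    can_afford = resources >= transition_cost
    worth_it = (utility_c - utility_a) > transition_cost
    return can_afford and worth_it

# Staying in A is still consistent with preferring C when the move is too
# expensive relative to the gain.
print(will_move(utility_a=5.0, utility_c=7.0, transition_cost=3.0, resources=10.0))  # False
print(will_move(utility_a=5.0, utility_c=9.0, transition_cost=3.0, resources=10.0))  # True
```

So the arrow only tells us the direction of preference; whether a step actually gets taken depends on the cost and resource terms, which the diagram leaves as a black box.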
There is probably something of a granularity question here as well. Does taking a second job for a while, or doing some gig work one doesn't enjoy at all, count as a state, or is that wrapped up in some sort of non-preferred state?
Perhaps it is more a matter of terminology, as I didn't mean to suggest some existing tendency was merely coincidental. I suspect, and would agree with the points you make, that some underlying structures will produce that world of fewer king opportunities and more wizard opportunities.
I see a lot of complications in the interactions here related to the kings and wizards framing that can muddy things up. But I was really just observing that these are not quite as simple as choosing wizard power over king power. I think one finds more wizard opportunities (to be found within all parades, between parades and even before/after/separate from parades), but I think that sometimes we'll have new parades that want to do their thing while existing kings aren't able to see that path, so a new king (not really competing with other kings, as the parade will not follow them far) will add a lot of value. That then opens up more wizard opportunities. Plus, I suspect many wizards would be poor kings and many kings poor wizards, so think about who you are when looking for your "power".
Choosing the wrong option or choosing one at the wrong time will be low/negative value (probably both personally and systemically). But if you’re unsure about it, choosing wizard is probably the smarter choice.
Do you have any indication that those without clinical signs of depression (or at least a doctor-diagnosed condition) won't become acclimated to the drug in a way that those who perhaps need it for a balanced state don't?
I suppose, asking a bit differently here: what are the gears here that do the work, and how well might one think they match up with one's own system that is in place?
Interesting thought about using it to improve one's performance rather than just as an antidepressant or an aid to quit smoking. The wiki has some good info, but it's interesting that it doesn't have a strong effect on dopamine, so it makes me wonder whether to look more at what norepinephrine does, or perhaps at the ratio between norepinephrine and dopamine.
Any consideration of the use of other NDRI drugs rather than Bupropion? I've not looked into it much at this point, but Bupropion does have some side effects I would not be too interested in experiencing.
Interesting distinction to draw out. I somewhat come away with a view that wizards get to play in the positive-sum game world while kings are stuck in zero- to negative-sum games. I'm not completely sure that is good modeling, though one might see the game of thrones hitting its equilibrium at a lower number of kings than one would expect to occur with wizards (resources are not infinite, so at some point wizards will also just be competing against one another for stuff, and possibly for followers for their own parades).
So maybe the point is that we will tend to find ourselves in a world where kings are at, or past, the equilibrium level, but the market for wizards will still support expansion and growth. I suspect there is likely some interaction here such that as wizards "create", and more wizards then create more, this activity produces some scope for expanding the kingdoms.
So perhaps it's not really about king power or wizard power but about identifying where the greatest lack resides.
I'm not sure that is the whole story here though, as some of John's post seemed to point at a purely personal aspect: where that lost part went and why. I'm not sure that the king/wizard framing gets to the core of that.
Current AI, even after ChatGPT, earns roughly ten‑thousand dollars per H100‑GPU‑year.
What does that $10K number actually represent? An average across all AI? The marginal GPU earning across all AI? An estimate of either from one of the big AIs?
I always find the use of "X% of the vote" in US elections to make some general point about overall acceptability or representation of the general public problematic. I agree it's a true statement, but it leaves out the important aspect of turnout.
I have to wonder if, particularly in the USA, we would not be quite as divided if all the reporting gave the percentage of the eligible voting population rather than the percentage of votes cast (for example, 51% of votes cast at 60% turnout is only about 31% of eligible voters). I think there is a big problem with just ignoring the non-vote information that is present (or expecting anyone to look it up and make the adjustments on their own).
But I agree, I'm not too sure just where electoral systems fall into this question of AGI/ASI first emerging under either the USA or the CCP.
Is that what it does tell us? The sign doesn't make the claim you suggest: it doesn't claim it's reducing deaths from heart disease, it states that death is 33% less likely to be "premature", which is probably a weaselly term here. But it clearly is not making any claims about reducing deaths from heart disease.
You seem to be projecting the conclusion that the claim/expected interpretation is that membership reduces deaths by 33%. But I don't know how you're concluding that the marketing team thought that would be the general interpretation by those seeing the sign.
While I would not be inclined to take a billboard ad at face value, a more reasonable take seems to me to be that the claim is that, even with heart disease, KP's members are less likely to die earlier than expected than are those with other healthcare providers. That may be a provable and true claim, or it might be more "puffing", and everyone will play with just how "premature" is going to be measured.
Whether or not it's corporate stupidity, I think, might be a separate question, but understanding exactly what results such an ad is supposed to produce will matter a lot here. Plus, there is the old adage about no one ever going bankrupt underestimating the intelligence of the American consumer, and I suspect that might go double in the case of medical/healthcare consumption.