e^3 is ~20, so if each attempt succeeds with probability 1/n, the chance that all 3n attempts fail is (1 − 1/n)^(3n) ≈ e^−3 ≈ 1/20; for large n you get ~95% chance of success by doing 3n attempts.
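A quick numerical check (a sketch, assuming the implicit model of 3n independent attempts, each succeeding with probability 1/n):

```python
# Failure chance of all 3n attempts: (1 - 1/n)^(3n) -> e^-3 ≈ 0.0498 as n grows.
import math

for n in [10, 100, 1000]:
    p_all_fail = (1 - 1 / n) ** (3 * n)
    print(f"n={n:>4}: P(all fail) = {p_all_fail:.4f}  (e^-3 = {math.exp(-3):.4f})")
```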
Thinking about responsible gambling: something like an up-front long-term commitment should solve a lot of problems? You have to decide right away and lock up the money you are going to spend this month, and that separates the decision from the impulse to spend.
I tried to derive it, and it turned out to be easy: BC is the wheel pair, CD is the surface, with the slow medium above. Equal travel time gives AC/Vfast = AB/Vslow, and at the critical angle D touches the small circle (the inner wheel is on the verge of leaving the medium), so ACD is a right triangle, so AC·sin(ACD) = AD (and AD is the same as AB), so sin(ACD) = AB/AC = Vslow/Vfast. Checking the wiki, it is the same angle (BC here is the wavefront, so the velocity vector is normal to it). Honestly, I am a bit surprised this analogy works so well.
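In symbols (just restating the steps above; the point labels come from my sketch, with one wheel covering AC in the fast medium while the other covers AB in the slow medium in the same time):

```latex
% Equal travel time for the two wheels, then the critical-angle right triangle:
\frac{AC}{v_{\text{fast}}} = \frac{AB}{v_{\text{slow}}}, \qquad
AC \,\sin(\angle ACD) = AD = AB
\;\Longrightarrow\;
\sin(\angle ACD) = \frac{AB}{AC} = \frac{v_{\text{slow}}}{v_{\text{fast}}}
```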
I read about a better analogy a long time ago: use two wheels on an axle instead of a single ball; then refraction comes out naturally. Also, I think that instead of a difference in friction it is better to use a difference in elevation, so things slow down when they go into an area of higher elevation and speed back up going down.
It is defecting against cooperate-bot.
From an ASI's standpoint, humans are a type of rock: not capable of negotiating.
This experience-based primitivity also means inter-temporal self-identification only goes one way. Since there is no access to subjective experience from the future, I cannot directly identify who would be my future self. I can only say which person is me in the past, as I have the memory of experiencing things from its perspective.
While there is a large difference in practice between recalling a past event and anticipating a future event, on a conceptual level there is no meaningful difference. You don’t have direct access to past events; memory is just an especially simple and reliable case of inference.
Would be funny if the hurdle presented by tokenization were somehow responsible for LLMs being smarter than expected :) Sounds exactly like the kind of curveball reality likes to throw at us from time to time :)
[Question] How does tokenization influence prompting?
But to regard these as a series of isolated accidents is, I think, not warranted, given the number of events which all seem to point in a mysteriously similar direction. My own sense is more that there are strange and immense and terrible forces behind the Poverty Equilibrium.
Reminded me of The Hero With A Thousand Chances
Maybe societies with less poverty are less competitive.
When I read about automated programming for robotics a few months ago, I wondered if it could be applied to ML. If it can, then there is a good chance of seeing a paper about a modification to ReLU right about now. It seemed like a possibly similar kind of research, a case where perseverance is more important than intelligence. So at first I freaked out a bit at the headline, but this is not it, right?
An important fact about a rocket is that it provides a fixed amount of change in velocity (delta-v). I think your observation of “how long strong gravity slows me”, combined with thinking in terms of a delta-v budget and where it is best spent, is what brought me an intuitive understanding of the Oberth effect. Analysing a linear trajectory instead of an elliptic one also helps.
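Here is a toy version of that linear-trajectory picture (my own sketch, unit mass, illustrative numbers): spending the same delta-v at a higher speed adds more kinetic energy, since the gain is v·dv + dv²/2.

```python
# Kinetic-energy gain (per unit mass) from spending the same delta-v
# at different speeds on a straight-line trajectory.
def energy_gain(v, dv):
    """KE change when speed goes from v to v + dv: equals v*dv + dv**2 / 2."""
    return 0.5 * (v + dv) ** 2 - 0.5 * v ** 2

dv = 1.0  # the fixed delta-v budget
for v in [0.0, 5.0, 10.0]:
    print(f"burn at speed {v:>4}: energy gain {energy_gain(v, dv):.2f}")
```

That is the Oberth effect in miniature: the same burn buys the most energy where you are already moving fastest, i.e. deepest in the gravity well.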
Not sure if this helps, but:
A program that generates “exactly” the sequence HTHTT
Alongside this program there is a program that generates HHHHH, and one for HHHHT, and HHHTT, etc. There are 2^5 such programs, and before seeing evidence, HTHTT is just one of them, not standing out in any way. (But I don’t know how specifically Solomonoff induction accounts for it, if that is your question.)
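A quick enumeration of that counting argument (illustration only; this is not how Solomonoff induction is actually computed):

```python
# Every length-5 H/T sequence has a "print this literal string" program
# of the same length, so a priori none of the 2^5 sequences is special.
from itertools import product

sequences = ["".join(s) for s in product("HT", repeat=5)]
print(len(sequences))        # 32 == 2**5
print("HTHTT" in sequences)  # True: HTHTT is just one of the 32
```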
Computer viruses belong to the first category, while biological weapons and gain-of-function research belong to the second.
I think it is not even goals but means. When you have a big hammer, every problem looks like a nail; if you are good at talking, you start to think you can talk your way out of any problem.
I’d add that correctness often is security: a job poorly done is an opportunity for a hacker to subvert your system and turn your poor job into a great job for himself.
Have you played something like Slay the Spire? Or Mechabellum, which is popular right now? Deck builders don’t require coordination at all but demand an understanding of tradeoffs and risk management. If anything, those skills are neglected parts of intelligence. And how high is the barrier to entry for something like Super Auto Pets?
Heard that a box fan works best if placed some distance from the window, not in the window.
I remember reading about a zoologist couple that tried to raise their child together with a baby gorilla. The gorilla’s development stopped at a certain age, and that stalled the human child’s development, so they had to be separated.
When I see the question, I know I am on LW. That allows me to deduce that the “arcane runes” part is not important, but an LLM doesn’t have this context. Maybe it sounds like a crackpot/astrology question to it?