(Perhaps you’re thinking of this https://www.lesswrong.com/posts/EKu66pFKDHFYPaZ6q/the-hero-with-a-thousand-chances)
Good formulation. “Given it’s Monday” can have two different meanings:
- you learn that you will only be awoken on Monday; then it’s 50 %
- you awake, assign 1/3 probability to each awakening instance, and then update on the news that it is Monday

So it turns out to be 50 % for both, but it wasn’t initially obvious to me that these two ways would have the same result.
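A quick way to check that the two readings agree (a minimal sketch; the 1/3-per-instance prior is the thirder assignment from above):

```python
from fractions import Fraction

# Thirder prior: 1/3 on each awakening instance (coin, day).
instances = {
    ("heads", "monday"): Fraction(1, 3),
    ("tails", "monday"): Fraction(1, 3),
    ("tails", "tuesday"): Fraction(1, 3),
}

# Second meaning: wake up, then update on the news that it is Monday.
monday = {k: p for k, p in instances.items() if k[1] == "monday"}
p_heads = monday[("heads", "monday")] / sum(monday.values())
print(p_heads)  # 1/2, matching the first meaning (only Monday awakenings exist)
```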
I’d say
The possible observer instances and their probabilities are:
- Heads (50 %)
  - Red room (25 %)
  - Blue room (25 %)
- Tails (50 %)
  - Red room (50 %, on Monday or Tuesday)
  - Blue room (50 %, on Monday or Tuesday)
If I choose the strategy “bet only if blue” (or equivalently “bet only if red”), then the expected value of this strategy comes out positive, so I choose to follow it.
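The original expected-value expression didn’t survive, but here is a sketch under one natural reading: an even-odds bet on tails, placed only in blue rooms and resolved once per experiment (the payoff structure is my assumption, not from the thread):

```python
from fractions import Fraction

p_heads = Fraction(1, 2)

# Heads: a single awakening, blue with probability 1/2, and the tails bet loses.
ev_heads = p_heads * Fraction(1, 2) * (-1)

# Tails: the awakenings cover both rooms, so the bet is placed exactly once and wins.
ev_tails = (1 - p_heads) * (+1)

print(ev_heads + ev_tails)  # 1/4 > 0, so "bet only if blue" is worth following
```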
I don’t remember what the halfer and thirder positions were or which one I consider correct.
Concrete empirical research projects in mechanistic anomaly detection
Capabilities leakages don’t really “increase race dynamics”.
Do people actually claim this? Shorter timelines seem like a more reasonable claim to make. Jumping directly to impacts on race dynamics skips at least one step.
To me it feels like this policy is missing something that accounts for a big chunk of the risk.
While recursive self-improvement is covered by the “Autonomy and replication” point, there is another risk: actors that don’t intentionally cause large-scale harm but use your system to make improvements to their own systems, since they don’t follow your RSP. This type of recursive improvement doesn’t seem to be covered by either “Misuse” or “Autonomy and replication”.
In short, it’s about risks due to the shortening of timelines.
You can see twin birth rates fell sharply in the late 90s
Shouldn’t this be triplet birth rates? Twin birth rates look pretty stable in comparison.
Hmm, yeah, it’s a bit hard to try stuff when there’s no good preview. Usually I’d recommend a rot13 cipher if all else fails, but for number sequences that makes less sense.
I knew about the 2-4-6 problem from HPMOR, so I really like the opportunity to try it out myself. These are my results on the four other problems:
indexA
Number of guesses: 8, of which 3 were valid and 5 non-valid
Guess: “A sequence of integers whose sum is non-negative”
Result: Failure
indexB
Number of guesses: 39, of which 23 were valid and 16 non-valid
Guess: “Three ordered real numbers where the absolute difference between neighbouring numbers is decreasing.”
Result: Success
indexC
Number of guesses: 21, of which 15 were valid and 6 non-valid
Guess: “Any three real numbers whose sum is less than 50.”
Result: Success
indexD
Number of guesses: 16, of which 8 were valid and 8 non-valid
Guess: “First number is a real number and the other two are integers divisible by 5”
Result: Failure
Performance analysis
I’d say that the main failure modes were that I didn’t do enough tests and that I was a very bad number generator. For example, in indexD I made 9 tests of my final hypothesis, 4 of which were valid; the probability that my guess and the actual rule would give the same result for all 9 tests, had I actually been good at randomizing, is very small.
I could also say that I was a bit naive on the first problem and that I’d grown overconfident after two successive successes by the final one.
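To put a rough number on the randomizing point: if a wrong hypothesis agreed with the true rule on, say, 70 % of genuinely random tests (an illustrative figure of mine, not measured from the experiment), nine agreements in a row would be rare.

```python
# Chance that a wrong hypothesis matches the true rule on all 9 random tests,
# assuming 70 % agreement per test (illustrative assumption).
print(0.7 ** 9)  # ~0.04
```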
See the FAQ for spoiler tags; it seems the mods haven’t seen your request. https://www.lesswrong.com/faq#How_do_I_insert_spoiler_protections_
These problems seemed to me similar to the problems at the International Physicists’ Tournament. If you want more problems, check out https://iptnet.info
In case anyone else is looking for a source, a good search term is probably “the Beal effect”. From the original paper by Beal and Smith:
Once the effect is pointed out, it does not take long to arrive at the conclusion that it arises from a natural correlation between a high branching factor in the game tree and having a winning move available. In other words, mobility (in the sense of having many moves available) is associated with better positions
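A minimal simulation of that conclusion (my own sketch, not Beal and Smith’s code): give each successor position an i.i.d. random evaluation and let the player to move pick the best; the backed-up value grows with the number of moves available.

```python
import random

random.seed(0)

def root_value(num_moves):
    # Successor positions get i.i.d. random evaluations in [-1, 1];
    # the side to move takes the maximum.
    return max(random.uniform(-1, 1) for _ in range(num_moves))

trials = 10_000
for moves in (2, 5, 10, 20):
    mean = sum(root_value(moves) for _ in range(trials)) / trials
    print(moves, round(mean, 3))  # mean backed-up value rises with mobility
```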
Or, a counterexample from the other direction: you can’t describe a uniform distribution over the empty set either (I think), and it would feel even weirder to call that “bigger”.
Why would this property mean that it is “bigger”? You can construct a uniform distribution over an uncountable set through a probability density as well. However, using the same measure on a countably infinite subset of the uncountable set would show that the countable set has measure 0.
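The measure-zero claim follows in one line from countable additivity (writing $\mu$ for the uniform measure and $x_1, x_2, \dots$ for the countable subset):

$$\mu\bigl(\{x_1, x_2, x_3, \dots\}\bigr) = \sum_{i=1}^{\infty} \mu(\{x_i\}) = \sum_{i=1}^{\infty} 0 = 0$$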
Intuitions by ML researchers may get progressively worse concerning likely candidates for transformative AI
So we have that
[...] Richard Jeffrey is often said to have defended a specific one, namely the ‘news value’ conception of benefit. It is true that news value is a type of value that unambiguously satisfies the desirability axioms.
but at the same time
News value tracks desirability but does not constitute it. Moreover, it does not always track it accurately. Sometimes getting the news that X tells us more than just that X is the case because of the conditions under which we get the news.
And I can see how, starting from this, you would get that $V(\top) = 0$. However, I think one of the remaining confusions is how you would go in the other direction. How can you go from the premise that we shift utilities to be $0$ for tautologies to saying that we value something in large part by how unlikely it is?
And then we also have the desirability axiom

$$V(X \lor Y) = \frac{P(X)\,V(X) + P(Y)\,V(Y)}{P(X) + P(Y)}$$

for all $X$ and $Y$ such that $P(X \land Y) = 0$, together with Bayesian probability theory.
What I was talking about in my previous comment goes against the desirability axiom: I meant that, in the more general case, there could be subjects that prefer certain outcomes proportionally more (or less) than usual, such that $V(X \lor Y) \neq \frac{P(X)\,V(X) + P(Y)\,V(Y)}{P(X) + P(Y)}$ for some probabilities. As the equality derives directly from the desirability axiom, it was wrong of me to generalise that far.
But, to get back to the confusion at hand, we need to unpack the tautology axiom a bit. If we say that a proposition $\top$ is a tautology if and only if $P(\top) = 1$[1], then we can see that any proposition that is no news to us has zero utils as well.
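One way to spell that out, combining $V(\top) = 0$ with the desirability axiom, for any $X$ with $P(X) = 1$ (writing $\top = X \lor \lnot X$):

$$0 = V(\top) = V(X \lor \lnot X) = \frac{P(X)\,V(X) + P(\lnot X)\,V(\lnot X)}{P(X) + P(\lnot X)} = \frac{1 \cdot V(X) + 0}{1} = V(X)$$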
And I think it might be well to keep in mind that learning that e.g. sun tomorrow is more probable than we once thought does not necessarily make us prefer sun tomorrow less, but the amount of utils for sun tomorrow has decreased (in an absolute sense). This fits nicely with the money analogy, because you wouldn’t buy something that you expect with certainty anyway[2], but this doesn’t mean that you prefer it any less compared to some other, worse outcome that you expected some time earlier. It is just that we’ve updated from our observations so that the utility function now reflects our current beliefs. If you prefer $X$ to $Y$, then this is a fact regardless of the probabilities of those outcomes. When the probabilities change, what changes is the mapping from propositions to real numbers (the utility function), and it only changes by a shift (and possibly a scaling) by a real number.
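A small numeric illustration of that shift (the numbers are mine, not from the discussion): two exhaustive, exclusive outcomes, re-centred so that $V(\top) = 0$ holds before and after the update.

```python
# Before the update: V(top) = P(sun) V(sun) + P(rain) V(rain) = 0.
p_sun, v_sun, v_rain = 0.5, 2.0, -2.0
assert p_sun * v_sun + (1 - p_sun) * v_rain == 0

# We learn that sun is more likely; shift utilities so V(top) = 0 again.
p_sun = 0.9
shift = p_sun * v_sun + (1 - p_sun) * v_rain  # 1.6
v_sun, v_rain = v_sun - shift, v_rain - shift
print(v_sun, v_rain)  # 0.4 -3.6: sun's utils dropped, but the preference
                      # ordering (and the gap between outcomes) is unchanged
```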
At least, that is the interpretation I’ve made.
Skimming the methodology, it seems to be a definite improvement and does tackle the shortcomings mentioned in the original post, to some degree at least.
Isn’t that just a question of whether you assume expected utility or not? In the general case it is only utility, not expected utility, that matters.
Another hypothesis: Your description of the task is
From METR’s recent investigation into long tasks, you would expect current models not to perform well on this.
I doubt a human professional could do the tasks you describe in anything close to an hour, so perhaps it’s just currently too hard and the current improvements don’t make much of a difference for the benchmark, though they might in the future.