I took the survey two days ago. It was fun. I think I was well calibrated on the calibration questions, but sadly there was no “results” section.
Is it possible to self-consistently believe you’re poorly calibrated? If you believe you’re overconfident, then you would start making less confident predictions, right?
Being poorly calibrated can also mean you’re inconsistent: overconfident in some cases and underconfident in others.
You can be imperfectly synchronised across contexts & instances.
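The kind of miscalibration described above can be made concrete with a simple check: bucket predictions by stated confidence and compare each bucket’s stated confidence against its empirical accuracy. A minimal sketch, using hypothetical data and an arbitrary ±5-point tolerance:

```python
# Calibration check sketch: group predictions by stated confidence
# and compare each group's confidence level to its empirical accuracy.
from collections import defaultdict

def calibration_report(predictions):
    """predictions: list of (stated_confidence, was_correct) pairs."""
    buckets = defaultdict(list)
    for conf, correct in predictions:
        buckets[round(conf, 1)].append(correct)
    report = {}
    for conf in sorted(buckets):
        outcomes = buckets[conf]
        accuracy = sum(outcomes) / len(outcomes)
        # Tolerance of 0.05 is an arbitrary illustrative choice.
        if accuracy < conf - 0.05:
            verdict = "overconfident"
        elif accuracy > conf + 0.05:
            verdict = "underconfident"
        else:
            verdict = "well calibrated"
        report[conf] = (accuracy, verdict)
    return report

# Hypothetical data: right 6/10 times at 90% confidence, but
# 9/10 times at 60% confidence -- miscalibrated in both directions.
preds = [(0.9, True)] * 6 + [(0.9, False)] * 4 + \
        [(0.6, True)] * 9 + [(0.6, False)] * 1
print(calibration_report(preds))
```

On this data the 90% bucket comes out overconfident and the 60% bucket underconfident, which is exactly the inconsistent-in-both-directions case: neither uniformly shrinking nor inflating one’s confidence would fix it.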