Relevant: https://andymatuschak.org/hmwl/
acertain
o1 CoT: The user is asking for more references about brownies. <Reasoning about what the references should look like> So, the assistant should list these references clearly, with proper formatting and descriptions, and provide actual or plausible links. Remember, the model cannot retrieve actual URLs, so it should format plausible ones.
this might encourage it to make up links
description of (network, dataset) for LLMs ?= a model that takes as input the index of a prompt in the dataset, and is then equivalent to the original model conditioned on that prompt
There exist inexpensive real CO2 sensors, e.g. https://www.sparkfun.com/products/22396 . The datasheet says it only updates every 5 seconds & has a 60 s response time “for achieving 63% of a respective step function”, which I guess is what the parent comment means by “They’ll likely be extremely slow”.
Probably worth searching e.g. digikey for sensors with faster response time.
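The “63% of a step function” figure is the standard time-constant convention: assuming the sensor behaves like a first-order (exponential) lag, which is my assumption rather than anything the datasheet states, the fraction of a step change it registers after t seconds is 1 − e^(−t/τ). A quick sketch:

```python
import math

def first_order_response(t, tau=60.0):
    """Fraction of a step change registered after t seconds,
    assuming a first-order (exponential) sensor lag with time
    constant tau. At t = tau the reading reaches 1 - 1/e of the step."""
    return 1.0 - math.exp(-t / tau)

# With the datasheet's 60 s time constant:
print(round(first_order_response(60), 3))   # ~0.632, i.e. the "63%" figure
print(round(first_order_response(180), 3))  # ~0.95 after three time constants
```

So even a “60 s response time” sensor takes roughly three minutes to get within 5% of the new value, which matches the “extremely slow” complaint.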
What about specialized algorithms for problems (e.g. planning algorithms)?
IANAL, but I think that this is currently impossible due to anti-trust regulations.
I don’t know anything about anti-trust enforcement, but it seems to me that this might be a case where labs should do it anyways & delay hypothetical anti-trust enforcement by fighting in court.
blueiris’s posts read to me as a combination of good concepts & poor quality attacks/attempts to defend leverage (or something?). Personally I’d mind the attacks more if they were more successful and/or less obvious I think? As-is they’re annoying but don’t seem very dangerous epistemically.
Trying to reduce the amount of compute risks increasing hardware overhang once that compute is rebuilt. I think trying to slow down capabilities research (e.g. by getting a job at an AI lab and being obstructive) is probably better.
edit: meh idk. Whether or not this improves things depends on how much compute you can destroy & for how long, ml scaling, politics, etc etc. But the current world where only big labs with large compute budgets can achieve SOTA (arguable, but possibly more true in the future), and where there’s less easy stuff to do (scaling) to get better performance, both seem good.
I personally think work on reduced precision inference (e.g. 4 bit!) is probably useful, as circuits should be easier to analyze than floats.
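To illustrate what 4-bit inference means concretely, here’s a minimal sketch of symmetric per-tensor quantization (a hypothetical toy scheme, not any particular library’s): each weight is mapped to an integer in [−7, 7] with one shared scale, so analysis only has to deal with 15 discrete values per weight rather than arbitrary floats.

```python
def quantize_int4_symmetric(ws):
    """Map a list of float weights to integers in [-7, 7] with one
    shared scale (symmetric per-tensor quantization; toy sketch)."""
    scale = max(abs(w) for w in ws) / 7.0
    q = [max(-7, min(7, round(w / scale))) for w in ws]
    return q, scale

def dequantize_int4(q, scale):
    return [qi * scale for qi in q]

weights = [0.31, -0.7, 0.12, 0.66]
q, s = quantize_int4_symmetric(weights)
# q holds small integers; dequantize_int4(q, s) approximates weights
# with per-weight error at most scale/2 (absent clipping).
```

The point for interpretability is that the quantized network’s weights live in a small finite set, closer to a discrete circuit than to real-valued matrices.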
How to convert simple predictions/probability distributions (e.g. $stock will go down with probability x%, at a date distributed around day Y, by an amount normally distributed around Z) into positions.
How much should the average person worry about tail risk? The average EA?
Less naive portfolio construction.
What tools from quantitative finance might be useful outside of finance? Econometrics & probabilistic modeling as used in finance (or as used 8 years ago, or whatever)? Risk modeling?
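For the prediction-to-position question above, one standard starting point (in the simplest binary case, not the full date/magnitude distribution) is the Kelly criterion. A minimal sketch; in practice people typically bet a fraction of Kelly and account for correlations and estimation error:

```python
def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll for a binary bet with
    win probability p and net odds b (win b per 1 staked):
    f* = (p*(b+1) - 1) / b.  Negative output means don't bet."""
    return (p * (b + 1) - 1) / b

# 60% chance of winning an even-money bet:
print(round(kelly_fraction(0.6, 1.0), 3))  # 0.2 -> stake 20% of bankroll
```

Going from a full distribution (direction, date, magnitude) to a position generally means maximizing expected log wealth over that distribution, of which this formula is the two-outcome special case.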
I guess this is sorta about your point 3, which I disbelieve (though algorithms for tasks other than learning are also important). Currently, Bayesian inference vs. SGD is a question of how much data you have (SGD wins except with very little data). For small to medium amounts of data, even without AGI, I expect SGD to eventually lose to better inference algorithms. For many problems I have the intuition that it’s ~always possible to improve performance with more complicated algorithms (e.g. SAT solvers). All that together makes me expect there to be inference algorithms that scale to very large amounts of data (not doing full Bayesian inference, but rather some complicated approximation).