AI Safety Theorist: In my arXiv paper I invented the Squiggle Maximizer as a cautionary tale
AI Safety Company: At long last, we have created the Squiggle Maximizer from classic arXiv paper Don’t Create The Squiggle Maximizer
If you think the net costs of using ML techniques to improve our rationalist/EA tools outweigh the benefits, then there’s a real argument to be had there.
Many Guesstimate models are now used to make estimates about AI safety.
I’m really not a fan of the “our community must not use ML capabilities in any form” position, though I’m not sure where others here might draw the line.
My comment was 99% a joke. Though if you used Squiggle to perform an existential risk-reward analysis of whether to use Squiggle, who knows what would happen. :-)
Thanks for clarifying! That really wasn’t clear to me from the message alone.
> Though if you used Squiggle to perform an existential risk-reward analysis of whether to use Squiggle, who knows what would happen
Yep, that’s in the works, especially if we can have basic relative value forecasts later on.
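For what it’s worth, here’s a toy sketch of what such a self-referential risk-reward analysis might look like in Squiggle. Every variable name and parameter value below is an illustrative assumption I made up for the example, not a real estimate:

```squiggle
// Toy sketch: expected annual value of using Squiggle, with made-up numbers.
hoursSavedPerYear = 100 to 1000 // time the tooling saves the community (90% CI)
valuePerHour = 20 to 80         // dollar-equivalent impact per hour saved
annualBenefit = hoursSavedPerYear * valuePerHour

pBadOutcome = 0.00001 to 0.001  // chance the tooling meaningfully backfires
badOutcomeCost = 1M to 100M     // dollar-equivalent cost if it does
expectedCost = pBadOutcome * badOutcomeCost

annualBenefit - expectedCost    // distribution over net expected annual value
```

The output is itself a distribution, so you’d read off the probability that the net value is negative rather than a single yes/no answer.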