Yes, I really don’t see how this would work right now. If I try doing Taylor series, which is what I’d start with for something like this, I very much get the opposite result.
I’m actually (hopefully) joining AI Safety Camp to work on your topics next month, so maybe we can talk about this more then?
Yeah definitely.