Instead, Aligned AI used its technology to automatically tease out the ambiguities in the original data.
Could you provide any technical details about how this works? Otherwise I don’t know what to take from this post.
It’s an implementation of the concept extrapolation methods we talked about here: https://www.lesswrong.com/s/u9uawicHx7Ng7vwxA
The specific details will be in a forthcoming paper.
Also, you’ll be able to try it out yourself soon; sign up as an alpha tester at the bottom of the page here: https://www.aligned-ai.com/post/concept-extrapolation-for-hypothesis-generation
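In the meantime, here is a minimal sketch of the general flavour of approach discussed in the linked sequence: train multiple classification heads that all fit the labelled data but are encouraged to disagree on ambiguous unlabelled data, so each head latches onto a different candidate extrapolation of the concept. To be clear, this is an illustrative reconstruction, not Aligned AI's actual implementation; the class names, loss weighting, and hyperparameters below are all assumptions.

```python
# Illustrative sketch only: a simple "two heads that disagree" setup,
# one common way to surface ambiguities in labelled data. Aligned AI's
# actual method may differ; names and hyperparameters here are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadClassifier(nn.Module):
    """Shared encoder with two heads; each head can extrapolate the
    labelled concept differently on ambiguous inputs."""
    def __init__(self, in_dim: int, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, n_classes) for _ in range(2)]
        )

    def forward(self, x):
        z = self.encoder(x)
        return [head(z) for head in self.heads]

def disagreement_loss(logits_a, logits_b):
    # Penalise agreement: the expected probability that both heads
    # predict the same class. Minimising this pushes the heads toward
    # different candidate features.
    p_a = F.softmax(logits_a, dim=-1)
    p_b = F.softmax(logits_b, dim=-1)
    return (p_a * p_b).sum(dim=-1).mean()

def train_step(model, opt, x_lab, y_lab, x_unlab, lam=1.0):
    opt.zero_grad()
    # Both heads must fit the (unambiguous) labelled data...
    fit = sum(F.cross_entropy(logits, y_lab) for logits in model(x_lab))
    # ...while being pushed to disagree on the ambiguous unlabelled data.
    out_a, out_b = model(x_unlab)
    loss = fit + lam * disagreement_loss(out_a, out_b)
    loss.backward()
    opt.step()
    return loss.item()
```

On this kind of setup, the inputs where the trained heads end up disagreeing are exactly the ambiguous cases the post is talking about.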