Interesting. Thanks for your thoughts. I think this difference of opinion shows me where I’m not fully explaining my thinking, and also some differences between human thinking and LLM “thinking”. In humans, the serial nature of linking thoughts together is absolutely vital to our intelligence. But LLMs already do a lot of serial processing within the production of each individual utterance.
I think I need to write another post that goes much further into my reasoning here to work this out. Thanks for the conversation.
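To make that contrast concrete, here’s a minimal sketch in Python of what I mean by “linking thoughts together” at the wrapper level, on top of the seriality that already happens inside each LLM call. The `call_llm` function is a hypothetical stub standing in for a real completion API, not any particular library:

```python
# A minimal sketch (not a working agent) of the distinction I have in mind,
# with a hypothetical `call_llm` stub in place of a real completion API.

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call. Inside a real model there is already a lot of
    serial work per utterance: tokens are generated one after another."""
    return f"[model's next thought, given: {prompt[-60:]}]"

def serial_thought_chain(goal: str, steps: int = 3) -> list[str]:
    """The wrapper's contribution: link separate calls so each output becomes
    context for the next, i.e. serial linking of thoughts across utterances."""
    thoughts: list[str] = []
    context = goal
    for _ in range(steps):
        thought = call_llm(f"Goal: {goal}\nChain so far: {context}\nNext step?")
        thoughts.append(thought)
        context += "\n" + thought  # carry the chain forward
    return thoughts

if __name__ == "__main__":
    for i, t in enumerate(serial_thought_chain("work out where my explanation is incomplete"), 1):
        print(i, t)
```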
Glad it was productive!
I perceive a lot of inferential distance on my end as well. My model here is informed by a number of background conclusions that I’m fairly confident in, but which haven’t actually propagated into the commonly shared set of background assumptions.
I have found this conversation very interesting. I’d be very interested if you could do a quick summary or writeup of the background conclusions you are referring to. I have my own thoughts about the feasibility of massive agency gains from AutoGPT-like wrappers, but would like to hear yours.
I may make a post about it soon. I’ll respond to this comment with a link or a summary later on.
Here’s the future post I was referring to!
I saw it. I really like it. Despite my relative enthusiasm for LMCA alignment, I think the points you raise there mean it’s still quite a challenge to get it right enough to survive.
I’ll try to give you a substantive response on that post today.