Hi, nim!
Thanks for commenting : )
Yes, exactly: I used speech-to-text, but specifically the ChatGPT speech-to-text feature in their app, because I like the UI better and I think it performs better too. Yeah, the heal/heel thing miffed me slightly, but I think it's a fun artifact since it doesn't actually change the meaning.
Well, for one, I didn't prompt for a whole essay. In one chat I lightly edited the snippets from my walk; then I took the final essay generated in another chat about the Black Chess Box and synthesised it into the sidebar, and did similarly with a different conversation again for part 2. Then finally, and this is where Claude has the advantage, because at this point the context would be too large for ChatGPT-4o for instance, you just ask for either a brief or extended conclusion to everything discussed in the chat. In summary: develop the sections in separate conversations, then bring them all together in one final chat. This worked well for this essay because the progression from section to section didn't need to be that strong, but I'm not sure what one would do if that were the case.
I have tried other methods in the past, and in general there's no one-size-fits-all approach (for instance, sometimes the project function lets you tackle reports over 10 pages long; other times it just gets stuck in loops). The best thing to do is to leverage the advantages you have and experiment.
Anyway, I hope that answers your question.
Matthew
Thank you! Everything was AI-generated (and unfiltered) to see how well Gemini could understand ChatGPT's abstractions in a zero-shot setting. But yes, I should have edited and contextualised : )
Hmm, I hadn't thought of the implications of chaining the logic behind the superintelligence's policy; thanks for highlighting it!
The main aim of the post was to highlight that there is an opportunity cost to prioritising contemporary beings, and that alignment doesn't solve that issue, but I suppose there are also some normative claims to the effect that this policy could be justified.
Nevertheless, I’m not sure that the paradox necessarily applies to the policy in this scenario. Specifically, I think
>as long as we discover ever vaster possible tomorrows
doesn't hold. The fact that the accessible universe is finite, and that there is only a finite amount of time before heat death, means there should be some ultimate possible tomorrow.
Also, I think that sacrifices of the kind described in the post come in discrete steps, with potentially large gaps of time between them, allowing you to realise the gains of a particular future before the next sacrifice, if that makes sense.
I only skimmed that category, but if I'm not mistaken the kind of systems I describe in the piece are special cases of situations where the boundary between one agent and another is unclear, pivotal, insightful, etc.