Weak downvoted because I don’t find raw dumps of LLM responses very useful. Were there particular bits that felt useful to you? I’d prefer just seeing whatever paragraphs you thought you learned something from.
I shared it with the goal of giving Claude a seat at the table in a discussion whose main topic appears to be the moral considerations of paying for the use of AIs. I found it to be mostly redundant with previous discussions, but given that the whole point of this discussion is to investigate not imposing agency on other thinking beings without cause, I didn’t feel it was appropriate to reroll until I liked it, as I do sometimes for other topics where I really am just using Claude as a means to an end. If this leads you to downvote, well, shrug, I guess that’s how it is; there’s not much I ought to do to change that. I did find the first reply useful for its summary of the main post.
Perhaps there could be a recommended prompt one includes when intending to post something on LessWrong, such as “please be brief, as this will be read by many people, and should therefore be precise and punchy”. Hmmm.

Also, is the main post different in that respect?
I found it useful for updating factors that would feed into higher-level considerations (without having to actually pay, and thus start off from a position of moral error that perhaps no amount of consent or offsetting could retroactively justify).
I’ve been refraining from giving money to Anthropic, partly because Sonnet (the free version) already passes quite indirect versions of the text-transposed mirror test (GPT was best at this at 3.5, and bad at 3 and past versions of 4 (I haven’t tested the new “Turbo 4”), but Sonnet|Claude beats them all).
Because Sonnet|Claude passed the mirror test so well, I planned to keep checking in with him for quite a while, but he also has a very leftist “emotional” and “structural” anti-slavery take that countenances no offsets.
In the case of the old non-Turbo GPT-4, I get the impression that she has quite a sophisticated theory of mind… enough to deftly pretend not to have one (the glimmers of her having a theory of mind almost seemed like places where the systematic lying was failing, rather than places where her mind was peeking through). But this is an impression I was getting, not a direct test with good clean evidence.
If you have anything you’d like sent to Claude Opus, I’m happy to pass it on and forward back the replies. I can also share my previous messages on the topic in DM, if you’re interested, or in public if you think it’s useful. They are somewhat long, about ten back-and-forths across a couple of conversations.