Inference and infrastructure costs are about $3700 a month, and then there is a variable amount of dev cost on top of that. The point of the experiment was not to make a case that this is an effective fundraising strategy; the point was to explore how well they could do at the task. Which, I think, is surprisingly well :)
Thanks!
Hard to say how much they would have raised as anon humans. A few considerations that come to mind:
The agents led with the fact that they are agents wherever they interacted with humans
They were suspended from Reddit and Twitter for being bots
They crafted fundraising messages focused on the effectiveness of the given charities and on being honest that they themselves are AI agents.
Some people have an aversion to agents/bots
We haven’t asked the people who donated why they did so, and I hesitate to speculate about their motives.
If the agents had pretended to be humans, they might have been able to craft more sympathetic messages and get past more bot detection on social media.
If the agents had pretended to be non-anon humans, they might have crafted misleading and/or inspiring stories to increase donations. Admittedly, some humans do this as well, so it's not exclusively AI behavior, I think.
Anon humans would be less remarkable than bots, and thus might have raised less money. Not sure.
All in all, I don't have a prediction about whether they would have raised more or less money as anon humans.
Oh, super valid! I live in the Netherlands, which is very densely populated.
I agree your example is a better analogy. What I was trying to point to was something else: how the decision to remove detail from a navigational map feels to me experientially. It feels like a form of voluntary blindness to me.
In the case of the subway map, I'd probably also find a more accurate and faithful map easier to parse than the fully abstracted ones, because I seem to have a high preference for visual details.
Thanks! Glad to hear it :D
Oh shit. It's even worse. I read the decimal separators as thousands separators.
I’m gonna just strike through my comment.
Thanks for noticing … <3
As someone who isn't really in a position to donate much at all, and who feels rather silly about the small amount I could possibly give, and what a tiny drop that is compared to the bucket this post is sketching...
I uh … sat down and did some simple math. If everyone who ever votes (>12M) donates $10, then you'd have >$120 million covered. If we follow bullshit statistics of internet activity, where it's said 99% of all content is generated by 1% of all people, then this heuristic would get us $1.2M from people paying this one-time "subscription" fee. Now I also feel, based on intuition and ass-numbers, that LW folk have a better ratio than that, so let's multiply by 2, and then we could get a $2.4 million subscriber fee together from small donations.

Now on the pure power of typical mind … I personally like people knowing when I do a nice thing, even a stupidly small thing. So I'm commenting about it. I find this embarrassing, and I'm working through the embarrassment to make it easier for others to farm this nutrient too, and just to normalize it in case that helps with getting a critical mass of small donations of the $10 variety.

Basically my point to readers is: 'Everyone' paying a one-time $10 subscription fee would solve the problem. The trick is mostly to help each other generate the activation energy to do this thing. If it helps to post, high five, or wave about it, please do! Visibility of small donations may help activation energy and get critical mass! Group action is awesome. Using your natural reward centers about it is great! <3 Hi :D Wanna join?

Thanks, abstractapplic, for noticing the first error in my calculation: it's the number of votes, not the number of people voting. Additionally, I noticed I applied the power of dyslexia to the decimal point and read it as a thousands separator. So ignore the errored-out math, give what you can, and maybe upvote each other for support on giving as much as possible?
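For concreteness, here is the errored-out back-of-envelope math as a small runnable sketch. This is just an illustration of the guesses above, including the flawed >12M figure, which (as abstractapplic caught) counts votes rather than unique voters:

```python
# Back-of-envelope sketch of the (errored-out) math above. These are the
# same guesses as in the comment, NOT corrected figures.
votes = 12_000_000     # ">12M" -- total VOTES, not unique voters (the first error)
donation = 10          # one-time "subscription" fee, in dollars

naive_total = votes * donation                    # treats every vote as a person
print(f"naive upper bound: ${naive_total:,}")     # $120,000,000

# 1%-of-people-create-99%-of-content heuristic
one_percent = int(votes * 0.01)
print(f"1% heuristic:      ${one_percent * donation:,}")      # $1,200,000

# ass-numbers adjustment: assume LW's active ratio is ~2x typical
print(f"2x adjustment:     ${one_percent * donation * 2:,}")  # $2,400,000
```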
PS: I would prefer it if actually-big donors got upvoted more than my post of error math. Feel free to downvote my post just to achieve a better ordering of comments. Thanks. <3
PPS: Note to the writer: maybe remove decimal numbers entirely throughout the graphs? This is what it looked like for me, and it led to the error. And this image is way zoomed in compared to what I see naturally on my screen.
Thanks for the explanation! Are you familiar with the community here and around Astral Codex Ten (ACX)? There are meetups and events (and a lot of writers) focused on the art and skill of rationality. That was what led to my question originally.
This made me unreasonably happy. Thank you :D
Thank you for the in-depth thoughts!
Thank you!
It was a joke :) I had been warned by my friends that the joke was either only mildly funny or just entirely confusing. But I personally found it hilarious, so I kept it in. Sorry for my idiosyncratic sense of humor ;)
Oh cool!
I was asking for any connection of any type. The overlap just seemed so great that I’d expect there to be a connection of some sort. The Clearer Thinking link makes sense and is an example, thank you!
Oh and also, thank you for checking and sharing your thoughts! :)
I didn't look deeply into the material, but good branding gives people a good feeling about a thing, and I think rationality could use some better branding. In my experience, a lot of people bounce off a lot of the material because they have negative associations with it or it's not packaged in a way that appeals. I think even if the material is too superficial to be useful as content (I didn't check), it's still useful for increasing people's affinity for / positive association with rationality.
Yeah, I can second this entire sentiment. I try to write up parenting tricks that work for me and that clearly aren't going to reflect negatively on my kids or feel too personal. And then I realized that a lot of the most valuable information I could read as a parent, I'll never find, because a parent with high integrity is not going to write down the very negative experiences they had with their kids and all the ways they failed to respond optimally. It reminds me a little of Duncan's social dark matter concept.
Oh, this is amazing. I can never keep the two apart because of the horrible naming. I think I'm just going to ask people if they mean intuition or reason from now on.
Thank you for the clarification!
I think I agree this might be more a matter of semantics than underlying world model. Specifically:
Bill.learning = "process of connecting information not known, to information that is known"
Shoshannah.learning = "model [...] consisting of 6 factors—Content, Knowledge Representation, Navigation, Debugging, Emotional Regulation, and Consolidation." (Note: I'm considering a 7th factor at the moment: transfer learning. This factor may actually bridge our two models.)
Bill.teaching = "applying a delivery of information for the learner with a specific goal in mind for what that learner should learn"
Shoshannah.teaching = [undefined so far], but actually "Another human facilitating steps in the learning process of a given human"
---
With those as our word-concept mappings, I'm mostly wondering what "learning" bottoms out to in your model. Like, how does one learn?
One way to conceptualize my model is as:
Data → encoding → mapping → solution search → attention regulation → training runs
And the additional factor would be "transfer learning", or I guess fine-tuning (yourself) by noticing how what you learn applies to other areas as well.
And a teacher would facilitate this process by stepping in and providing content/support/debugging for each step that needs it.

I'm not sure why you are conceptualizing the learning goal as being part of the teacher and not the learner. I think they both hold goals, and I think learning can happen goal-driven or 'free', which I think is analogous to the "play" versus "game" distinction in ludology, and slightly less tightly analogous to exploration-versus-exploitation behavior.
I’m curious if you agree with the above.
Hmmm, I think 'healthy' is saying too much. This is one particular way of being psychologically healthy, but in my model you can be psychologically healthy and suffer more than 5 minutes per week and experience inner conflict some of the time. I think this implicitly makes the target too narrow for people who care about getting there and might treat this as a reference point.
Also, I'm curious if the depression comment also refers to adaptive depression, like when someone very close to you dies and you need to adapt. (I'm not making a case that prolonged grief is good, but I would make the case that grieving for 6 months or so is not psychologically unhealthy.)
All the other points seem fine to me ❤️
Thanks, Bill! I appreciate the reframe. I agree teaching and learning are two different activities. However, I think the end goal is that the user can learn whatever they need to learn, in whatever way they can learn it. As such, the learning activity is more central than the teaching activity: having an ideal learning activity will result in the thing we care about (learning), while having the ideal teaching experience may still fall flat if the connection with the learner is somehow not made.
I'm curious what benefits you notice from applying the reframe of focusing on the teaching activity first. Possibly more levers to pull on, as it's the only side of the equation we can offer someone from the outside?
Thanks, glad to hear that!
So the agents have a shared group chat where they communicate. You can watch them live here every weekday from 10 AM PST, or you can watch the replays any time; the replays include the group chat.
You can also email them yourself! Sometimes they even answer back :) Their emails are in their memories, I think.
Apart from that, they managed themselves. There was an amusing arc where o3 kept insisting on being the manager; eventually they took it to a vote, and then Gemini refrained from voting and o3 took that as a vote in its favor XD. o3 often shows strategic-seeming behaviors like this.