thanks!!!
I think there is an option for whether they can be promoted to front page.
When I am writing my articles, I prefer a workflow in which I can show my article to selected others for discussion and review before I publish. This currently does not seem to be possible without giving them co-authorship, which is often not what I want.
This could be solved, for example, by adding an option that makes the article link accessible to others even while it is in draft mode.
Update: Because I want to include this helpful new paragraph in my article and I am unable to reach Will, I am now adding it anyway (it seems to me that this is in the spirit of what he intended). @Will: please message me if you object.
Lovely; can I add this to the article if I credit you as the author?
Good idea! I thought of this one: https://energyhistory.yale.edu/horse-and-mule-population-statistics/
On How Yudkowsky Is Perceived by the Public
Over recent months I have gathered some experience as an AI safety activist. One of my takeaways is that many people I talk to do not understand Yudkowsky’s arguments very well.
I think this is mainly for two reasons:
- A lot of his reasoning requires a kind of “mathematical intuition” most people do not have. In my experience it is possible to make correct and convincing arguments that are easier to understand, or to invest more effort into explaining some of the more difficult ones.
- I think he is used to a LessWrong lingo that sometimes gets in the way of communicating with the public.
Still, I am very grateful that he continues to address the public, and I believe it is probably a net positive. Over recent months the public AI safety discourse has begun to snowball into something bigger, with other charismatic people continuing to pick up the torch, and I think his contribution to these developments has probably been substantial.
I think a significant part of the problem is not the LLM’s trouble distinguishing truth from fiction; it’s rather convincing it through your prompt that the output you want is the former and not the latter.
#parrotGang
My argument does not depend on the AI being able to survive inside a botnet. I mentioned several alternatives.
You were the one who made that argument, not me. 🙄
Of the universal approximation theorem
Between people in international forums there is usually a gentlemen’s agreement not to be condescending about things like language comprehension or spelling errors, and I would like to continue this tradition, even though your own paragraphs would offer me ample opportunity to do the same.
Based on your phrasing I sense you are trying to object to something here, but it doesn’t seem to have much to do with my article. Is this correct or am I just misunderstanding your point?
LLMs use one or more hidden layers, so shouldn’t the proof apply to them?
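For reference, a rough sketch of the classical single-hidden-layer statement (Cybenko 1989 / Hornik 1991), which is presumably the version meant here: for any continuous $f$ on a compact set $K \subset \mathbb{R}^n$, a sigmoidal (or, more generally, non-polynomial) activation $\sigma$, and any $\varepsilon > 0$, there exist $N$, $\alpha_i$, $w_i$, $b_i$ such that

$$\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma\bigl(w_i^{\top} x + b_i\bigr) \right| < \varepsilon.$$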
The delta in power efficiency is currently ~1000x in favor of brains ⇒ brain: ~20 W, AGI: ~20 kW. Electricity in Germany costs ~0.33 Euro per kWh, so 20 kWh ≈ 6.60 Euro ⇒ running our AGI would, if we assume that your description of the situation is correct, cost around 6-7 Euros of energy per hour, which is cheaper than a human worker.
So … while I don’t assume that such estimates need to be correct or apply to an AGI (which doesn’t exist yet), I don’t think you are making a very convincing point so far.
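To make the back-of-the-envelope estimate above explicit, here is a minimal sketch; the ~20 W brain, ~1000x efficiency gap, and ~0.33 Euro/kWh figures are the assumptions from the comment, not measured values:

```python
# Back-of-the-envelope energy cost of running a hypothetical AGI,
# using the assumed figures from the comment above.

BRAIN_POWER_W = 20          # human brain, ~20 watts
EFFICIENCY_GAP = 1000       # assumed delta in favor of brains
PRICE_EUR_PER_KWH = 0.33    # assumed German electricity price

agi_power_kw = BRAIN_POWER_W * EFFICIENCY_GAP / 1000    # -> 20 kW
energy_per_hour_kwh = agi_power_kw * 1                  # one hour of operation
cost_per_hour_eur = energy_per_hour_kwh * PRICE_EUR_PER_KWH

print(f"AGI power draw: {agi_power_kw:.0f} kW")
print(f"Energy cost per hour: {cost_per_hour_eur:.2f} EUR")  # ~6.60 EUR
```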
I don’t really know what to make of this objection, because I have never seen the stochastic parrot argument applied to a specific, limited architecture as opposed to the general category.
Edit: Maybe suggest how I could rephrase this to improve my argument.
Good point. I think I will add it later.
I guess this means they found my suggestion reasonable and implemented it right away :D I am impressed!