Milan Weibel https://weibac.github.io/
Milan W
Interesting. Do you have the code published somewhere?
I applaud the scholarship, but this post does not update me much on Gary Marcus. Still, checking is good, bumping against reality often is good, epistemic legibility is good. Also, this is a nice link to promptly direct people who trust Gary Marcus to. Thanks!
Hi, sorry for soft-doxxing you, but this information is trivially accessible from the link you provided and helps people evaluate your work more quickly:
danilovicioso.com
Cheers!
Oh. That’s nice of her.
In the Gibbs energy principle quote you provide, are you implying the devil is roughly something like “the one who wishes to consume all available energy”? Or something like “the one who wishes to optimize the world such that no energy source remains untapped”?
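(For context, and as my own guess at what “available energy” is doing in the quote: Gibbs free energy is the standard thermodynamic notion of energy available to do useful work at constant temperature and pressure,

$$G = H - TS,$$

where $H$ is enthalpy, $T$ temperature, and $S$ entropy. On that reading, “consuming all available energy” would mean driving $\Delta G$ to zero everywhere.)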
This post is explicitly partisan and a bit hard to parse for some people, which is why I think they bounced off and downvoted, but I think this writer is an interesting voice to follow. I mean, a conservative who knows Deleuze and cybernetics? Sign me up! (even though I’m definitely not a conservative)
Hi! Welcome! Is your thesis roughly this:
“The left latched onto the concept of ‘diversity’ to make the right hate it, thus becoming more homogeneous and dumber”
I think the thesis of the poster is roughly: the left latched onto the concept of “diversity” to make the right hate it, thus becoming more homogeneous and dumber. Seems plausible, yet a bit too clever to be likely.
All of it. Thinking critically about AI outputs (and also human outputs), and taking mitigating measures to reduce the bullshit in both.
Yeah, people in here (and in the EA Forum) are participating in a discussion that has been going on for a long time, and thus we tend to assume that our interlocutors have a certain set of background knowledge that is admittedly quite unusual and hard to get the hang of. Have you considered applying to the intro to EA program?
Thank you for doing that, and please keep doing it. Maybe also run a post draft through another human before posting, though.
Huh. Maybe. I think the labs are already doing something like this, though. Some companies pay you to write stuff more interesting than internet mediocrity. They even pay extra for specialist knowledge. Those companies then sell that writing to the labs, who use it to train their LLMs.
Side point: Consider writing shorter posts, and using LLMs to critique and shorten rather than to (co)write the post itself. Your post is kind of interesting, but a lot longer than it needs to be.
Huh. OK that looks like a thing worth doing. Still, I think you are probably underestimating how much smarter future AIs will get, and how useful intelligence is. But yes, money is also powerful. Therefore, it is good to earn money and then give it away. Have you heard of effective altruism?
Well, good luck creating AI capitalists, I guess. I hope you are able to earn money with it. But consider that your alpha is shrinking with every passing second, and that what you will be doing has nothing to do with solving alignment.
Because building powerful AI is also hard. Also, it is very expensive. Unless you happen to have a couple billion dollars lying around, you are not going to get there before OpenAI or Anthropic or Google Deepmind.
Also, part of the problem is that people keep building new labs. Safe Superintelligence Inc. and Anthropic are both splinters from OpenAI. Elon left OpenAI over a disagreement and then founded xAI years later. Labs keep popping up, and the more there are, the harder it is to coordinate to not get us all killed.
Hi. The point of AI alignment is not whether the first people to build extremely powerful AI will be “the good guys” or “the bad guys”.
Some people here see the big AI labs as evil, some see the big AI labs as well-intentioned but misguided or confused, some even see the big labs as being “the good guys”. Some people in here are working to get the labs shut down, some want to get a job working for the labs, some even already work for them.
Yet, we all work together. Why? Because we believe that we may all die even if the first people building super-AIs are the most ethical organization on Earth. Because aligning AI is hard.
EDIT: See this post for understanding why even smart and well-intentioned people may get us all killed from AI.
I developed a simple first-order mechanism that measures the divergence between initial, user-introduced insights and their subsequent reproduction by AI. For instance, using a vector-space model of semantic representations, I compared the detailed descriptions provided by the user with the AI’s output.
Can we see the code for this? It would further discussion a lot.
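To make the question concrete: here is my own guess at what such an embedding-based divergence measure could look like. This is not your code; the package and model name are just illustrative assumptions.

```python
# A minimal sketch of one way to measure "divergence" between a user's
# original text and the AI's reproduction of it, via embeddings.
# Assumes the sentence-transformers package; the model name is illustrative.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

def divergence(user_text: str, ai_text: str) -> float:
    """Return 1 - cosine similarity between the two texts' embeddings."""
    u, a = model.encode([user_text, ai_text])
    cos = np.dot(u, a) / (np.linalg.norm(u) * np.linalg.norm(a))
    return 1.0 - float(cos)

print(divergence("the user's original detailed description",
                 "the AI's later reproduction of it"))
```

If your mechanism differs from this (e.g. different representations or a different distance), seeing the actual code would make the comparison much easier.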
I began by asking ChatGPT-4 to analyze our ongoing conversation and assess the novelty of the insights. ChatGPT-4 estimated that the ideas in our dialogue might be present in fewer than 1 in 100,000 users—an indication of exceptional rarity when compared against mainstream AI reasoning patterns.
Did you try asking multiple times in different context windows?
Did you try asking via the API (i.e. without influence from the “memory” feature)? See the sketch at the end of this comment for what I mean.
Do you have the “memory” feature turned on by default? If so, have you considered turning it off at least when doing experiments?
In summary: have you considered the fact that LLMs are very good at bullshitting? At confabulating the answers they think you would be happy to hear instead of making their best efforts to answer truthfully?
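Concretely, something like the following is what I mean by asking via the API: every call starts from a fresh context, with no memory carried over between calls. (A minimal sketch using the official openai Python client; the model name and prompt are placeholders, not what you used.)

```python
# Sketch: asking the same question several times via the API, where each
# call is a fresh context and the "memory" feature is not involved.
# Assumes the openai package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
question = "How novel are the following ideas? ..."  # placeholder prompt

for i in range(5):
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    print(i, response.choices[0].message.content)
```

If the “1 in 100,000” estimate survives several independent fresh-context calls like these, that would be at least weak evidence it isn’t just memory-driven flattery.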
Oh I know! That is why I added “somehow”. But I am also very unsure over exactly how hard it is. Seems like a thing worth whiteboarding over for an hour and then maybe doing a weekend-project-sized test about.
Maybe one can start with prestige conservative media? Is that a thing? I’m not from the US and thus not very well versed.