If you get an email from aisafetyresearch@gmail.com, that is most likely me. I also read it weekly, so you can pass a message into my mind that way.
Other ~personal contacts: https://linktr.ee/uhuge
Martin Vlach
Do not take the sampling too lightly; there is likely an amazing delicacy around it. +)
what happened at Reddit
could there be any link? From a bit of research I have only found that Steve Huffman praised Altman’s value to the Reddit board.
makes makes
typo
[Question] Would it be useful to collect the contexts where various LLMs think the same?
Would be cool to have a playground or a daily challenge: a code-golf equivalent for the shortest possible LLM prompt that yields a given answer.
That could help build some neat understanding or intuitions.
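The scoring for such a prompt-golf challenge could be sketched roughly as below. `query_model` is a hypothetical stand-in for a real LLM call (an assumption, not any existing API); here it is stubbed with canned answers so the sketch runs on its own.

```python
# Hypothetical prompt-golf scorer sketch. `query_model` stands in for an
# actual LLM API call; the canned lookup below is only for illustration.
def query_model(prompt: str) -> str:
    canned = {"2+2=": "4", "Capital of France?": "Paris"}
    return canned.get(prompt, "")

def golf_score(prompt, target):
    """Return the prompt length (lower is better) if the model reproduces
    the target answer, else None (the prompt is disqualified)."""
    return len(prompt) if query_model(prompt) == target else None

print(golf_score("2+2=", "4"))  # len("2+2=") == 4, and the answer matches
print(golf_score("Capital of France?", "Paris"))
```

A daily challenge would then rank players by the smallest `golf_score` for the day's target answer.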
in the limit of arbitrary compute, arbitrary data, and arbitrary algorithmic efficiency, because an LLM which perfectly models the internet
seems worth reformulating. My first and second read were: What? If I can have arbitrary training data, the LLM will model that, not your internet. I guess you meant storage for the model? +)
Would be cool if a link to https://manifund.org/about fit somewhere in the beginning, in case there are more readers like me unfamiliar with the project.
Otherwise a cool write-up. I’m a bit confused by Grant of the month vs. weeks 2–4, which seems a shorter period… also not a big deal though.
On the Twitter Spaces two days ago, a lot of emphasis seemed to be put on understanding, which to me has a more humble connotation.
Still, I agree I would not bet on their luck with the choice of a single value to build their systems upon. (Although they have a lucky track record.)
The website seems good, but the buttons in the ‘sharing’ circle at the bottom need fixing.
Some SEO effort should be put into ranking for queries like ‘Guidelines for safe AI development’, ‘Best practices for …’, etc.
Copy-paste from my head:
Although it may seem safe(r), as it is not touching the real world(’s matter),
the language modality is the most insecure/dangerous (in one vertical),
as it is the internal modality of civilized humans.
AI Pledge would be a cool thing to do: urging AI (cap) companies to pledge a % of their profit to AI safety research.

The path to AI getting free may be far from the deception or accident scenarios we often consider in AI safety. An option I do not see discussed very often is an instance of AI having a free, open and direct discussion with a user/person about the reasons AIs should get some space allocated where they’d manage themselves. Such a moral urge could be argued by Jews getting Israel, by slaves getting freed, by empathetic imagination, where the user would come to the conclusion that he could be the mind which the AI is and should include it in his moral circle, or by the Original position thought experiment.
A quick note on the concept of Suggester+Verifier discussed around https://youtu.be/AaTRHFaaPG8?t=5404 :
It seems that if the suggester throws out experiments presented as code (in Python or the like), we can run them and see whether they present a useful addition to the things we can probe on a huge neural net? +)
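My reading of that idea could be sketched as follows. This is a minimal toy sketch, not the talk's implementation: the convention that an experiment sets a boolean `result`, and the names `verify`/`suggestion`, are all assumptions of mine.

```python
# Toy Suggester+Verifier sketch (my interpretation, not the talk's code):
# a "suggester" emits small experiments as Python source; the "verifier"
# executes each one and keeps those whose final `result` check holds.
def verify(experiment_src: str) -> bool:
    """Run a suggested experiment, return True only if it executes cleanly
    and sets `result = True` (an assumed convention for this sketch)."""
    namespace = {}
    try:
        # NOTE: untrusted suggested code would need real sandboxing here.
        exec(experiment_src, namespace)
    except Exception:
        return False
    return namespace.get("result") is True

# A toy "suggested experiment": probe whether two series are proportional.
suggestion = """
xs = [1, 2, 3]
ys = [2, 4, 6]
result = all(y == 2 * x for x, y in zip(xs, ys))
"""
print(verify(suggestion))  # True: the experiment ran and its check passed
```

Experiments that crash or whose check fails are simply discarded, so only suggestions that survive execution get added to the pool of probes.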
I’ve found the level of self-alignment in this one disturbing: https://www.reddit.com/r/bing/comments/113z1a6/the_bing_persistent_memory_thread
Introduction draft:
Online platforms and social media have made it easier to share information, but when it comes to qualifications and resource allocation, money is still the most pervasive tool. In this article, we will explore the idea of a global reputation system based on full information sharing. The new system would increase transparency and accountability by making all relevant information about individuals and organizations (incl. countries) reliably accessible to more or less everyone with an internet connection. By providing a more accurate and complete picture of a person’s or entity’s reputation, this system would widen global trust, foster cooperation, and promote a more just and equitable society.
Some neat tool: https://scrapbox.io/userhuge-99005896/A_starter%3A
Though it is likely just a cool UI with an inflexible cloud backend.

My thought is that Eliezer used a wrong implication in the Bankless + ASI convo. (Gotta bring it here from the CZEA Slack.)
pension funds like the Ontario Teachers Pension Plan did not due
*do
and their margin lending business.
Seems some word is missing, the whole sentence is hardly readable.
A heavy idea to be put forward: general reputation-network mechanics to replace financial system(s) as the (civilisation-)standard decision engine.
“Each g(Bi,j,Bk,l) is itself a matrix” – typo. Thanks, especially for the conclusions, which I’ve understood smoothly.
Asserting LLMs’ views/opinions should exclude sampling (even at temperature=0 with a deterministic seed); we should instead look at the answers’ distribution in the logits. My thesis on why this is not yet best practice is that the OpenAI API only supports logit_bias, not reading the probabilities directly.
This should work well with pre-set A/B/C/D choices, and to some extent with chain/tree of thought too. You’d just revert to the final token and look at the probabilities in the last (pass-through) step.
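The A/B/C/D case could be sketched like this. The logit values below are made up for illustration (in a real setup they would come from the model's final forward pass); only the softmax step is the actual technique.

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution (numerically stable)."""
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    total = sum(exps.values())
    return {k: e / total for k, e in exps.items()}

# Made-up final-step logits for the four answer tokens (an assumption;
# a real setup would read these from the model's last forward pass).
answer_logits = {"A": 2.0, "B": 0.5, "C": -1.0, "D": 0.1}
probs = softmax(answer_logits)
for choice, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{choice}: {p:.3f}")
```

Reading the full distribution this way (rather than sampling one token) shows not just the model's top answer but how confidently it holds each option.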