I try to practice independent reasoning and critical thinking, and to challenge current solutions to be more considerate and complete. I do not reply to DMs for discussions that are not personal to the user who reached out directly; instead, I post here with a reference to the user and my reply.
ZY
A few thoughts from my political science classes and experience -
when people value authority more than arguments
It’s probably less about “authority” and more about a desperate hope to reach stability, together with the belief that an unstable government leads to broader instability, after many years of being colonized along the coasts, and of war (WW2 plus the civil war).
“Societies can be different”
is far too compressed a phrase to summarize the points you made. Some of them are political ideology issues, and others are resource issues, not matters of “culture” as the phrase “societies can be different” might suggest.
Power imbalance and exploited positions:
This ultimately came from a lack of resources relative to the total number of people. Unfortunately this still exists when a society is poor or has a very large economic disparity.
It would also be very helpful to read some comparative government (I enjoyed the AP class back in high school, in the US context) and other general political concepts to understand this even more deeply.
For “prison sentencing” here, do you mean some time in prison, but not a life sentence? Also, after increasing the “reliability of being caught”, would you propose an alternative form of sentencing instead of prison?
Some parts of 1) and most of 2) made me feel that educating people on the clear consequences of a crime is important.
For people who frequently go in and out of prison: I would guess most legal systems already sentence repeat offenses more severely than earlier ones, though for small crimes they may not.
I do think the other types of punishment you listed there (physical pain, training programs, etc.) would be interesting depending on the crime.
how to punish fewer people in the first place
This seems hard when actual crimes (murder, other violent crimes, etc.) are committed; it seems better to figure out why people commit the crimes, since reducing that cause in the first place is more fundamental.
A side note -
We don’t own slaves, women can drive, while they couldn’t in Ancient Rome, and so on.
Seems to be a very low bar for being “civilized”.
focusing less on intent and more on patterns of harm
In a general context, though, understanding intent will help to solve the issue fundamentally. There are probably two general reasons behind harmful behavior: 1. not knowing that the behavior causes harm, or how not to cause harm (i.e., being uneducated or ignorant about the behavior); 2. knowing that it causes harm and deciding to do it anyway. There may be more nuances, but these are probably the two high-level categories. Knowing the intent helps to create strategies to address the issue: for 1, more education? For 2, more punishment/legal action?
In my opinion, theoretically, the key to having “safe” humans and “safe” models is “do no harm” under any circumstances, even when they have power. This is roughly what law is about, and what moral values should be about (in my opinion).
Yeah, nice; I heard YouTube has something similar for checking videos as well.
It is interesting; I am only half a musician, but I wonder what a true musician thinks about the music generation quality in general. This also reminds me of the music-similarity tool from the show Silicon Valley for checking copyright issues; that might be really useful nowadays lmao.
On the side: could you elaborate on why you think “relu better than sigmoid” is a “weird trick”, if that is implied by this question?
The commonly agreed reason, as I understood it, is that ReLU helps with the vanishing gradient problem (this can be seen from the graphs of the two activations and their derivatives).
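As a quick illustration (a minimal sketch I am adding, not from the original question; the layer count is arbitrary): the sigmoid’s derivative peaks at 0.25, so a product of many per-layer factors shrinks toward zero, while ReLU’s derivative is exactly 1 wherever the unit is active.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # peaks at 0.25, at x = 0

def relu_grad(x):
    return (x > 0).astype(float)  # exactly 1.0 on the active region

x = np.linspace(-5.0, 5.0, 11)
print("max sigmoid grad:", sigmoid_grad(x).max())  # 0.25
print("max relu grad:   ", relu_grad(x).max())     # 1.0

# Backprop multiplies in one activation derivative per layer, so even in
# the best case sigmoid shrinks the gradient by 0.25 per layer, while an
# active ReLU path preserves it.
n_layers = 10
print("sigmoid, best case over 10 layers:", 0.25 ** n_layers)  # ~9.5e-7
print("relu, active path over 10 layers: ", 1.0 ** n_layers)   # 1.0
```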
I personally agree with your reflection on suffering risks (including factory farming, systemic injustices, and wars) and with the approach of donating to different cause areas. My thought (maybe unpopular under a “prioritize only one” type of mindset) is: maybe we should avoid prioritizing only a single area (especially collectively), and instead recognize that in reality there are always multiple issues we need to fight against/solve. Individually, we could focus professionally on one issue and volunteer for or donate to another cause area, depending on our knowledge, interests, and abilities; additionally, we could donate to multiple cause areas. Meanwhile, a big step is to be aware of, and open our ears to, the various issues we may be facing as a society, and that will (I hope) translate into multiple types of action. After all, some of these suffering risks involve human actions, and each of us doing something differently could help reduce them in both the short and the long term. That said, there are also many things that I do not know how to best balance.
A side note: I also hope you are not very, very sad thinking about “missing crucial considerations” (and I appreciate that you are trying to gather more information and learn more quickly; we should all do more of this too)! The key, to me, might be an open mind and the ability to consider different aspects of things; hopefully we will be on the path towards something “more complete”. Proactively, one approach I often try is talking to people who are passionate about different areas and who are different from me, and learning more from there. I also sometimes refer to https://www.un.org/en/global-issues for ideas.
Yeah, that makes sense; the knowledge should still be there, and one would just need to shift the distribution “back”.
Haven’t looked too closely at this, but my initial two thoughts:
1. Child consent is tricky.
2. Likely many are foreign children, who may or may not be included in the 75 million statistic.
It is good to think critically, but I think it would be beneficial to present more evidence before making the claim or drawing the conclusion.
This is very interesting, and thanks for sharing.
One thing that jumps out at me is that they used an instruction format to prompt the base models, which is not typically how base models are evaluated; the prompts should be reformatted into a completion-style task. If this were redone, I wonder whether the base models’ performance would also increase, and maybe that would isolate the effect further to just RLHF.
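To illustrate what I mean by reformatting (a hypothetical sketch; gpt2 here is just a stand-in for whichever base checkpoint is being evaluated, and the question is a placeholder):

```python
from transformers import pipeline

# Stand-in base model; substitute the actual base checkpoint under evaluation.
generator = pipeline("text-generation", model="gpt2")

# Instruction format: assumes chat/instruction finetuning that a base model never had.
instruction_prompt = "### Instruction:\nWhat is the capital of Japan?\n\n### Response:\n"

# Completion format: a few-shot prefix so the correct answer is simply the
# most natural next-token continuation, matching how a base LM was trained.
completion_prompt = (
    "Q: What is the capital of France?\nA: Paris\n"
    "Q: What is the capital of Japan?\nA:"
)

for prompt in (instruction_prompt, completion_prompt):
    out = generator(prompt, max_new_tokens=5, do_sample=False)
    print(repr(out[0]["generated_text"][len(prompt):]))
```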
I also wonder if this has anything to do with the amount of data added during RLHF (assuming a model goes through supervised/instruction finetuning first and then RLHF), besides the algorithms themselves.
Another good model to test on is https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3, which seems to have only instruction finetuning as well.
The author seems to say that they figured it out at the end of the article, and I am excited to see their exploration in the next post.
I sometimes find it useful to think about “how to differentiate this term from related ones” when defining a term. In this case, in my mind, that means distinguishing “reasoning” vs “general reasoning” vs “generalization”.
Reasoning: narrower than general reasoning; in my opinion, probably your first two bullet points combined.
Generalization: even broader than general reasoning (it does not need to be focused on reasoning). This could be your last two bullet points, particularly the third.
General reasoning (this is not fully thought through): now that we have talked about “reasoning” and “generalization”, I see two possible types of definition:
1. A bit closer to “reasoning”: your first two bullet points, plus operating in multiple domains/multiple ways, though not necessarily unseen domains. In simpler words, “reasoning in multiple domains and ways”.
2. A bit closer to “general” (my guess is this is closer to what you intended?): generalization ability, but focused on reasoning.
In my observation (trying to avoid “I think” here!), “I think” is intended to (or actually should be used to) point out perspective differences (which helps lead to more accurate conclusions, as well as collaborative and effective communication), rather than confidence. In the latter case of misuse, it would be good if people clarified: “this term is about confidence, not perspective, in my sentence”.
True. For average people, I wonder if being self-aware would at least unconsciously act as a partial “blocker” on the next malevolent action they might take, and that may evolve over time too (even if it takes a bit longer than for a mostly-good person).
I highly agree with almost all of these points, and they are very consistent with my observations. As I am still relatively new to LessWrong, one big thing I still see today (based on my experience) is concepts, definitions, and terminology that are disconnected from academic language. Sometimes terminology already exists in academia, and introducing a new concept under the same name may be confusing, especially outside the channels academics are used to. There are some terms I try to search on Google, for example, but the only relevant results are from LessWrong or blog posts (which I still read personally). I think this is getting better: in one recent round of conference reviews, I saw a significant increase in AI safety submissions working on X risks.
Another point, as you mentioned, is the reverse ingestion of papers from academia; there is a rich literature in interpretability, for example, and one concrete confusion I have heard from professors and people already in that field is why there feels like such a lack of connection to these papers and concepts, even though they seem quite related.
About actions: many of the people in my usual professional group who are concerned about AI safety risks are concerned about, or working on, current intentional risks like misuse. Those are also real risks that have already materialized (CSAM, deepfake porn with real people’s faces, privacy violations, potential bio/chem weapons), and they need to be worked on as well. It is hard to stop working on them and transition directly to X risks.
However, I do think it is beneficial to keep merging the academic and AI safety communities, which I see is already underway, with examples like more papers, some PhD positions on AI safety, industry positions, etc. This will increase awareness of AI safety, and as you mentioned, the technical interests are shared, since they could potentially be applied to many kinds of safety, and hopefully not so much to capabilities (though the two are sometimes not separable).
What would be some concrete examples/areas to work on for human flourishing? (I just saw a similar question about the definition; I wonder what some concrete areas or examples could be.)
True; and they would only need to merge until they reach a “swing state” type of voting distribution.
Out of curiosity: what was the time span of the raise that achieved this goal, and when did it first start? Was it two months ago?