The paper you linked about the last big breakthrough seems to be from 1997, so roughly 28 years ago. What do you consider to be the biggest breakthrough since then?
ChristianKl
In many cases, new paradigms care about different metrics than older paradigms. In the beginning, successful new paradigms usually don’t fulfill the quality criteria that orthodox researchers in the field are looking for. You might want to read Thomas Kuhn’s “The Structure of Scientific Revolutions”.
There are ways you can make it easier to overturn paradigms. You can change the way research is funded. You could change the grant-making processes in a way that makes it easier for very smart young people to pursue research agendas that orthodox older researchers don’t find interesting.
There’s the Max Planck quote that “science advances one funeral at a time”. In recent decades, old researchers have gained more power over which research gets conducted than they had back when Planck wrote his autobiography.
I think the problem is that you ignore the idea that science works via paradigms. Even if there’s a possible paradigm besides string theory that would produce more progress, there are a lot of different things that people who aren’t working on string theory could work on. Most of them won’t lead anywhere.
If a new paradigm with more potential could be found, that paradigm would have new low-hanging fruit.
However, researchers who write papers about that low-hanging fruit might have trouble getting published in journals of the old paradigm, because they are solving problems of interest to the new paradigm rather than to the old one. Getting funding to work on the problems of a new paradigm is also harder.
It’s worth noting that we observe other forms of simplification of language as well. English reduced the number of verb inflections. The distinction between singular and plural second-person pronouns (thou vs. you) disappeared.
In many cases, there are diminishing returns to a given scientific paradigm. The fact that you observe a field getting diminishing returns doesn’t mean that there isn’t a paradigm that the field could adopt that would allow for returns to flow again. Paradigm change is about pursuing ideas that people in the old paradigm don’t find promising.
Just adding more smart people who follow a hegemonic paradigm doesn’t automatically get you paradigm shifts that unlock new returns. If string theory stifles progress, it would look from the inside like there are diminishing returns to theoretical physics.
How much progress actually happens in theoretical physics?
There seem to be papers showing that if you naively train on chain of thought, you train models not to verbalize potentially problematic reasoning in their chain of thought. However, I don’t see much discussion of how to train chain-of-thought models to better verbalize their reasoning.
If you can easily train a model to hide its reasoning, you should also be able to train models the other way around, to be more explicit about their reasoning.
One approach I can imagine is to take a query, such as diagnosing a medical issue, replace key words that change the output, and then see how well the chain of thought reflects that change. If the chain of thought tells you something about the change in outcome, you reinforce the chain of thought. If it doesn’t reflect the outcome well, you punish it.
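The perturbation idea above could be sketched roughly as follows. Everything here is a hypothetical toy: `faithfulness_reward` and the keyword-matching check are stand-ins of my own invention (a real setup would query an actual model and use a trained judge rather than string matching):

```python
# Toy sketch of rewarding chains of thought that reflect an
# outcome-changing input perturbation. All names are hypothetical.

def faithfulness_reward(cot_original, answer_original,
                        cot_perturbed, answer_perturbed,
                        perturbed_keyword):
    """Score a chain of thought on whether it reflects a key-word swap.

    If swapping a key word flips the answer, a faithful chain of thought
    should mention the swapped word (a crude proxy for "reflecting the
    change"). Reinforce (+1) when it does, punish (-1) when it doesn't.
    If the answer doesn't change, the perturbation is uninformative (0).
    """
    if answer_original == answer_perturbed:
        return 0  # perturbation didn't change the outcome
    if perturbed_keyword.lower() in cot_perturbed.lower():
        return 1   # CoT verbalizes the input that drove the new outcome
    return -1      # outcome changed, but the CoT hides why

# Example: a medical-style query where swapping "fever" for "rash"
# changes the (toy) diagnosis.
r_faithful = faithfulness_reward(
    "Patient reports fever, so I suspect an infection.", "infection",
    "Patient reports rash, so I suspect an allergy.", "allergy",
    perturbed_keyword="rash")

r_unfaithful = faithfulness_reward(
    "Patient reports fever, so I suspect an infection.", "infection",
    "I will go with my gut feeling here.", "allergy",
    perturbed_keyword="rash")
```

In an RL loop, this score would be added to the usual reward so that chains of thought which verbalize the decision-relevant input get reinforced and ones that hide it get punished.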
All it takes is trusting that people believe what they say over and over for decades across all of society, and getting all your evidence about reality filtered through those same people.
It seems to me like you also need to have no desire to figure things out on your own. A lot of rationalists have experiences of seeking truth and finding out that certain beliefs people around them hold aren’t true. Rationalists who grow up in communities where many people believe in God frequently deconvert because they see enough signs that the beliefs of the people around them don’t really fit together.
Given that most people living in religious communities grow up believing in God just as the people around them do, it might be very normal to think that way, but it still feels really strange to me and probably feels strange to many other rationalists as well.
What do you mean by ‘must’? The word has two different meanings in this context, and it seems like bad epistemology not to distinguish them.
Have you thought about making an altered version that strips out enough of the My Little Pony IP to be able to sell the book on Amazon KDP? (Or letting someone else do that for you if you don’t want to do the work?)
The existing ontology that we have around consciousness is pretty unclear. A better understanding of the nature of consciousness, and thus of what’s valuable, will likely come with new ontology.
When it comes to reasoning about statistics, robustness of judgements, causality, and what it means not to Goodhart, it’s likely that getting better at reasoning also means coming up with new ontology.
Regardless of the details, we ought to prioritize taking all of our power plants, water purification stations, and nuclear facilities out of the world-wide-web.
I think it’s very questionable to make major safety policy “regardless of the details”. If you want to increase the safety of power plants, listening to the people who are responsible for their safety and to their analysis of the details is likely a better step than making these kinds of decisions without understanding the details.
Orcas already seem to have language to communicate with other orcas. Before trying to teach them a new language, it would make more sense to better understand the capabilities of their existing language and maybe think about how it could be extended to communicate with them about what humans want to talk about with them.
The author seems to just assume that his proposal will lead to a world where humans have a place instead of critically trying to argue that point.
It depends on how many Pokémon-like tasks are available. Given that a lot of capital goes into creating each Pokémon game, there aren’t that many Pokémon games. I would expect the number of games that are very Pokémon-like to also be limited.
It’s quite easy to use Pokémon playing as a feedback signal for becoming better at playing Pokémon. If you do that naively, the AI learns how to solve the game but doesn’t necessarily train executive function.
A task like computer programming, where you have to find a lot of different solutions, likely provides a better feedback signal for RL.
A good strategy might be to cross-post posts and see what reception they get on LessWrong as far as upvotes go. If a post stays in the single digits, don’t cross-post other posts like it. If it gets 50+ karma, people on LessWrong want to see more like it.
What is the chance that these octopuses (at the point of research scientist level) are actively scheming against us and would seize power if they could?
And the related question would be: even if they are not “actively scheming”, what are the chances that most of the power to make decisions about the real world gets delegated to them, that organizations which don’t delegate power to octopuses get outcompeted, and that those organizations start to value octopuses more than humans over time?
Left-vs-right is not the only bias that matters. Before the pandemic, I would have thought that virologists care about how viruses are transmitted. It seems that they don’t consider that to be their field.
Given that virologists have higher status in academia than people in environmental health, who actually care about how viruses are transmitted outside the lab, the COVID-19 response seems to have suffered. Pseudoscience around six-foot distancing was propagated by government regulations. Even Fauci admits that there was no sound reasoning supporting the six-foot rule.
Fauci also decided against using money from the National Institute of Allergy and Infectious Diseases to fund studies of community masking as a public health intervention. You don’t need virologists to run studies about masking, so probably that’s why he didn’t want to give money to it.
While Fauci was likely more to the left, that did not create the most harmful biases in a policy response that didn’t want to use science to its fullest potential to reduce COVID-19 transmission, but rather wanted to give billions to the Global Virome Project.
In another case, grid-independent rooftop solar installations are a lot more expensive than they would need to be. Building codes in the US are shaped by a firefighter interest group, and for firefighters it’s practical if rooftop solar cells shut off when disconnected from the grid. As a result, they pushed, on flimsy evidence, for regulation that means most rooftop solar in the US doesn’t work if the grid is cut off.
The question of whether you want grid-independent rooftop solar is not one of left-vs-right; the biases involved are different ones.
Especially today, when many experts are very narrow in their expertise and have quite specific interests because of that expertise, thinking in terms of left-wing and right-wing is not enough.
Are you saying that progress in physics hasn’t stalled or that string theory isn’t to blame?