Also, coming up with your own ideas first can help you better understand what you find in the literature. I’ve found that students learn more readily when they come to a subject with questions already in mind, having tried to figure things out on their own and realized where they had gaps in their mental framework, rather than just receiving a firehose of new information with no context.
Perhaps try pursuing a number of proxy goals for short, pre-defined periods, while tracking whether each proxy goal is likely to be instrumental for reaching the terminal goal. Assessing the instrumentality of each proxy should be easier once you’ve started to get a sense of where each project can lead, and abandoning those that are clearly not going to be fruitful should be easier if you don’t plan on going all-in from the start.
Don’t be afraid to ask stupid questions. We often refrain from asking questions that we predict would make those more experienced perceive us as idiots. Ignore those predictions. Even when the answer is obvious to everyone else, the question helps the writer practice clarifying their ideas from a new perspective, which could even help them understand their own work better. And sometimes everyone else is just afraid to look like idiots, too.
Try steel-manning the strongest argument you can construct against an authority’s position. Ideas that can withstand the harshest scrutiny are the ones worth keeping. Ideas that can be destroyed by the truth should be. Help the intellectual community separate the wheat from the chaff.
Good hypotheses always entail predictive models. If you can’t program it, you don’t really understand it.
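To make that concrete, here’s a minimal sketch in Python. The hypothesis and the observations are entirely made up for illustration; the point is that writing a hypothesis as a program forces you to commit to exact predictions that can then be checked against data.

```python
# Toy example: the hypothesis "reading time grows linearly with word count,"
# expressed as a runnable predictive model. All numbers here are made up.

def predict_reading_time(word_count: int, seconds_per_word: float = 0.25) -> float:
    """The hypothesis as code: reading time is proportional to word count."""
    return word_count * seconds_per_word

# Hypothetical observations: (word_count, measured_seconds)
observations = [(100, 27.0), (400, 95.0), (800, 210.0)]

# Check the model's predictions against the (made-up) data.
for words, measured in observations:
    predicted = predict_reading_time(words)
    relative_error = abs(predicted - measured) / measured
    print(f"{words} words: predicted {predicted:.0f}s, "
          f"measured {measured:.0f}s, error {relative_error:.0%}")
```

If the hypothesis were vaguer ("people take longer to read longer texts"), there would be nothing to program, and nothing to falsify.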
I can’t think of anything else to add to this one.
Also, don’t wait until you’ve learned linear algebra, multivariable calculus, probability theory, and machine learning before starting to tackle the alignment problem. It’s easier to learn these things once you already know where they will be useful to you. Plus, we may not have enough time to wait for mathematicians to come up with provable guarantees of AI safety.