Your outline has a lot of beliefs you expect your students to walk away with, but basically zero skills. If I were one of your prospective students, this would look a lot more like cult indoctrination than a genuine course where I would learn something.
What skills do you hope your students walk away with? Do you hope that they’ll know how to avoid overfitting models? That they’ll know how to detect trojaned networks? That they’ll be able to find circuits in large language models? I’d recommend figuring this out first, and then working backwards to figure out what to teach.
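To make "skills" concrete, here is a minimal sketch (toy synthetic data, my own example, not from your outline) of the first one: checking for overfitting by comparing training and validation error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic regression task: y = sin(x) plus noise.
x = rng.uniform(-3, 3, size=20)
y = np.sin(x) + rng.normal(scale=0.2, size=x.shape)
x_train, y_train = x[:12], y[:12]
x_val, y_val = x[12:], y[12:]

for degree in (1, 3, 9):
    # Fit a polynomial of the given degree to the training split only.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    # A training error far below the validation error is the overfitting signal.
    print(f"degree={degree}  train MSE={train_mse:.3f}  val MSE={val_mse:.3f}")
```

A student who can run and interpret something like this has learned a checkable skill, not just a belief.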
Also, don’t underestimate just how capable smart 15- and 16-year-olds can be. At my high school, for example, there were at least a dozen students who knew calculus at this age, and many more who knew how to program. And this was just a relatively normal public high school.
Thanks for your answer!
This is about… well, I wouldn’t say “beliefs”, since I will make a lot of caveats like “we are not sure”, “there are some smart people who disagree”, “here is an argument against this view”, etc. (mental note: do it MORE, thank you for your observation). It’s more about “motivation” and “discourse”. Not about technical skills, that’s true.
I have a feeling that there is an attractor: “I am an AI researcher and ML is AWESOME, and I will try to make it even more AWESOME, and yes, there are these safety folks, and I know some of their memes, and maybe they have some legitimate concerns, but we will solve that later and everything will be OK”. And I think that when someone learns ML-related technical skills before the basic AI safety concepts and discourse, it’s very easy for them to fall into this attractor, and from that point it’s pretty hard to get back out. So I want to create something like a vaccine against this attractor.
Technical skills are necessary, but for most of them there are already good courses, textbooks, and so on. The skills I’ve seen no textbooks for are “understanding AI-safety-speak” and “seeing why alignment-related problem X is hard and why obvious solutions may not work”. Because of the previously mentioned attractor, I think it’s better to teach these skills before the technical ones.
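As a toy illustration of that second skill (my own made-up example, not something from the outline): a proxy reward that looks reasonable but that an agent can game without making any real progress:

```python
# A robot is rewarded per box it carries into the warehouse, so the
# reward-maximizing policy is to carry the same box out and back in forever.
def proxy_reward(action):
    # Obvious-looking objective: +1 every time a box enters the warehouse.
    return 1 if action == "carry_box_in" else 0

def update_true_progress(boxes_inside, action):
    # What we actually wanted: number of distinct boxes stored.
    if action == "carry_box_in":
        boxes_inside += 1
    elif action == "carry_box_out":
        boxes_inside -= 1
    return boxes_inside

gaming_policy = ["carry_box_in", "carry_box_out"] * 5
total_reward, boxes_inside = 0, 0
for action in gaming_policy:
    total_reward += proxy_reward(action)
    boxes_inside = update_true_progress(boxes_inside, action)

print(f"proxy reward collected: {total_reward}")  # 5, and it keeps growing
print(f"boxes actually stored:  {boxes_inside}")  # 0, no real progress
```

The “obvious solution” (just write down a reward for the thing you want) fails here, and seeing why is exactly the kind of intuition I want to build.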
I assume that the average 15-16-year-olds in my target audience know how to program at least a little (in Russia, basic programming is, in theory, part of the mandatory school curriculum; I don’t know about the US), but don’t know calculus (though I think a smart school student can easily grasp the concept of a derivative without a strict mathematical definition).
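For example, a minimal sketch (a toy example, not actual course material) of how a derivative can be shown numerically, without the formal limit definition:

```python
# Show the slope of f at a point as (f(x + h) - f(x)) / h for a shrinking h.
def f(x):
    return x ** 2  # a function the students already know

x = 3.0
for h in (1.0, 0.1, 0.001, 1e-6):
    slope = (f(x + h) - f(x)) / h
    print(f"h={h}  slope ~ {slope:.6f}")  # approaches 6, i.e. 2*x
```

A student who can program can watch the slope settle toward 2*x and get the idea, which is all the calculus this course would need.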