Morris Chang (founder of TSMC and a titan of the chip fabrication industry) gave a lecture at MIT providing an overview of the history of chip design and manufacturing. [1] There’s a diagram around the 34:00 mark that outlines the chip design process and where foundries like TSMC slot into it.
I also recommend skimming Chip War by Chris Miller. It has a very US-centric perspective, but it gives a good overview of the major companies that developed chips from the 1960s through the 1990s, and of the key companies that are relevant to (or bottlenecks in) the manufacturing process circa 2022.
1: TSMC founder Morris Chang on the evolution of the semiconductor industry
I’m giving up on working on AI safety in any capacity.
I was convinced around 2018 that working on AI safety was a Good™ and Important™ thing, and I have spent a large portion of my studies and career trying to find a role where I could contribute to AI safety. But after several years of trying to work on both research and engineering problems, it’s clear no institutions or organizations need my help.
First: yes, it’s clearly a skill issue. If I were a more brilliant engineer or researcher, I’d have found a way to contribute to the field by now.
But also, the bar to work on AI safety seems higher than the bar to work on AI capabilities. There is a lack of funding for hiring more people to work on AI safety, and this seems to have created a dynamic where you have to be scarily brilliant to even get a shot at folding AI safety into your career.
In other fields, there are a variety of professionals who can contribute incremental progress and get paid as they build their knowledge and skills: educators across varying levels, technicians in labs who support experiments, and so on. There are far fewer opportunities like that in AI safety. Many “mid-skilled” engineers and researchers just don’t have a place in the field. I’ve met, and am aware of, many smart people attempting to find roles where they can contribute to AI safety in some capacity, but there’s just not enough capacity for them.
I don’t expect many folks here to be sympathetic to this sentiment. My guess at the consensus view is that, in fact, we should only have brilliant people working on AI safety, because it’s a very hard and important problem and we only get a few shots (maybe only one shot) to get it right!