as they would not risk crashing it by flying between two airplanes in tight formation.
This is incorrect. Sure, they shouldn’t risk a crash by flying between two aircraft in tight formation, but you’ve got to consider that people who work on top-secret programs are mostly just regular people who don’t talk about their work. There is plenty of room in top-secret military projects for all the same jackassery that happens in public projects: incompetence, pranks, deliberately dangerous tests, etc. Arguably more so, since they are sheltered from scrutiny.
And this ignores more prosaic explanations, like an autopilot glitch. AlphaGo made weird decisions when it misread the apparent score; a pilot AI would certainly encounter similar problems at some point.
Maybe they were testing some radar-jamming tech. I also found more discussion about new radars here.