James Cameron recently spoke about AI, its potential and its risks, at the SCSP AI+Robotics Summit, where he drew partial parallels to Terminator. I liked how he thought through the various "What if"s.
However, I believe his analysis stops short of exploring the ultimate evolution of autonomous warfare systems. The progression from human-supervised to fully autonomous military AI raises a whole boatload of critical questions about a probable endgame scenario:
What would really happen if machines started fighting machines without human input, oversight, or intervention?
In such a future, would AI systems develop their own strategic goals beyond human understanding, potentially transforming warfare into something entirely detached from human objectives and control?
If we go further down that rabbit hole, the ultimate question arises once again: why have war in the first place?