I don't want to speak for rickbill; I will let them answer what they felt were the salient points. Here are the main points I got from Dr. Sabin's discussion. I hope blackndecker, brant, Capn Darwin and whoever else attended will provide their input too.
Just a heads up: I was not taking close notes as I watched the discussions, so my input may not be the best. With that disclaimer out of the way...
<SNIP>
Thanks very much for the detailed summary, @Tolstoi! I didn't get notified of your response, even though I can see that I am supposedly subscribed to the thread.
I do not think anybody on the defence side is seriously considering removing the human element from wargaming (hard to do in the first place when the intent is to use it for training or as a support tool for appropriation or doctrine development). Besides, a "human someone" needs to program those AIs and the simulators/games they run on.
- Today's wargame AI might work well for the older WATU system; however, it is not well suited to today's complicated situations of asymmetric warfare, cyber warfare, and influence operations
- AI is very rational, which leaves it ill-equipped to find new and unexpected ways of solving problems; it lacks ingenuity and has inherent limits
- AI and ML can be useful, but they cannot replace the human element of wargaming and should only be used in a supporting and supplemental fashion
Of the three points, I think the third is pretty much obvious to any sane participant. There may be insane people out there, or people who hype this stuff in an irresponsible manner, but in my interactions with defence organizations I haven't come across any.
The second point is just not true.
An AI isn't "rational"; it is "programmed to be rational". This programming can be direct, as when you write down rules for developing courses of action (as we did in Command Ops, for instance), or indirect, by having data tagged as "right" or "wrong". Any "rationality" in an automated system is bounded by its inputs, and those inputs are chosen/designed by humans. That is why I think "garbage in, garbage out" was such a good take-away point from the conference.
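To make that distinction concrete, here is a minimal, hypothetical sketch contrasting the two routes by which "rationality" gets into a system: hand-written course-of-action rules versus behaviour induced from human-labelled examples. All the names, thresholds, and labelled data below are invented for illustration; a real system would be far richer, but the point about human-bounded inputs is the same.

```python
# Hypothetical sketch: two ways "rationality" enters an automated system.
# All rules and data here are invented for illustration.

# 1. Direct programming: a human writes the course-of-action rules.
def choose_course_of_action(own_strength, enemy_strength):
    """Rule-based decision, in the spirit of hand-coded wargame AI."""
    if own_strength > 2 * enemy_strength:
        return "attack"
    elif own_strength > enemy_strength:
        return "probe"
    else:
        return "defend"

# 2. Indirect programming: humans tag past situations as approved actions,
# and the system generalises from those labels (a trivial nearest-neighbour
# stand-in for a real ML model).
labelled_examples = [
    ((3.0, 1.0), "attack"),   # (own, enemy) -> action a human approved
    ((1.5, 1.0), "probe"),
    ((0.5, 1.0), "defend"),
]

def learned_course_of_action(own_strength, enemy_strength):
    """Pick the action from the labelled example with the closest force ratio."""
    ratio = own_strength / enemy_strength
    closest = min(labelled_examples,
                  key=lambda ex: abs(ex[0][0] / ex[0][1] - ratio))
    return closest[1]
```

In both cases the system's "rationality" is bounded by human inputs: the rules in the first, the labelled examples in the second. Garbage in, garbage out applies equally to each.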
Regarding being ill-equipped to find new and unexpected ways of solving a problem, I would say that if we're looking at "first principles" systems, we already know we can be shocked and awed by what AI systems come up with; see, for instance, Lee Sedol's remarks on the style of play he perceived AlphaGo to have developed. For systems where a lot of knowledge is programmed in to determine the behaviour of the autonomous system, indeed, you won't get anything innovative, because you're not allowing the system any freedom to explore possibilities.
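The "freedom to explore" point can be sketched with a toy epsilon-greedy bandit, a standard exploration technique (not anything from Sabin's talk; the options and payoffs are entirely invented). A doctrinal agent locked to programmed knowledge never deviates, while an agent allowed to explore can discover that an "unorthodox" option actually pays off better:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

# Hypothetical payoffs for three openings; the agents do not know these.
true_value = {"orthodox": 0.5, "flank": 0.4, "unorthodox": 0.7}

def play(option):
    """Simulated noisy outcome of trying an option."""
    return true_value[option] + random.uniform(-0.1, 0.1)

def doctrinal_agent():
    """A knowledge-programmed agent always plays what doctrine says is best."""
    return "orthodox"

def exploring_agent(trials=2000, epsilon=0.2):
    """An epsilon-greedy agent free to try alternatives now and then."""
    estimates = {opt: 0.0 for opt in true_value}
    counts = {opt: 0 for opt in true_value}
    for _ in range(trials):
        if random.random() < epsilon:
            choice = random.choice(list(true_value))    # explore
        else:
            choice = max(estimates, key=estimates.get)  # exploit
        reward = play(choice)
        counts[choice] += 1
        # Incremental mean update of the value estimate.
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
    return max(estimates, key=estimates.get)
```

With enough trials, the exploring agent's estimates converge on the true payoffs and it settles on the unorthodox option; the doctrinal agent, by construction, never can. Scale the option space up to Go or chess and you get the (expensive) AlphaZero situation.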
The real limitation is that the cost of "innovative" AI can be prohibitive: it took a community effort nearly a year to scrape together enough computing power to build an AlphaZero-like system able to beat Stockfish. The state of the art simply isn't there yet to let the success of AlphaZero be replicated on every problem we want or need to solve without significant effort.
As for the first item, I am not sure how you would capture asymmetric and cyber warfare in a wargame in a way that puts it out of reach of a "first principles" AI system without also putting it out of reach of humans.
I was expecting to disagree more with Sabin. Very interesting, and many thanks for taking the time to put this together, Tolstoi!