Armchair Dragoons Forums

News:

  • The ACDC returns in 2025!  17-19 January 2025 we'll gather online for a variety of games and chats all weekend long
  • The 2024 Armchair Dragoons Fall Assembly will be held 11-13 October 2024 at The Gamer's Armory in Cary, NC (outside of Raleigh)


Author Topic: "Virtual" Connections 2020 Running Thread  (Read 16254 times)

BletchleyGeek

  • Jr. Trooper
  • *
  • Posts: 20
Reply #45 on: August 21, 2020, 02:36:39 AM
https://www.armchairdragoons.com/feature/connections-conference-2020-an-aar/

Thanks very much for putting this AAR together, guys. Very helpful.

Quote
Second, the idea of AI acting as an 'independent thinker' was a common thread, as would be expected at a conference that was focused on AI throughout. One of the constant refrains was the idea of trying to get the AI to act "rationally" or within some nebulous expectations that would approximate human behavior. However, as several audience questions and comments pointed out, the cultural "lens" through which the AI was programmed to act was of enormous – and undervalued – importance. What is considered "rational" or "normal" or "expected" to one culture might be evaluated very differently by another culture. How an AI might evaluate varying courses of action available to different actors within a wargame could be dramatically impacted by the cultural assumptions built into the menu of options available to the AI.

This is indeed a challenge and an opportunity.

A challenge, because biases like "those guys won't ever do X because we wouldn't" can seep undetected not just into the data/knowledge that drives the AI, but also into the game mechanics themselves (as the range of outcomes becomes limited to "sensible" ones). Since ML-powered AI will draw from both the data/knowledge and the "rules" of the game, its behaviours are guaranteed not to be a good predictor of the behaviours of others (which, in my opinion, is what wargaming is actually about). Tracing a specific behaviour back to its source is in general not possible for ML-heavy approaches.
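To make the "range of outcomes becomes limited" point concrete, here is a toy Python sketch (all names and values are invented for illustration): if a designer filters "culturally implausible" options out of the action menu before the AI ever sees them, the AI can never pick them, no matter what its evaluation says.

```python
# Toy sketch: designer bias baked into the action menu.
# All action names and values below are hypothetical.

def enumerate_actions(state):
    # Every action the rules of the game could, in principle, allow.
    return ["defend", "probe", "feint", "mass_attack", "withdraw"]

def our_doctrine_filter(actions):
    # Designer bias: "they would never withdraw" -> option silently removed.
    return [a for a in actions if a != "withdraw"]

def best_action(actions, value):
    # Pick the highest-valued action among those offered.
    return max(actions, key=value)

# Hypothetical evaluation: withdrawing is actually the best move here.
value = {"defend": 0.4, "probe": 0.5, "feint": 0.3,
         "mass_attack": 0.2, "withdraw": 0.9}.get

full = best_action(enumerate_actions(None), value)                            # "withdraw"
filtered = best_action(our_doctrine_filter(enumerate_actions(None)), value)   # "probe"
print(full, filtered)
```

The evaluation function never changes, yet the filtered agent "rationally" picks a second-best option, and nothing in its output reveals that the best one was excluded upstream.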

To avoid this, I think the only way is to build up from "first principles"; the challenge lies in finding the computational power to scale up. See, for instance,

https://link.springer.com/chapter/10.1007%2F978-3-030-35288-2_1

for a serious attempt at getting AlphaZero to come up with "superhuman" strategies in a war game. I don't know if there are similar works out there; if anybody knows of a reference, I would appreciate them sharing it.
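The "first principles" idea can be shown in miniature. The sketch below is flat Monte Carlo on the game of Nim, not AlphaZero (no neural network, no tree search), but it illustrates the core point: the agent is given only the rules, and a sensible strategy emerges from search alone.

```python
import random

# "First principles" in miniature: the agent knows ONLY the rules of Nim
# (take 1-3 objects from a heap; whoever takes the last object wins) and
# evaluates moves by random self-play rollouts. No human strategy is encoded.

def legal_moves(heap):
    return [n for n in (1, 2, 3) if n <= heap]

def rollout(heap, to_move):
    # Play random legal moves until the heap is empty; return the winner.
    while heap > 0:
        heap -= random.choice(legal_moves(heap))
        to_move = 1 - to_move
    return 1 - to_move  # the player who took the last object

def choose_move(heap, player, sims=2000):
    # Estimate each move's win rate over many random rollouts.
    best, best_rate = None, -1.0
    for m in legal_moves(heap):
        wins = sum(rollout(heap - m, 1 - player) == player
                   for _ in range(sims))
        if wins / sims > best_rate:
            best, best_rate = m, wins / sims
    return best

random.seed(0)
# Game theory says the winning move from a heap of 5 is to take 1
# (leave your opponent a multiple of 4); the rollouts rediscover this.
print(choose_move(5, player=0))
```

Scaling this idea from Nim to a credible war game is exactly where the computational cost explodes, which is the challenge mentioned above.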

Finding a minimal set of first principles to capture in your rulesets is, I think, always a good target when designing war games. Hence the interest in Kriegsspiel, for instance, where you have a "barebones" simulation, which would be the "minimal credible war game".

It is also an opportunity, because an algorithm running on a computer doesn't have feelings and can develop strategies to their logical conclusion, regardless of our cultural expectations. That may lead to us not "trusting" the results, as they are "strange". Trusting autonomous systems is not just about them calculating the right thing, but also about training the humans to recognize what that "right thing" is.
« Last Edit: August 21, 2020, 03:02:28 AM by BletchleyGeek »



BletchleyGeek

  • Jr. Trooper
  • *
  • Posts: 20
Reply #46 on: August 21, 2020, 03:01:35 AM

I don't want to speak for rickbill. I will let them answer what they feel were the salient points. Here are the main points I got from Dr. Sabin's discussion. I hope blackndecker, brant, Capn Darwin and whoever else attended will provide their input too.

Just a heads up: I have not been taking close notes as I watch the discussions, so my input may not be the best. With that disclaimer out of the way...

<SNIP>

Thanks very much for the detailed summary, @Tolstoi! I didn't get notified of your response, even though I see that I am allegedly subscribed to the thread.

I do not think anybody on the defence side is seriously considering removing the human element from war gaming (hard to do in the first place when the intent is to use it for training, or as a support tool for appropriation or doctrine development). Also, a "human someone" needs to program those AIs and the simulators/games they operate on.

Quote
  • Today's wargame AI might work well for the older WATU system; however, it is not good at today's complicated situations of asymmetrical and cyber warfare/influence
  • AI is very rational. This makes it ill-equipped to find new and unexpected ways of finding solutions, because it lacks ingenuity and has limits
  • AI and ML can be useful. AI and ML cannot replace the human element of wargaming and should only be used in a supporting and supplemental fashion

Of the three points, I think the third one is pretty much obvious to any sane participant. There may be insane people out there, or people who hype stuff in an irresponsible manner, but in my interactions with defence organizations, I haven't come across any.

The second point is just not true.

An AI isn't "rational"; it is "programmed to be rational". This programming can be direct, as when you write down rules for developing courses of action (as we did in Command Ops, for instance). Or it can be indirect, by having data tagged as "right" or "wrong". Any "rationality" in an automated system is bounded by its inputs, and those inputs are chosen/designed by humans. Hence why I think the "garbage in, garbage out" reflection was a very good take-away point from the conference.
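The direct/indirect distinction can be sketched in a few lines of Python. Both agents below are hypothetical (the feature names, thresholds, and examples are invented, and the "learning" is a crude nearest-neighbour lookup), but they show where the "rationality" actually lives in each case.

```python
# Two hypothetical ways of "programming rationality" into a wargame AI.
# All names, thresholds, and examples are invented for illustration.

# 1) Direct: hand-written rules for picking a course of action (COA).
#    The "rationality" here is exactly what the designer wrote down.
def pick_coa_by_rules(situation):
    if situation["supply"] < 0.3:
        return "withdraw_and_resupply"
    if situation["force_ratio"] > 1.5:
        return "attack"
    return "defend"

# 2) Indirect: behaviour induced from examples a human tagged as
#    right/wrong. The "rationality" is bounded by the labelled data.
tagged_examples = [
    ({"supply": 0.9, "force_ratio": 2.0}, "attack"),
    ({"supply": 0.2, "force_ratio": 2.0}, "withdraw_and_resupply"),
    ({"supply": 0.8, "force_ratio": 0.7}, "defend"),
]

def pick_coa_by_examples(situation):
    # 1-nearest-neighbour over the two features: the crudest possible
    # "learning" from tagged data, but enough to make the point.
    def dist(example):
        tagged_situation, _ = example
        return (abs(tagged_situation["supply"] - situation["supply"]) +
                abs(tagged_situation["force_ratio"] - situation["force_ratio"]))
    return min(tagged_examples, key=dist)[1]

print(pick_coa_by_rules({"supply": 0.2, "force_ratio": 2.0}))
print(pick_coa_by_examples({"supply": 0.25, "force_ratio": 1.8}))
```

In both cases a human chose the inputs: the rule thresholds in the first agent, the tagged examples in the second. Garbage in either one, and you get garbage out.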

Regarding being ill-equipped to find new and unexpected ways of solving a problem, I would say that if we're looking at "first principles" systems, we already know we can be shocked and awed by what AI systems come up with. For instance, see Lee Sedol's remarks on the style of play he perceived AlphaGo to develop. For systems where a lot of knowledge is programmed in to determine the behaviour of the autonomous system, indeed, you won't get anything innovative, because you're not allowing the system any freedom to explore possibilities.

The real limitation is that the price of "innovative" AI can be prohibitive (it took a community effort, Leela Chess Zero, nearly a year to scrape together enough computing power to construct an AlphaZero-like system able to beat Stockfish). The state of the art simply isn't there yet to allow AlphaZero's success to be replicated on every problem we want/need to solve without significant effort.

For the first item, I am not sure how you could capture asymmetrical and cyber warfare in a wargame in a way that would put it out of reach of a "first principles" AI system without also making it out of reach for humans.

I was expecting to disagree more with Sabin. Very interesting, and many thanks for taking the time to put this together Tolstoi!