Learning / Neurological AI in Arma/Video games
#5
(02 Aug 20, 0656) B. De Jong Wrote: I think the biggest challenges with AI and machine learning in this context are data collection, reward functions, and probably also the search/state space.

First of all, the current state of the art shows that you need vast amounts of data to have any chance of creating an AI that is equal to, or smarter than, human players. For example, Google's AlphaGo (the AI that beat the world's best Go player multiple times) uses what is called deep learning. Deep learning builds many layers of abstraction, roughly mimicking the way humans learn, and AlphaGo combined it with reinforcement learning: repeatedly trying different approaches and keeping what worked best. AlphaGo played thousands, if not millions, of games against itself to learn the best moves in any situation. In an ArmA context: a self-learning AI of this kind requires tons of data, and to get that data the AI needs the ability to run simulations against itself, without human intervention. That way it can learn certain things on its own, for example that shooting through walls is hard while shooting through windows is easier. Of course the AI can also learn while we are playing against it, but that won't be fast enough, as there won't be enough data points (playing ArmA in real time simply takes too long).
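To make the self-play idea a bit more concrete, here is a rough Python sketch of the kind of loop AlphaGo-style systems use to generate their own training data. The `Policy` class, the `simulator` object, and the action names are purely hypothetical placeholders for illustration, not anything ArmA or Bohemia actually exposes.

```python
import random

class Policy:
    """Hypothetical learnable policy: picks an action given an observation."""
    def act(self, observation):
        # Placeholder: a real system would query a neural network here.
        return random.choice(["shoot_through_wall", "shoot_through_window", "reposition"])

    def update(self, episode):
        # Placeholder: a real system would run gradient updates on the
        # collected (observation, action, outcome) tuples.
        pass

def self_play(simulator, policy, num_games=10_000):
    """Generate training data by letting the policy play against itself.
    `simulator` stands in for a headless ArmA-style simulation with
    reset() and step(action) -> (observation, outcome, done)."""
    for _ in range(num_games):
        observation = simulator.reset()
        episode, done = [], False
        while not done:
            action = policy.act(observation)
            observation, outcome, done = simulator.step(action)
            episode.append((observation, action, outcome))
        policy.update(episode)   # learn from its own games, no human players needed
```

The point is simply that the data comes from the AI playing itself at far beyond real-time speed; waiting for human matches would never produce enough of it.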

Then there is the reward function. A reward function in machine learning basically determines whether one action is better than another. For example, if you have a road navigation AI and it arrives at a junction where it can go left or right, it has to make the correct decision. If the goal is to minimise the distance travelled, and going left results in a 10 km drive while going right results in a 15 km drive, then clearly going left offers the best reward (i.e. minimal distance). The way this reward function is defined influences the behaviour of the AI. If you minimise for time instead of distance, and going left is a 20 minute drive while going right is a 10 minute drive, the outcome would be completely different. In an ArmA context: for the AI to know what the best action is at any given point, it needs some kind of reward function. For example, going prone when under fire is worth 10 points, but just standing there and doing nothing is worth -5 points. Of course the number of actions (shooting, moving, taking cover, providing support, etc.) an AI can take at any point in time in ArmA is massive, and it somehow needs to quantify the reward for each of them. And then there is the fact that some actions do not offer an immediate reward. For example, when facing two enemies, do you shoot or take cover? You may kill one, but then you could sustain injuries from the other enemy. Taking cover means you kill neither enemy, but you stay alive longer (a delayed reward).
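As a toy illustration, here is what such a reward function could look like in Python, using the example numbers from above (+10 for going prone under fire, -5 for standing still), plus a discount factor to handle the delayed-reward problem. The state fields, action names, and point values are all invented for the example.

```python
# Toy reward function using the example values from the post.
# The state fields and numbers are invented purely for illustration.
def reward(state, action):
    if state["under_fire"] and action == "go_prone":
        return 10
    if state["under_fire"] and action == "stand_still":
        return -5
    if action == "take_cover":
        return 1           # small now, but keeps the soldier alive to score later
    if action == "shoot" and state["hit_enemy"]:
        return 20
    return 0

# Delayed rewards are usually handled with a discounted return: an action is
# judged by the sum of future rewards, each discounted by a factor gamma.
def discounted_return(rewards, gamma=0.95):
    total = 0.0
    for step, r in enumerate(rewards):
        total += (gamma ** step) * r
    return total

# Taking cover now (small reward) and getting a kill later...
print(discounted_return([1, 0, 20]))   # ~19.05
# ...can beat shooting immediately and being wounded by the second enemy.
print(discounted_return([20, -30]))    # -8.5
```

The discount factor is one common way to make "stay alive longer" worth something today, even though the payoff only arrives several decisions later.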

And finally, we come to the search/state space. An AI solves problems: it has a certain goal and tries to optimise its actions to achieve it. When playing Chess, each piece on the board moves in a certain pattern and the rules of the game are crystal clear. There is only a finite (though large) number of moves, and the collection of all possible moves is called the search space. This allows the AI to look ahead, calculate possible continuations, and determine which moves have the best outcome (i.e. which move has the highest reward). The more complex the game is, the more states are possible and the larger the search space becomes. In an ArmA context: there are so many actions an AI can take while out in the field, ranging from movement, to engaging enemies, to taking cover, to providing support, and so on. The rules of the game are not nearly as clear as in Chess or Checkers, for example. Because there are so many actions, the search space to find the best action at any point in time is massive, and searching it would require more computational power than any of our PCs could possibly have.
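A quick sketch of why this blows up so fast: even a drastically simplified action set grows exponentially with how far ahead you look. The four-action list and the `evaluate` function below are hypothetical, just to show the combinatorics of a naive exhaustive lookahead.

```python
from itertools import product

ACTIONS = ["move", "shoot", "take_cover", "call_support"]  # tiny, invented action set

def count_action_sequences(num_actions, depth):
    """Size of the search space for a fixed branching factor and lookahead depth."""
    return num_actions ** depth

# Even with only 4 actions per step, planning 10 decisions ahead already means
# over a million sequences; a realistic ArmA action set would be far larger.
print(count_action_sequences(len(ACTIONS), 10))   # 1048576

def best_plan(evaluate, depth):
    """Naive exhaustive lookahead: score every action sequence and keep the best.
    `evaluate` is a hypothetical function that scores a full sequence of actions."""
    best_score, best_seq = float("-inf"), None
    for seq in product(ACTIONS, repeat=depth):
        score = evaluate(seq)
        if score > best_score:
            best_score, best_seq = score, seq
    return best_seq, best_score
```

Real systems prune this with heuristics or learned value functions, but the underlying explosion is exactly why a per-soldier brute-force search is off the table.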

I agree with what Buxton said about having a more sophisticated AI acting as a commander. When looking at a tactical or even strategic level, the search space is reduced massively, as the number of possible actions is much smaller. There could be a self-learning AI that controls how squads/platoons/companies behave on a macro level (strategic, maybe even tactical). I think (deep) reinforcement learning could have potential there. But having this kind of AI for every soldier in the field would be too difficult at this point in time.
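For what the reinforcement-learning angle could look like at that macro level, here is a bare-bones tabular Q-learning sketch for a commander choosing among a handful of high-level orders. The action list and the `env` object are invented; a deep RL approach would replace the lookup table with a neural network, but the learning loop is the same idea.

```python
import random
from collections import defaultdict

# Invented macro-level orders a commander AI might choose between.
MACRO_ACTIONS = ["advance", "hold", "flank_left", "flank_right", "retreat"]

def q_learning(env, episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Bare-bones tabular Q-learning; `env` is a hypothetical strategic-level
    simulation with reset() and step(action) -> (state, reward, done)."""
    q = defaultdict(float)   # (state, action) -> estimated long-term reward
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:                       # explore
                action = random.choice(MACRO_ACTIONS)
            else:                                               # exploit current knowledge
                action = max(MACRO_ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max(q[(next_state, a)] for a in MACRO_ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```

With only five orders and a coarse strategic state, the table stays small enough to learn from simulated battles, which is exactly why the commander level looks more tractable than per-soldier learning.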

This topic is super interesting (I'm a researcher in a somewhat related field), but I think we are nowhere near having this type of self-learning AI in games as complex as ArmA. Games such as DotA and StarCraft have fixed rules and limited actions, and are not open-world sandboxes like ArmA.


Buxton Says:
Yeah, +1000 on what De Jong says; from what I know this is all correct. The future for video games will likely be reward-based, or analysing what players are doing and reacting to it. The simple way to put it is that having a brain for each agent would get laggy: 10 agents running the learning software made it run at 30 fps, so I am really interested to see what video games will do.
One of the use cases I think we will see is AI players for multiplayer sessions that learn and react, used to fill a low-population server so you can still enjoy the game without knowing the difference.
I think if it is pushed you could use it for anything in video games and for any agent; people just need to start using it.
My prediction is that the first game to use it will arrive around 2021 to 2022, due to new technology and all that lot.


