RE: Concerning the AI (Full Version)

All Forums >> [New Releases from Matrix Games] >> Armored Brigade





noooooo -> RE: Concerning the AI (11/18/2018 4:07:24 PM)

I think an easy solution is to add a "Seek cover on contact" SOP. The defend functionality already finds cover automatically, so all the SOP has to do is issue a "Defend" order automatically on contact, with the spotted enemy unit as the target.
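To make that concrete, here is a hypothetical sketch of the rule. All class and field names are invented for illustration; nothing here reflects Armored Brigade's actual code:

```python
from dataclasses import dataclass, field

# Hypothetical "Seek cover on contact" SOP: all names below are invented
# for illustration and do not reflect Armored Brigade's real code.

@dataclass
class Unit:
    seek_cover_on_contact: bool = True          # the proposed SOP toggle
    orders: list = field(default_factory=list)  # queued (order, target) pairs

    def on_contact(self, enemy_pos):
        # On first contact, reuse the existing Defend behaviour: issue a
        # Defend order aimed at the spotted enemy so the unit's normal
        # cover-seeking logic takes over automatically.
        if self.seek_cover_on_contact and not any(o == "Defend" for o, _ in self.orders):
            self.orders.append(("Defend", enemy_pos))

u = Unit()
u.on_contact((10, 4))   # enemy spotted at grid (10, 4)
print(u.orders)         # [('Defend', (10, 4))]
```

The point is that no new pathing or cover logic is needed; the SOP is just a trigger that hands control to the existing Defend order.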




Werewolf13 -> RE: Concerning the AI (11/18/2018 4:49:31 PM)


quote:

ORIGINAL: Rosseau

I am no fanboy, but 30 years and hundreds of wargames, still looking for the holy grail [;)]


I'm with ya there, Rosseau. Been searching for 39 years and haven't found that grail yet. After adding up the cost of that search I wonder if that's the way the game makers want it to be. A lot more money to be made in the search than in the finding.




Werezak -> RE: Concerning the AI (11/18/2018 6:00:43 PM)

A neural network is not going to produce a good AI for a wargame.

In order to have any success with a NN you would need to answer many of the difficult questions before you can even set up training. In other words you would have to create a good wargame AI first before you would be able to have a good wargame AI that incorporated NN. The same applies for genetic algorithms, reinforcement learning, or any other ML technique.
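To make this concrete, here is a purely illustrative skeleton (no real engine API) of what any RL setup would need; the stubs below are exactly the hard design problems that have to be solved first:

```python
# Illustrative skeleton of a wargame RL setup. Every stub below is one of
# the "difficult questions" that must be answered before training can even
# start; none of these names refer to any real engine API.

def encode_state(battle):
    """Turn a battle into a fixed-size representation: terrain, unit
    positions, spotting, morale, suppression... designing this encoding
    already amounts to designing the AI's entire worldview."""
    raise NotImplementedError  # hard design problem #1

def legal_actions(battle):
    """Enumerate orders: movement paths, targets, formations, per unit.
    On a continuous map this space is effectively unbounded."""
    raise NotImplementedError  # hard design problem #2

def reward(battle):
    """Score a position. A good evaluation function here is already most
    of a competent scripted AI."""
    raise NotImplementedError  # hard design problem #3

def training_step(battle, policy):
    # The ML part is the easy, well-understood loop around the stubs.
    state = encode_state(battle)
    action = policy(state, legal_actions(battle))
    return reward(battle)
```

The loop itself is trivial; everything that makes it work is the wargame-specific design sitting in the stubs.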

Please stop bringing up ML with regards to the game AI. The most productive thing to discuss would be the CONCEPTS used by people in the real world to conduct military operations that would apply to the game, as well as the CONCEPTS used by experienced players.




exsonic01 -> RE: Concerning the AI (11/18/2018 6:08:17 PM)

quote:

ORIGINAL: Werezak

A neural network is not going to produce a good AI for a wargame.

In order to have any success with a NN you would need to answer many of the difficult questions before you can even set up training. In other words you would have to create a good wargame AI first before you would be able to have a good wargame AI that incorporated NN. The same applies for genetic algorithms, reinforcement learning, or any other ML technique.

Please stop bringing up ML with regards to the game AI. The most productive thing to discuss would be the CONCEPTS used by people in the real world to conduct military operations that would apply to the game, as well as the CONCEPTS used by experienced players.

Maybe it was my mistake to bring ML into this post XD I'm sorry about all the hype. I feel responsible for derailing the discussion XD

But it is possible; I mentioned the SC example in a post above. However, it would not only need many conditions to be met, but also a huge amount of data and serious computational resources. So it would be more accurate to call it 'impractical' for this game, or for any other small game studio, at the current stage. I also agree that we should not discuss ML or NN in this thread any further.




Skybird -> RE: Concerning the AI (11/18/2018 6:34:18 PM)


quote:

ORIGINAL: Werezak

A neural network is not going to produce a good AI for a wargame.

In order to have any success with a NN you would need to answer many of the difficult questions before you can even set up training. In other words you would have to create a good wargame AI first before you would be able to have a good wargame AI that incorporated NN. The same applies for genetic algorithms, reinforcement learning, or any other ML technique.

Please stop bringing up ML with regards to the game AI. The most productive thing to discuss would be the CONCEPTS used by people in the real world to conduct military operations that would apply to the game, as well as the CONCEPTS used by experienced players.

Still, this is how Google's AlphaGo project created the strongest Go engine out there, one that plays against the human world elite - successfully. Backgammon was solved this way years ago as well. I think in the case of Go, Google's machine did not even know the basic rules, but simply had its core code observing the course of Go matches - MANY of them. The machine formed the patterns of the rules, and later the strategies, from these observations by itself. The rules were not "explained" to it or coded into it. It concluded them by itself, by watching matches.

If you want an AI to recognise an animal, you show the core algorithm managing the neural network pictures of that animal: in a thousand different situations, in all kinds of habitats, in all light and darkness conditions, in full display, in cover, in flight, sitting on the ground, in colour, in black and white. Hundreds of thousands of such pictures. At some point the algorithm has so many data sets for comparison that it will recognise that animal when its optics see it, and will do so under all circumstances.

That's the basic principle by which you teach a neural network: you let it observe an immense number of events, objects, or whatever it is about.

Assume you have such a neural network and a database with 3 million battles played in Armored Brigade, plus an interface by which the machine can observe these matches one by one, one after the other. You do not need to explain the rules to it, or what the options are. If the number of matches serving as demonstrators is big enough, it will conclude the rules and the options' meanings, and then successful tactics and strategies, by itself, due to the repetition of things. Just as Google's Go machine understood all by itself how Go is played, what the best strategy is, and how to calculate far enough ahead in moves to compete against world-class players - AND BEAT THEM.

As one neural network expert said some months ago in a documentary I saw on the matter: you teach a neural network by letting it watch a very high number of observations - in principle, that's it.

Simple, eh? [:D]
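A toy demonstration of that principle (nothing AB-specific; just a perceptron that is never told the rule "x + y > 1", only shown labelled examples, and recovers it from repetition alone):

```python
import random

# Learning purely from observations: a perceptron is shown 1000 labelled
# points and infers the hidden rule "is x + y > 1?" without ever being
# told it.
random.seed(0)
data = []
for _ in range(1000):
    x, y = random.random(), random.random()
    data.append(((x, y), 1 if x + y > 1 else 0))

w0 = w1 = b = 0.0
for _ in range(20):                    # repeated observation of the same data
    for (x, y), label in data:
        pred = 1 if w0 * x + w1 * y + b > 0 else 0
        err = label - pred             # feedback only, no explanation of rules
        w0 += 0.1 * err * x
        w1 += 0.1 * err * y
        b += 0.1 * err

acc = sum((1 if w0 * x + w1 * y + b > 0 else 0) == label
          for (x, y), label in data) / len(data)
print(acc)  # close to 1.0: the rule was inferred, never stated
```

Of course, scaling that from a one-line rule to a full battle is exactly where the disagreement in this thread lies.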




vorbarra2 -> RE: Concerning the AI (11/18/2018 7:56:57 PM)

That's true, but not useful. You can teach a computer to recognize a cat with that type of learning, and therefore handle any other type of image recognition, but that's a binary outcome: 'Is the image a cat?' is a yes/no question. In the case of Go, though the board state is complex, there are still only a finite number of states the game can exist in, and the only input space for the AI is 'where do I put the counter?'

AI for games with an essentially unbounded number of internal states (i.e. all computer game design as we know it) does not conform to this type of learning. The number of instructions the AI has to process is very large; it's not 'the counter goes here,' but many different types of instructions which don't map neatly to the binary outcome classification of machine learning. The best that can be achieved is a generalized algorithm, which probably outperforms the majority of human players (certainly if my record is anything to go by). The comments by the dev are trying to convey this point, which is basically 'so how would you express that in a symbolic, generalized fashion that would apply to all possible edge states and conditions and unit choices?'

Things like pathfinding and pathing, though, are more solvable, and I think the way in which units path could certainly be improved. It's not that obvious to me why they take the path they do, and the interface does a poor job of telegraphing what commands would be needed to ensure a certain type of movement. I can formulate the thought 'mech inf, stick to roads, get to the town fast, chuck out your passengers and take up defensive positions in or behind buildings,' but the pathing or formation commands never seem to make it easy to achieve this.
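As an illustration of how 'stick to roads' could fall out of the cost model rather than needing a special order, here is a toy Dijkstra search on a grid where road cells are simply cheaper. This is my own sketch, not how AB actually paths:

```python
import heapq

# Toy road-biased pathfinding: Dijkstra on a grid where entering a road
# cell ("R") costs 1 and open ground (".") costs 5, so routes naturally
# hug roads. Purely illustrative; not Armored Brigade's real pathing.
GRID = [
    "..R..",
    "..R..",
    "..R..",
    "..R..",
    "..R..",
]
COST = {".": 5.0, "R": 1.0}

def path_cost(start, goal):
    rows, cols = len(GRID), len(GRID[0])
    best = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > best.get((r, c), float("inf")):
            continue
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + COST[GRID[nr][nc]]
                if nd < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

print(path_cost((0, 2), (4, 2)))  # 4.0: four cheap road steps down the map
print(path_cost((0, 0), (4, 0)))  # 20.0: off-road column is 5x per step
```

The nice thing about this approach is that 'prefer roads' is just a weight, so the same search also answers 'cut cross-country if the detour is too long'.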

I wonder if the comments people are making about piecemeal advances are really an issue of tactical-level decision making, or simply poor pathing leading to formations getting broken up and encountered piecemeal? Does the AI have an internal metric for checking formation integrity?





noooooo -> RE: Concerning the AI (11/18/2018 8:30:28 PM)

quote:

ORIGINAL: vorbarra2

That's true, but not useful. You can teach a computer to recognize a cat with that type of learning, and therefore handle any other type of image recognition, but that's a binary outcome: 'Is the image a cat?' is a yes/no question. In the case of Go, though the board state is complex, there are still only a finite number of states the game can exist in, and the only input space for the AI is 'where do I put the counter?'

AI for games with an essentially unbounded number of internal states (i.e. all computer game design as we know it) does not conform to this type of learning. The number of instructions the AI has to process is very large; it's not 'the counter goes here,' but many different types of instructions which don't map neatly to the binary outcome classification of machine learning. The best that can be achieved is a generalized algorithm, which probably outperforms the majority of human players (certainly if my record is anything to go by). The comments by the dev are trying to convey this point, which is basically 'so how would you express that in a symbolic, generalized fashion that would apply to all possible edge states and conditions and unit choices?'

Things like pathfinding and pathing, though, are more solvable, and I think the way in which units path could certainly be improved. It's not that obvious to me why they take the path they do, and the interface does a poor job of telegraphing what commands would be needed to ensure a certain type of movement. I can formulate the thought 'mech inf, stick to roads, get to the town fast, chuck out your passengers and take up defensive positions in or behind buildings,' but the pathing or formation commands never seem to make it easy to achieve this.

I wonder if the comments people are making about piecemeal advances are really an issue of tactical-level decision making, or simply poor pathing leading to formations getting broken up and encountered piecemeal? Does the AI have an internal metric for checking formation integrity?




Took the words right out of my mouth. When you look at a Go board, say a 19x19 board, you have 361 possible places to put your stones, and fewer over time as the board fills. Even that creates an enormous number of possible permutations, which is why AlphaZero (a chess AI, although chess has far fewer permutations than Go) and AlphaGo are so impressive.

But if you look at something like AB or other computer games, the number of possible states outnumbers Go's by such a massive amount that it's not even close to comparable. The fact is, a tank pointing 56 degrees northeast is technically a different state than one where, everything else being the same, the tank points 57 degrees. And this is ONE tank and ONLY the direction it's facing. It's simply not even remotely on the same magnitude as something like Go, and Go is ALREADY a massive undertaking.
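Back-of-envelope numbers show the gap (these are my own illustrative figures, not anything measured from AB):

```python
import math

# Go: each of 361 points is empty, black, or white, so at most 3**361
# board states (an overcount, but the right order of magnitude).
go_digits = 361 * math.log10(3)

# One tank on a 1500 x 1500 m map at 1 m resolution, heading in whole
# degrees, 10 speed levels -- ignoring ammo, damage, morale, suppression.
one_tank = 1500 * 1500 * 360 * 10

# A 100-unit battle multiplies those possibilities per unit.
battle_digits = 100 * math.log10(one_tank)

print(round(go_digits))      # ~172: digits in the Go state count
print(round(battle_digits))  # ~991: digits even for this crude model
```

So even a heavily discretized toy version of a 100-unit battle has a state count hundreds of orders of magnitude beyond Go's.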

Using ML for something like this is not the equivalent of recognizing a cat or playing Go. It's like asking it to speak English to the level of an average human. There are so many contexts and concepts that there simply isn't enough computing power or resources to do it.




kevinkins -> RE: Concerning the AI (11/18/2018 9:09:58 PM)

As an avid chess player, I have followed the AlphaZero story in amazement. The AlphaZero AI beat the champion chess program after teaching itself in four hours, and it accomplished a similar thing with Go. I believe another AI has become very proficient at poker. Both chess and Go are complex IGOUGO games played one piece at a time. These milestones are really meant to demonstrate how far neural networks have come; I believe the technology behind these demonstrations is ultimately intended to tackle more important problems. That being said, it would be fascinating to see how a technology like AlphaZero tackles a basic ground wargame (infantry, light and heavy armor, and light and heavy arty) on a mixed-terrain map using a basic rule set. Don't give it 4 hours; give it a day or two. I have no idea how you would set up the training. Maybe a meeting engagement with map exit as the objective. Would the tactics developed mirror human tactics? I do not know. What rule set would be good to start with? Something from the Tiller genre, or simpler, like PanzerBlitz.

Kevin




vorbarra2 -> RE: Concerning the AI (11/18/2018 9:20:07 PM)

No, it's not an issue of time or training sets. The AI instruction sets for something like AB cannot map to the type of ML framework used by the Go or chess learning tools. It's a different type of computation trying to solve a different (limited, rigidly defined) type of problem.

What you are thinking of is, I think, more like 'what is the consensus-best flowchart we could generate and implement as a generic algorithm for the computer?' That is a different type of question from what the ML platforms achieve.




maxb -> RE: Concerning the AI (11/18/2018 9:46:34 PM)

seems to me the easy solution is to have multiplayer. the game seems cool enough and easy enough to pick up that multiplayer would be a no brainer.

it's definitely better than Steel Panthers, Close Combat, and FC:RS in my humble opinion. It's like a mix of those but superior.

nobody would whine about AI if there was MP




noooooo -> RE: Concerning the AI (11/18/2018 9:56:06 PM)


quote:

ORIGINAL: maxb

seems to me the easy solution is to have multiplayer. the game seems cool enough and easy enough to pick up that multiplayer would be a no brainer.

it's definitely better than Steel Panthers, Close Combat, and FC:RS in my humble opinion. It's like a mix of those but superior.

nobody would whine about AI if there was MP


Agreed.




exsonic01 -> RE: Concerning the AI (11/18/2018 9:57:06 PM)

Guys, let's stop talking about ML; it is not that easy or simple to use ML in the games industry. I know, and I introduced one example from SC, but that kind of effort would take a significant amount of computational resources and many man-hours from experienced researchers. Such a feature also requires a huge amount of data, which this game doesn't even have. That would be impossible for this studio, and probably for any other similar game studio. Like I mentioned, someday the 'holy grail of wargaming' will come with such an AI, but I'm not sure when that day will be.

So... let's forget about it in this thread and keep the discussion meaningful. I agree with Werezak: we need to look for concepts that would be easy and straightforward to implement in code, yet good enough to improve the AI's behavior.




exsonic01 -> RE: Concerning the AI (11/18/2018 9:59:40 PM)


quote:

ORIGINAL: maxb

seems to me the easy solution is to have multiplayer. the game seems cool enough and easy enough to pick up that multiplayer would be a no brainer.

it's definitely better than Steel Panthers, Close Combat, and FC:RS in my humble opinion. It's like a mix of those but superior.

nobody would whine about AI if there was MP

+1
Hopefully the devs introduce MP to this game later, when they have the time and manpower to work on it.




kevinkins -> RE: Concerning the AI (11/18/2018 10:41:27 PM)

I understand there is a long way to go even with the resources of the Pentagon, but DARPA is leaving no stone unturned. This is an interesting article on how they are treating battlefield positions as objects, whereby the machine is taught to distinguish good and bad positions and how to improve them based on resources and maneuver. This is not in AlphaZero's wheelhouse currently.

http://general-staff.com/computational-military-reasoning-tactical-artificial-intelligence-part-1/








