janh
Posts: 1216
Joined: 6/12/2007 Status: offline
quote:
ORIGINAL: Tarhunnas

I can't see that rounding errors should be a problem. In the case of WitE I would say that the sheer size of the task of playtesting a 200+ turn campaign is the real hurdle. Interesting that Herwin kicks off such a discussion with a very general comment in both the AE and WitE forums...

I would assume the greatest lack of games in this context stems from the huge effort that would be needed to create a very sophisticated AI. The development time would be enormous, and better AI quality alone doesn't sell a game, so earnings would not justify the needed investment. But AI can be refined incrementally: the AI in ARMA 2, for example, saw quite a few evolutions from OFP. Probably the future "War in Europe" AI will also come a long way from WitE.

AI can be improved systematically, and AI can be programmed to mimic the thinking process of a human. Obviously a good number of people in academia are researching this, along with people in industry working on everything from autonomous robots to games -- I don't think they would agree with Herwin that it is fundamentally impossible to do. It is rather a question of how, and most importantly: how to achieve it efficiently. Since the days of John von Neumann things have developed steadily, and our knowledge of both the brain and of computers, on which he based his conclusions in the fifties and sixties, has made fundamental progress since...

Logic (AI) can be represented in different implementations, some of them non-linear, though a thinking process can also be reduced to strictly linear logic. The recent successes of neural network methods in, for instance, quantum chemistry and molecular modeling come to mind: they help make lots of "smart" approximations in many-body problems, ignoring factors with little impact for a given required accuracy of the simulation result.
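To make the "smart approximation" idea concrete, here is a minimal, purely illustrative sketch (all names are mine, not from any real chemistry code) of the simplest form of it: summing a many-term contribution while discarding terms whose magnitude falls below the accuracy you actually need.

```python
# Hypothetical sketch: approximate a many-term sum by dropping
# contributions below a chosen accuracy threshold -- the crudest
# version of the "ignore factors with little impact" idea above.

def approximate_sum(terms, threshold):
    """Sum only the terms whose magnitude matters for the result.

    Returns the approximate total and how many tiny terms were skipped.
    """
    kept = [t for t in terms if abs(t) >= threshold]
    return sum(kept), len(terms) - len(kept)

terms = [1.0, 0.5, 1e-9, 0.25, -2e-10, 0.125]
total, skipped = approximate_sum(terms, threshold=1e-6)
# total is 1.875, within ~1e-9 of the exact sum, having skipped 2 terms
```

Real many-body codes do something far more sophisticated (the thresholding is adaptive and error-controlled), but the principle of trading negligible accuracy for large savings in work is the same.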
This is not exactly AI, but in principle you could interpret it as the code thinking about which contributions or steps are important, and which can be ignored. People in chemistry and physics (assisted by mathematics and computer science) have worked for many decades on such approximations, making problems that could theoretically be handled exactly but would require almost infinite computation time tractable without sacrificing too much accuracy. That includes issues like sparse matrix diagonalization, sparse number problems, rounding errors, and so on, over a large number of computational steps.

In linear logic things are less efficient to treat, but it allows easier examples: lots of nested "if/then/else" structures that follow a situation assessment the same way you do when sitting in front of your map. You count the units in your stack, look at morale and equipment, check the enemy counters in front, recall what you know about them from previous turns (allowing some inaccuracy and a margin for how the enemy could have changed through reinforcements and so on), check what other friendly counters are nearby, and assign each of your possible moves a "priority" or "usefulness" factor. Then you look at the next counter of your army and do the same, ideally until, like a chess player, you have checked all of your moves, and then pick the combination with the largest total "usefulness". Ideally, you'd propagate that over all the turns to the end. But evidently, neither the human brain nor present-day computers can do so by brute force, even for something with a comparably small parameter space like chess. Theoretically, though, you could.
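The per-counter assessment loop described above can be sketched in a few lines. This is a toy illustration, not how WitE's AI actually works: every name and number is hypothetical, the "if/then/else" evaluation is deliberately simplistic, and a greedy one-move-per-unit pass stands in for the combinatorial search that, as noted, nobody can brute-force.

```python
# Toy sketch of rule-based move scoring: each unit's candidate moves
# get a "usefulness" score from nested if/then/else checks, and a
# greedy pass picks the best-scoring move per unit.

def usefulness(unit, move, enemy_strength):
    """Score one candidate move for one unit (all rules invented)."""
    if move == "attack":
        # attack looks good only with a clear combat-power edge
        if unit["strength"] * unit["morale"] > enemy_strength * 1.2:
            return 10.0
        return -5.0  # risky attack
    elif move == "defend":
        return 3.0
    elif move == "withdraw":
        return 1.0 if unit["strength"] < enemy_strength else -1.0
    return 0.0

def pick_moves(units, enemy_strength):
    """Scan the counters one by one, keeping each unit's best move."""
    plan = {}
    for unit in units:
        best = max(("attack", "defend", "withdraw"),
                   key=lambda m: usefulness(unit, m, enemy_strength))
        plan[unit["name"]] = best
    return plan

units = [{"name": "1st Div", "strength": 9, "morale": 0.9},
         {"name": "2nd Div", "strength": 3, "morale": 0.6}]
plan = pick_moves(units, enemy_strength=5)
# the strong division attacks, the weak one digs in and defends
```

Scoring units independently like this is exactly the "linear" shortcut the paragraph describes: it is cheap, but it misses combined moves whose joint usefulness exceeds the sum of the individual picks.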
Thus, one issue with programming an AI, and as far as I understand it the most critical one, is to find implementations that are more efficient than linear structures, and that use suitable structures and approximations to eliminate the "overhead" that matters little for a sufficiently good result (for example the grid representation of how the AI sees the map that Bletchley_Geek mentioned). Otherwise you cannot treat complex problems, with many variables entering each decision, in finite time with a reasonable result.

Whether rounding errors end up being significant is a somewhat different matter, and also depends on implementation specifics. If a game (or AI) only needs to keep meaningful information from one step (turn) to the next, and no longer, rounding errors, even if ignored, could end up averaging out statistically. Perhaps Herwin can give a more specific example to illustrate what he had in mind...

Another interesting factor that comes to mind is that the human brain is also error-prone, so an AI that realistically mimics a human should be too. Otherwise, why is it that if you take a very large group of people, feed them the same information, and give them the same task requiring no other input such as personal experience (even a simple task like a small mathematical test), you will find a few "statistical" erroneous results among a large number of correct ones? So perhaps in some applications you actually want something that deliberately introduces errors into your program...
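The claim that per-turn rounding errors can average out statistically is easy to illustrate with a small simulation. This is a generic sketch under an assumed model (an independently drawn per-turn quantity, rounded to the nearest integer each turn), not a statement about how WitE actually stores its values.

```python
# Hypothetical illustration: if each turn's value is rounded to the
# nearest integer independently, the per-turn errors are roughly
# symmetric around zero, so their mean stays small even though any
# single error can be as large as 0.5.

import random

random.seed(0)  # reproducible run
errors = []
for turn in range(10_000):
    exact = random.uniform(0, 100)   # some per-turn quantity
    stored = round(exact)            # the game keeps only the rounded value
    errors.append(stored - exact)    # this turn's rounding error

mean_error = sum(errors) / len(errors)
# individual errors reach +/-0.5, but the mean is close to zero
```

The caveat, and presumably Herwin's worry, is that this only holds when errors are independent and unbiased; systematic rounding (always down, say) or errors that feed back into next turn's state accumulate instead of cancelling.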
< Message edited by janh -- 8/31/2011 11:10:49 AM >