herwin
Posts: 6059
Joined: 5/28/2004 From: Sunderland, UK Status: offline
Smart AI is very hard. I'm currently the house neuroscientist for an intelligent systems research group, and our interest is in how to use models of biological intelligence to solve that problem. The purpose of brains is to control behaviour. To do that, the brain has to be able to judge outcomes.

One approach is caching previous outcomes--this is how habit works and how actor-critic models work. The problem is that the state space containing your cached values is enormous. Another approach is predicting future outcomes. That requires an internal model of the external process--yet again something that tends to be enormous. On the other hand, there's good evidence that goal-directed behaviour works this way, since modifying the goal value changes the behaviour. I strongly suspect the internal model is solved in parallel, since the solution is found very quickly in mammalian brains, and the response to goal value changes is also very fast. We don't know how to model this yet in silicon.

My own suspicion is that the goal values used in behaviour are found using both methods, and an averaged value is actually used, with the two sides weighted by the reliability of their predictions.

In game terms, the current approach of scripting is probably a good starting point, but the scripts should be supplemented by tactical look-ahead, and the value of actions should be a weighted average of the expected values of the scripts and the predicted values of the look-ahead. Maintain variability measures and current values associated with scripts and look-aheads, so that as the game continues, the AI converges to a better estimate of the current value of actions.
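A minimal sketch of that last idea, assuming we track each estimator's prediction-error variance with Welford's online algorithm and weight each source by inverse variance (precision). All class and method names here are illustrative, not from the post:

```python
class RunningStats:
    """Welford's online algorithm for mean and variance of prediction errors."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # Before we have data, assume unit variance so both sources start equal.
        return self.m2 / (self.n - 1) if self.n > 1 else 1.0


class ActionValueEstimator:
    """Combine a cached (script) value with a look-ahead (predicted) value,
    weighting each by the reliability (inverse variance) of its past errors."""

    def __init__(self):
        self.errors = {"script": RunningStats(), "lookahead": RunningStats()}

    def record_outcome(self, source, predicted, observed):
        """After an action resolves, log how far off each source's estimate was."""
        self.errors[source].update(observed - predicted)

    def combined_value(self, script_value, lookahead_value):
        """Precision-weighted average: the more reliable source gets more weight."""
        w_s = 1.0 / max(self.errors["script"].variance, 1e-6)
        w_l = 1.0 / max(self.errors["lookahead"].variance, 1e-6)
        return (w_s * script_value + w_l * lookahead_value) / (w_s + w_l)
```

As play continues, `record_outcome` shifts the weights: if the look-ahead keeps predicting outcomes accurately while the script's cached values drift, the combined estimate converges toward the look-ahead, and vice versa.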
_____________________________
Harry Erwin "For a number to make sense in the game, someone has to calibrate it and program code. There are too many significant numbers that behave non-linearly to expect that. It's just a game. Enjoy it." herwin@btinternet.com