I remember reading that, but it seemed to drift off-topic very quickly, because it's easier to talk about zombies than to figure out how to make them work. Adaptive FSMs are a good way of programming your AI, I agree, but the AI still can't truly 'learn', because by definition it is restricted to the states the FSM defines. For an AI that can truly learn, I believe you would need an artificial neural net (ANN), programmed from the start with the actions of the would-be FSM. That way, the AI can readjust the weights of its synapses and will gradually shape itself into the perfect enemy for that particular player. An ANN also has an edge over an FSM because it can form associations between neurons, and it should be relatively easy to code pattern recognition into one, leading to a much more realistic AI.
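To make that concrete, here's a minimal sketch in Python (standing in for GML) of the seeding idea: the would-be FSM's states become the action list, every action starts with an equal weight, and wins and losses against the player nudge those weights. All the names here are hypothetical, and a single weight per action is a one-cell stand-in for a full net.

    import random

    # Hypothetical sketch: seed the 'net' with the would-be FSM's actions,
    # then nudge the weights toward whatever works against this player.
    ACTIONS = ["chase", "flank", "retreat", "ambush"]  # the FSM's old states

    weights = {a: 1.0 for a in ACTIONS}  # start unbiased, like the plain FSM

    def pick_action():
        # weighted random choice, so weak actions still get retried sometimes
        total = sum(weights.values())
        r = random.uniform(0, total)
        for action, w in weights.items():
            r -= w
            if r <= 0:
                return action
        return ACTIONS[-1]

    def reinforce(action, succeeded, rate=0.1):
        # strengthen what worked on this player, weaken what didn't,
        # clamping so no action ever becomes completely impossible
        weights[action] = max(0.05, weights[action] + (rate if succeeded else -rate))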
If anyone is interested, a few topics down from here Sinaz made one about FSMs (Zombie Jim) where a few others and I were discussing methods for adaptive FSMs by giving "memory" arrays to the AI. The idea gives every object and action a reward/punishment score that determines its likelihood of being noticed by the AI, roughly like the sketch below.
It might be a read some here are interested in.
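For anyone who doesn't want to dig up that topic, the gist as I understood it looks something like this (a hypothetical Python sketch, not Sinaz's actual code):

    # Every object the AI can perceive gets a salience score; rewards raise
    # it, punishments lower it, and the AI only "notices" things above a
    # threshold, so its attention adapts to what has mattered before.
    memory = {}  # object name -> learned salience score

    NOTICE_THRESHOLD = 0.5

    def observe(obj, reward):
        # reward > 0 when dealing with obj helped the AI, < 0 when it hurt
        memory[obj] = memory.get(obj, 0.5) + reward

    def noticed_objects(visible):
        # the AI filters the world through its learned salience
        return [obj for obj in visible if memory.get(obj, 0.5) >= NOTICE_THRESHOLD]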
Ahh, very nice way to put it, and it leads to the next part of my pitch. The human brain works with neurons, albeit a couple hundred trillion more than a computer can handle, but neurons nonetheless. I definitely agree that AIs are supposed to replicate human errors and simulate a human player. Obviously, a human player will learn from their mistakes and eventually figure out an optimal solution, and a properly programmed ANN will simulate that same interaction of neurons. Now, many programmers may think that a neural net takes way too much computational power to be put into a game, but I disagree. For a semi-realistic AI, all you would need is a 15*15 array or ds_grid. That's only 225 checks per step, an easily manageable number. And since the ANN model would most likely not be in a sandbox environment, it's easy to cut that number down even more. As a test, I ran a 4*4 ANN AI that would learn what words meant and follow commands given as strings. The AI learned 4 commands in an average of 86 trials, sometimes as few as 48. The game played at 30/30 FPS, meaning the whole process took around 3 seconds on average.
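For scale, here is that cost argument as a sketch (Python standing in for a ds_grid; the layout is hypothetical): a full scan of a 15*15 weight grid is one tiny loop per step.

    GRID = 15
    weights = [[0.0] * GRID for _ in range(GRID)]  # the whole "brain": 225 cells

    def strongest_cell():
        # one full pass over the net: 225 reads and comparisons, nothing more
        best, best_w = (0, 0), float("-inf")
        for i in range(GRID):
            for j in range(GRID):
                if weights[i][j] > best_w:
                    best, best_w = (i, j), weights[i][j]
        return best  # index of the strongest input->action association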
To make a good flawed AI you need to make sure of two things: that the AI interacts with the game world the same way you do, and that the AI emulates human behavior.
I understood the concept in your platform jump analogy, but I think the method you described is a poor one.
Assuming it's an average, ordinary platformer, the most likely reason a human "falls to their doom" is not that they didn't jump high enough, but that they didn't jump far enough: most players fall off a cliff because they waited too long to press the jump button. So I think that would be a better way of emulating a real person.
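So rather than randomizing jump strength, I'd randomize the timing. A hedged sketch (hypothetical numbers, Python standing in for GML): at 30 FPS, adding 0-8 frames of delay before the jump press gives up to roughly a quarter second of human-like hesitation, which will occasionally carry the AI right off the edge.

    import random

    def frames_until_jump(distance_to_edge, speed):
        # last frame on which pressing jump still clears the gap
        ideal = int(distance_to_edge / speed)
        # the "waited too long" part: a small random reaction delay
        reaction_delay = random.randint(0, 8)
        return ideal + reaction_delay  # anything past ideal means a fall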
Another thing that absolutely destroys any semblance of a realistic human AI is when the AI does not follow the same rules as a human. For example, in a shooting game, the AI does not aim like a real human, and it moves along waypoints. To seem convincing, the AI would have to use the same four-directional movement as the player instead of waypoints.
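One way to enforce that rule, sketched hypothetically in Python: route the AI through the exact same input state the keyboard writes to, so it physically cannot move in any way the player can't.

    class Inputs:
        # the one shared control surface for both the human and the AI
        def __init__(self):
            self.left = self.right = self.up = self.down = False

    def player_controls(inputs, keys_down):
        inputs.left = "Left" in keys_down
        inputs.right = "Right" in keys_down
        inputs.up = "Up" in keys_down
        inputs.down = "Down" in keys_down

    def ai_controls(inputs, dx, dy):
        # the AI may only "press keys"; no gliding between waypoints
        inputs.left, inputs.right = dx < 0, dx > 0
        inputs.up, inputs.down = dy < 0, dy > 0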
So, to finish my pitch: ANN models work not only to simulate learning and the average human player, but errors can easily be incorporated even after the brain has been perfected (i.e., after those 3 seconds in my ANN test) by playing with the error/success impact. If the AI chooses an action that is incorrect, it not only lowers the weight of that neuron but also lowers the weights of the surrounding neurons by a smaller value; when it gets it right, the same idea applies with a positive addition to the weights.
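That update rule, as a final hypothetical sketch (Python in place of a ds_grid): the chosen cell takes the full penalty or reward, and its neighbors take a smaller share of the same sign.

    GRID = 15
    weights = [[0.0] * GRID for _ in range(GRID)]

    def update(i, j, correct, impact=1.0, spill=0.25):
        # full adjustment for the chosen neuron, same sign either way
        delta = impact if correct else -impact
        weights[i][j] += delta
        # a smaller nudge for every in-bounds neighbor
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < GRID and 0 <= nj < GRID:
                    weights[ni][nj] += delta * spill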