
Making enemies move and shoot: An A.I. Primer part 1

In a more extreme example, bots in a shooter don’t have killing the player as their first priority (except bosses). They want to get past the player, and possibly kill them in the process. I’d imagine that from their perspective, a shooter is more like a massively multiplayer game of Frogger than anything else: one where they are heavily outgunned (at least on an individual level) and facing an enemy who has just brought down thousands of their comrades. It would make sense, from their perspective, to modify their attack patterns based on which ones worked and which didn’t. Or, if that didn’t enhance the game, having the last few in a group realize that killing the player is hopeless and break off into evasive maneuvers, trying to make it off the opposing edge of the screen, could work well.
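To make that a little more concrete, here is a minimal sketch (mine, not from any particular game) of both ideas: enemies in a wave weight their attack patterns by how often each one has actually hit the player, and the last couple of survivors give up on attacking and run for the far edge of the screen. The pattern names and thresholds are invented for illustration.

import random

class Wave:
    """Shared state for one wave of enemies."""
    def __init__(self, size):
        self.survivors = size
        self.pattern_hits = {}   # pattern name -> times it damaged the player; updated elsewhere when a shot connects

class WaveEnemy:
    PATTERNS = ["strafe_run", "dive_bomb", "spread_fire"]   # illustrative names only

    def __init__(self, wave):
        self.wave = wave
        self.mode = "attack"

    def choose_pattern(self):
        # Weight each pattern by 1 + the number of times it has hit the player,
        # so patterns that have worked get chosen more often.
        weights = [1 + self.wave.pattern_hits.get(p, 0) for p in self.PATTERNS]
        return random.choices(self.PATTERNS, weights=weights, k=1)[0]

    def update(self):
        # Once only a couple of the wave remain, killing the player is hopeless:
        # switch to evasion and try to make it off the far edge of the screen.
        if self.wave.survivors <= 2:
            self.mode = "evade"
        if self.mode == "attack":
            return ("attack", self.choose_pattern())
        return ("evade", "exit_far_edge")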

Hide in shadows failed?! Broad daylight or not, this is bullshit. Imoen has 60,000 points in hide in shadows!

This area is often neglected by game developers, which results in bots that ignore obvious solutions to their in-game problems, or that react in ways that are supernatural or otherwise inexplicable even within the framework of the game. Monsters wander aimlessly with no prey in sight, and partners teleport to our side when pathfinding fails. I’ll deal more with those aspects, however, in the still-to-come sections on the current state of A.I.

The last area where A.I. is important is aesthetics. This one really goes unnoticed most of the time, because if it’s done well it’s usually done subtly, and it just adds the right emotional feel without requiring any notice or thought on the part of the player. Aesthetic A.I. is anything that adjusts the look, sound, or feel of the game in a way that mimics things done by humans in films/comics/technical drawings/music mixing/any visual or auditory art. If done properly this works well with immersive A.I. techniques by heightening emotional responses to in-game action.

Aesthetic A.I. is often a very different application of A.I. from the ones mentioned above, because it works to make games less realistic in carefully chosen ways. This is seen in non-photorealistic smart graphics, creative camera control, and dynamically controlled music and sound effects.

Smart graphics (the application of A.I. to the rendering process) allow the environment to react to and complement the in-game situation, drawing attention to aspects of it that are helpful in-game, or simply enhancing the emotional impact of the visuals being displayed. Slight visual distortion, adjustments to color saturation, and changes in the way the rendering engine displays textures can all increase the sense of doom when the player is facing an enemy bot that they should be afraid of.

A slight clipping problem in Oblivion diminishes immersion.

Similarly, when the player is comparatively safe, other visual effects can make the world around them seem a bit brighter, more open, and less dangerous. This can be done statically, by location, but it is often more effective when the effects are based on multiple factors, such as level, location, nearby enemies, and where the character is in the game.
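As a rough sketch of how that might work, the snippet below derives a single “threat” value each frame from a few of those factors and maps it onto rendering parameters, draining color and tightening the vignette as danger rises and relaxing them when the player is safe. The factor weights and parameter names are made up; a real engine would expose its own controls.

def render_mood(nearby_enemies, player_health, in_safe_zone):
    # Rough threat estimate in [0, 1]: more enemies and lower health mean more threat.
    # player_health is assumed to be normalized to the range 0..1.
    threat = min(1.0, 0.2 * len(nearby_enemies) + 0.5 * (1.0 - player_health))
    if in_safe_zone:
        threat *= 0.3                            # safe areas damp the effect

    return {
        "color_saturation": 1.0 - 0.6 * threat,  # drain color as danger rises
        "vignette_strength": 0.1 + 0.7 * threat, # darken the screen edges
        "distortion": 0.05 * threat,             # slight warp under heavy threat
    }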

Camera control in 3D games is another aesthetic application of A.I., though it overlaps greatly with gameplay and mechanics. Going beyond simply following the character and ensuring that the view is not obstructed opens up possibilities for intelligent camera placement. Drawing on expertise from cinema, cameras can frame a scene in many different ways, each of which affects the impact it has on the viewer. Good choices about what to focus on, and about when to pan, tilt, and zoom, all help the viewer experience the emotions a scene is trying to evoke, and as with smart graphics, this is potentially most effective when it is chosen on the fly rather than set by location during the game’s creation.
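A toy version of that kind of on-the-fly shot selection might look like the function below, which picks a framing from scene context instead of always trailing the player. It is purely illustrative; a real camera system would also handle occlusion checks and smooth transitions between shots.

def choose_camera(player_pos, enemies, boss_pos=None):
    if boss_pos is not None:
        # Frame the player and the boss together, like a duel shot in a film.
        return {"shot": "two_shot", "targets": [player_pos, boss_pos], "zoom": 0.8}
    if len(enemies) > 5:
        # Pull back to a wide shot so the player can read the whole crowd.
        return {"shot": "wide", "targets": [player_pos], "zoom": 0.5}
    # Default: a close follow shot behind the player.
    return {"shot": "follow", "targets": [player_pos], "zoom": 1.0}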

Finally, similar aesthetic control can be applied to music and sound. Custom-composed music that fits the actions of the character – both how they are doing and what they are doing – can not only set the tone but also convey information to the player about their own status, or the status of an enemy or the environment. The same applies to sound effects. It is not realistic, but altering the sound of an engine to make it seem more frantic and desperate as escape looks less likely will dramatically increase the tension in a scene, and make it far more memorable to the player.
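That engine example could be sketched along these lines: a “desperation” value built from how close the pursuer is and how hard the engine is being pushed, mapped onto pitch and volume. The names and weights are invented for illustration, but most audio middleware exposes similar controls.

def engine_audio_params(pursuer_distance, player_speed, max_speed):
    # Desperation rises as the pursuer closes in and the engine nears its limit.
    closing = max(0.0, 1.0 - pursuer_distance / 100.0)   # 0 when far away, 1 when right behind
    strain = player_speed / max_speed
    desperation = min(1.0, 0.6 * closing + 0.4 * strain)

    return {
        "pitch": 1.0 + 0.5 * desperation,    # the engine whines higher as things get dire
        "volume": 0.7 + 0.3 * desperation,
        "tremolo": 0.2 * desperation,        # a slight warble for a frantic feel
    }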

In the next installments, I’ll talk about the evolution, current state, and possible future of video game A.I., and what that means to players.

3 Comments
Christian
17 years ago

A fine article right here, with a couple of areas I’d never thought about before. Can’t wait for the next part.

One of the interesting things I’ve read about in my own research into game AI is why the field is so much less advanced than it could be. It seems to be a combination of:

a) Developers not wanting to “waste” CPU cycles that could go into rendering on something like AI routines. Shiny graphics sell games.

b) Developers seem to cling to certain technologies rather than play around with something new. Half-Life 1 used a version of the A* algorithm for pathfinding, yet it seems that people are still using it to this day. Black and White tried to experiment with a basic form of machine learning, but the concept has been pretty much abandoned. There are a lot of great AI techniques being developed in academia, but it seems to take a very long while for them to find their way into games. I’ll never forget one research paper I read in which a professor used finite state machines and rule-based systems to produce a very good bot for Quake 2. He ran his bot on a separate, junky old computer that communicated with the game over sockets. If this can be done on such old hardware, then there is no reason it can’t be done on modern equipment (that is, unless you really need to implement that ray tracer…).

Stefan
17 years ago

I agree, it seems to fall by the wayside very often. Honestly, when playing FPS deathmatch-type games, I don’t get the impression that enemies are really much more advanced than reaper bots were back in the day. If you ever look at the code for the Quake 1 reaper bots, it’s really pretty simple, and it ran fine on an old 486, leaving massive amounts of processing power free on today’s machines. But instead of using it, we continue with simple state searches and branching conversation trees.

I think for the most part it’s not technical limitations so much as a lack of interest on the part of developers, but that’s something I should probably save for the next couple of articles :)

Christian
17 years ago

I look forward to it!