This is the last lecture in the series that I presented at the University of Strathclyde in 2011/2012 as part of the final-year AI course.
In this lecture I rehash the fundamental differences between Game AI and the traditional AI that has been taught in previous courses. It also includes a (frankly time-filling) section called the "Brain Dump" where I briefly touch on a bunch of things I was thinking about at the time.
2. What is Game AI?
• Game AI is not about:
‣ Beating the player
‣ Optimality
‣ “making the bad guys shoot”
3. What is Game AI?
• Game AI is about:
‣ Delivering great experiences to players
‣ Building immersive environments
‣ Automating the functions of a human “Dungeon Master”
5. Batman - Arkham Asylum
• A classic example of the game AI challenge
• We could coordinate the thugs, use small-unit tactics etc.
• Make use of existing AI techniques to make intelligent, successful thugs.
• Is this the point?
7. Batman - Arkham Asylum
• The overall aim is to make a Batman “simulator”.
• Make the player feel like they ARE Batman.
• Recall “Losing with Style”
‣ Epic brawls
‣ Batman can generally win if he’s smart
• Player is entertained and immersed in the world of Batman.
8. The “Odd Sock Drawer”
Stuff That Didn’t Fit In Anywhere Else
AKA “The Brain Dump”
9. Emergence
• Emergence is the term for “interesting” behaviour arising from seemingly simple rules.
• Recall L-Systems from last week (a quick reminder is sketched below)
‣ Basic rules: draw, turn left, turn right
‣ Complicated patterns emerged from the interaction of these rules
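A minimal Python sketch of the idea, as a reminder of how L-Systems work. The single rewrite rule here is a Koch-curve-style example (F = draw forward, + and - = turn), not a rule set from the earlier lecture; the point is just how fast one line of rules turns into a complicated drawing program.

# L-System reminder: one rewrite rule, applied repeatedly, generates a long
# sequence of draw/turn commands. F = draw forward, + = turn left, - = turn right.
rules = {"F": "F+F-F-F+F"}   # Koch-curve-style rule, purely illustrative

def expand(axiom, iterations):
    """Rewrite every symbol according to `rules`, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

print(expand("F", 1))        # F+F-F-F+F
print(len(expand("F", 4)))   # 1249 symbols after only four passes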
10. Reynolds Steering
• Three simple rules for steering
‣ Separation - avoid crowding local units
‣ Alignment - steer towards the average heading of local units
‣ Cohesion - steer towards the average position of local units
• From these three rules we can simulate “flocking” behaviour (see the sketch below)
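A minimal sketch of the three rules in Python. The function name, neighbour format and weights are illustrative assumptions, not Reynolds’ original code; each rule contributes one vector and the weighted sum becomes the boid’s steering force.

import math

def steer(boid_pos, boid_vel, neighbours, sep_w=1.5, ali_w=1.0, coh_w=1.0):
    """Combine separation, alignment and cohesion into one steering vector.

    `neighbours` is a list of (position, velocity) pairs for nearby boids;
    the weights are illustrative, not tuned values from any real game.
    """
    if not neighbours:
        return (0.0, 0.0)

    n = len(neighbours)
    sep = [0.0, 0.0]      # push away from neighbours that are too close
    avg_vel = [0.0, 0.0]  # average heading of the local flock
    avg_pos = [0.0, 0.0]  # centre of the local flock

    for (px, py), (vx, vy) in neighbours:
        dx, dy = boid_pos[0] - px, boid_pos[1] - py
        dist = math.hypot(dx, dy) or 1e-6
        sep[0] += dx / dist
        sep[1] += dy / dist
        avg_vel[0] += vx / n
        avg_vel[1] += vy / n
        avg_pos[0] += px / n
        avg_pos[1] += py / n

    alignment = (avg_vel[0] - boid_vel[0], avg_vel[1] - boid_vel[1])
    cohesion = (avg_pos[0] - boid_pos[0], avg_pos[1] - boid_pos[1])

    return (sep_w * sep[0] + ali_w * alignment[0] + coh_w * cohesion[0],
            sep_w * sep[1] + ali_w * alignment[1] + coh_w * cohesion[1])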
12. Cellular Automata
• A CA is a grid-based representation
• Individual cells are “off” or “on”
‣ Cells can actually take a range of values, but are usually binary.
• An iterative update rule determines each cell’s state at a given step
• Most famous ruleset is Conway’s Game of Life
13. Conway’s Game of Life
• Under-population: any live cell with fewer than two live neighbours dies
• Overcrowding: any live cell with more than three live neighbours dies
• Reproduction: any dead cell with exactly three live neighbours becomes a live cell
• Survival: any live cell with two or three live neighbours lives on (the rules are sketched in code below)
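The four rules above fit in a few lines of Python. This is a sketch using a plain set of live-cell coordinates rather than any particular grid library: a live cell survives with two or three neighbours, and an empty cell with exactly three neighbours comes to life.

from collections import Counter

def life_step(live_cells):
    """One generation of Conway's Game of Life; `live_cells` is a set of (x, y)."""
    # Count how many live neighbours every candidate cell has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Reproduction (exactly 3) or survival (2 or 3 for an already-live cell);
    # everything else dies from under-population or overcrowding.
    return {cell for cell, count in neighbour_counts.items()
            if count == 3 or (count == 2 and cell in live_cells)}

blinker = {(0, 1), (1, 1), (2, 1)}   # a horizontal line of three live cells
print(life_step(blinker))            # flips to a vertical line: (1,0), (1,1), (1,2)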
16. CA and the Universe
• There are some people who believe that CA emergence reflects the nature of our universe.
• The universe is claimed to be a Turing Machine.
• Through the property of Turing Completeness, any Turing Machine can replicate the behaviour of another.
• CAs can act as Turing Machines (some rulesets are Turing Complete).
‣ Some researchers claim to be discovering laws of our universe by analysing CA systems
17. CAs for Games
• CAs are of interest in the context of Game AI
• Can be used to generate “particles”
‣ Think confetti
• Particle systems are used for all sorts of things in graphics
‣ Smoke, vapour, dust, leaves, sparkles, explosions, flames
• Rather than defining a complex particle system, use a lightweight CA definition and a start state (see the sketch below)
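As a sketch of the idea only (not code from any particular engine), here is a tiny continuous-valued CA in Python that behaves like rising smoke or flame: “heat” is injected along the bottom row as the start state, and each update averages the cells below and decays, so the effect drifts upward and fades with no per-particle bookkeeping. The grid size and decay constant are arbitrary.

import random

WIDTH, HEIGHT, DECAY = 16, 8, 0.05

def fire_step(grid):
    """One CA update: each cell becomes the decayed average of the three cells below it."""
    new = [row[:] for row in grid]
    for y in range(HEIGHT - 1):                    # every row except the bottom
        for x in range(WIDTH):
            below = grid[y + 1]
            avg = (below[(x - 1) % WIDTH] + below[x] + below[(x + 1) % WIDTH]) / 3.0
            new[y][x] = max(0.0, avg - DECAY)
    # The start state: keep injecting random "heat" along the bottom row.
    new[HEIGHT - 1] = [random.random() for _ in range(WIDTH)]
    return new

grid = [[0.0] * WIDTH for _ in range(HEIGHT)]      # y = 0 is the top row
for _ in range(20):
    grid = fire_step(grid)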
18. Emergence
• Emergence allows us to define complex systems simply.
• Diverse behaviour can be exhibited
‣ Novel, unexpected behaviour
• Loss of directorial control when AI becomes unpredictable.
19. Traditional vs Game AI
• Something I hear on a fairly regular basis is how bad AI in games is.
• “It’s not intelligent”
• “You could do this with <insert algorithm>”
• We’ve talked at length about the different motivations of Traditional AI and Game AI
20. Processing Time
• One of the big differences between the two is the amount of CPU power available.
• When we talk about traditional AI systems, the AI is typically all that the computer is doing.
• Our AI systems need horsepower.
• Games need horsepower too.
‣ Graphics, physics, networking.
‣ AI may get around 1 ms of CPU time per frame - less than 10% of the frame
21. Memory Usage
• Memory is another big factor.
• Consider the kind of search trees we’ve been talking about.
‣ Combinatorial explosion
‣ Tracking massive numbers of states
• Memory required for non-trivial planning quickly hits multiple gigabytes.
‣ For a single planning problem
22. Debugging AI Systems
• Lots of our AI systems are randomised
• How can we accurately test behaviour if it relies on random chance?
• We talked previously in the Poker context about how to overcome randomness
‣ Many repetitions to minimise variance
‣ Find a way to rig the randomness
23. Pseudo-Random Behaviour
• We know that computers cannot generate truly random numbers.
• What they actually use are complicated functions initialised with a “seed” number.
• We can generate the same number sequence if and only if:
‣ We know the function
‣ We know the seed (demonstrated in the sketch below)
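A quick Python demonstration: two generators built from the same function and the same seed produce exactly the same “random” sequence.

import random

a = random.Random(1234)   # same function (Python's Mersenne Twister)...
b = random.Random(1234)   # ...and the same seed

print([a.randint(0, 99) for _ in range(5)])
print([b.randint(0, 99) for _ in range(5)])   # the identical five numbers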
24. Car Key
• At a very basic level, this is how many cryptographic techniques work.
• Consider a remote car key
• Lots of keys share the same frequency
‣ Need to distinguish our key/car pair
• Thieves could intercept the unlock command if it were broadcast in the clear.
‣ Replay Attack
25. Rolling Code
• We get around this using a pair of pseudo-random number generators, one in the lock and one in the key.
‣ Synchronised by having the same seed number
• Both lock and key know what the next number in the sequence is.
• If a thief intercepts a code, that code is now “old”
‣ The thief would need to predict what the next number is
‣ Lots of extra machinery makes this difficult/impossible (a toy version of the basic idea is sketched below)
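A toy Python illustration of the shared-seed idea only: real rolling-code systems use proper cryptographic primitives and a resynchronisation window, none of which is modelled here.

import random

class RollingCode:
    """Fob and car share a seed, so they agree on the next code in the sequence."""
    def __init__(self, seed):
        self._rng = random.Random(seed)

    def next_code(self):
        return self._rng.randint(0, 2**32 - 1)

SHARED_SEED = 987654321                  # agreed when the key is paired with the car
fob = RollingCode(SHARED_SEED)
car = RollingCode(SHARED_SEED)

transmitted = fob.next_code()            # a thief can record this transmission...
print(car.next_code() == transmitted)    # True: the car expected exactly this code
print(car.next_code() == transmitted)    # False: a replayed code is now "old"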
26. Debugging AI Systems
• We can use the same basic principles to ensure that when we debug an AI system, we are observing the same decisions.
• Crash reports always record the seed number used.
• We can replicate the random number sequence that was generated before the crash
‣ Replicate the behaviour that caused the crash (see the sketch below)
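A minimal sketch of the workflow, with made-up function and log names: record the seed when the AI session starts, and when a crash report comes in, feed that seed back in so every random decision is replayed exactly.

import random
import time

def start_ai_session(seed=None):
    """Create the AI's random number generator and log the seed for crash reports."""
    if seed is None:
        seed = int(time.time())          # normal run: pick a seed however you like
    print(f"[crashlog] ai_seed={seed}")  # make sure this line ends up in the report
    return random.Random(seed)

rng = start_ai_session()                 # live run: seed chosen and logged

# Debugging run: plug in the seed read from the crash report and the AI makes
# the same sequence of random decisions, reproducing the behaviour that crashed.
rng_replay = start_ai_session(seed=1325376000)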
27. A Note on Footprints
• Activating debug code and changing your program can alter its footprint when compiled.
• Can lead to unintentional changes in behaviour.
‣ A debug build takes longer to execute as there are more instructions - a different amount of processor time is available, so the AI behaves differently
28. Software Engineering for (Game) AI
• By now you will have been told about best practices for Software Engineering.
‣ UML diagrams
‣ Waterfall method
• Good for well understood tasks
‣ Write an app that does this
• What if the task isn’t understood?
‣ How can we design a system on paper if it isn’t specified?
29. Iteration, Iteration, Iteration
• The way we deal with not knowing the specification in advance is to constantly test.
• Rapid iteration
‣ Building systems incrementally
‣ No monolithic approach
‣ Make it do something, make it do the right thing later
‣ Iterative refinement
• Testing code correctness every few lines.
30. Reality
• Continuous Integration
‣ Code being committed goes through automated tests to ensure correctness
• Long build processes
• A core reason a lot of AI is based on scripting
‣ No need to alter the code, no need to wait for a recompile.
‣ Script updates can be executed on the same build.
31. Scrum
• Scrum is a project management technique getting a lot of use in development teams
• Growing in popularity since 2001, although it dates back to 1986
• Tightly integrates a product-centric view of the development process
‣ Avoids teams working on “cool” rather than “useful” features.
32. Roles
• Product Owner - Represents the “product” and the client’s perspective. Ensures that the team is providing value.
• Scrum Master - In charge of ensuring smooth operation of the team. Not the leader of the team.
‣ Somewhere between Mum and Fixer
• Team - Developers
33. The Product Backlog
• Created by the Product Owner
• Prioritised list of potential features.
‣ Priority based on value of feature and work involved
• Product Owner and Team determine priority
‣ Value is set by the Product Owner
‣ Amount of work is set by the team
• Items in the Product Backlog must be promoted to the Sprint Backlog before being worked on.
34. The Sprint
• Building block of Scrum
• A development process consists of multiple sprints
‣ Each sprint lasts between a week and a month
‣ Firm deadline for the length of the sprints
• The Sprint Planning Meeting selects which items from the Product Backlog are going to be tackled.
• The Sprint Review Meeting analyses what has/hasn’t been accomplished at the end. Demo.
35. The Scrum
• Daily Team meeting - 15m
‣ What have you done since yesterday?
‣ What are you going to do today?
‣ Are there any obstacles right now?
• Scrum of Scrums
‣ Daily summary
‣ Each team sends a delegate
‣ Allows inter-team communication and progress checking
36. Maslow’s Hammer
• “If all you have is a hammer, everything looks like a nail” - The Psychology of Science, Maslow
• This plagues the AI field - especially in academia
• Experts in one particular aspect
‣ Or grads who learnt about one technique/algorithm
• They go on to use it as the standard approach everywhere, even when it’s not at all appropriate.
37. The Philosophy of this Module
• As much as possible I’ve avoided talking about specific algorithms.
• Algorithms are available in books or on Wikipedia
• What I’ve tried to emphasise is approaches and application areas.
• Teaching ways of thinking about Game AI
‣ Not how to write Game AI
38. The AI Toolbox
• Different techniques are suited to different jobs.
• Whenever you come across a new technique, make a note of it.
‣ Add it to your toolbox
• When you come across a new problem:
‣ Do you have a tool that can solve it?
‣ Is there a better one available?
• These lectures hopefully give you a “starter kit”.
39. Final Summary
• Science of playing games
• Building mathematical representations of players.
• Generating content for games
• Tailoring content to players
• Managing the experience of players
40. Source Material
• Largely drawn from articles I’ve written for
‣ AIGameDev.com
‣ AltDevBlogADay.com
‣ Gamasutra.com
• Other aspects based on a series of posts forthcoming for Gamasutra
• Also based on conversations with / talks from the following people (and more) over the past few years
41. Acknowledgements
• Phil Carlisle (Namaste)
• Alex Champandard (AIGameDev.com)
• Kevin Dill (Lockheed Martin Advanced Simulation Center)
• Richard Evans (Stumptown Game Machine)
• Dan Kline (Electronic Arts - Maxis)
• Dave Mark (Intrinsic Algorithm, Game AI Programmers Guild)
• Gwaredd Mountain (Climax Studios)
• Brian Schwab (Blizzard Entertainment)
• Togelius and Yannakakis (ITU Copenhagen)
42. Finally
• Strathclyde AI and Games research group
‣ Talk to us about postgrad opportunities
• International Game Developers Association
‣ IGDA Scotland
‣ IGDA Scholarships
• Organising some form of game-development-based program here at Strathclyde
‣ Keep an eye on your email in the next week or two