Presented at Casual Connect 2009.
Jason Schklar covers how Big Huge Games did Rapid Iterative Testing and Evaluation (RITE) user-testing on Catan.
We identified and fixed usability issues with core mechanics (trade, building) and the “learn as you play” tutorial mode in real time while conducting user-testing studies.
The key takeaway is that instead of getting a list of action items a week after the study is done, you end up with a dramatically improved game before the study is even over.
Catan as a board game has been enjoyed by millions and millions of people. It’s best played with 3-4 players sitting around a table. The idea is to build up your civilization by harvesting the resources found on the island and trading with opponents to improve your situation. Our goal was to bring Catan to Xbox Live Arcade players.
Some of the key reasons for its appeal:
- Easy to teach and learn in a social setting.
- The core mechanic involves wheeling and dealing with your friends in real time to move yourself ahead without giving your opponents the upper hand.
So, when we approached the Xbox Live Arcade version of Settlers of Catan, we started out with a couple of key user experience goals we wanted to carry over from the original board game.
I’m going to talk about a few key lessons we learned. But first, our approach:
- Best guesses: visuals and a functional spec.
- Iterative user-testing: get it in front of target users as quickly as possible. We stacked the deck against ourselves by recruiting people who hadn’t played Catan the board game, so they needed to learn both the game and our UI.
- 1 lead designer/developer and 1 usability guy on site; other coders and artists available by IM/email.
- Several sessions per day, with the goal that each new session ran on a new build with fixes addressing previously discovered issues.
Now I’ll talk about 3 of our key learnings from our work on Catan Live! A common early error in the game:
- Top left: the player discovers he needs wood and brick to build a road.
- Top right: the player WANTS wood and can give away wool (sheep).
- Bottom left: the player focuses on “wanting wood” and forgets that he still needs brick; he trades his brick for Sun Tzu’s wood.
- Bottom right: the player realizes his mistake when he tries to build the road.
The penalty: the player would need to wait until his next turn to try again. He is forced to proceed before he has mastered a core concept, and feels “stupid.”
Finding common early mistakes allowed us to bulletproof the tutorials...
We “fixed” the AI so that it would allow players to make common mistakes and not have to suffer unduly for them. Sun Tzu gladly accepts a trade that, on higher levels of difficulty (or with more competitive players), likely wouldn’t fly.
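To make the idea concrete, difficulty-gated trade acceptance like this can be sketched with a simple value margin. This is a hypothetical illustration only; the resource values, margins, and function names are assumptions, not Big Huge Games’ actual code.

```python
# Hypothetical sketch: AI trade acceptance gated by difficulty.
# All values and names are illustrative assumptions.

RESOURCE_VALUES = {"wood": 1.0, "brick": 1.0, "wool": 0.8, "grain": 1.0, "ore": 1.2}

# On easier difficulties the AI tolerates lopsided trades, so a
# first-time player's early mistakes don't cost them a whole turn.
# A negative margin means the AI will accept trades that are bad for it.
ACCEPT_MARGIN = {"easy": -1.0, "medium": -0.25, "hard": 0.1}

def ai_accepts_trade(gives, receives, difficulty="easy"):
    """Return True if the AI accepts giving `gives` for `receives`.

    `gives` and `receives` are lists of resource names, from the
    AI's point of view."""
    value_out = sum(RESOURCE_VALUES[r] for r in gives)
    value_in = sum(RESOURCE_VALUES[r] for r in receives)
    return (value_in - value_out) >= ACCEPT_MARGIN[difficulty]

# A lopsided trade: the AI gives wood and gets only wool back.
print(ai_accepts_trade(["wood"], ["wool"], "easy"))   # accepted on easy
print(ai_accepts_trade(["wood"], ["wool"], "hard"))   # rejected on hard
```

The point of the sketch is the single tunable margin: one number per difficulty level lets the same AI forgive a beginner’s bad trade while still driving a hard bargain against experienced players.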
Remember: Your familiarity and expertise with the game make you experience it very differently than the player trying it out for the first time. Just because we’re suffering due to “lack of challenge” or “repetition” doesn’t mean that the first-time player is.
Sometimes you just need to try something radically different to get out of a user experience rut. Q: Which of the two do you think works best at conveying what opponent players wanted to “Give” vs. “Receive”? We started with give/want arrows from opposing players: a basic necktie, then arrows within the neckties, then animated arrows. Nothing worked. We could try a radically different visual design, and validate it, because we were capturing issues and trying fixes in real time (no need to wait weeks).
Being able to identify issues in real time and try solutions before the next participant meant that we could experiment when we got into a rut without serious risk to our ship schedule.