Website Woes - How and Why User Input Should Guide Your Website Decisions
Rebecca Blakiston & Dan Lee, University of Arizona Libraries. November 30, 2011, Arizona Library Association Annual Conference.
Our library websites reflect our organizational identity, but they are often difficult to use and don't meet the expectations of our users. By bringing users into the conversation, we can better understand the user experience, make informed decisions, and increase our chances of success. In this presentation, learn how to incorporate user input into your website design, planning, and maintenance processes. We will share simple and inexpensive ways to conduct user testing, including intercept usability testing, card sorting, and surveys. We will also share the personas we developed to put us in the mind of our users, and explain the process for developing them at your own institution. If you are interested in your web presence and how it can be improved, this is the session for you. Learn how to easily bring your users into the discussion, leading to a better website and a better user experience.
The University of Arizona Libraries created their first website early on. It looked a lot like other early websites, and like most of them it quickly became unruly. In 1997 they set out to overhaul it and bring it up to date. A working group was formed under the name Access 2000, and it did a great job of working with users on the redesign. Ruth Dickstein and Vicki Mills wrote it up in an article in the LITA journal in 2000. Access 2000 more or less stayed together as a group and managed the website, making decisions about changes and about what could be included where.

The web went through a lot of changes between 1997 and 2004, though, and the site was getting stale. The amount of information we were making available had once again overwhelmed the site, and a new group was formed to make the needed changes. This group conducted minimal usability testing, and what we did was on the cheap: we took mock-ups out to selected faculty and some random users, conducted a card sorting exercise, and did some other quick testing. And here is one key mistake we made: the audience for our card sorting exercise was, in effect, ourselves. We didn't have time to organize full-fledged formal card sorts, and we bought into the belief that librarians and others in daily contact with users could represent user needs. So we held an exercise with representatives of the various groups in the library who used the website daily, worked with users, and saw their problems. A step, but not good enough.

After this group was done, no one was left responsible for the site. Rather, everyone was responsible, with no one clearly in a position to make decisions about direction and updates. So it got out of hand too. That site lasted into 2008, when a new group was formed to conduct formal usability testing of the site and develop a new design. They got a lot of user input.
They worked with a commercial firm that does usability testing for Fortune 500 companies and set up a mini usability lab where we could sit down with users, test the key functions of the site, and record their reactions for fuller analysis. A lot of good information was gathered, and the site was overhauled again. To some degree that is the site we have now. But, once again, no one was given responsibility for managing the site, giving it direction, and making the ongoing gradual changes we all knew had to be made. So in 2010 Rebecca was given responsibility for the site, and a small group was formed to advise and help her.
Okay, so why bother to actually harass users, get them to test our websites, and help us discover what doesn't work for them? In a sense, we already know a lot. We know what new things we want to promote. We know what tools users would find useful if they used them. But we also know some of the barriers we've put up, because these are things we get asked about a lot: over the phone, via email, and at the desk. This can be a form of unobtrusive testing if we take that attitude and pay attention when we notice or look for repeat questions. What we do about that, and how we try to remove those barriers, are key. At heart, we want to go to users and learn from them so we can meet Ranganathan's Fourth Law of Library Science: save the time of the reader. We want to provide systems and resources that save users' time and let them do their jobs better, whether that is research, teaching, or learning. So here is a video clip showing one person's reaction to our previous website. <Run 2008 testing clip>
Here's the first tool we'll cover to help you think about your website like a user would, and it makes use of what you already know about your users. Personas are a tool for planning and initial design, not a form of usability testing, and they can be a lot of fun to create. Developing personas gets you to think about the range of demographic types typically using your website and the range of attitudes and approaches they bring. Think about a typical user, say a freshman just arriving on campus or a local businessperson. How would she or he come to the website? What do they expect from the site before arriving? What are they trying to accomplish? Draft that into a one-page synopsis and use this characterization when you think about changes to the site. How would this persona be helped by the change? How would they respond?
Here is a screenshot of one we created. The name is fictitious and has no connection to the photo. You can get a sense here of what goes into a persona. Now I'm going to hand things off to Rebecca.
Surveys: as simple as SurveyMonkey.
Benefits: serve many purposes, such as gathering general input; finding out who your users are; determining "primary tasks"; justifying a website redesign or website changes; letting users know you are trying; a way to get both quantitative and qualitative data.
Drawbacks: users' perceptions often don't represent reality; not real usability testing; feedback is often vague and may point out a problem without leading you to a solution.
Example: used on the CCP site to gather input.
As simple as SurveyMonkey.
Benefits: can gather a lot of input in a short amount of time; can reach users who don't come to the library; can yield ideas for better labels, categorization, and information architecture; an easy way to get "in the mind" of the user.
Drawbacks: users' perceptions often don't represent reality; not real usability testing.
Card sorting: open card sorting vs. closed card sorting.
Benefits: doesn't require a computer; a good way to see how users group and label content.
Drawbacks: can take a lot of time, and different users can think about the content very differently.
Online options: WebSort.
Online options:
WebSort.net: remote online card sorting.
Open Hallway: create test scenarios, record users remotely or locally, watch video results.
Plain Frame: clickable wireframe menus.
Usabilla: free! Ask questions about a snapshot of your site; users can mark things on the site that they would click on, that they trust, that they understand or don't understand, etc.
Most offer free trials or free small studies; larger-scale studies will cost money.
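Whether you run card sorts on paper or online, the analysis step is the same: tally how often participants placed two cards in the same pile. Pairs grouped together by most participants are candidates for the same menu. Here is a minimal sketch of that tally; the card labels and sample sorts are hypothetical data, not our actual results.

```python
from itertools import combinations
from collections import Counter

# Each participant's sort: a list of groups, each group a list of card labels.
# Hypothetical sample data from three participants.
sorts = [
    [["Interlibrary Loan", "Document Delivery"], ["Subject Guides", "Tutorials"]],
    [["Interlibrary Loan", "Document Delivery", "Tutorials"], ["Subject Guides"]],
    [["Interlibrary Loan", "Document Delivery"], ["Subject Guides", "Tutorials"]],
]

# Count how many participants placed each pair of cards in the same group.
pair_counts = Counter()
for sort in sorts:
    for group in sort:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Pairs grouped together most often suggest labels that belong together.
for (a, b), n in pair_counts.most_common():
    print(f"{a} / {b}: grouped together by {n} of {len(sorts)} participants")
```

Even a spreadsheet can do this; the point is to look at agreement across participants rather than trusting any single sort.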
Mental models are strong.
Some users will always just use site search.
Staff aren't the ones we should be asking.
Not all users are equal.
Vague, broad menu items cause trouble (e.g., FAQs, How Do Is?, University Libraries, Services A-Z).
Duplication should be avoided.
Gathering user input is easy!
User input should guide all your (major) website decisions.
The "Visiting the Libraries" label works.
The "Access and Use" label does not.
No one could find laptop information.
Users will search in WorldCat Local for anything.
Jargon: Document Delivery, Digital Collections, Special Collections, Information Commons, Reference Resources, Tutorials…
Students were hesitant to click on "Help" and on "Tutorials." It was also confusing that Subject Guides appeared both under our Search & Find menu and under our Help menu > Tutorials & Guides.
We found that putting our tutorials under the label "Research and Writing Help" improved their findability. We also liked the idea of having Subject Guides and Course Guides under this same label. Usability testing helped guide our descriptions: we found that including linked examples was helpful, as was having the More>>> link in addition to the main category link. We are still not happy with our overall results here: Subject Guides and Course Guides are still not siblings, and Course Guides are still hard to find. We are leaning toward separating these and having Research Guides as well as Research and Writing Help, and we plan to test this new proposal over the next month.
We have made some changes in the right direction, but many problems remain to address:
Merging How Do I's and Reference Resources into a revamped "Research and Writing Help" page with logical, user-driven categories.
A database of FAQs.
Document Delivery, Express Retrieval, Interlibrary Loan > "Requesting Materials."
Merging the "Library Card Info" and "Check-Out Limits" pages into one "Borrowing Privileges" page.
Content types to avoid duplication of effort and improve consistency across our site (for example, we can insert a database and its description rather than typing it manually).
Content strategy: trained content managers and a new process for creating and deleting web pages to ensure ongoing quality.
Search box: we have a Discovery Product Manager who will be looking at this, among other things.
How to incorporate user input into all website decisions:
Have a process. For example, all substantial website changes (changes to global menu items, global labels, homepage changes) require usability testing.
Have metrics. For example: 100% of users will be able to do X, Y, and Z.
Get buy-in from the organization (videos and quotes from users are convincing).
Get resources for incentives.
Get training.
Have this dedicated in job descriptions.
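The "have metrics" point above comes down to tracking task success rates against a stated target. A sketch of that tally, using hypothetical task names and pass/fail results rather than our actual test data:

```python
# Hypothetical results from one round of intercept usability testing:
# for each task, whether each participant completed it.
results = {
    "Find laptop checkout info": [True, True, False, True, True],
    "Locate a subject guide":    [True, False, False, True, True],
    "Request an item via ILL":   [True, True, True, True, True],
}

# The target from the metric "100% of users will be able to do X, Y, and Z."
TARGET = 1.0

for task, outcomes in results.items():
    rate = sum(outcomes) / len(outcomes)  # True counts as 1, False as 0
    status = "meets target" if rate >= TARGET else "needs work"
    print(f"{task}: {rate:.0%} success ({status})")
```

Recording results this way also gives you before-and-after numbers when you retest a redesigned page, which helps with the buy-in point as well.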