Although I’ve worked in academic libraries for the past 8 years, my web development experience is from running my own shop, outside of the University world[^1]. This is a land where results matter more than statistics and reports, and everything you do has a price tag attached to it. As such, usability testing is frequently done on the cheap, without a committee to write questions and do recruiting. I’ve kept it that way here at GVSU. The basic tenets of my usability test plan are:
- Do one test every month.
- Focus on what users do, not what we’d like them to do.
- No more than 5 questions for a 30 minute test.
- Test no more than 3 students or faculty a month.
- Invite everyone from the library to observe. EVERYONE.
I can tell you from experience that getting librarians, front-line staff, and administrative staff all in a room together will get you better feedback than a room with only librarians or IT staff. I also guarantee you’ll get buy-in on making changes from all levels. First, you’re letting everyone see what is wrong with the site by showing actual users interacting with it. It’s pretty hard to ignore problems when they are encountered in actual use. Second, you’re letting everyone participate in the discussion. Not only does this get you better feedback, but it also means employees across all levels of your organization will appreciate getting a chance to contribute to something as visible as the website[^2].
First, let me say that you can learn everything you need to know about usability testing from Steve Krug’s book Rocket Surgery Made Easy. We’ve adapted a few things to accommodate the special needs of a University library, but those might not be appropriate for your situation.
Setting up the test
The test is simple. We set up two spaces: a space for the test and a space for observation. I use my office for the test, setting up a monitor, a keyboard, a mouse, and a USB microphone hooked up to my laptop. Upstairs in the conference room I set up the observation room. For that I use my student’s laptop, hooked up to the room’s projector and sound system. You want to make sure that the observers can see the user’s screen and hear what is being said. That’s it for setup: I bet you already have all that stuff in your office.
For the first few tests, I also ordered food for the observation room, as a way to bribe staff into coming. Once I got a good turnout, however, everyone realized the importance of the tests and comes whether or not there is food.
Since we use Macs, we run Apple’s built-in AIM chat client iChat on both machines. iChat has a screen sharing feature that allows us to share the screen and audio from my office up to the conference room. It’s built-in and easy to use, and it doesn’t cost anything. If you have PCs, Skype has a similar feature.
That’s it for gear? Surely there is something fancier we can do? Getting fancy with your gear is just a distraction from getting the work done. Once you’ve got a few tests under your belt, then see if you feel like you need to invest in something better. After all, the goal here is to make your website better, not to do usability tests. Usability tests are just the vehicle to get you to a better site. If you want to get in shape, do you go running or do you buy newer, fancier running shoes? Which is going to help you trim your waistline?
What happens during the test
While I walk the user back to my office, I run through the basics of the test. Even though we’ve asked them to look over the consent form, they never do. I want them to know that I’m planning on recording them[^3] and that other people can hear our conversation before they are committed. No one has ever backed out, but they appreciate the warning before I spring it on them with a live microphone. I then read from a script all the details of the test. Once I’m done, I ask the user to sign a consent form that allows me to record the session.
I then ask a few questions about the user’s major, interests, and internet use, mostly as a warm-up. I also want to find out if he or she has had any library instruction in class, although sometimes I wait to bring this up until we work through the tasks.
Now we get to the heart of the test. I have 5 tasks that I have written up as scenarios that I read aloud to the user. I ask him or her to follow the instructions in the scenario, and to talk aloud about the choices he or she makes on the site. This is where you’ll learn the most about how well your site is succeeding.
Here’s an example: I wanted to see how users found articles with incomplete citation information, so I wanted them to find a newspaper article by Lawrence Lessig about Google and copyright. That’s the task. To make it into a scenario, I provided some context:
You meet with your Computer Science professor about an upcoming paper on intellectual property and the Web. She mentions an article by the lawyer Lawrence Lessig about Google and Copyright published in a newspaper, but she can’t remember the paper or when it was published. Find this article.
Notice that I don’t have any tasks asking users to find our policies pages, or to learn more about a particular database. That’s because users don’t actually do these things; librarians do them. And, as I’ve said before, the website is for users, not librarians. (See also: website tenet #2.)
When all the tasks are over, I ask the observation room for follow-up questions. I then give the user a thank-you gift (in our case, t-shirts) and get ready for the next test. One test takes about 30 minutes.
Once all the tests are completed, I join the observation room and we spend an hour hammering out what we saw, what was a problem, and how we are going to address it. I try to limit the actual to-do list to 3-4 items, to make sure we can get them all done by the next test. There is no report other than a blog post on our intranet that breaks down the tasks we’re going to work on over the next month, and no follow-up meetings, except for the next test.
We routinely have 12-18 people in our observation room, and almost universal buy-in on changes made to the website. In the past six months alone our website has transformed from a typical library link farm into a usable, simplified search engine our faculty and students seem to enjoy using. Below you can see the transition we’ve made in successive iterations, all as a result of usability tests.
We’re not done. We keep testing and making changes to the site, and will keep doing so. It is the single most important thing we do to make our library experience better for our users, since 100% of our patrons come through the website while only 50-60% come through the front door. It’s easy to do and only takes a little bit of time to yield incredible insights. So what are you waiting for?
- IRB test protocol PDF
- Consent Form PDF
- Test Script PDF
- Sample Packet for Observation Room, with scenarios PDF
[^1]: Some call this the “real world.”

[^2]: Always make sure to give credit where credit is due. When someone else comes up with an idea, remember to always attribute it to them. You’ll do enough awesome things in your job, there is no sense stealing other people’s good ideas. Who knows, they might bake you cookies!

[^3]: You don’t need to record the sessions, but I find it useful to go back when I’m working on improvements to the site and watch exactly how the users had problems. Plus, we already had a license for Camtasia so it wasn’t an additional expense. UPDATE: We haven’t recorded any of the tests for the Fall 2012 semester, and we won’t be recording sessions in the future. We never looked at the recordings, and it was one more bit of data to keep secure. So we dropped it.