Matthew Reidsma

Work Notes

Updates from the GVSU Libraries’ Web Team.

Usability Test Roundup - February 2014

This morning we had another usability test with three students in the Mary Idema Pew Library. We used the same questions as our last test, but this time focused on Summon 2.0 and whether some of our tweaks to the catalog helped users with their difficulties. The questions were:

  1. You are writing a paper for Writing 150 on whether video games and violent behavior are linked. Find some sources to get you started.
  2. Your Education professor has assigned you a presentation on the Theory of Multiple Intelligences. Find some peer-reviewed sources on this topic.
  3. Your Capstone professor recommended the book The Emotional Brain by Joseph LeDoux. Find this book.
  4. Your Anthropology professor assigned the book chapter, “All Earthenware Plain and Flowered” from the book In Small Things Forgotten by James Deetz. Find this book chapter.

The first two questions showed us some interesting things about Summon 2.0:

  • All the students loved the right-hand panel that previews items in the search results. (This surprised many of us, who hadn’t thought it was useful at all. I never stop being surprised!)
  • All of the students used the suggested searches when typing in the search box. In fact, they all stopped typing and picked from the suggestion list, even when they knew exactly what they wanted to type.

The most noticeable problem was that the peer-review facet is still too hard to find. What stood out to all of us was that the patrons never even noticed the facets at the top left of the screen. They were happy to use the other facets, but simply didn’t see the first few.

One pattern we saw again was patrons looking for “Peer Review” in the Content Type facet area. The facet isn’t there, but we’ve seen this behavior many times over the past three years. Bob and Jeff suggested simply adding a Peer-Review facet to the Content Type area, and I’m experimenting with that now. I have a rough fix developed, but I’m not happy with how well it integrates with the way the rest of the facets work. (If you want to see the rough draft code, it’s up on GitHub.) I’ll keep working on this one.
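To give a sense of the approach, here’s a minimal sketch of how an extra facet entry could be injected. The markup, class names, and the peerReviewed query parameter are all assumptions for illustration, not Summon’s actual API; the real draft on GitHub has to hook into Summon 2.0’s own facet behavior, which is where the integration trouble comes from.

```javascript
// Sketch only: build markup for an extra "Peer-Review" entry and toggle
// a *hypothetical* peerReviewed query parameter on the search URL.
// Summon 2.0's real markup and parameters differ -- adjust before using.

// HTML for the injected facet option (checkbox + label).
function peerReviewFacetHtml(checked) {
  return '<li class="facet-option"><label>' +
         '<input type="checkbox" id="ct-peer-review"' +
         (checked ? ' checked' : '') +
         '> Peer-Review</label></li>';
}

// Add or remove the filter parameter on the current search URL.
function togglePeerReviewParam(url, on) {
  var clean = url
    .replace(/\?peerReviewed=true&/, '?')   // filter was the first param
    .replace(/[?&]peerReviewed=true/, '');  // filter was a later (or only) param
  if (!on) { return clean; }
  return clean + (clean.indexOf('?') === -1 ? '?' : '&') + 'peerReviewed=true';
}

// In the page itself (jQuery-style; the selectors are guesses):
// $('.contentType ul').append(peerReviewFacetHtml(false));
// $('#ct-peer-review').on('change', function () {
//   window.location = togglePeerReviewParam(window.location.href, this.checked);
// });
```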

The second problem we saw again was the difficulty students had understanding the “No Results” pages in Sierra’s OPAC for non-keyword searches. The original screen shows nearby results (for title, author, journal title, and similar searches) and places a box where the patron’s search would have appeared if we had a match. In some cases it also includes help prompts for switching to a keyword search, searching MelCat (the statewide consortium), or inverting author names to last-name-first order. But our patrons never seem to see this box, and often don’t realize that they haven’t gotten any results. (Sierra defaults the No Results message to red, which made them think they’d made a mistake. We changed it to black, but now it blends in and goes unnoticed.)

The default non-keyword results screen in Sierra

This afternoon, I did some research into search results best practices. In their book Designing the Search Experience, Tony Russell-Rose and Tyler Tate give a few best practices for designing no results screens (p. 162):

  • Provide an explicit message when zero results are returned
  • Provide support in the form of advice and tools for query reformulation
  • Display other navigation options such as top searches, featured products, popular items, and so on

The default Sierra screen actually has all of these features, but users aren’t noticing them. Based on discussions after the usability test, I decided to consolidate all of the information the patron might need in one place. (All except the third point, which in this case is covered by the contextual list of ‘nearby’ items.)

The revised non-keyword results screen in Sierra

The new page makes it clear that your search was not successful, and makes the help prompts easy to find. It also keeps the contextual list of nearby results, which can be helpful if you’ve misspelled something. I’ve already pushed this page live, so if you use the catalog you should see it now. For the technically inclined, I’ve posted the code that makes this work to GitHub.
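The actual implementation is on GitHub, but the core idea can be sketched in a few lines. Everything here — the message text being matched, the banner markup, the placeholder links — is an assumption for illustration; Sierra’s wording and markup vary by install.

```javascript
// Sketch of the revised screen's logic: detect Sierra's easy-to-miss
// "no entries found" state and build one prominent banner that gathers
// the help prompts in a single place. Message wording and markup are
// assumptions; link targets are left as placeholders.

function isNoResultsPage(pageText) {
  // Sierra-style phrasing is assumed; adjust to your install's wording.
  return /no (entries|matches) found/i.test(pageText);
}

function noResultsBannerHtml(query) {
  return '<div class="no-results-banner">' +
         '<h2>Your search for \u201C' + query + '\u201D found no results.</h2>' +
         '<ul>' +
         '<li><a href="#">Try a keyword search instead</a></li>' +
         '<li><a href="#">Search MelCat, the statewide catalog</a></li>' +
         '<li>Searching for an author? Try last name first.</li>' +
         '</ul></div>';
}

// On page load (jQuery-style; the selector is a guess):
// if (isNoResultsPage(document.body.textContent)) {
//   $('.browseSearchtoolMessage').before(noResultsBannerHtml(lastQuery));
// }
```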

I also watched a few students use the Find Books search-type dropdown with no apparent difficulty, and combined with how often those specialized searches actually get used, I don’t think changing the order of the search items in the catalog (as suggested last month) is something we’ll pursue.

I did, however, watch students once again get confused by the “Nearby on Shelf” button. One student leaned in so close to the screen to try to read the tiny button that I thought he would get a nose print on the glass. I think I’ll try changing the label to something more understandable, like “Browse Similar Items”, and making it a text link instead of an image (as suggested last month).
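As a sketch of how small that change could be (the alt text and markup here are my assumptions about the button, not Sierra’s actual code):

```javascript
// Sketch: replace the tiny "Nearby on Shelf" image button with a plain
// text link labeled "Browse Similar Items". Shown as a string transform
// for illustration; on the live page this would run as a DOM tweak
// (e.g. jQuery's replaceWith) after the page loads.

function relabelNearbyButton(html) {
  return html.replace(/<img[^>]*alt="Nearby on Shelf"[^>]*>/i,
                      'Browse Similar Items');
}
```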

Running a usability test on our website every month is a lot of work, but it has helped us really hammer away at some of the big issues facing our patrons. Thanks for participating, and I look forward to seeing everyone next month!

Next month our test will be on Friday, March 14th. I want to spend some time testing the right-hand panel of Summon 2.0, specifically with searches that might bring up suggested librarians, LibGuides, and other help features. Bob has already volunteered to help, but if another LibGuides owner wants to work with me to get their metadata set up and craft some good questions for the next test, I’d appreciate it!