As part of my larger push to make it easier to move from our GVSU search tools to MeLCat, I added a little function to my custom Summon scripts that inserts a contextual help tip if you have the Book/eBook facet selected. (If you have more than one facet selected, it will only show if the Book/eBook facet is the top one checked in the list. ¯\_(ツ)_/¯ Perhaps I'll dig into the Angular framework enough to make a more sophisticated solution down the line.)
It might take a cache clear for this to show up. We’ll now keep an eye on MeLCat stats and see if it increases use!
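For the curious, the facet check can be sketched as a small pure function. This is an illustrative sketch only; the real script reads the checked facets out of Summon's sidebar in the DOM, and the facet label here is an assumption about how it appears in the list.

```javascript
// Sketch of the logic behind the MeLCat help tip. Takes the checked
// facets in the order they appear in the sidebar and decides whether
// to insert the tip. (The real script scrapes these from the DOM.)
function shouldShowMelTip(selectedFacets) {
  // Show the tip only when "Book / eBook" is the topmost checked facet,
  // matching the behavior described above.
  return selectedFacets.length > 0 && selectedFacets[0] === 'Book / eBook';
}
```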
Every now and then, when I’m not at work, I do a search in the Catalog that returns no results. For some reason, the folks who made WebPAC Pro thought the best place to send you when you don’t get any results was the Advanced Search screen. To quote Andromeda Yelton, who was just as frustrated as I have been about this screen, “if I didn’t get any original hits from my search, limiting it to large print Albanian will not help.” Because I always encounter this page in a context where I am not working, I have never managed to write down that this page needed to be fixed. Until now!
It’s live now.
A few years ago, I added a feature to ILLiad, our interlibrary loan software, that searched the Michigan eLibrary catalog (MeLCat) for books that were being requested from ILL. If a MeL participating library owned the book, I’d post a nice little notice to the user to let them know they could get the book faster through MeL. (It would also be cheaper for us.) Here’s what it looked like:
Of course, this only worked on ILL requests that were manually entered. And based on the number of ILL requests that have been rerouted to MeL manually by our ILL staff over the years, it didn’t make much difference. Basically, everyone ignored my nice message and requested the item from ILL.
Amy in ILL pointed out that’s 1,895 requests since we started checking MeL holdings for ILL requests. Amy continued “if we had paid our max-cost of $20 per item by obtaining via ILL, we would have spent $37,900 on the 1,895 books we obtained via MeL, so the savings is substantial.”
Now, if you do a loan request and MeL has the title of the item you are requesting, it brings up a more obvious message that you can get the item faster from MeL, and includes a big blue button that will take you right to the MeL search. (This works whether the search leads to a single result (the record page) or if there are multiple results (the browse titles page).)
Here’s the new alert:
It just went live this morning, and I’m keeping stats on each time it is activated. Hopefully we can get those ILL to MeL numbers down for the rest of the year, and next I’ll be looking at ways to improve the catalog to MeL workflow!
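The heart of this workflow is just building a MeLCat title search URL from the requested item, using it to check for holdings, and reusing it for the big blue button. Here's a minimal sketch of the URL-building step; the URL pattern and parameter name are illustrative assumptions, not MeLCat's documented interface.

```javascript
// Hypothetical sketch: turn a requested title into a MeLCat search URL.
// The same URL can back both the holdings lookup and the alert's button.
// The host and query parameter here are placeholders for illustration.
function melSearchUrl(title) {
  return 'https://mel.org/search?q=' + encodeURIComponent(title.trim());
}
```

Because a title search can land on either a single record page or a browse list, linking to the search itself (rather than a specific record) works in both cases, which is why the alert's button points at the search.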
Yesterday Melina and I ran our first formal usability test in nearly 2 years. We had three students in the Mary I who worked on a few tasks using the Omeka-based Digital Collections site. We had a great crowd of observers who took meticulously detailed notes and helped us whittle down a very long list of issues to a few high-priority action items for the immediate future.
The 5 task-based scenarios we had each student work through were:
The first question was designed both to see how our users understood the term “Digital Collections,” as well as to see if they could find it. None of our students had any idea of what Digital Collections meant (most thought it was anything online, put into groups). But because they had an incorrect understanding of what it was, they certainly couldn’t find it.
We discussed how Digital Collections is the kind of resource that needs some kind of facilitation, to explain to users what the collections are and why they would want to use them. Melina noted that our on-campus users who encounter digital collections through an assignment will either have Leigh or a liaison introduce them, or will at least have some context provided by the professor. But for others, we discussed a few ways to help clarify not just what Digital Collections are, but what all of our separate collections are. I think there are other opportunities here for sharing this information, from social media to Web ads. And navigating to our collections is also something I want to explore in future tests. We also want to explore how easy these collections are to find in the Library Search - sample searches from the users in the test showed that a lot of other stuff came up before anything in our digital collections.
(As a related note: Weave Journal of Library User Experience recently published an article on this very topic, and found that while Digital Collections was a terrible term, it was the least terrible of all the others, and most libraries use it. ¯\_(ツ)_/¯ Read the whole article: What We Talk About When We Talk About Digital Libraries: UX Approaches to Labeling Online Special Collections.)
Questions 2-5 all focused on specific, yet common, tasks in the Digital Collections system, which is a customized version of the open source tool Omeka. Let’s just summarize by saying that Omeka didn’t do so well on this test.
Many other issues centered around Omeka’s search function. Our users mostly made assumptions that Omeka’s search would work a lot like Google’s or Summon’s: autosuggest, autocorrect, etc. But it doesn’t. In fact, Omeka’s advanced search requires you to explicitly use Boolean operators between keywords, but you have to use symbols, like “+” for AND and “-” for NOT. Super intuitive!
To top it off, a few years ago Kyle switched the main search functionality of Omeka over to a Solr index, that outperforms the built-in search dramatically. Unfortunately, the Advanced search doesn’t run on the Solr index. We found that the Advanced Search and Basic search would return totally different results for essentially the same search! And some buttons, like the “New Search” button, will take you to advanced search rather than to the basic search. Ugh.
Our plan right now is to do the following (although we need to do a little more research to make sure these will work and are the best options):
Kyle and I will get together over the next few weeks to look into making these changes happen. Then in the next few months, we’ll run another test for digital collections and see how the changes are received!
I’m planning on running another, more generalized test in November. Running a usability test on our website every month is a lot of work, but it has helped us really hammer away at some of the big issues facing our patrons. Thanks for participating, and I look forward to seeing everyone next month!
Recently, there was some discussion on a listserv about a new way to streamline requesting books in our catalog from the Summon results page. The workflow for requesting a book in Summon is a little click-heavy: the user clicks on the book result and is presented with the Summon book detail page, then they click on the “Request” button, where they are taken to the catalog page (and no request is placed), and finally, they request the book in the catalog. The new workflow promised to actually execute the hold from the Summon request button.
Unfortunately, it didn’t work for us, and will likely linger in the support queue for a while. In the meantime, however, I discovered that Summon allows you to turn off the book detail page! So, instead of clicking on the result to get to the detail page, which duplicates the catalog record, and then clicking to the catalog record before you can do anything, clicking on a book result in Summon now takes you directly to the catalog. This gives us the same reduced number of clicks as the hold script without worrying about how either Ex Libris or Innovative will break the connection the next time there is a software update.
As a bonus, I have heard from many of you how much you hate the Summon book detail page! (The detail page for A&I content remains.)
I also rerouted the “Feedback” link to our custom Problem Form, to streamline our support tickets (and I will customize the label soon).
As always, let me know if you have any questions or concerns.
As many of you know, I have been working for a few years researching bias in our library discovery tool, Summon. After I returned from sabbatical, I sent a proposal to Leadership Team that we turn off the Summon sidebar, the area on the right side of larger screens that shows the Topic Explorer, related topics, related LibGuides and librarians, and other contextual information. The proposal has been approved by both Leadership Team and many of the liaison librarians I have spoken with. I shut off the Summon sidebar on March 4th, the first day of Spring Break.
Below is the text of my proposal for shutting off the sidebar. If you’d like to read more, you can see my article that started all this research, or wait for my upcoming book on the subject from Library Juice Press.
We should turn off the right-hand sidebar of Summon, which provides contextual information because:
Details

For the past 3 years, I have been researching the accuracy and effectiveness of the University Libraries’ Summon Discovery Service algorithms, and in particular, the algorithms that make up the “Topic Explorer,” the contextual information that makes up the right-hand sidebar of the search results screen. Based on my research, I find that these algorithms often cause more harm than good, and should be turned off in GVSU’s instance of Summon. My results show bias in nearly 1 percent of the Topic Explorer results. What’s more, poor infrastructure design of the Topic Explorer compounds the problem, showing biased and inaccurate results more and more frequently.
Wikipedia, the most common reference source in Summon, is useful for libraries to include because users trust Wikipedia to have up-to-date content. However, Wikipedia entries in Summon are not pulled from Wikipedia’s updated content. In the summer of 2019, Ruth Tillman of Penn State University Libraries and I discovered that the Summon team loaded Wikipedia results into the Summon index at some time before February 20, 2013, a full month before the Topic Explorer was announced in a press release. They have never updated the results. (Brent Cook, the project manager for Summon, reluctantly confirmed this finding.) Now searches for living individuals, such as Barack Obama and Donald Trump, are wildly inaccurate. (Obama is listed as the 44th and current president of the United States. Trump is a reality TV star and real estate developer.) Many more recently deceased individuals are listed as alive, such as Barbara Bush. If the Topic Explorer cannot provide correct information, it is not useful to our users, and will degrade their trust in our other services.
In addition, nearly 1% of all results show bias against people of color, LGBTQ people, women, the mentally ill, Muslims, and more. Searches for information on stress in the workplace returned a result for “women in the workforce,” and searches for “rape in United States” showed a result for “Hearsay Evidence.” (Ex Libris has blocked these particular results, but not addressed the underlying issues in the search algorithm.) Any search with the words “mental illness” returns a Topic Explorer result for “The Myth of Mental Illness,” despite my reports in January of 2016 that this was unacceptable. Many more examples can be found in my research.
In some instances, both of these problems merge together. Chelsea Manning, a transgender woman who served prison time for violations of the Espionage Act, is still listed in Summon only as “Bradley Manning,” her dead name. Not only is this article out of date, but the act of deadnaming a transgender person is to deny their actual identity.
Other reference sources are not designed and written to be excerpted by algorithms. In many cases, Credo Reference articles start with some tangential preamble, rather than being structured like an inverted pyramid (as Wikipedia’s articles are). This can lead to entries like the one for “alcohol consumption,” which shows the Credo entry for alcohol that begins, “Prisoners are not allowed to drink alcohol while they are in prison,” implying that alcohol and incarceration are connected. A similar search for “alcoholism” (until recently) began, “The history of women’s relationship with alcohol constitutes a profound commentary on U.S. cultural attitudes about gender and power.” This implies that alcoholism is a gender-specific issue. Related topics are another area where the Topic Explorer shows bias: a search for “women in prison” shows a related search of “sex in film,” as if women in prisons must be related to sexploitation films. (The reference result for this search is also “Women in prison films.”) Searching for “murder” or “lying to patients,” two unethical practices, recommends searching for Islamic dietary laws. “Schizoaffective disorder” is connected by related searches to both “cocaine addiction” and “pedophilia,” despite having no logical connection at all.
Of the other algorithmic results shown in the Topic Explorer, including recommended librarians and guides, the assumptions the engineering team made about how these would work have introduced a number of problems. Based on keyword matching, we have the wrong librarian listed for a number of subjects. For instance, the owner of the modern languages guide “Spanish for Business” is always listed as the business liaison, because the numeric guide “id” in the LibGuides database is lower than that of the actual Business guide. What’s more, in some cases basic word proximity errors lead to strange match-ups, like Debbie Morrow, our engineering, math, and physics liaison, being listed as a subject expert for Capital Punishment, because one of her guides uses the phrase “questionnaire execution.”
While some of these problematic searches have been suppressed since they were discovered, there will continue to be more biased and incorrect results, like a game of software whack-a-mole. We would not be alone in turning off the Topic Explorer. Most recently, Penn State University Libraries turned off the TE after Ruth Tillman of Penn State University Libraries and I uncovered the inaccuracies in Wikipedia article matching. The right-hand sidebar can be turned off with one option in the Summon Administration Console. Usage data is difficult to get, because much of the sidebar is designed to be read, not necessarily acted upon. What data we do have, however, suggests that clicks on recommended searches happen in less than a tenth of one percent of all searches, while the number for clicks on recommended guides and librarians is even lower.
On Saturday, January 19th starting at 10pm EST, ProQuest will be conducting maintenance on many of its platforms. Many of our subscription systems will have periods of unavailability overnight. The maintenance is expected to last up to 8 hours.
Here’s the list of affected services:
The redesign for Course Reserve will be going live this Thursday morning, July 5th! Course Reserve will get the shiny new template, as well as a bunch of workflow improvements for faculty who want to manage their own courses. You can see the new design (with some limited functionality - you can’t actually get to the items that are on reserve) at https://gvsu.ares.atlas-sys.com/ares/TestWeb
I built a script over the past few months that tries to address the confusion users have around the difference between “Adding a class” (starting from scratch) and “Cloning a class” (copying a class from one semester to the next). We’re stuck with the labels because the developers of Ares thought it would be a good idea to make their scripts dependent on a specific English word they had picked being sent to them (good luck with that translation, folks!) so instead I used data we’ve collected from interviews, support emails and calls, and last Winter’s faculty usability tests.
Basically, if you click “Add a class,” my new script will load up to 3 of your previous classes in the background, and then present you the options to “Start from scratch” (with the button text reading “Add a class” to appease the computer gods) or show you your 3 previous classes with the option to copy them to a new semester (again, with appropriate deference to the deities of computer code). If you have more than 3 previous classes, you’ll also have the option to see more previous classes. You can see a screenshot of the prototype here. (Thanks to Kyle and Jon Earley for great feedback!)
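The "show up to 3, offer more" step can be sketched as a small pure function. This is a hedged sketch of the idea only; the field names and shape of the class records are hypothetical, and the real script also has to fetch the class history from Ares in the background.

```javascript
// Sketch: given the user's previous classes (most recent first), pick
// up to `limit` to display and flag whether a "see more previous
// classes" option is needed. Data shapes here are illustrative.
function previousClassOptions(classes, limit = 3) {
  return {
    shown: classes.slice(0, limit),   // classes to offer for cloning
    hasMore: classes.length > limit,  // whether to show "see more"
  };
}
```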
There are only 2 more systems to do: Omeka (a.k.a. Digital Collections) and the Status Page. Those will be coming soon!
All summer long I’ve been working on redesigning all of our library web systems (except for Summon and ScholarWorks) in order to match the University’s new branding campaign and improve the overall accessibility of our sites. In late April, four of our GVSU-hosted websites switched over to the new design. (The fifth—Services for Faculty and Staff—was absorbed into the main library website.) In May, I redesigned the Library Catalog, EZ Proxy’s error pages, and upgraded our link resolver to 360 Link 2.0. In addition, I built a tool that allows us to put our library hours into all of our other systems! You may remember that I’ve done a lot of user research on how users get to our hours, and it’s one task that has evolved continuously since I started here. Earlier in June, I redesigned the Journal Finder. And since then I’ve been hard at work on other systems!
Tomorrow morning I’ll begin switching over our Help site (run by LibAnswers). Because of the way LibAnswers is structured, it will be a fairly manual process. I’ve been running the new design on a test section of the site (with different questions) so I could test it out in different browsers and devices, and to let others have a look! (Thanks to Kristin, in particular, for great feedback on an earlier iteration of the Help homepage.)
On Thursday I’ll begin the manual process of moving LibGuides over to the new template. I’m also running the Web Content group in that new template so you can test it out. A lot of the customizations I’ve been working on have been on the editing side of things, so LibGuides creators and editors should enjoy the new template in particular.
Springshare products in particular were challenging because they use the same design framework as the campus CMS - Bootstrap. The problem is that GVSU’s Web Team’s version of Bootstrap has some customizations to it that conflict with the customizations of the LibGuides’ Bootstrap. And because of the way LibGuides and LibAnswers have structured their template engine, I can’t turn off their version of Bootstrap for the test part of the site - I have to turn it off globally or leave it on everywhere. So, there will be a little style sheet tweaking when these systems go live to make sure that the two different production versions of Bootstrap play nicely with each other. (Yet another reason I don’t recommend folks use other people’s design frameworks, especially if you plan to sell your product as “customizable”!)
Next week I’ll begin working on Omeka, our Digital Collections platform. Kyle and I did a lot of work to customize that template when Omeka was first launched, and we learned a lot about this system. I feel pretty confident that it will be easier than some of the previous systems because we have complete control not only over the design but also most of the system’s code, too! I also have a wish list of interface tweaks for specific digital collections I’ll be incorporating into the redesign, and Kyle will be launching a new search plugin he’s been plugging away at for the past few months.
After that, I’ll spend the rest of July tackling Document Delivery and Course Reserves. The frameworks for these two systems are very similar (both were developed by Atlas Systems) so I wanted to do them together. I’ll also be releasing some more improvements to the faculty workflow in Course Reserves based on the faculty usability tests I ran in December and January on the previous round of improvements. Finally, Kyle will be updating the Library Status Page with the new template to get familiar with the new design patterns, since he’ll be tinkering with anything that needs tweaking while I’m on sabbatical in the Fall!
At the end of July and the first half of August, I’ll be running more tests on these systems and making some performance improvements. For instance, right now each system is loading 5 or 6 style sheets—some from GVSU’s Web Team, some from the software provider (like Springshare or III), and some from us. This means that each site has to request 5 or 6 files from different servers every time a page loads. We can speed that up by combining all the styles into a single style sheet, and setting it to cache on the user’s computer. (I wrote a special tool that does this almost automagically.) But it takes a bit more effort to make changes in that setup. So, until I’m comfortable that the sites are working as expected, I’ve left the separate style sheets. But I’ll be working back through each system and updating them before I wrap things up in August. I’ll also be updating all the customization files on the Libraries’ Github (those that haven’t already been updated) for anyone interested in how these changes were made.
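The combining step itself is conceptually simple: read each sheet and concatenate them in load order into one file, so the browser makes one request instead of 5 or 6. Here's a minimal sketch of the core logic, with made-up file names; the real tool also handles reading the files from their sources and the caching setup.

```javascript
// Sketch: combine several style sheets into one, in load order, with a
// comment header per sheet so the combined file is still debuggable.
// Input shape ({ name, css }) is illustrative, not the real tool's.
function combineStyles(sheets) {
  return sheets
    .map(({ name, css }) => '/* ' + name + ' */\n' + css)
    .join('\n');
}
```

Order matters here: later sheets must come after earlier ones so their rules override the provider's defaults, just as they would when loaded as separate files.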
That’s it! Please drop me a line if you have questions or concerns!
I’ve been hard at work updating the first three external systems to our new web template. EZProxy quietly went live last week. Hopefully you won’t notice! It will only show up if there is a problem. ERMS and I have been testing the link resolver for over a week, and below I have details on how you can test it from the comfort of your own computer before it goes live, May 24th. And the catalog is coming along, but there are so many moving parts I will have a few more days of tinkering before I can start testing.
At long last I am updating us to 360 Link 2.0 with this template change. This is a big boon for two reasons: first, I will no longer have to maintain the 360link Reset script I wrote years ago to reformat the link resolver for usability. (ProQuest redesigned 360 Link 2.0 to look just like ours. No, we didn’t get a discount.) Second, the link resolver includes Index Enhanced Direct Linking, which means that if a reliable direct link to an article exists, users will go right there, bypassing the link resolver. We already have this functionality in Summon, but now it will be available to users coming from other databases or Google Scholar, as well.
If you would like to test 360 Link 2.0, you can do it easily by installing a bookmarklet in your browser, and then any time you find yourself on a link resolver page, click the bookmarklet and it will reload the page with the new template and functionality in place.
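Under the hood, a bookmarklet is just a `javascript:` URL that runs in the context of the page you're on. The general idea here is appending a query parameter that tells the page to render with the new template; the parameter name below is a placeholder for illustration, not the one in the actual bookmarklet.

```javascript
// Sketch of what a "reload with a flag" bookmarklet does: append a
// query parameter to the current URL. Parameter name is hypothetical.
function withParam(href, key, value) {
  const sep = href.includes('?') ? '&' : '?';
  return href + sep + encodeURIComponent(key) + '=' + encodeURIComponent(value);
}

// As a one-line bookmarklet this might look something like:
// javascript:location.href=location.href+(location.search?'&':'?')+'template=new'
```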
Drag the link below to your bookmarks bar:
(Need help? Here are some tips for installing bookmarklets.)
Then load a link resolver page (like this one). Click the bookmarklet in your bookmarks bar, and the page will reload. It should look something like this:
The new template will go live Thursday morning, May 24th. (Exact time depends on when ProQuest’s update cycle runs, which varies a bit.) After that, you won’t need the bookmarklet to see the new template.
If you see an issue with the new template, be sure to click the “Report a problem with this page” link in the bottom right. That tells us what exact URL you were looking at. Be sure to also tell us what the problem is. “Wrong” is not enough information for us to fix anything. :)
As always, let me know if you have any questions!