On June 6, 2017, I gave the opening keynote at the UX Libs III Conference in Glasgow, Scotland. This is an adaptation of my talk. You can watch the video, or just see my slides, if you prefer.
The sixteenth-century Villa La Rotonda is the masterwork of the architect Palladio, who changed the way we think about architecture. He used all of his knowledge of architecture to design this space for his client. With his design, he reached back to ancient Rome and embraced the way the Romans built their buildings to embody the virtues and ideals important to their society. The proportions, the scale, the materials all related in some way to the values that Palladio thought were important. And just in case you hadn't taken a course in architecture and didn't know, he covered the Villa with statuary embodying particular virtues and ideals: the goddess of wisdom, the goddess of justice, and others. So when you're out for a stroll, not only does the building embody these virtues, but you see them standing before you.
For Palladio, the idea of embedding values into his work was so integral to his approach to architecture that even hundreds of years after his death, his influential book, The Four Books of Architecture, was still printed with a frontispiece depicting the maidens of architecture bowing before the Queen of Virtue. The image was to give you an idea of what the book was about, in case you thought it would just be about how to design a functional building. Instead, it was a book about values, about ethics. Incidentally, it is also a book about architecture.
Palladio was very intentional about explicitly building values into his buildings. But using one's values to help shape a building is not unique to him; many architects and designers intentionally build their values into their work.

This is Albert Speer's German Pavilion for the 1937 World's Fair, representing Germany's fascist government. It is a monstrous, imposing, threatening tower of might. It doesn't have statuary embodying those values, but the building itself represents them.

I think we can best see the values built into Speer's building by comparing it to the German Pavilion built 21 years later, for the 1958 World's Fair, by Egon Eiermann. It is a three-story, flat, horizontal building. It feels tranquil and calm. It is all glass, representing transparency and democracy. The World's Fair is a place where you represent what you believe is right in the things you make. So these were very intentional choices by the architects, to encode these values into their buildings.
Political and ethical ideas can be written into window frames and door handles.
— Alain de Botton1
And this is not unusual. When we build things, we want them to be expressions of us. When we build things, we want to help people feel cared for, supported, or intimidated, depending on our values.
But this is not just buildings. Gerrymandering, for instance, is also a design that reflects the values of its creators. This graphic is an example of how the redrawing of political boundaries works.
If you have, for instance, 100 voting precincts, and 60 of them vote for the Democratic candidate while 40 vote for the Republican, then splitting them evenly into 5 districts sends 5 Democratic representatives to Congress. But if you redraw those districts and pack as many blue voters as you can into just 2 of them, you can send 3 Republican representatives to Congress, even though Republicans trail by 20 points overall.
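If you like, here is that arithmetic as a little Python sketch. The numbers are the hypothetical ones above, and the "districts" are just lists of precincts, but the packing effect is the real thing:

```python
# A hypothetical electorate: 100 precincts, 60 voting blue, 40 voting red.
precincts = ["blue"] * 60 + ["red"] * 40

def seats(districts):
    """Count how many districts each party wins by simple majority."""
    wins = {"blue": 0, "red": 0}
    for district in districts:
        majority = max(set(district), key=district.count)
        wins[majority] += 1
    return wins

# An even map: spread precincts across 5 districts of 20.
even = [precincts[i::5] for i in range(5)]   # each district: 12 blue, 8 red
print(seats(even))    # {'blue': 5, 'red': 0}

# A gerrymandered map: pack blue voters into 2 districts...
packed = [["blue"] * 20, ["blue"] * 20]
# ...and spread the remaining 20 blue precincts thinly across the other 3.
remainder = ["blue"] * 20 + ["red"] * 40
packed += [remainder[i::3] for i in range(3)]  # each: ~7 blue, ~13 red
print(seats(packed))  # {'blue': 2, 'red': 3}
```

Same voters, different lines, opposite outcome.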
This is a design, and it represents the values of the people who created it. If you want to see what this design looks like in real life, above is North Carolina's 12th Congressional District. Doesn't it seem like a really obvious shape for a congressional district? The district contains three cities that tend to vote Democratic. The party in control, the Republicans, redrew this district to load it up with opposition voters, potentially thwarting the will of the people, with design.
The late social critic Paul Goodman wrote that "Technology is a branch of moral philosophy, not of science."2 And when he spoke of technology, he was speaking broadly. By technology, he meant knowledge applied to practical ends, rather than the simple definition we often gravitate towards: gizmos and gadgets.
We take what we know, and we make something with it. That's technology.
The process of making technology is called design. It's not just architecture, and it's not just gerrymandering: it is design broadly. And design is a branch of ethics, because every decision you make as you create something is going to limit and constrain the possibilities for the people who use your tools, your services, and your designs.
UX Libs II alum Andreas Orphanides says in his talk Architecture is Politics that all of our designs reflect our values and the culture that we are in. This is true not just when we're intentional about designing something, like a user interface or a poster. All systems that we design, everything that we create, including our policies, our workflows, our buildings, our websites, and our services, reflect our values.
What Are Our Values?
So, what are our values? As librarians, we could all go to our professional organization's website and download a handy PDF of our code of professional ethics. (IFLA has a handy list of professional codes of ethics, actually.) Then we could point to it and say, here you go! Access, privacy, equity, our values are all here in black and white.
When was the last time you taped your organization's code of ethics to your wall, next to an affinity map? That's okay; we don't necessarily have to have it visible all the time to know what our values are, right?
In our tools, if someone searches for "children's literature" and our discovery layer suggests that they might be interested in "children's sex literature," what values do those reflect? Access?
Do we value our users more than the appearance of convenience?
This is the Brooklyn Public Library's online library card application. You don't have to go in and fill out a paper form. But this is one of the questions they ask: gender. And they give exactly two options: Male, or Female.

If I'm a transgender woman and I encounter this question, how do I answer it? Do I answer for who I am, or do I answer what it says on my birth certificate? After all, they don't say what they will do with this information, or who will see it. If I haven't come out to my friends and family, do I dare answer this truthfully?
If libraries are supposed to be a safe place, why would we ask this question, and make someone justify themselves just to use our services?
What are our values? Equity? Privacy? Do we value our users more than our fetish for data collection?

If we search for information about stress in the workplace, and our tools tell us that stress is probably related to women in the workplace, what are our values? Equity?

If our tools will only work on the newest technology, what are our values? Access? Equity?
This is how bad design makes it out into the world. Not due to malicious intent, but with no intent at all.
— Mike Monteiro3
The difference between libraries and Palladio, Speer, and Eiermann is that the architects intentionally encoded their values into their designs. Your values will be encoded in your work whether you want them to be or not. So be conscious of your values and what you want your work to say.
Last year, my friend Cody Hanson gave a talk called Libraries are Software.4 According to Cody, when you look at what we think libraries are, the one constant amid all the change is that our values underlie what we do. They are the most important part of libraries.
If we want to do the kinds of things we talk about here at UX Libs, we have to build those things into the software and services that we make for our users. This is intentionally encoding our values into the things we design.
When I hear Cody say that we should be encoding our values into our services, I hear him saying that we need ethical design.
What is Ethical Design?
Ethical design is thinking about what happens in the world if we make this thing. You have to think about how the people who use your design will be affected. How do the choices we make when creating something help or hinder those who will use it? Does that sound familiar to you? That sounds like UX.
This should be our bread and butter. Because a graphic designer doesn't necessarily worry about how someone will interpret a poster (even if they should). An industrial designer doesn't necessarily have to worry about how people will react to what they've created (even if they should). But an experience designer thinks about the experience of the people who use our tools. It's in the title! We have the tools.
If you read only one book about design in your life, it should be Victor Papanek's Design for the Real World. It's a classic 1970s screed against the state of things. The first line of the book is "There are professions more harmful than industrial design, but only a very few of them."
Papanek argues that designers need to engage their moral and ethical judgment before creating prototypes, before drawing sketches. He urges us to think about ethics from the beginning.
Analytics
Let's talk about analytics.
Libraries love analytics! And I'm thinking of analytics broadly. We can include qualitative and quantitative data about our users under the heading of analytics. We love usage data, website data, and even gate counts for some reason.
Why do we collect data? Design giant IDEO reminds us that "the goal of design research isn’t to collect data; it’s to synthesize information and provide insight and guidance that leads to action."5
Now, in the States, it's different from other places, because we're less rational about data collection than most. Many US libraries have to report data to organizations like ACRL and ALA, because those organizations need to know how many people walked through our gates. It doesn't matter that gate counts offer absolutely no qualitative information or context for how to improve your library or services; the point is that you have a number you can share. So we collect the data.
But data has a particular purpose, which is to guide you as you work through the design process. It's to help inform your design. But that's not how libraries tend to use it. We mostly seem interested in collecting it, just in case.
How many of you have Google Analytics installed on your websites? Almost all of you. How many of you made a conscious choice when installing it, understanding you were making a trade-off? How many decided that what you would learn from the data was so valuable that it was worth the risk to your users' privacy? No one here had that conversation.
At GVSU, we didn't have that conversation either. I have 14 different instances of Google Analytics running on all our various web tools. (This is starting to feel like a 12-step meeting: My name is Matthew, and I use Google Analytics.)
But beyond the privacy concerns (which are super real, and you should read more about them from Eric Hellman), I have other real issues with website analytics. We love analytics and data so much that often they're the only thing we see.
Remove a person’s humanity, and she is just a curiosity, a pinpoint on a map, a line in a list, an entry in a database. A person turns into a granular bit of information.
— Frank Chimero6
But what do you see in a spreadsheet, or a database, or a map? Can you see the people behind the rows and rows of data?
We always talk about designing for people. But if the people are only represented by columns and rows of numbers, it is a bit easier to forget that those are real, complicated people using our libraries. Analytics reassure us that people are predictable, that their behavior will be reasonable and methodical. But what tells us more about human nature: a spreadsheet of catalog searches, or Whitman, when he writes "Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes"?

This is a screenshot from the marketing website for Gale's Analytics on Demand. It's an analytics service that will give you "household-level data" about your users. That means that instead of anonymous numbers in the rows of a spreadsheet, you can know the "age, race, and ethnicity" of the people behind those numbers. But what good is that information? Are demographics, our racial backgrounds, our genders, what cause us to make decisions, to want things, to run searches in the library? How can this information possibly help a library?
They also claim they can provide you with voting information about your patrons, which, I suppose, can be valuable if you are a public library with an upcoming millage, but it scares the hell out of me. Look at that map: do you see people behind the pinpoints highlighting where all of the library's patrons are?
The way this service works is that you upload all of your own information to their friendly servers, and then they crunch your numbers and spit out some dehumanizing maps and charts as the last vestiges of your patrons' privacy are flushed away. The service provides Gale with a lot of value too, of course: they want the data more than they want to provide you with analytics.
I think we have to be very careful with this kind of analytics usage, because we're not thinking up front about what is right. Our moral judgment has not been invoked before we start designing.
Let me be clear: I am not condemning analytics. I am not saying we shouldn't use data about how our services are used. (I am coming down strongly against creating a map of where library patrons live.) But we're designing for people, and we need to make our design process more rich, and move beyond analytics and data.
Personas
"But Matt!"" you're saying. "We have personas! They help us take that data, and they humanize it."
How many people here use personas? About half. They can be a useful tool, especially for communicating patterns of behavior within your organization, for designing, or for providing some context for the analytics data you've collected. Some people prefer to see a face instead of data, to "trick" themselves into designing for a person who is an amalgamation of the behaviors of particular types of users. I use personas in my work, as do my colleagues at GVSU.
And we have good intentions, because you need a way to design for people who are not you or the other library staff. As Erika Hall states, designing for yourself will likely lead to "building discrimination right into your product."7
Personas begin with us trying to better understand our users. Below are some examples of personas from Montana State University Libraries. They've done a terrific job of documenting and researching their personas, and they've been useful to me as I created and refined my own.


They have an undergraduate persona, a graduate student persona, and they also have faculty, including an adjunct (whose salary is approximately half of what a barista at Starbucks makes.)
But the longer I've used personas, and the more I've talked with others who use them, the more uneasy I've grown about them as a design tool, at least in the way they are commonly used. Do you see anything strange about the personas above?
Everyone is smiling. Even the adjunct professor is smiling! (I used to be an adjunct at GVSU, and I can tell you that I didn't have time to smile, because I had to work 2 other jobs to make rent.)
The question is: when our personas all seem to be happy, perfect individuals, who are we designing for? We're designing for smiling people, happy people! People who love being at the library! What amazing opportunities they have in the library, and they're so happy about it.
But people aren't really like that. Karen McGrane asks us to remember that we're not designing for an "expert automaton, programmed to complete each task flawlessly." We're instead designing for "the messy, error-prone, distracted human."8 These "patterns" of behavior we've encoded into personas seem like nothing more than the same biases we read into our raw data, but with a smiling, CC0-licensed portrait tacked on top.
Full disclosure: below are the smiling people on the GVSU personas. We're not immune from this.

How can we make these design tools and our research reflect the complexity of our users? Think about the situations that might bring someone to a library. Your users might not have a choice about coming to you. They might be in crisis, afraid, numb, bored, angry, sad, or some combination of these things. Are we ready to help them when they come?
Deirdre Costello from EBSCO's User Research team shared Indi Young's article on crafting personas without demographics or photos. Indi proposes that instead of leading with demographics, you lead with needs, and use description and narrative instead of "facts". Demographics aren't necessary for a persona to be useful, but they might trigger biases about particular groups of people in those who use them to design.
Algorithms
Now everyone's favorite topic, algorithms.
We interact with algorithms all day, every day, and the library is no exception. Of course, we're ahead of the game: before we had computers, the library ran on algorithms. An algorithm is a series of choices, often based on conditions, like the steps to catalog a book. (Impress your friends by referring to AACR2 or RDA as algorithms about semicolon placement.)
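To make that concrete, here is a toy sketch in Python. The rules are invented for illustration (real AACR2 and RDA rules are far more involved), but cataloging rules have exactly this conditional, step-by-step shape:

```python
def main_entry(work):
    """A toy cataloging "algorithm": a series of choices based on conditions.

    These rules are made up for illustration, not actual AACR2/RDA logic.
    """
    authors = work.get("authors", [])
    if not authors:
        return work["title"]   # no author: enter under title
    if len(authors) <= 3:
        return authors[0]      # a few authors: enter under the first
    return work["title"]       # many authors: back to the title

print(main_entry({"title": "Beowulf", "authors": []}))
print(main_entry({"title": "Design for the Real World",
                  "authors": ["Papanek, Victor"]}))
```

Every branch is a decision somebody made, long before a computer ran it.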
We think of algorithms as neutral systems that we build into computers. And so, if the computer is doing the choosing or the recommending, then there can't be a problem of bias or discrimination.
But all of those algorithms were written by messy people, with ideas and opinions and biases they might not even know they have. And often these people don't even understand what they are designing for.
Relevance is one of those things. Libraries don't even have a clear idea of what relevance is anymore. Yet we base many of our decisions on the effectiveness of one set of relevancy algorithms over another.
But what is relevance? How many people here have a particular search you do when you encounter a new library search tool? (I once built a small social network, at a bar after a conference, called This is My Search, which allowed librarians to simultaneously share "their search" while testing it on a random new system, so I know you all have one.)
My search is "batman," which gives you a nice mix of medieval special collections material as well as contemporary books and media from both adult and juvenile collections. UX Libs' own Matt Borg uses "ethical tourism." John Chapman of OCLC told me years ago that his was "space law." And my colleague Jeff uses "stress in the workplace," since you can see how well a tool handles synonyms by looking for engineering articles. (Full props to Pete Coco of Boston Public Library, who also uses batman. I shamelessly stole his search years ago when I realized how useful it was.)
But we like to use the same search across different tools because we think we're also evaluating the relevance algorithm of a tool. We might say, upon seeing a strange set of results in a new system, "I don't like the relevance ranking of this system."
But what is the assumption behind that statement? Is relevance actually a property of a list of items? Or is relevance actually a property of the relationship between the items and the person who needs information? There has to be a "for whom" for something to actually be relevant.
If two different patrons search for "fetal cell research" in your discovery tool, you may get varying reports as to the relevance of the results. Imagine the first user is a second-year college student writing a research paper, while the other is a late-career academic who was just diagnosed with cancer and told that experimental fetal cell treatments are her only hope of survival. These two people will have wildly different ideas of what results are relevant.
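If you tried to write that down, you would find that a relevance function can't just score documents against a query; it needs the person, too. Here is a hypothetical Python sketch of those two patrons. The scoring and the user attributes are invented to show the shape of the problem, not any vendor's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    readability: float  # 0..1, higher = easier to read
    recency: float      # 0..1, higher = newer

@dataclass
class User:
    expertise: str      # "novice" or "expert"
    intent: str         # "coursework" or "clinical"

def relevance(doc: Doc, query: str, user: User) -> float:
    # Crude text match: the fraction of query words found in the title.
    words = query.lower().split()
    score = sum(w in doc.title.lower() for w in words) / len(words)
    # The same match is weighted differently depending on who is asking.
    if user.expertise == "novice":
        score *= doc.readability  # a student may want readable overviews
    if user.intent == "clinical":
        score *= doc.recency      # a patient may need the newest trials
    return score

overview = Doc("Fetal cell research: an overview", readability=0.9, recency=0.3)
trial = Doc("Fetal cell therapy trial results, 2017", readability=0.4, recency=1.0)

student = User(expertise="novice", intent="coursework")
patient = User(expertise="expert", intent="clinical")

docs, q = [overview, trial], "fetal cell"
# Same query, same documents, different rankings for different people.
print(max(docs, key=lambda d: relevance(d, q, student)).title)  # the overview
print(max(docs, key=lambda d: relevance(d, q, patient)).title)  # the trial
```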
Tarleton Gillespie suggests that this makes relevance all but unknowable, and so "quick clicks and no follow-up searches [are] an approximation, not of relevance exactly, but of satisfaction."9 What we instead are designing for is satisfaction, that things "look right." I don't know what the answer is to this problem, but we have to start by being honest with ourselves about what our tools are capable of.
The Topic Explorer is one of my favorite features of Summon, the discovery layer we use at GVSU. It is designed to give contextual information about broad searches (or what we often call crummy searches: 1-2 words at most). In the example given in Summon's marketing material, searching for "heart attack" expands the search and brings in an encyclopedia article about myocardial infarction, the medical term for a heart attack. Because medical literature uses that terminology, the user gets more academic materials and learns the proper terminology for their field of research.
I did an analysis of the Topic Explorer last year, because I really love this feature and I wanted to make it better. 93% of the time, the suggested topic was at least related to the user's search. That's a pretty good success rate for an algorithm.
But the more important question isn't about the tool's success, but what happens when it fails. What about the other 7% of the time, when it shows some off-topic entry?
We don't know how the Topic Explorer works, because the algorithm is proprietary. So we don't know what makes it choose one topic over another. We don't know, for instance, what makes it choose "Sexual abstinence" if you are searching for virginity. These are not synonyms: you can be sexually abstinent and not be a virgin. There are, in fact, strong religious and political connotations to those two terms, and conflating them in this way makes it appear that ProQuest, and, as far as our users are concerned, the library, are taking a political stance.
But no one sat down and consciously connected sexual abstinence and virginity together at ProQuest, so how did they get matched? What happened?
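We can only guess, but one plausible guess is statistical co-occurrence: if topics are matched by how often terms appear together in a corpus, rather than by editorial judgment, then terms that travel together get treated as the same topic, whether or not anyone intends the association. Here is a toy Python sketch of that idea (entirely hypothetical, and certainly not ProQuest's actual method):

```python
from collections import Counter

# A hypothetical corpus: each document is the set of terms it contains.
documents = [
    {"virginity", "sexual abstinence", "religion"},
    {"virginity", "sexual abstinence", "purity pledge"},
    {"virginity", "medieval literature"},
    {"sexual abstinence", "public health"},
]

def suggested_topic(term, docs):
    """Suggest the topic co-occurring most often with the search term.

    No one decides the pairing; frequency in the corpus decides it.
    """
    co = Counter()
    for doc in docs:
        if term in doc:
            co.update(doc - {term})
    return co.most_common(1)[0][0] if co else None

print(suggested_topic("virginity", documents))  # 'sexual abstinence'
```

Nobody chose that pairing; the corpus did.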
What happened in this discovery tool to connect a search for "new york city waste" to "new york city women"?
It's not a coincidence that the problems we find in our tools reflect the same biases we fight outside the library in our larger society every day.
None of these were intentionally added to these systems. But as Mike Ananny has said, "Reckless associations—made by humans or computers—can do very real harm especially when they appear in supposedly neutral environments."10 We have to be very careful when we provide complex tools and services to complex, messy people. Because if we lose sight of the fact that we are dealing with real people, not smiling, 2-dimensional buckets of data, we can do real harm.
A lot of this is systemic. The library world is overwhelmingly white, and those making software for libraries are overwhelmingly male.
This lack of diversity is a user experience issue! Who you hire to build your tools is a user experience issue! Who you hire to work your desk is a user experience issue! Who you hire to choose the collections is a user experience issue! Who gets a voice in your staff meetings is a user experience issue! These all affect your users' experience of the library, and must be treated accordingly with your moral judgment in play from the beginning.
Your library is people
This is the best profession to be in. Because our mission is genuinely to help people.
But we need to keep in mind the whole humanity of those we are designing for. And that might mean some changes to the way we do things. But our users will thank us for it.
Thank you.
Footnotes
1. de Botton, A. (2006). The Architecture of Happiness. New York: Vintage International. p. 93.
2. Goodman, P. (1969). "Can Technology Be Humane?" New York Review of Books, Nov. 20, 1969.
3. From the talk How Designers Destroyed the World.
4. A fire alarm went off at this point in my talk, and the 175+ attendees and I tromped down 9 flights of stairs and then back up 9 flights of stairs between this sentence and the next. The gap in the video recording was pretty interesting.
5. IDEO. (2016). The Little Book of Design Research. p. 41.
6. Chimero, F. (2011). "The Space Between You and Me." The Manual, 1. p. 19.
7. Hall, E. (2013). Just Enough Research. New York: A Book Apart. p. 79.
8. McGrane, K. (2011). "Lesson." The Manual, 2. p. 45.
9. Gillespie, T. (2014). "The Relevance of Algorithms." In T. Gillespie, P. Boczkowski & K. Foot (Eds.), Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press. p. 175.
10. The Atlantic, "The Curious Connection Between Apps for Gay Men and Sex Offenders."