Is learning about AI in libraries preparation for a future without librarians?

April 1, 2020
Catherine Nicole Coleman
An artwork by James Bridle that shows a car surrounded by a solid white line and a broken white line.

The image associated with this post is from "Autonomous Trap" by James Bridle.

If you attended or watched the talks at Fantastic Futures December 2019, you know that the answer to that question is emphatically No. Both of the keynote speakers addressed the essential role of libraries in providing curated data to improve AI and in preserving the data, models, and records for oversight of how the technology is implemented. Lightning talks (recordings available) demonstrated applications of AI by practitioners operating within libraries, archives, and museums. And Teemu Roos presented Elements of AI, a free online course for everyone designed to demystify AI.



The course is organized into six chapters recommended to be completed over six weeks. About forty of us in the Stanford Libraries have just completed the second week.

We are meeting for an hour each week in cohorts of 5-10 people to review the material and share our thoughts about how what we are learning relates to work we do in the library. Each cohort includes people from different units across the library, which has led to lively conversations about how the technology might be implemented and its implications for our work. We have supplemented the course with a related reading list.

The weekly online meetings are a community building exercise, surfacing affinities across units that we did not know existed, as well as occasional tensions. The last question in the first chapter of the course asks each person to come up with their own definition of AI. That is a challenge for anyone, which is part of the point of the exercise. Sharing responses among our colleagues reveals how much libraries are already engaged in thinking about the challenge of defining intelligence across linguistic barriers, cultural differences, and the implicit bias we all bring to the conversation.

In one of the recommended readings on our internal list (not part of Elements of AI), Algorithms of Oppression: How Search Engines Reinforce Racism, Safiya Noble demonstrates the harmful power of algorithms to perpetuate social inequality and uncritically prioritize some values over others. The case she makes is particularly important for our conversation because she is speaking from within the field of information science and reminds us that search engines only amplify the harm that has long existed in traditional classification methods.
 
From the perspective of computer science, Melanie Mitchell, in another recommended reading, Artificial Intelligence: A Guide for Thinking Humans, gives insight into how decisions are made to solve the easy problems at the expense of the more complex problems that touch people's lives directly. Distinguishing dogs from cats, she points out, is not so controversial, and yet that is the kind of success against which we measure the intelligence of machines. The bigger challenge is facing the many questions where there is no one correct answer, like "What is AI?" Confronting that question and sharing our opinions with our colleagues can be difficult. It exposes our fears and our certainties, and prepares us to face the uncertainty together.
