Fantastic Futures: think old, not new

Stanford Libraries hosted the 2nd International Conference on AI for Libraries, Archives, and Museums December 4-6, 2019. Visit the website fantasticfutures.stanford.edu for recordings of the talks mentioned below.
Despite the giddy optimism that the name implies, the Fantastic Futures conference offered a studied enthusiasm for how applications of AI can unlock our past for the benefit of the future. The conference hosts, Michael Keller (University Librarian and Vice Provost for Teaching and Learning, Stanford University) and Aslak Myhre (National Librarian, National Library of Norway), both underscored the importance of libraries, archives, and museums in civil society and encouraged a renewed dedication to access, preservation, and learning through the application of machine learning. The intention was not only to consider how we can employ these technologies to improve existing processes, but to think about how they can engender new possibilities for knowledge institutions. In short, let us make AI our own.
Democratizing and demystifying AI
"Finland has shown us the path. It set up the goal of training 1% of its population to get basic knowledge of how AI works and can be used." So said Emmanuel Macron at the Global Forum on Artificial Intelligence for Humanity in Paris. He was referring to the course Elements of AI developed by Teemu Roos (University of Helsinki). In the session 'Democratizing AI' Roos and Rachel Thomas presented free online programs that help people learn about and implement AI. Thomas is co-founder of fast.ai which provides courses on deep learning, natural language processing, and more. Both programs encourage people around the world to learn practical skills, in the process helping to increase the diversity of AI practioners. The explicit goal of Fast.ai is to make AI non-exclusive and demonstrate that success in the field does not require tremendous resources or a PhD in Computer Science. As Bryan Catanzaro put it in his keynote address,"the priesthood of ML researchers is becoming less and less necessary for a lot of these applications."
Lightning talks on current work in AI for Cultural Heritage demonstrated the wide range of applications that have the potential to open access to collections that are either too big (85 kilometers of shelving in the case of the Vatican Apostolic Archive) or too difficult to describe with traditional methods. Examples included converting speech to text to index an audio and video archive, as demonstrated by Karen Cariani and James Pustejovsky, and automating entity extraction on Norwegian media content in Svein Arne Brygfjeld's Nancy project. Elena Nieddu (In Codice Ratio), Thomas Van Dijk, and Katie McDonough (Living with Machines) showed how machine learning can also be tremendously effective for extracting information from rare maps and manuscripts where each item has unique and distinct qualities. Peter Leonard and Thomas Smits showed the power of convolutional neural networks in transforming image discoverability. By vectorizing image collections and calculating the similarity of images in vector space, researchers can take advantage of clustering to explore the entire landscape of a collection as semantic fields (PixPlot) and spot trends in large collections of digitized visual sources (Chronic).
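To give a rough sense of the approach behind tools like PixPlot, the sketch below embeds each image in a collection with a pretrained convolutional network and then clusters the resulting vectors so that visually similar items fall near one another. It is a minimal illustration, not any of the presenters' code: the folder name, the choice of ResNet-50 as feature extractor, and the number of clusters are all assumptions, and it relies on the torchvision, Pillow, and scikit-learn packages.

```python
# A minimal sketch (not the presenters' code): embed digitized images with a
# pretrained CNN, then cluster the vectors so visually similar items group
# together. Folder name, model choice, and cluster count are illustrative.
from pathlib import Path

import torch
from PIL import Image
from sklearn.cluster import KMeans
from torchvision import models, transforms

# Standard ImageNet preprocessing expected by the pretrained network
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ResNet-50 with its classification head removed acts as a feature extractor
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

def embed(path: Path) -> torch.Tensor:
    """Return a 2048-dimensional feature vector for one image."""
    image = Image.open(path).convert("RGB")
    with torch.no_grad():
        return model(preprocess(image).unsqueeze(0)).squeeze(0)

# Hypothetical folder of digitized images
image_paths = sorted(Path("digitized_collection").glob("*.jpg"))
vectors = torch.stack([embed(p) for p in image_paths]).numpy()

# Nearest neighbours in this vector space are the "similar images" a browsing
# interface can surface; k-means gives a coarse map of the whole collection.
labels = KMeans(n_clusters=20, random_state=0).fit_predict(vectors)
for path, label in zip(image_paths, labels):
    print(label, path.name)
```

Tools like PixPlot add a further dimensionality-reduction step so that the whole collection can be laid out as a browsable, two-dimensional map.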
Access to curated data is essential to machine learning
A number of the projects shared in these sessions are the result of the changing research questions and practices of the digital humanities. As research practices change, researchers' needs and expectations of the library are changing too. Emmanuelle Bermès addressed this in her talk about the Bibliothèque nationale de France's new Data Lab, which combines a physical work space in the library with services for working with large text corpora. Bermès pointed out that libraries have been building interfaces to digital collections that are modeled on analogue modes of engagement: access to one or a few items at a time, pan and zoom for images, and so on. But researchers now want access to the raw data. The shift in research practices is happening quickly, and it is no longer a matter of distant reading versus close reading of content, but a complementary combination of both.
Bryan Catanzaro, Vice President of Applied Deep Learning at Nvidia, framed his keynote address as a request for help. He came to us 'hat in hand', as he put it, to ask for properly curated datasets. The problem, he said, is that in computer science the norm is to train models with datasets chosen simply because they are available, without attention to the content. But the field is realizing that the dataset is as important as the model in training an algorithm. He shared examples from both computer vision and natural language processing of 'Same Model - Different Data' producing very different results. Catanzaro's team trained a generative model for rendering graphics of street scenes, which could in turn be used to train self-driving cars. Mario Klingemann, an artist and resident at Google Arts and Culture, trained the same model on portrait paintings of white men from the 19th century and earlier and hooked it up to a web cam to "turn your face into an oil painting in real time."
Whereas, according to Catanzaro, computer scientists too often focus only on the model and treat the data as someone else's job, data curation is fundamental to artists working with generative systems. Kenric McDowell, who leads the Artists + Machine Intelligence program at Google, spoke about the significance of curating a training set in the afternoon panel Visual Art and Algorithmic Making, hosted by Vanessa Kam. An example he gave from his program is Anna Ridler's Mosaic Virus, 2019. The work is trained on meticulously catalogued photographs of tulips, and the result is an AI that generates tulips that vary according to the price of bitcoin, tying the 17th-century Dutch speculation around tulips to crypto-currency speculation today. The set of 10,000 hand-labelled photos used for training is itself a titled work: Myriad (Tulips), 2018. Another project, by Ross Goodwin, generated poetry based on data collected from a surveillance camera and the Foursquare API during a roadtrip, revealing the scarcity of what McDowell referred to as 'real' information in the commercialized landscape we inhabit. His takeaway from the project was, at least in part, how technologies coming out of industry are uncritical of context and unaware of the lived reality of the environment where they operate.
Catanzaro shared an example of the challenges of data availability for industry: IBM's use of the Yahoo Flickr Creative Commons 100 Million (YFCC100M) dataset to train a facial recognition algorithm. After assessments of leading facial recognition programs by MIT researchers demonstrated the white and male bias of the training data, IBM attempted to improve its results by using the more diverse YFCC100M dataset. Catanzaro pointed out that this choice raised concerns from Flickr users who did not intend to license their photographs to train facial recognition. Complicating the issue further, in the panel on Data and Privacy convened by Ashley Jester, sociologist Angèle Christin asked whether it is even ethical to make facial recognition more accurate. The lack of governance and the technology's harmful applications, particularly in surveillance and policing, have led to calls for banning facial recognition and to techniques that help people avoid being recognized.
Data privacy as a public good and AI governance
On the issue of bias and harmful applications of AI, the second keynote speaker, Joanna Bryson, shared her own research showing that building artificial models does not, in itself, result in biased systems. The bias maps to lived human experience; machine learning simply makes implicit human bias explicit. AI is not the problem, but it can disguise decision-making. Bryson sees an important role for knowledge institutions in holding people accountable.
"We do need a lot more archivists in the world. " Joanna Bryson
The problem with predictive policing and risk-assessment applications like PredPol and COMPAS, for example, may begin as one of data and flawed methodology, but Bryson is also concerned about decisions made intentionally to keep an unjust system in place. She would like to see greater attention paid to the provenance of data libraries, the preservation of model parameters, and the logs of decisions made in operationalizing these systems.
Bryson also addressed the issue of losing jobs to automation, a concern keenly felt in libraries, archives, and museums. Fears are stoked by technologists who see AI as a panacea for all sorts of purported human inefficiencies. Bryson played a clip from Geoff Hinton saying in 2016, "people should stop training radiologists now" since neural networks were being trained to detect disease in chest x-rays. But Bryson pointed out that there are more radiologists than ever before and they are producing more value because they benefit from AI.
Bryson cited David Autor's "Why Are There Still So Many Jobs?", in which he argues, in response to Erik Brynjolfsson and Andrew McAfee's The Second Machine Age, that history shows human labor can often complement new technology, and that AI in particular is useful for discrete tasks but not for the full range of activities that make up our jobs. Bryson makes it a matter of policy: underscoring the 'artificial' in AI, she insists that whatever we build and implement is deliberately built to facilitate our intentions, and we are responsible for it.
Gate Openers and Collaborators
So what kind of relationships ought we to build with our new patrons who want to train models on our data? As Thomas Smits asked Bryan Catanzaro, is it a collaboration or a transaction?
Aslak Myhre reminded us that libraries used to be gate keepers, but we are now gate openers and ought to be agents, missionaries, and educators. The message of the conference is that the work of knowledge institutions is necessary to further advances in AI, and that we can benefit from applying the technology ourselves as long as we do not automate away human work. Positioning ourselves to share the value we bring, not just as repositories but as professionals with knowledge and skills that are desperately needed, may require learning enough about this data-hungry technology to be conversant and to bring our expertise to the work.
Futurs fantastiques!
By the end of the last Unconference day, AI4LAM, an international, participatory community focused on advancing the use of artificial intelligence in, for, and by libraries, archives, and museums, had established a secretariat. The members are the National Library of Norway, Stanford Libraries, the Bibliothèque nationale de France, the Smithsonian, and the British Library. And now we can officially announce that the 3rd International Conference on AI for Libraries, Archives and Museums will take place December 9-12, 2020, at the Bibliothèque nationale de France in Paris.