Talk at Re:Publica – Curating the Digital Commons
Last week, the thirteenth edition of the Re:Publica conference took place in Berlin. With more than 5,000 people attending, it is one of the biggest events on new media, journalism and activism. The OpenGLAM team was there to give a talk about curating the digital cultural commons.
Together with Daniel Dietrich, chairman of the Open Knowledge Foundation Germany and a member of the OpenGLAM working group, I prepared the talk, which was largely inspired by the recent post on OpenGLAM about Small Data in GLAMs. At the moment we can access such vast amounts of data that they are no longer comprehensible. We therefore need better infrastructure, access and tools to create the most value out of all this metadata and content.
We started the talk by explaining the notion of a commons: the cultural and natural resources accessible to all members of a society. The traditional environmental commons has been debated many times, often under the heading of ‘the tragedy of the commons’, because these natural resources are not as non-rivalrous and non-excludable as we used to think. A digital commons, however, does not suffer from this problem: when I make a copy of a dataset, any other person can still make that exact same copy, and the resource never depletes.
“Digital commons are defined as information and knowledge resources that are collectively created and owned or shared between or among a community and that tend to be non-exclusive, that is, be (generally freely) available to third parties. Thus, they are oriented to favor use and reuse, rather than to exchange as a commodity.” – Mayo Fuster Morell
The fact that these digital artefacts can be re-used by anybody is perhaps the greatest asset of the digital commons: everybody can curate, connect, annotate and remix these materials indefinitely.
After an explanation of the difference between metadata and content (and how difficult the distinction often is!) and an overview of some leading open culture projects, such as Europeana and the Digital Public Library of America, it became clear just how much content we currently have access to. Europeana and the DPLA alone provide 30 million metadata records that each link to a digitised object. Wikimedia Commons and the Internet Archive give access to another 25 million media objects. How can a user make sense of that?
For that reason we need to stop thinking about just adding more data and building ever-larger databases. The commons needs to be structured and made accessible in a way that lets users get meaningful results out of this content and data, and collect the data relevant to their research. Institutions and users alike should be able to easily create small data ‘packages’, for example a package collecting all of Van Gogh’s work, as sketched below. The internet is exceptionally well placed to bring such content together in one place, something that would never be possible physically. At the same time we can provide relevant links between collections, artists, time periods and so on, so the user can explore more related content. This also depends on good-quality metadata, which is not always available at the moment; hardly surprising when combining data from thousands of cultural institutions.
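As an illustration, here is a minimal sketch of what building such a small data ‘package’ could look like against Europeana’s public Search API. The endpoint, the wskey and cursor parameters and the who: query field follow Europeana’s API documentation rather than anything in the talk itself, and the API key is a placeholder you would need to request from Europeana.

```python
import json
import requests  # third-party: pip install requests

# Europeana's public Search API endpoint (an assumption beyond the talk itself).
SEARCH_URL = "https://api.europeana.eu/record/v2/search.json"
EUROPEANA_API_KEY = "YOUR_API_KEY"  # placeholder: request a free key from Europeana


def collect_package(query, max_records=500):
    """Collect metadata records matching `query` into one small local dataset."""
    records, cursor = [], "*"  # "*" starts cursor-based paging
    while cursor and len(records) < max_records:
        response = requests.get(SEARCH_URL, params={
            "wskey": EUROPEANA_API_KEY,
            "query": query,
            "rows": 100,        # records per page (the API's maximum)
            "cursor": cursor,   # page through the full result set
        })
        response.raise_for_status()
        data = response.json()
        records.extend(data.get("items", []))
        cursor = data.get("nextCursor")  # absent on the last page
    return records[:max_records]


if __name__ == "__main__":
    # A 'package' of works attributed to Van Gogh, saved as one small file.
    package = collect_package('who:"Vincent van Gogh"')
    with open("van_gogh_package.json", "w") as f:
        json.dump(package, f, indent=2)
    print(f"Saved {len(package)} metadata records")
```

The point of the sketch is the shape of the workflow, not the specific API: a user or institution runs one query against an aggregator and ends up with a compact, shareable dataset instead of a 30-million-record haystack.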
Finally, we need the right tools to re-use the digital commons: tools with which we can curate, annotate, visualise, mash up, and much more. Together, users and cultural institutions can create the most value out of this enormous amount of digitised content and data.
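To make the re-use step concrete, here is an equally small sketch that turns the package saved above into a crude timeline. It assumes each record carries a year field as a list of strings, which is how Europeana’s search results commonly expose dates; treat that field name as an assumption.

```python
import json
from collections import Counter

# Hypothetical re-use of the package built above: count records per decade.
with open("van_gogh_package.json") as f:
    package = json.load(f)

decades = Counter()
for record in package:
    for year in record.get("year", []):  # assumed: a list of year strings
        if year.isdigit():
            decades[f"{int(year) // 10 * 10}s"] += 1

# Print a simple text-mode timeline of the collection.
for decade, count in sorted(decades.items()):
    print(f"{decade}: {'#' * count} ({count})")
```

Visualisation this simple is obviously not the end goal, but it shows how quickly a small, well-scoped package becomes something a researcher can actually work with.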
For a video recording of the talk, click here.