Symposium Rediscovered: new technologies on historical artefacts


Liber Pontificalis. UBL VLQ 60, fol. 20r. Images made with various filters of the White Light Portable Light Dome, developed by KU Leuven. 

The materiality of historical artefacts and the development of new digital technologies might seem at odds; in fact, quite the opposite is true. Digital technology is increasingly deployed to deepen our knowledge of cultural heritage in the broadest sense. On 16 November 2018 Leiden University Libraries invited a variety of speakers to discuss the rediscovery of historical artefacts through new technology, focusing not only on the technology itself but also on its implications for historical research and our understanding of our material heritage.

The symposium is part of the programme Beyond content, in which the UBL focuses on the materiality of texts and images through a series of activities such as workshops, presentations and an exhibition. Specific attention is paid to the forms in which historical texts and images have been handed down, and to the digital techniques that have recently been developed to study them better. For more information, see the Beyond content website.

The speakers introduced us to a range of methods, tools and algorithms, often borrowed from the exact sciences and applied within the humanities. That these do not always need to be high-end and expensive was shown by the first keynote speaker, Kate Rudy (University of St Andrews/NIAS). In her presentation Four technologies to spy on the past, she talked about the projects she will start as part of her upcoming Leverhulme Fellowship. As a medieval art historian she is interested in the production and use of illuminated manuscripts. In an earlier project she studied the use of texts and miniatures in a manuscript by measuring the grime with a densitometer. The calliper she will use to measure parchment thickness costs only 100 EUR, but serves perfectly to find out whether leaves or quires were added to a manuscript.

Rudy also stressed the importance of handheld amateur photography. When libraries digitise manuscripts, they often focus on lavishly decorated and untouched ones. But many researchers like her are particularly interested in the ugly, worn and broken ones. Moreover, cleaning a manuscript as part of a conservation project also erases information on how the manuscript was used. When researchers visit our library, they take many pictures of unstudied and non-digitised manuscripts, or from surprising angles. These pictures are sometimes shared on Twitter, but most of the time they are kept only on standalone computers, unavailable to others. Although many researchers do use free cloud storage such as Google Photos and Flickr, this is not a reliable solution: the platforms can change their terms and conditions (Flickr recently limited the possibilities for free accounts), and sometimes simply shut down (just think of Picasa). This led to an interesting public discussion: do research institutions have a responsibility to store and share the results of DIY digitisation? As a service, it turned out to be very much desired by researchers.

Hannah Busch (Huygens ING) participated in the eCodicology project, in which several tools were developed to analyse large amounts of data taken from medieval manuscripts. In her presentation Machines and Manuscripts: New technologies for the exploration of medieval libraries she explained the use of algorithms for the automatic identification of layout elements, such as columns, initials and miniatures. These data are added to the information taken from the descriptions in traditional catalogues. When combined, they form a rich source for data visualisations of libraries as a whole. This makes it possible to gain better insight into book-historical aspects such as the relationship between format and size, or the percentage of manuscripts with decorations or miniatures.
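Once layout measurements and catalogue metadata are combined into one table, such book-historical statistics reduce to simple aggregations. A minimal sketch in plain Python, with invented records standing in for eCodicology's actual data:

```python
# Hypothetical manuscript records combining automatic layout analysis
# (miniature count) with catalogue metadata (dimensions in mm).
records = [
    {"shelfmark": "MS 101", "height_mm": 310, "width_mm": 225, "miniatures": 12},
    {"shelfmark": "MS 102", "height_mm": 185, "width_mm": 130, "miniatures": 0},
    {"shelfmark": "MS 103", "height_mm": 290, "width_mm": 210, "miniatures": 3},
    {"shelfmark": "MS 104", "height_mm": 170, "width_mm": 120, "miniatures": 0},
]

# Share of manuscripts containing at least one miniature.
decorated = sum(1 for r in records if r["miniatures"] > 0) / len(records)

# Relationship between format and size: mean width-to-height ratio.
ratio = sum(r["width_mm"] / r["height_mm"] for r in records) / len(records)

print(f"{decorated:.0%} of manuscripts contain miniatures")
```

Scaled up to a whole library, the same aggregations feed the visualisations Busch described.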

In her new project at Huygens KNAW, Digital forensics for historical documents: Cracking cold cases with new technology, the goal is to build a tool for script analysis in manuscripts based on convolutional neural networks, a technique also used in image and facial recognition.[1]
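At the heart of such a network is the convolution operation: a small kernel slides over the page image and responds wherever its pattern (an edge, a stroke direction) occurs. A toy illustration in plain Python, not the project's actual model:

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation), the core CNN operation."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A tiny patch where a vertical pen stroke (1s) meets the background (0s) ...
stroke = [[0, 0, 1, 1],
          [0, 0, 1, 1],
          [0, 0, 1, 1]]
# ... and a vertical-edge kernel that responds strongly at the stroke's edge.
edge = [[-1, 1],
        [-1, 1],
        [-1, 1]]

response = conv2d(stroke, edge)
print(response)  # peaks where the background-to-stroke transition occurs
```

A real CNN learns the kernel values from labelled examples instead of hand-coding them, and stacks many such layers.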

Meanwhile, Hannah Busch offered a very useful summary of researchers' needs. They want to:

  • Perform their own ingest with IIIF
  • Run different types of analysis
  • Share data
  • Search/export/visualise
  • Allow other people to annotate and correct

A prerequisite for this is of course to have the data FAIR: findable, accessible, interoperable and reusable.
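Ingest via IIIF means describing images with a Presentation API manifest that any compliant viewer or analysis tool can consume. A minimal manifest sketched as Python data, with invented identifiers and labels:

```python
# A minimal IIIF Presentation API 2.1-style manifest (all values hypothetical),
# of the kind a researcher's own ingest would produce.
manifest = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@id": "https://example.org/iiif/ms-vlq60/manifest",
    "@type": "sc:Manifest",
    "label": "Liber Pontificalis (example)",
    "sequences": [{
        "@type": "sc:Sequence",
        "canvases": [{
            "@id": "https://example.org/iiif/ms-vlq60/canvas/f20r",
            "@type": "sc:Canvas",
            "label": "fol. 20r",
            "width": 3000,
            "height": 4000,
        }],
    }],
}

def canvas_labels(m):
    """Collect the page labels from every sequence in a manifest."""
    return [c["label"] for seq in m["sequences"] for c in seq["canvases"]]

print(canvas_labels(manifest))
```

Because the structure is standardised, the same manifest can be loaded into any IIIF viewer, annotated, and shared — exactly the workflow the list above asks for.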

Francien Bossema (Centrum Wiskunde en Informatica/Rijksmuseum/UvA) demonstrated the FlexRay Lab, a method for 3D visualisation using X-rays and CT.[2] This non-invasive method can be used in medical imaging and the food industry, but also for art historical research.


With the CT scanner it is possible to look inside objects in 3D during the scanning process. Together with the Rijksmuseum a workflow was developed that can be used both for research during a conservation process and to reconstruct a production process. Bossema explained the method by reconstructing the production of a so-called Chinese puzzle ball. These decorative balls were made in the mid-18th century from a single piece of ivory and consist of several concentric spheres, each of which rotates freely. Using the CT scanner it became clear that the spheres were made with a set of L-shaped tools with progressively lengthening cutters. Only the outermost balls were carved elaborately.[3] Currently, the team is working on an in-house scanner for the Rijksmuseum, to make the transition from 2D to 3D scanning possible by providing a standardised process for art historical research. As a result of these activities the Rijksmuseum is collecting large amounts of data. The museum is thus entering a new field, and cooperation with institutions with more experience in this field, such as research institutes and libraries, is necessary.

In recent years, libraries and archives have increasingly been confronted with growing collections of born-digital scholarly archives. Peter Verhaar works both for the Centre for Digital Scholarship of Leiden University Libraries and for the master’s programme in Book and Digital Media Studies. In his presentation Durable Access to Book Historical Data he discussed the challenges he faced in acquiring the digital archive of Professor Paul Hoftijzer. Hoftijzer, who works on the Leiden book trade in the early modern period, has produced a rich collection of Word documents and Excel spreadsheets that he wanted to donate to the library. As a first step, Verhaar cleaned the unstructured data and transposed them to a database in a systematic format. This essentially resulted in a new archive. The question for the audience was whether both archives should be kept. Cleaning up the “data grime” will in either case lead to a loss of information, just as cleaning a physical manuscript does.

The pilot was also set up to raise awareness among researchers. The university library offers courses in data management to ensure that researchers know how to make their data FAIR. But we are now in the middle of a transition: researchers who will retire in the next couple of years never received these instructions, and if no measures are taken, this will lead to a loss of research data. Paul Hoftijzer, who also attended the symposium, stressed the importance of keeping both a personal and a professional archive. In his opinion, only the combination of the two can ensure a correct interpretation of the data.

Martijn Storms (Leiden University Libraries) introduced the audience to the crowdsourcing project Maps in the crowd, which has been running for more than three years now and has been very successful. With the help of enthusiastic volunteers almost 10,000 maps have been georeferenced, which means that users can find and use maps in an intuitive, geographic way, by browsing on a map. The maps can also be used in geographic information systems, e.g. to facilitate landscape analysis. The project attracted a lot of press, providing a large audience of map enthusiasts the opportunity to connect with the library and its collections.
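Georeferencing boils down to estimating a transform from pixel coordinates to geographic coordinates using ground control points. In the simplest north-up case this is the six-parameter affine of an ESRI-style world file; a sketch with invented control points:

```python
def world_file(gcp1, gcp2):
    """Six world-file parameters (A, D, B, E, C, F) for a north-up map with no
    rotation, derived from two ground control points: ((col, row), (lon, lat))."""
    (c1, r1), (x1, y1) = gcp1
    (c2, r2), (x2, y2) = gcp2
    a = (x2 - x1) / (c2 - c1)   # pixel size in the x-direction
    e = (y2 - y1) / (r2 - r1)   # pixel size in y (negative: rows grow downward)
    c = x1 - a * c1             # x-coordinate of the top-left pixel
    f = y1 - e * r1             # y-coordinate of the top-left pixel
    return [a, 0.0, 0.0, e, c, f]

def pixel_to_geo(params, col, row):
    """Apply the affine transform to one pixel position."""
    a, d, b, e, c, f = params
    return (a * col + b * row + c, d * col + e * row + f)

# Invented control points: pixel (0, 0) lies at 4.0°E, 52.2°N,
# pixel (1000, 1000) at 4.5°E, 51.9°N.
params = world_file(((0, 0), (4.0, 52.2)), ((1000, 1000), (4.5, 51.9)))
print(pixel_to_geo(params, 500, 500))
```

Real georeferencing platforms fit this transform (often with rotation, or higher-order polynomials for distorted old maps) by least squares over many control points rather than just two.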

In the afternoon the audience was invited to participate in an introductory workshop on IIIF. You can try it out yourself here:

Additionally, a pop-up exhibition was set up showing a selection of materials from the collection.

The final keynote by Giles Bergel (University of Oxford) focused on the physical and material aspects of the digital. He started his paper, Beyond fixity: the printing press in the age of digital reproduction, by telling the story of the Doves Press, responsible for the famous Doves type. After the two partners, Thomas James Cobden-Sanderson and Emery Walker, got into a severe dispute about the rights to the matrices in 1913, Cobden-Sanderson threw all of them into the River Thames. Since 2013 the Doves Type has been revived digitally by the designer Robert Green. He managed to recover 150 pieces of the original type from the Thames, which helped him to reproduce the font, including the imperfections of the original matrices. This story shows that the “digital”, although increasingly experienced as something immaterial or even imaginary, has a materiality of its own as well. This sense of materiality is essential for book-historical research, even when it is performed with a laptop and a software package.

Giles Bergel is part of the Visual Geometry Group in Oxford, where tools are developed for visual analysis of image and video content in a variety of academic disciplines, such as the humanities, history and zoology. He is also Digital Humanities Ambassador in the Seebibyte project. One of the open-source products developed is VISE, an application that makes a large collection of images searchable by using an image region as a query. VIA is an image annotation tool that can be used to define regions in an image and create textual descriptions of them. Finally, the Traherne digital collator makes it easy to compare copies of the same text in order to identify variants between them. Thanks to this tool, researchers no longer have to follow the so-called “Wimbledon method” of comparing prints, looking back and forth between two copies like a spectator at a tennis match, which means that headaches are fortunately a thing of the past.
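Collation in miniature: Python's standard difflib can already surface word-level variants between two transcriptions, which is the kind of comparison the Traherne collator performs directly on page images. The lines below are invented examples, not Traherne's text:

```python
import difflib

# Invented transcriptions of the same line in two copies of a print.
copy_a = "gold is the fairest metal and most precious".split()
copy_b = "gold is the fairest mettal and most pretious".split()

# SequenceMatcher aligns the two word sequences; "replace" opcodes mark variants.
matcher = difflib.SequenceMatcher(a=copy_a, b=copy_b)
variants = [(" ".join(copy_a[i1:i2]), " ".join(copy_b[j1:j2]))
            for tag, i1, i2, j1, j2 in matcher.get_opcodes() if tag == "replace"]

print(variants)
```

Working on word sequences rather than characters keeps the alignment readable; the same pattern scales to whole page transcriptions.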

The presentations can be found here:


[1] For an introduction see: or read this article by Dominique Stutzmann in which the same technology is applied:

[2] For more information on the project:

[3]  With images and extensive description.

