New Book Out Now! Electronic Legal Deposit: Shaping the Library Collections of the Future

I’m delighted that my latest edited collection, with Paul Gooding, is out now with Facet Publishing: Electronic Legal Deposit: Shaping the Library Collections of the Future. Stemming from our AHRC-funded Digital Library Futures project, which looked at the impact of changes to electronic legal deposit legislation upon UK academic deposit libraries and their users, we’ve pulled together this collection from contributing experts worldwide to look at issues and successes in preserving our digital publishing environment.

For those who don’t know what electronic legal deposit legislation is, let’s back up a bit. It is of course related to legal deposit, and as we say in the introduction:

Legal deposit is the regulatory requirement that a person or group submit copies of their publications to a trusted repository. First introduced by France in the sixteenth century… legal deposit has since been adopted around the world: as of 2016, 62 out of 245 national and state libraries worldwide either benefited from legal deposit regulations or participated in legal deposit activities… Regulations permitting legal deposit of printed publications have played a vital role in supporting libraries to build comprehensive national collections for the public good… In the last two decades, the scope of legal deposit has grown to formally incorporate ‘electronic’ or ‘non-print’ publication; those published in digital and other non-print formats. (Gooding and Terras 2020, p.xxiv).

We believe that this is the first book to attempt to draw together an overview of contemporary activities in major organisations and institutions trying to preserve our digital publishing world, which of course includes the world wide web, and how it is archived. We do so from a user perspective, looking at the implications this will have for users of the collections, both now and in the future. And we poke a big stick at the intersection of copyright and legal deposit legislation, which often conspire to make user access so limited and tricky to negotiate that end users are presented with a series of obstacles to even get basic access to electronic legal deposit content. You can find a breakdown of the chapters and contributors here, including those from the National Library of Sweden, the Biblioteca Nacional de México, the National Archives of Zimbabwe, and many more!

We’ll be holding a book launch on 5th November 2020, online, for those who want to hear some excellent speakers on the topic, including from the National Library of Scotland, and Universidad Nacional Autónoma de México. And I’m particularly taken with the cover of this one, which is an artwork created from an actual LiDAR scan of the National Library of Scotland stacks, by Edinburgh College of Art PhD student Asad Khan. I love it when a plan comes together.

For those who want a sneak peek of the content, under Facet’s Green Open Access rules, I’m allowed to share the author’s last copy of a single chapter from an edited collection. So here, from Paul and me, is our chapter on how the digital turn has affected legal deposit legislation, showing that “print era notions that influence the NPLD access and reuse regulations are increasingly out of step with broader developments in publishing, information technology, and broader socio-political trends in access to information”. Have at it, and enjoy.

Gooding, Paul and Terras, Melissa (2020). ‘An Ark to Save Learning from Deluge’? Reconceptualising Legal Deposit after the Digital Turn. In Gooding, Paul and Terras, Melissa (Eds). Electronic Legal Deposit: Shaping the Library Collections of the Future. Facet: London, 203-228.

New paper: Understanding multispectral imaging of cultural heritage: Determining best practice in MSI analysis of historical artefacts

What do people actually do when they undertake multispectral imaging of cultural heritage? I’m really pleased that our latest paper has been published, which helps set out the answer to this question, and provides a literature review of heritage digitisation projects that have used multispectral imaging, comparing and contrasting methods. This formed part of Dr Cerys Jones’ PhD research, which I was really delighted to supervise with Adam Gibson and Christina Duffy:

Jones, C, Terras, M, Duffy, C & Gibson, A 2020 “Understanding multispectral imaging of cultural heritage: Determining best practice in MSI analysis of historical artefacts”. Journal of Cultural Heritage.

You can see the journal version online here – but the link above will take you to the authors’ submitted copy. Enjoy!

Fully Funded AHRC Studentship: “Adopting Transkribus in the National Library of Scotland: Understanding how Handwritten Text Recognition Will Change Management and Use of Digitised Manuscripts”

I’m pleased to say that we’ve won a Scottish Graduate School for Arts and Humanities (SGSAH) 3.5-year scholarship for a PhD student, looking at how we can embed Handwritten Text Recognition software into digitisation practices whilst supporting users, working with Transkribus and the National Library of Scotland. The advert will go live soon on our official channels – but for now, here are the details, and I’d appreciate folks sharing with any interested EU or UK Master’s students! Closing date of 22nd June. Thank you!

The University of Edinburgh, the National Library of Scotland, and the University of Glasgow, in conjunction with the READ-COOP, are seeking a doctoral student for an AHRC-funded Collaborative Doctoral Award, “Adopting Transkribus in the National Library of Scotland: Understanding How Handwritten Text Recognition Will Change Management and Use of Digitised Manuscripts”. The project has been awarded funding by the Scottish Graduate School for Arts and Humanities (SGSAH) and will be supervised by Professor Melissa Terras (College of Arts, Humanities and Social Sciences, University of Edinburgh), Dr Paul Gooding (Lecturer in Information Studies, University of Glasgow), Dr Sarah Ames (Digital Scholarship Librarian, National Library of Scotland) and Stephen Rigden (Digital Archivist, National Library of Scotland).

The studentship will commence on 14th September 2020. We warmly encourage applications from candidates with a background in digital humanities, information studies, library science, user experience and human computer interaction, history, manuscript studies, and/or palaeography. This is an extraordinary opportunity for a strong PhD student to explore their own research interests, while working closely with a major cultural heritage organisation, two world-leading universities, and the team behind Transkribus, the machine learning platform for generating transcripts of historical manuscripts via Handwritten Text Recognition.

The student will be based in the School of Literature, Languages and Cultures, at the George Square campus of the University of Edinburgh, but will also spend considerable time at the National Library of Scotland, and liaising with the Transkribus team (based at the University of Innsbruck). Much of the research can be undertaken offsite.

The student stipend is approximately £15,285 per annum + tuition fees for 3.5 years. The award will include a number of training opportunities offered by SGSAH, including their Core Leadership Programme and additional funding to cover travel between partner organisations and related events. This studentship will also benefit from training, support, and networking via the Edinburgh Centre for Data, Culture and Society, and the Edinburgh Futures Institute. The student will be invited to join National Library PhD cohort activities.

Project Details

“Adopting Transkribus in the National Library of Scotland: Understanding how Handwritten Text Recognition Will Change Management and Use of Digitised Manuscripts”

Libraries are investing in mass digitisation of manuscript collections, but until recently textual content has only been available to those who have the resources for manual transcription of digital images. This project will study institutional reception of machine-learning processes that transcribe handwritten texts at scale. The use of Handwritten Text Recognition (HTR) to generate transcripts from digitised historical texts with machine learning approaches will transform access for researchers, institutions and the general public.

The PhD candidate will work with the National Library of Scotland and its user community to gain a holistic view of how HTR is changing access to the text contained within digitised images of manuscripts, from both an institutional and user context, at a time when the Library is scaling up its own mass digitisation practices. A student placement at the National Library of Scotland will link this to wider research questions. The candidate will learn how to use Transkribus at an expert level, work closely with the digital team at the National Library of Scotland to understand how best to apply HTR within a heritage digitisation context, and investigate how best to encourage and support the uptake of this technology with users of digitised content. This will result in a holistic, user-focused analysis of the current provision of HTR, while also assisting the National Library of Scotland and other cultural heritage institutions to understand how best to deploy this new technology effectively, understanding the implications for themselves and their users, as well as contributing to the growth of the only HTR solution currently freely available to the heritage community.

This CDA therefore gives unique access to a rapidly growing community, and a tool for historical research, which has not yet been studied from a user or institutional perspective. The outputs of this research will be of use to the National Library of Scotland, other institutions using HTR, those considering this approach, and the READ-COOP, who manage Transkribus.


At the University of Edinburgh, to study at postgraduate level you must normally hold a degree in an appropriate subject, with an excellent or very good classification (equivalent to first or upper second class honours in the UK), plus meet the entry requirements for the specific degree programme. In this case, applicants should offer a UK Master’s degree, or its international equivalent, with a mark of at least 65% in a dissertation of at least 10,000 words.

To be eligible to apply for the studentship you must meet the residency criteria set out by UKRI. For further details please see the UKRI Training Grant Guide document, p17.

The AHRC also expects that applicants to PhD programmes will hold, or be studying towards, a Masters qualification in a relevant discipline; or have relevant professional experience to provide evidence of your ability to undertake independent research. Please ensure you provide details of your academic and professional experience in your application letter.

Prior experience of digital tools and methods, an understanding of digitisation and the digitised cultural heritage environment, use of qualitative and quantitative research methods, and experience of palaeography, history, or an interest in historical manuscript material will be of benefit to the project. However, these are not prerequisites: while preference may be given to candidates with prior experience in these areas, others are warmly encouraged to apply.

Application Process

The application will consist of a single Word file or PDF which includes:

  1. a brief cover note that includes your full contact details together with the names and contact details of two referees (1 page).
  2. a letter explaining your interest in the studentship and outlining your qualifications for it, as well as an indication of the specific areas of the project you would like to develop (2 pages).
  3. a curriculum vitae (2 pages).
  4. a sample of your writing – this might be an academic essay or another example of your writing style and ability.

Applications should be emailed to no later than 5pm on Monday 22nd June. Applicants will be notified if they are being invited to interview by Thursday 2nd July. Interviews will take place on Thursday 16th July via an online video meeting platform.

Further information

If you have any queries about the application process, please contact:   Informal enquiries relating to the Collaborative Doctoral Award project can be made to Professor Melissa Terras.

More Info:

Libraries and archives are investing in digitisation of manuscript collections at scale, but until recently transcriptions of digitised texts have only been available to those with the resources to manually transcribe individual passages. AI is now used within archives for a growing range of tasks: tagging of large image sets; detecting specific content types in digitised newspapers; discovering archival materials; and supporting appraisal, selection and sensitivity review. Successful machine learning approaches to transcribing images of historical papers by Handwritten Text Recognition (HTR) will transform access to our written past for the use of researchers, institutions and the general public.

This project will explore how Handwritten Text Recognition (HTR) can be embedded into digitisation workflows in a way that best benefits an institution’s users. Transkribus, currently the only non-commercial HTR platform capable of generating transcriptions of up to 98% accuracy, will be used as the research foundation. Transkribus is the result of eight years of EU-funded research into the automatic generation of transcripts from digitised images of historical text through the application of machine learning. There are now 25,000 Transkribus users, including individuals and major libraries, archives and museums worldwide. Recently, a not-for-profit foundation (READ-COOP) has been established to ensure that the Transkribus software will be sustained. While recent publications have considered HTR from the perspective of platform development (Muehlberger et al., 2019), there has been no research published to date on how user communities are using HTR, the effect this will have on scholarly workflows, and the potential HTR has for institutions.

The project will partner with the National Library of Scotland, with support from its Digital and Archives and Manuscript Divisions. This will enable the student to pursue relevant areas of interest, such as:

  • Experience and analyse the staged processes of digitisation workflows in context;
  • Apply an understanding of HTR to the delivery and presentation of transcribed material online;
  • Work with digitised cultural archival resources, using HTR to generate transcriptions;
  • Apply ethnographic approaches to understand how HTR relates to traditional palaeographic practice;
  • Identify and work with the Library’s user communities and undertake user experience testing with them to evaluate barriers and opportunities.

This will result in a holistic, user-focused analysis of the current provision of HTR, while also assisting the National Library of Scotland and other cultural heritage institutions to understand how best to deploy this new technology effectively, understanding the implications for themselves and their users.

The successful student is likely to have relevant experience and qualifications. This might include qualifications in Library and Information Studies, Computer Science, Human Computer Interaction, Digital Humanities or cognate disciplines. They are likely to have knowledge of the Library and Archival sector gained either through professional or academic engagement. Alternatively, an appropriately strong academic background in addition to professional experience of the library sector and/or software development for cultural heritage organisations could substitute for specific qualifications. The placement part of the PhD will be carefully tailored to complement the candidate’s existing skillset, and the National Library of Scotland will give the student the opportunity both to understand existing digitisation workflows and to contribute to discussions of future embedded use of HTR.

The University of Innsbruck is the home of Transkribus, and is coordinating the READ-COOP. There will be opportunities within this studentship to visit Innsbruck, particularly for the Transkribus annual user conference, and to liaise with the team about developments with the software, including spending time with, and being in regular contact with, the delivery team to understand how the Transkribus infrastructure operates. This will be done with the full knowledge and support of the PhD supervisors.

New Paper: an examination of the implicit and explicit selection criteria that shape digital archives of historical newspapers

We’ve recently had a paper accepted to the Archival Science journal, on work which has emerged from the Oceanic Exchanges project, which has been looking into reuse of mass-digitised newspaper archives, as part of our Digging Into Data-funded activities. The question is: how has the selection of historical newspapers for digitisation affected the type of text-mining research we can undertake – and how are institutions making decisions about what should be digitised? I’ve provided a link to the authors’ accepted text, which we can share under the licensing for this journal:

Tessa Hauswedell, Julianne Nyhan, Melodee Beals, Melissa Terras, and Emily Bell (Forthcoming 2020). Of global reach yet of situated contexts: an examination of the implicit and explicit selection criteria that shape digital archives of historical newspapers. Accepted: Archival Science.


A large literature addresses the processes, circumstances and motivations that have given rise to archives. These questions are increasingly being asked of digital archives, too. Here, we examine the complex interplay of institutional, intellectual, economic, technical, practical and social factors that have shaped decisions about the inclusion and exclusion of digitised newspapers in and from online archives. We do so by undertaking and analysing a series of semi-structured interviews conducted with public and private providers of major newspaper digitisation programmes. Our findings contribute to emerging understandings of factors that are rarely foregrounded or highlighted yet fundamentally shape the depth and scope of digital cultural heritage archives and thus the questions that can be asked of them, now and in the future. Moreover, we draw attention to providers’ emphasis on meeting the needs of their end-users and how this is shaping the form and function of digital archives. The end user is not often emphasised in the wider literature on archival studies and we thus draw attention to the potential merit of this vector in future studies of digital archives.

Keywords: digitization; newspaper; selection rationale; cultural heritage; critical heritage

New paper – How Open is OpenGLAM? Identifying Barriers to Commercial and Non-Commercial Reuse of Digitised Art Images

I’m delighted to be a co-author on a new paper recently published in the Journal of Documentation: How Open is OpenGLAM? Identifying barriers to commercial and non-commercial reuse of digitised art images (PDF of accepted manuscript).

This results from Foteini Valeonti’s work, where she has built a “virtual museum that democratises art”, including (or at least, trying to include!) many openly licensed images of artworks, testing out the limits of open licensing for both commercial and non-commercial applications. Are they really that open? What barriers are in the way?

The full citation is:

Valeonti, F., Terras, M. and Hudson-Smith, A., 2019. How open is OpenGLAM? Identifying barriers to commercial and non-commercial reuse of digitised art images. Journal of Documentation. doi/10.1108.

The authors’ last uploaded version is available to download here. I’ll paste the abstract, below!



Purpose

In recent years, OpenGLAM and the broader open license movement have been gaining momentum in the cultural heritage sector. The purpose of this paper is to examine OpenGLAM from the perspective of end users, identifying barriers for commercial and non-commercial reuse of openly licensed art images.


Design/methodology/approach

Following a review of the literature, the authors scope out how end users can discover institutions participating in OpenGLAM, and use case studies to examine the process they must follow to find, obtain and reuse openly licensed images from three art museums.


Findings

Academic literature has so far focussed on examining the risks and benefits of participation from an institutional perspective, with little done to assess OpenGLAM from the end users’ standpoint. The authors reveal that end users have to overcome a series of barriers to find, obtain and reuse open images. The three main barriers relate to image quality, image tracking and the difficulty of distinguishing open images from those that are bound by copyright.

Research limitations/implications

This study focusses solely on the examination of art museums and galleries. Libraries, archives and also other types of OpenGLAM museums (e.g. archaeological) stretch beyond the scope of this paper.

Practical implications

The authors identify practical barriers of commercial and non-commercial reuse of open images, outlining areas of improvement for participant institutions.


Originality/value

The authors contribute to the understudied field of research examining OpenGLAM from the end users’ perspective, outlining recommendations for end users, as well as for museums and galleries.



New Book Chapter – On Virtual Auras: The Cultural Heritage Object in the Age of 3D Digital Reproduction

Still from the Science Museum, London’s, Shipping Gallery Lidar scan video, showing the figurehead from HMS North Star. From Hindmarch (2015, p. 145) with acknowledgement to Scanlab.

We’re really pleased to see the release of a new book, The Routledge International Handbook of New Digital Practices in Galleries, Libraries, Archives, Museums and Heritage Sites, edited by Hannah Lewi, Wally Smith, Dirk vom Lehn and Steven Cooke (2019) – which has a book chapter from me and my colleagues in it! Based on the PhD research of Dr John Hindmarch, which was supervised by myself and Prof Stuart Robson, this chapter asks if digital heritage 3D objects have their own aura…

Hindmarch, J., Terras, M., and Robson, S. (2019). On Virtual Auras: The Cultural Heritage Object in the Age of 3D Digital Reproduction. In: H. Lewi, W. Smith, S. Cooke and D. vom Lehn (eds). The Routledge International Handbook of New Digital Practices in Galleries, Libraries, Archives, Museums and Heritage Sites. London: Routledge, pp. 243-256.

Making 3D models for public-facing cultural heritage applications currently concentrates on creating digitised models that are as photo-realistic as possible. The virtual model should have, if possible, the same informational content as its subject, in order to act as a ‘digital surrogate’. This is a reasonable approach but, due to the nature of the digitisation process and the limitations of the technology, it is often very difficult, if not impossible, to achieve.

However, museum objects themselves are not merely valued for their informational content; they serve purposes other than simply imparting information. In modern museums, exhibits often appear as parts of a narrative, embedded within a wider context, and in addition have physical properties that retain information about their creation, ownership, use, and provenance. This ability of an object to tell a story is due to more than just the information it presents. Many cultural heritage objects have, to borrow an old term, aura: an affectual power to engender an emotional response in the viewer. Is it possible that a 3D digitised model can inherit some of this aura from the original object? Can a virtual object also have affectual power, and if so, fulfil the role of a museum object without necessarily being a ‘realistic’ representation?

In this chapter we will first examine the role of museums and museum exhibits, particularly as regards their public-facing remits, and what part aura plays. We will then ask if digitised objects can also have aura, and how they might help to fulfil museums’ roles. We will see, in the case of the Science Museum’s Shipping Gallery scan, that a digitised resource can, potentially, exhibit affectual power, and that this ability depends as much on the presentation and context of the resource as on the information contained within it.

Under the licensing for this book, we are allowed to host the author’s last version on our own websites, so you can download a PDF of the full chapter here. Tim Sherratt is also rounding up other authors’ last versions here, for other chapters of the book!

Academia and Children’s Literature: Books out now!

Picture-Book Professors book cover, and The Professor in Children’s Literature: An Anthology book cover.

In Open Access week, in October 2018, I was very pleased to publish my two books about the relationship of academia to children’s literature. The first, Picture-Book Professors: Academia and Children’s Literature, is published by Cambridge University Press, and contains an analysis of academics as they appear in children’s literature, looking at biases, and what we are teaching children about expertise – but also how these biases map onto the constituency of the real life academy. It is free to view online or download in PDF from CUP, and freely available for Kindle, over at Amazon. You can buy it in physical form, too, of course, if that is your thing.

Its sibling volume, which expands on and provides many of the early examples in the monograph, The Professor in Children’s Literature: An Anthology, published by Fincham Press (the press of the Department of English and Creative Writing at the University of Roehampton, which hosts the National Centre of Research in Children’s Literature), is available for free download in open access, in both EPUB and PDF. Kindle may be coming soon! And, of course, it’s available in print, should you want to buy it, too.

I’ve been lucky to have quite some coverage of these, so rather than repeat myself here, I thought I would list that coverage:

Melissa Terras at CUP

We had a lovely evening launching the book in Cambridge.

And how did it “do”? At the end of Open Access week 2018, Picture-Book Professors had been downloaded 1460 times:

CUP graph showing downloads.

My first analogue-only monograph had a total original print run of only 300. (That seems a lifetime ago – before open access, before free downloads, when the gold standard in the humanities was to have a short-run monograph from a major university press – I was happy with that, then!). In changed times, I can’t imagine why you would not want to share your work with as wide an audience as possible, and already the reach of this monograph is potentially far wider than my others, which were not published in open access. (I cover the costs of all this, btw, over on the Fincham Press blog).

The Professor in Children’s Literature has been downloaded 589 times since launch:

Downloads graph from Fincham Press.

The anthology project was always about marrying open access with digitisation, to provide a means of showing your working in the humanities, akin to the open-science approach of making your data available. I’m very proud of this anthology (although I learned that putting anthologies together is much harder work than it looks!)

I’m going to miss this topic: it was very much the “fun” thing I had ticking along in the background for over five years, chasing a research rabbit down the research rabbit hole for the pure joy of it, alongside the much larger, technical projects I take part in for the day job. It’s been a blast, and I’m incredibly proud of these books! So there we have it: a project that started off on Twitter, moved to a Tumblr blog, and which I began to consider seriously in a blog post, has ended up in not one but two related academic books, which I managed to publish on the same day (with a lot of help from friends at the presses: everyone is thanked at length in the acknowledgements sections of both books!).

I now need a lie down, before I start my next secret squirrel project in earnest…


New Paper – ‘Making such bargain’: Transcribe Bentham and the quality and cost-effectiveness of crowdsourced transcription

We (Tim Causer, Kris Grint, Anna-Maria Sichani, and me!) have recently published an article in Digital Scholarship in the Humanities on the economics of crowdsourcing, reporting on the Transcribe Bentham project, which is formally published here:

Alack, due to our own economic situation, it’s behind a paywall there. It’s also embargoed for two years in our institutional repository (!). But I’ve just been alerted to the fact that the license of this journal allows the author to put the “post-print on the authors personal website immediately”. Others publishing in DSH may also not be aware of this clause in the license!

So here it is, for free download, for you to grab and enjoy in PDF.

I’ll stick the abstract here. It will help people find it!

In recent years, important research on crowdsourcing in the cultural heritage sector has been published, dealing with topics such as the quantity of contributions made by volunteers, the motivations of those who participate in such projects, the design and establishment of crowdsourcing initiatives, and their public engagement value. This article addresses a gap in the literature, and seeks to answer two key questions in relation to crowdsourced transcription: (1) whether volunteers’ contributions are of a high enough standard for creating a publicly accessible database, and for use in scholarly research; and (2) if crowdsourced transcription makes economic sense, and if the investment in launching and running such a project can ever pay off. In doing so, this article takes the award-winning crowdsourced transcription initiative, Transcribe Bentham, which began in 2010, as its case study. It examines a large data set, namely, 4,364 checked and approved transcripts submitted by volunteers between 1 October 2012 and 27 June 2014. These data include metrics such as the time taken to check and approve each transcript, and the number of alterations made to the transcript by Transcribe Bentham staff. These data are then used to evaluate the long-term cost-effectiveness of the initiative, and its potential impact upon the ongoing production of The Collected Works of Jeremy Bentham at UCL. Finally, the article proposes more general points about successfully planning humanities crowdsourcing projects, and provides a framework in which both the quality of their outputs and the efficiencies of their cost structures can be evaluated.

Causer, T., Grint, K., Sichani, A. M., & Terras, M. (2018). ‘Making such bargain’: Transcribe Bentham and the quality and cost-effectiveness of crowdsourced transcription. Digital Scholarship in the Humanities.

On endings and new beginnings: My new role at the University of Edinburgh!



The start of the new academic semester sees the dust settling on a new adventure for me and my family: in October 2017 I left UCL to join the University of Edinburgh, where I am the new Chair of Digital Cultural Heritage. I’m truly excited to have joined a university that has made such a strong commitment to applying data science to all aspects of academic, civic, and industrial life. As well as leading Digital Scholarship in the College of Arts, Humanities, and Social Sciences, I’ll be establishing a new research centre in data science, culture and society (yet to be formally named – we’re still deciding!) which will bootstrap, enable, support, and promote digital and data-based research in the Arts, Humanities and Social Sciences. My post is part of an expansion built around the new Edinburgh Futures Institute: a new university institute which will tackle societal and cultural issues via data science, and offer a raft of innovative teaching programmes. In 2021 the EFI will move into its permanent home at the heart of the University in the refurbished Old Royal Infirmary, in the city centre of Edinburgh, and it is exhilarating to be part of the team helping to scope out the direction and implementation of a new institute, with all the opportunities and challenges it will bring. I’ve posted a picture of the EFI, above: our very own digital arts/humanities/social science Hogwarts! (Image courtesy of Bennetts Associates).

Of course, having been at UCL for over 14 years, I’m missing colleagues, friends, and Bloomsbury, but I’m keeping up research projects and links, whilst forging new opportunities north of the border, and beyond. I’m delighted that both the UCL Centre for Digital Humanities and the UCL Department of Information Studies have made me an Honorary Professor. I also know that I leave UCLDH – which I co-founded and directed for many years – in good hands, with Simon Mahony now as UCLDH Director. I have every faith that it will continue to flourish, and am promised a cuppa and a cupcake – as is the UCLDH tradition! – when I’m passing through.

Edinburgh is great so far. As well as the challenges of a new job, meeting hosts of new colleagues, and the bubbling away of new research ideas and approaches, I’m enjoying the change of scene, and getting my head around a new institution. In the three months I’ve been at Edinburgh I’ve almost stopped counting how many weeks I’ve been in the place, and saying “at UCL we did it like this…” (I had to edit the opening line to “semester” rather than “term”, and that’s just the start of the mental remodelling involved). I’m now living near family, I walk to work, I live in a Victorian mansion, and my children are happy, in great schools. I no longer have a punishing commute (you don’t think most people who work in London can afford to live in London, do you?). I’ve a new, glorious, European city to explore, which feels like home already, and I’m returning to Scottish culture and society (although I grew up relatively near here, I didn’t know Edinburgh that well at all before we moved). I’m aware I’m living the academic dream, which is a lovely feeling to have: I’m both aware of and appreciating my privilege. This is home now. This.

And, as usual, I have much work to do!

An embarrassment of Lectures – Videos, Slides, & Transcripts of my recent keynotes

Between the start of October and the end of November 2016 I was asked to do a variety of keynotes and guest talks. I’m cutting down on travel at the moment, especially during teaching terms, but things in London are fair game… although imagine my surprise to find out I had managed to book myself in to talk at five big events in around as many weeks, at the start of the academic year! Gulp. Videos, transcripts, reports, and audio of these have trickled in, so I thought I would collect them all in one handy blogpost for your perusing pleasure.

First up was the Linnean Society Annual Conference on 10th October, which this year had the theme “What Should Be in Your Digital Toolbox”, and my talk “If you teach a computer to READ: Transcribe Bentham, Transkribus, and Handwritten Text Recognition.” For the past six years, the Transcribe Bentham project has been generating high-quality crowdsourced transcripts of the writings of the philosopher and jurist Jeremy Bentham (1748-1832), held at University College London, and latterly, the British Library. Now with nearly 6 million words transcribed by volunteers, little did we know at the outset that this project would provide an ideal, quality-controlled dataset to provide “ground truth” for the development of Handwritten Text Recognition. This talk demonstrated how our research on the EU Framework 7 tranScriptorium, and now H2020 READ, projects is working towards a service to improve the searching and analysis of digitised manuscript collections across Europe.

Presentations from other speakers are online too, and they are well worth a peek.

Next up was the Jisc Historical Texts “UK Medical Heritage Live Lab”, which I hosted at the Wellcome Library on 26th October. The UK Medical Heritage Library makes newly available 68,000 19th century texts relating to the history of medicine, with more than 20 million pages of books digitised and put freely online. The lab brought together students and researchers from various disciplines to explore and develop ideas around the use of the rich text and image assets which the collection provides. It was also a chance for researchers to work with Jisc developers, experimenting with the affordances of the interface, working together to understand user needs and desires. It was a great day, and I reported on the findings at the UK Medical Heritage Library symposium, which launched the online resource at the Wellcome Library, on the 27th October, in possibly the fastest turnaround of “do some Digital Humanities user-based work and report on it to an audience” for me, ever. The slides covering the results of this hackday are up on Slideshare – no video, but I added comments so you should be able to get the gist.

Next up was the British Library Labs Annual Symposium on November 7th. My talk was called “Unexpected repurposing: the British Library’s Digital Collections and UCL teaching, research and infrastructure”. I highlighted how we have been using the British Library’s digitised book collection – 60,000 volumes which are now in the public domain – to explore processing of large-scale digitised collections, both with researchers and computing science students at UCL. I’m told a video is coming really soon, but in the meantime, the slides are up over at Slideshare, and there is also a wonderful “Lecture Report” (PDF) available on this by Conrad Taylor (thanks!) who also recorded the audio of the talk, which you can hear here:

Finally, on 16th November I gave the QMUL Annual Digital Humanities Lecture, which I titled “Beyond Digitisation: Reimagining the Image in Digital Humanities”. The digitisation of primary source material is often held up as a means to open up collections, democratising their contents whilst improving access. Yet Digital Humanities has made little use of digitised image collections, beyond wishing to get access to individual items, or the text that can be generated via Optical Character Recognition or transcription of primary sources. Why is this, and what opportunities are there for image processing and computer graphics in the field of Digital Humanities? What barriers are in place that stop scholars being able to utilise and analyse images using advanced processing? Given the importance of text for Digital Humanities, how can we begin to reconceptualise what we can do with large bodies of digital images? I showcased work from projects as diverse as the Great Parchment Book, Transcribe Bentham, and the Deep Imaging Mummy Cases projects, demonstrating how those in the Digital Humanities can contribute to advanced cultural heritage imaging research. No video as yet, but I’m told it’s coming and I will add it here when it does. Here’s a picture of me in full flow: it is dark, as we turned the lights down to concentrate on the images.

Melissa Terras talking

I enjoy public speaking, and these events were all great – I learn so much from discussing different topics with such varied audiences. However, this was quite a lot in October/November, on top of the start of the academic year, my normal teaching load, marking all last year’s MA and MSc dissertations, PhD supervision, a PhD examination, and preparing for exam boards! I made it difficult for myself by talking on different topics, some of which required writing speeches from scratch, too. It is probably enough public speaking for a few months (and also another reason why I’m going quiet this term – I’m now in a phase of writing, which you can’t do when giving bi-weekly keynotes. It’s just a different phase of academic life – these talks and the feedback from them will emerge later in my writing).

And why “An embarrassment”? Well, you don’t think I ever watch videos of me speaking, do you?????