Between the start of October and the end of November 2016, I was asked to give a variety of keynotes and guest talks. I’m cutting down on travel at the moment, especially during teaching terms, but things in London are fair game… although imagine my surprise to find that I had managed to book myself in to speak at five big events in around as many weeks, at the start of the academic year! Gulp. Videos, transcripts, reports, and audio of these have trickled in, so I thought I would collect them all in one handy blogpost for your perusing pleasure.
First up was the Linnean Society Annual Conference on 10th October, which this year had the theme “What Should Be in Your Digital Toolbox” and my talk “If you teach a computer to READ: Transcribe Bentham, Transkribus, and Handwriting Technology Recognition.” For the past six years, the Transcribe Bentham project has been generating high-quality crowdsourced transcripts of the writings of the philosopher and jurist Jeremy Bentham (1748-1832), held at University College London and, latterly, the British Library. Now with nearly 6 million words transcribed by volunteers, little did we know at the outset that this project would provide an ideal, quality-controlled dataset to serve as “ground truth” for the development of Handwritten Text Recognition. This talk demonstrated how our research on the EU Framework 7 tranScriptorium project, and now the H2020 READ project, is working towards a service to improve the searching and analysis of digitised manuscript collections across Europe.
Presentations from other speakers are online too, and they are well worth a peek.
Next up was the Jisc Historical Texts “UK Medical Heritage Live Lab”, which I hosted at the Wellcome Library on 26th October. The UK Medical Heritage Library makes newly available 68,000 19th-century texts relating to the history of medicine, with more than 20 million pages of books digitised and put freely online. The lab brought together students and researchers from various disciplines to explore and develop ideas around the use of the rich text and image assets which the collection provides. It was also a chance for researchers to work with Jisc developers, experimenting with the affordances of the interface and working together to understand user needs and desires. It was a great day, and I reported on the findings at the UK Medical Heritage Library symposium, which launched the online resource at the Wellcome Library on 27th October – possibly my fastest ever turnaround of “do some Digital Humanities user-based work and report on it to an audience”. The slides covering the results of this hackday are up on Slideshare – there is no video, but I have added comments, so you should be able to get the gist.
Next up was the British Library Labs Annual Symposium on 7th November. My talk was called “Unexpected repurposing: the British Library’s Digital Collections and UCL teaching, research and infrastructure”. I highlighted how we have been using the British Library’s digitised book collection – 60,000 volumes which are now in the public domain – to explore the processing of large-scale digitised collections, both with researchers and with computer science students at UCL. I’m told a video is coming really soon, but in the meantime the slides are up over at Slideshare, and there is also a wonderful “Lecture Report” (PDF) on the talk by Conrad Taylor (thanks!), who also recorded the audio, which you can hear here:
Finally, on 16th November I gave the QMUL Annual Digital Humanities Lecture, which I titled “Beyond Digitisation: Reimagining the Image in Digital Humanities”. The digitisation of primary source material is often held up as a means to open up collections, democratising their contents whilst improving access. Yet Digital Humanities has made little use of digitised image collections, beyond wishing to get access to individual items, or to the text that can be generated via Optical Character Recognition or transcription of primary sources. Why is this, and what opportunities do image processing and computer graphics offer the field of Digital Humanities? What barriers stop scholars from being able to utilise and analyse images using advanced processing? Given the importance of text to Digital Humanities, how can we begin to reconceptualise what we can do with large bodies of digital images? I showcased work from projects as diverse as the Great Parchment Book, Transcribe Bentham, and the Deep Imaging Mummy Cases projects, demonstrating how those in the Digital Humanities can contribute to advanced cultural heritage imaging research. No video as yet, but I’m told it’s coming, and I will add it here when it does. Here’s a picture of me in full flow: it is dark, as we turned the lights down to concentrate on the images.
I enjoy public speaking, and these events were all great – I learn so much from discussing different topics with such varied audiences. However, this was quite a lot for October and November, on top of the start of the academic year, my normal teaching load, marking all of last year’s MA and MSc dissertations, PhD supervision, a PhD examination, and preparing for exam boards! I made it difficult for myself by talking on different topics, some of which I had to write speeches for from scratch, too. It is probably enough public speaking for a few months (and also another reason why I’m going quiet this term – I’m now in a phase of writing, which you can’t do when giving bi-weekly keynotes. It’s just a different phase of academic life – these talks and the feedback from them will emerge later in my writing).
And why “An embarrassment”? Well, you don’t think I ever watch videos of myself speaking, do you?????