Categories
DH Project Update Research Projects Undergraduate Fellows

Mapping the Scottish Reformation: Transatlantic Adventures in the Digital Humanities

[Please enjoy this guest post by Michelle D. Brock, Associate Professor of History at Washington and Lee University. Professor Brock has been a fabulous supporter of DH at W&L through the years and we’re thrilled to see this project take off.]

In the spring of 2020 (before the world seemed to change overnight), I spent just over two wonderful months as a Digital Scholarship Fellow at the Institute for Advanced Studies in the Humanities at the University of Edinburgh during my sabbatical from W&L. During this time, I pursued work on a project called Mapping the Scottish Reformation (MSR), directed by myself and Chris Langley of Newman University and featuring Mackenzie Brooks on our project team and Paul Youngman on our advisory board.

Mapping the Scottish Reformation (MSR) is a digital prosopography of ministers who served in the Church of Scotland between the Reformation Parliament of 1560 and the Revolution of 1689. By extracting data from thousands of pages of ecclesiastical court records held by the National Records of Scotland (NRS), Mapping the Scottish Reformation tracks the careers of these clergymen, showing where they were educated, how they moved between parishes, and their personal and disciplinary histories. This early modern data drives a powerful mapping engine that will allow users to build their own searches to track clerical careers over time and space.

The need for such a project was born of the fact that, despite a few excellent academic studies of individual ministers written in recent years, we still know remarkably little about this massive and diverse group. Many questions remain unanswered: How many ministers were moving from one area of Scotland to another? What was the influence of key presbyteries—the regional governing bodies of the Scottish kirk—or universities in this process? What was the average period of tenure for a minister? As of now, there is no way to answer such questions comprehensively, efficiently, and accurately. The voluminous ecclesiastical court records that contain the most detail about the careers of the clergy are not indexed, are cumbersome to search, and are completely inaccessible to the public or to scholars less familiar with the challenges of Scottish handwriting. The multi-volume print source with much of this biographical data on ministers, Hew Scott’s invaluable Fasti Ecclesiae Scoticanae, is not searchable across volumes and contains numerous errors and omissions. A new resource is thus necessary to both search and visualize clerical data, and we intend Mapping the Scottish Reformation to be that resource.

Our project began in earnest in 2017, when, thanks to funding from a W&L Mellon grant, Caroline Nowlin ’19 and Damien Hansford (a postgraduate at Newman University) began working with the Project Directors to pull initial data from the Fasti that could be used to test the feasibility of the project. Three years and a National Endowment for the Humanities HCRR grant later, we are in the pilot “proof of concept” phase of MSR, centered on gathering data on the clergy in the Synod of Lothian and Tweeddale—a large and complex region that includes modern day Edinburgh. As such, my time at IASH was spent almost exclusively going through the presbytery records from this synod region to collect data on ministers at all levels in their clerical careers. I have often referred to this as the “unsexy” part of our work—dealing with the nitty gritty of navigating often challenging and inconsistent records in order to gather the data that will power Mapping the Scottish Reformation. There was, of course, no better setting to do this work in than IASH, an institute in the heart of the very university where many of the ministers in the Synod of Lothian and Tweeddale were educated and near to the parishes where many of the most prominent of them served.

Throughout my fellowship period, two questions were at the forefront of my mind: Are there patterns, chronological or regional, that account for the great variance in ministerial lives and trajectories? Was there any such thing as a “typical” clerical career at all? What Dr. Langley and I have learned over the previous months is that the answers to these questions are significantly more complicated than previously understood by both historians and the wider public.

As we discussed during a presentation given in January at the Centre for Data, Culture and Society, the clerical career path was far less standardized than scholars usually assume. The terminology generally applied by historians and drawn from Hew Scott’s work, of “admitting,” “instituting,” and “transferring” ministers, was that of a distinct profession. Unfortunately, by applying such terms to the early modern ministry, we may be transposing a system and language of formality that just wasn’t there or wasn’t yet fully developed. Thus, one of our central goals is to shed light on the complexity of clerical experiences and the development of the ministerial profession by capturing messy data from manuscripts and turning it into something machine-readable and suited to a database and visualization layer. In short, we hope to make the qualitative quantitative, and to do so in a way that can also serve as a supplementary finding aid to the rich church court records held at NRS.

To date, my co-Director and I have gone through approximately 3,000 pages of presbytery minutes and collected information on over 300 clerics across more than twenty categories using Google Sheets. Dr. Langley has begun the process of uploading this data to Wikidata and running initial queries using SPARQL to generate basic data-driven maps. The benefit of using Wikidata at this phase in our project is that it is a linked open data platform and is already used as a data repository for the Survey of Scottish Witchcraft, which captured information on most of the parishes and a number of the ministers in our project. We are deeply grateful to the University of Edinburgh’s “Wikimedian in Residence” Ewan McAndrew, who met with us early in my fellowship period to explore opportunities for using Wikidata, which is now a critical part of the technical infrastructure of our project. Thanks to a recently awarded grant from the Strathmartine Trust, in the coming months we hope to collaborate with an academic technologist to build our own Mapping the Scottish Reformation interface, driven by our entries in Wikidata.
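To give a concrete sense of what such a query looks like, here is a minimal Python sketch that builds a SPARQL request against the public Wikidata endpoint. It is an illustration only: it uses the generic Wikidata property P625 (“coordinate location”) rather than the project’s actual items and properties, and the helper names are my own.

```python
from urllib.parse import urlencode

# Illustrative sketch: P625 ("coordinate location") is a standard
# Wikidata property; the project's own ministers and parishes would
# use its specific items and properties instead.
WIKIDATA_ENDPOINT = "https://query.wikidata.org/sparql"

def build_query(limit=10):
    """Build a SPARQL query for items that have map coordinates."""
    return f"""
SELECT ?item ?itemLabel ?coords WHERE {{
  ?item wdt:P625 ?coords .
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
LIMIT {limit}
"""

def build_request_url(query):
    # The endpoint accepts the query as a URL parameter and returns
    # JSON results suitable for feeding a mapping layer.
    return WIKIDATA_ENDPOINT + "?" + urlencode({"query": query, "format": "json"})

url = build_request_url(build_query(limit=5))
```

Because Wikidata is linked open data, a query like this can join the project’s entries against anything else already in the graph, such as the parishes recorded by the Survey of Scottish Witchcraft.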

Though I sadly had to cut my fellowship period two weeks short due to the COVID-19 crisis, I had a wonderful and productive two months as a Digital Scholarship Fellow at IASH, thanks in no small part to the general sabbatical support from Washington and Lee. In this time, Mapping the Scottish Reformation progressed by leaps and bounds, thanks to the generosity and support of the Scottish history and digital humanities communities at the University of Edinburgh, as well as our colleagues at NRS. Our talk at Edinburgh’s Centre for Data, Culture and Society, which drew an audience not only of academics but also genealogists and local residents, was a real highlight, allowing us to make connections with a wide range of people interested in the history of Scotland, family history, the Reformation, and the digital humanities. These connections, and the ability to make access to data widely available, are more important than ever on both sides of the Atlantic, and I am looking forward to continuing this work at home in Virginia.

Categories
DH People Project Update

Seeing W&L from a New Position

For eight months I’ve held a new title, no longer “Student” at Washington and Lee but now “Digital Humanities Post-Baccalaureate Fellow” in the W&L Library. In this position, I’ve contributed to the development of a new initiative called Rewriting the Code: Women and Technology, aided in the promotion of W&L’s new Digital Culture and Information (DCI) minor, and explored the university’s decision to adopt coeducation in the mid-1980s.

Rewriting the Code is a cross-departmental, collaborative effort that aims to inspire women at W&L to pursue majors, careers, and interests at the intersection of technology and the humanities. We started with two fall workshops, one covering HTML/CSS and the other Python. After receiving twice as many applicants (60!) as we had spots available, we decided to host a second round of these workshops at the beginning of winter term. Coming up, we will be hosting a forum that includes a keynote presentation on March 1 and panel discussions on various topics, plus a mentoring lunch, on March 2. I have spent a significant portion of my work time aiding in the planning and execution of these events.

My work for DCI has primarily involved encouraging students, through the use of social media, to sign up for DCI classes. The goal is to have some of these students declare the minor after trying out the classes. In both the fall and winter terms, we have seen most DCI classes nearly full or at maximum capacity (and a couple with long waitlists, too!). It is exciting to see the enthusiasm students, and especially the underclassmen, have for DCI classes and the valuable skills they get to learn.

Researching the coeducation decision has allowed me to explore the vast holdings of W&L’s Special Collections. I have learned that it is very easy to begin skimming through documents in a folder, realize it is unlikely I will find any information related to coeducation within them, and yet be so intrigued by what I’m reading that I continue looking anyway. Nonetheless, the (re)discoveries related to coeducation that I make as I search through our collections are exciting for me and the other staff members who work in Special Collections. Although considerable work remains before my one-year appointment comes to an end, I expect to leave behind a website with digital facsimiles of a variety of different types of documents related to coeducation and the experience of women at W&L more generally.

So, what is a post-baccalaureate fellowship? These types of positions are typically open to graduating seniors or recent graduates (those who graduated within the past one to three years) and are relatively short in duration (one or two years). Post-baccalaureate fellowships provide a great transition for students and recent grads because they offer the opportunity to gain hands-on work experience as well as mentorship from colleagues. I believe that this position has provided me with valuable experience as I transition from college life to the working world. Although I worked over the summers while in college, I didn’t have the “traditional” W&L internships, especially before my senior year. Instead, during the summer of 2017, I helped my mother remodel our house and equestrian property before selling it later that year. In my spare time, I worked for the government, visiting farms in the area to talk to farmers and collect data about their crops and livestock. While I felt that I had a productive summer in my own way, the feedback I received in interviews during my senior year was often along the lines of, “It seems like you are capable of accomplishing many things, but we don’t have enough solid examples of your ability.” This position allows me to demonstrate my skills through the projects I’m completing. While I did have a work study position as a student, I have considerably more responsibility now, aiding in the planning, organizing, and promotion of our various events and other projects on campus. Unlike with my work study position, I am able to be a part of these projects from start to finish.

Further, this type of position also benefits current students, who get to participate in the events made possible through my work. For example, Rewriting the Code is a brand new initiative this year, yet over 60 women will have been impacted in some way through the workshops we held. Even more will benefit once we hold the forum in March. The coeducation project also involves students in researching issues important to them. Currently, a student is helping to develop a background story and oral histories on Asians, Asian-Americans, and exchange students (with a focus on women) to add to our collective knowledge about the impact of coeducation.

I also have discovered this fellowship to be an easier transition to life after college, as I’m already in a place where I am comfortable. Although at times it feels strange being an employee at W&L while still having friends who are students, much of the culture that I became accustomed to as a student is the same. This has made it easier to focus on my work tasks without being concerned about adjusting to a new company culture.

Although I have yet to decide where my path will take me after this fellowship ends, I feel confident in the skills I’ve gained and demonstrated through my position. I’m also excited about the impact of my work, in particular with the Rewriting the Code initiative. In the future, I hope to see more opportunities for other students to have experiences similar to mine.

– Kellie Harra ’18, Digital Humanities Post-Baccalaureate Fellow

Categories
DH Event on campus Project Update Research Projects Speaker Series

Report on “Pray for Us: The Tombs of Santa Croce and Santa Maria Novella”

In her public talk on January 16, 2019, Dr. Anne Leader discussed her DH project Digital Sepoltuario, which will offer students, scholars, and the general public an online resource for the study of commemorative culture in medieval and Renaissance Florence. Supported by the Institute for Advanced Technology in the Humanities (IATH) team at the University of Virginia, Digital Sepoltuario will chart the locations, designs, and epitaphs of tombs made for Florentine families in sacred spaces across the city from about 1200 to about 1500, and then use archival data to analyze social networks, patterns of patronage, and markers of status in the late Middle Ages and early modern period.

While the project is not yet complete, it will include transcriptions, translations, photographs, and analysis of fragile manuscripts, like registers that kept track of where different people were buried and records that indicate which tombs have been moved or destroyed. These documents demonstrate that tombs were frequently recycled from one family to another when lineages died out or when a family could no longer afford to maintain them. Because the records sometimes lost track of a tomb’s owners, and because decorations faded or disintegrated over time, the ownership of some tombs remains impossible for historians to determine.

From these documents, scholars like Leader gain insight into why people chose certain tombs or churches as their final resting places. The tombstones are embedded in the floors of churches in Florence, carpeting the churches with stone slabs that mark people’s final resting places and serve as reminders of everyone’s ultimate death. People would look down at the floor and contemplate what lay beneath the beautiful paintings and frescoes on the tombstones and within the churches, encouraging them to prepare for the final judgment and consider: am I ready for what’s to come?

By examining these records and incorporating them in a DH project, scholars can begin to answer questions about Florentines’ burial practices and ultimately about Florentines’ lives. Leader is interested in questions such as: How did Florentines decide on their final resting places, and how did they decide on the tombstones’ designs? So far, Leader has found that most people chose to be buried in their own parishes and close to their homes. However, she finds it interesting that increasing numbers of citizens requested burial elsewhere. This trend transformed the topography of Florence, causing tension within churches that relied on money from burying their dead, enriching some parishes while impoverishing others. Burial placement was one of the most important decisions Florentines would make, so considering why people wanted to be buried elsewhere and understanding the implications these decisions had on social status help scholars today decipher how early modern Europeans thought about burial and death. Digital Sepoltuario will make all of this possible.

This event was sponsored by Washington and Lee University’s Art History Department, the Digital Humanities Cohort and the Digital Humanities Mellon Grant.

-Jenny Bagger ’19, DH Undergraduate Fellow

Categories
DH Event on campus Project Update Research Projects

DH Research Talk with Stephen P. McCormick

Wednesday, February 6th, 2019
12:15 PM – 1:15 PM
IQ Center
Lunch is provided. Please register here!


Join Stephen P. McCormick to learn more about his Huon d’Auvergne project and his work with DH students!

McCormick will speak on his research and work with the digital and facsimile edition of Huon d’Auvergne, a pre-modern Franco-Italian epic. Linking institutions and disciplines, the Huon d’Auvergne Digital Archive is a collaborative scholarly project that presents for the first time to a modern reading audience the Franco-Italian Huon d’Auvergne romance epic.

This talk is sponsored by the Medieval and Renaissance Studies Program and the Digital Humanities Cohort.

Categories
DH Project Update Trip Report Undergraduate Fellows

ILiADS 2017

As an incoming freshman last year, I never imagined I would have the opportunity to work as a research assistant by my second semester at W&L. During orientation week, I met Dr. Stephanie Sandberg and learned about her play, Stories In Blue, which tells the stories of six sex trafficking survivors in Michigan. Through the Digital Humanities Initiative, I was given an opportunity to work with Dr. Sandberg on the adaptation of her play into a website that is a resource for people to learn more about the intricacies of domestic sex trafficking as well as how they can help bring it to an end.

In the first week of August, I traveled with Dr. Sandberg, Associate University Librarian Jeff Barry, Digital Humanities Fellow Sydney Bufkin and Digital Humanities Librarian Mackenzie Brooks to the Institute for Liberal Arts Digital Scholarship (ILiADS) conference at the College of Wooster. As it says on their website, “ILiADS is a project-based and team-based opportunity for focused support of a digital project.” What makes this conference unique is the liaison model, in which each team is assigned an expert liaison who assists with different digital aspects of the project. Monday through Thursday were devoted to working as a team on our project, where we brainstormed what we wanted the structure of the website to be and then began building it as well as generating content. As a student with limited digital literacy skills, ILiADS provided me with an opportunity not only to turn the research I have been collecting into synthesized articles, but also to learn more about what it takes to build a usable and informational website.

ILiADS is a great opportunity for students and faculty from different universities to come together for a week, work on digital humanities projects, and compare what each of their institutions is doing to promote digital scholarship as technology becomes a necessity in higher learning. Having this experience as a student was amazing for me because I not only got to see how important digital humanities is to our project at W&L, but also how it is being used at other universities. Digital humanities allows research done by students and faculty, which might otherwise be lost to the ages, to live on through easily accessible platforms.

Coming out of ILiADS, I will continue to work on research, but will also be writing more content for the website and entering my research into our hidden database structure that will make finding information easier.

Categories
Announcement Project Update Undergraduate Fellows

Work in Progress: Updates from Our DH Fellows

Join us on April 4th from 2-4pm to hear project updates from our current Mellon DH Undergraduate Fellows. If you’re interested in becoming a fellow next year, this is the perfect chance to learn what it’s all about!

Applications are now open for the 2017-2018 academic year.

We’ll be in the DH Workspace (Leyburn 218). There will be snacks.

Categories
Project Update Undergraduate Fellows

In the Words of Jay-Z, “Allow me to reintroduce myself”

Dear internet,

It’s been a while since we’ve spoken, but hopefully we can get right back into the swing of things.

I joined the team in the Winter of 2016, working on a project called Lions, Jungles, and Natives. My project uses special collections materials to curate an online exhibit centered around a discussion of the misrepresentations of Africa. Click here for past blog posts.

My project was cut short by my decision to study abroad. I spent my fall term in England, studying in a small town about an hour and a half from London called Bath. The experience was super rewarding. I learned a lot about British culture and literature, and I also got to visit a total of nine European cities.

Here’s a photo of me in my favorite city, Madrid, AKA the home of my colonizers:

Oddly enough, my project hiatus was super helpful. It gave me a lot of clarity about what I want and how I want to execute it.

The biggest conclusion I’ve come to relates to theme and purpose. I want my website to be centered around the idea of tropes–how we think of Africa through images and themes, and how these tropes may contain inaccuracies. Moreover, I want this website to be an instructive tool. Largely inspired by my work with History professor TJ Tallie, I want this website to function as a teaching tool for classes like HIST 279: Africa in the Western Imagination, something that could be used to encourage students and others to think more about tropes and how they affect our understandings.

With this in mind, one of the changes I’ve made to my project plan is a shift away from transcription and toward annotation. The website will house the pages of two diaries, one belonging to Thomas Hills’ wife and the other to his daughter. Originally, my plan was to use the Scripto plug-in to crowdsource transcriptions for the diaries. However, because of my shifting vision and the plug-in’s limited functionality, I’ve decided to transcribe the diaries myself and incorporate an annotation tool to allow others to identify the tropes at play.

Some of my original plans are staying in place. I’d still like to map the photos to provide a visual of the Hills family’s journey. I’d also want to construct an interactive spiderweb map that lets the user see the relationships between photos and tropes.

Nevertheless, all these ideas are secondary goals directed at constructing exhibits. The primary goal at the moment is to build the collections: to upload onto the website all the photos, diary pages, videos, and their corresponding transcriptions and metadata.

My hope is to have all these materials uploaded by the end of Winter or Spring term, while also spending some time to work on the website’s design and layout. By the end of my senior year, I would ideally like to have a completed website and a paper in the process of being co-written with Professor Tallie.

For a more detailed version of my project, visit my project plan page on github.

Maybe I’ll be able to get all of this done–then again, maybe not. Still, a girl can dream.

Sincerely,
Hov Arlette

Categories
DH Project Update Research Projects

Reading Speech: Virginia Woolf, Machine Learning, and the Quotation Mark

[Cross-posted on my personal blog as well as the Scholars’ Blog. What follows is a slightly more fleshed out version of what I presented this past week at HASTAC 2016 (complete with my memory-inflected transcript of the Q&A). I gave a bit more context for the project at the event than I do here, so it might be helpful to read my past two posts on the project here and here before going forward. This talk continues that conversation.]

This year in the Scholars’ Lab I have been working with Eric on a machine learning project that studies speech in Virginia Woolf’s fiction. I have written elsewhere about the background for the project and initial thoughts towards its implications. For the purposes of this blog post, I will just present a single example to provide context. Consider the famous first line of Mrs. Dalloway:

Mrs Dalloway said, “I will buy the flowers myself.”

Nothing to remark on here, except for the fact that this is not how the sentence actually comes down to us. I have modified it from the original:

Mrs Dalloway said she would buy the flowers herself.

My project concerns moments like these, where Woolf implies the presence of speech without marking it as such with punctuation. I have been working with Eric to lift such moments to the surface using computational methods so that I can study them more closely.

I came to the project by first tagging such moments myself as I read through the text, but I quickly found myself approaching upwards of a hundred instances in a single novel, far too many for me to keep track of in any systematic way. What’s more, the practice made me aware of just how subjective my interpretation could be. Some moments, like this one, parse fairly well as speech. Others complicate distinctions between speech, narrative, and thought and are more difficult to identify. I became interested in the features of such moments. What is it about speech in a text that helps us to recognize it as such, if not for the quotation marks themselves? What could we learn about sound in a text from the ways in which it structures such sound moments?

These interests led me towards a particular kind of machine learning, supervised classification, as an alternate means of discovering similar moments. For those unfamiliar with the concept, an analogy might be helpful. As I am writing this post on a flight to HASTAC and just finished watching a romantic comedy, these are the tools that I will work with. Think about the genre of the romantic comedy. I only know what this genre is by virtue of having seen my fair share of them over the course of my life. Over time I picked up a sense of the features associated with these films: a serendipitous meeting leads to infatuation, things often seem resolved before they really are, and the films often focus on romantic entanglements more than any other details. You might have other features in mind, and not all romantic comedies will conform to this list. That’s fine: no one’s assumptions about genre hold all of the time. But we can reasonably say that, the more romantic comedies I watch, the better my sense of what a romantic comedy is. My chances of being able to watch a movie and successfully identify it as conforming to this genre will improve with further viewing. Over time, I might also be able to develop a sense of how little or how much a film departs from these conventions.

Supervised classification works on a similar principle. By using the proper tools, we can feed a computer program examples of something in order to have it later identify similar objects. For this project, this process means training the computer to recognize and read for speech by giving it examples to work from. By providing examples of speech occurring within quotation marks, we can teach the program when quotation marks are likely to occur. By giving it examples of what I am calling ‘implied speech,’ it can learn how to identify those as well.
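As a deliberately simplified illustration of that training loop, here is a pure-Python stand-in (not the project’s actual NLTK code; the example sentences, labels, and scoring rule are all invented for this sketch): we train on sentences labeled as speech or not, then ask the model to label a new one.

```python
from collections import defaultdict

def extract_features(sentence):
    # Crude feature extractor: which words appear in the sentence.
    cleaned = sentence.lower().replace('"', "").replace(",", "").replace(".", "")
    return {f"has({w})": True for w in cleaned.split()}

def train(labeled_sentences):
    # Count how often each feature appears under each label.
    counts = defaultdict(lambda: defaultdict(int))
    for sentence, label in labeled_sentences:
        for feat in extract_features(sentence):
            counts[label][feat] += 1
    return counts

def classify(model, sentence):
    # Score each label by how many of its known features the new
    # sentence shares, and pick the best-scoring label.
    feats = extract_features(sentence)
    scores = {label: sum(featcounts[f] for f in feats)
              for label, featcounts in model.items()}
    return max(scores, key=scores.get)

training = [
    ('"I will buy the flowers myself," she said.', "speech"),
    ('"Really it is better than walking," said Clarissa.', "speech"),
    ("The clock struck eleven as she crossed the street.", "not_speech"),
    ("London lay quiet under the June morning.", "not_speech"),
]
model = train(training)
print(classify(model, '"I will walk to the flowers," she said.'))  # → speech
```

A real classifier weighs features probabilistically rather than by raw counts, but the shape is the same: the more labeled examples it sees, the better its sense of the category, just as with the romantic comedies above.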

For this machine learning project, I analyzed Woolf texts downloaded from Project Gutenberg. Eric and I put together scripts in Python 3 that used a package known as the Natural Language Toolkit (NLTK) for classifying. All of this work can be found at the project’s GitHub repository.

The project is still ongoing, and we are still working out some difficulties in our Python scripts. But I find the complications of the process to be compelling in their own right. For one, when working in this way we have to tell the computer what features we want it to pay attention to: a computer does not intuitively know how to make sense of the examples that we want to train it on. In the example of romantic comedies, I might say something along the lines of “while watching these films, watch out for the scenes and dialogue that use the word ‘love.'” We break down the larger genre into concrete features that can be pulled out so that the program knows what to watch out for.

To return to Woolf, punctuation marks are an obvious feature of interest: the author suggests that we have shifted into the realm of speech by inserting these grammatical markings. Find a quotation mark, and you are likely to be looking at speech. But I am interested in just those moments where we lose those marks, so it helps to develop a sense of how they might work. We can then begin to extrapolate those same features to places where the punctuation marks might be missing. We have developed two models for understanding speech in this way: an external and an internal model. To illustrate, I have taken a single sentence and bolded what the model takes to be meaningful features according to each model. Each represents a different way of thinking about how we recognize something as speech.

External Model for Speech:

“I love walking in London,” said Mrs. Dalloway.  “Really it’s better than walking in the country.”

The external model was our initial attempt to model speech. In it, we take an interest in the narrative context around quotation marks. In any text, we can say that there exist a certain range of keywords that signal a shift into speech: said, recalled, exclaimed, shouted, whispered, etc. Words like these help the narrative attribute speech to a character and are good indicators that speech is taking place. Given a list of words like this, we could reasonably build a sense of the locations around which speech is likely to be happening. So when training the program on this model, we had the classifier first identify locations of quotation marks. Around each quotation mark, the program took note of the diction and parts of speech that occurred within a given distance from the marking. We build up a sense of the context around speech.
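A rough sketch of that bookkeeping, using only the speech verbs listed above as features (the project’s real scripts also record parts of speech, which this simplification omits):

```python
SPEECH_VERBS = {"said", "recalled", "exclaimed", "shouted", "whispered"}

def context_features(text, window=3):
    # For every token containing a quotation mark, record the words
    # within `window` tokens on either side, plus whether any of
    # them is a speech verb.
    tokens = text.split()
    contexts = []
    for i, tok in enumerate(tokens):
        if '"' in tok:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            nearby = [t.strip('",.').lower() for t in tokens[lo:hi]]
            contexts.append({
                "nearby": nearby,
                "has_speech_verb": any(w in SPEECH_VERBS for w in nearby),
            })
    return contexts

sentence = '"I love walking in London," said Mrs. Dalloway.'
feats = context_features(sentence)
```

Here the closing quotation mark sits three tokens away from “said,” so its context window flags a speech verb; contexts like these are what the classifier learns to associate with quotation marks.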

Internal Model for Speech:

“I love walking in London,” said Mrs. Dalloway. “Really it’s better than walking in the country.”

The second model we have been working with works in an inverse direction: instead of taking an interest in the surrounding context of speech, an internal model assumes that there are meaningful characteristics within the quotation itself. In this example, we might notice that the shift to the first-person ‘I’ is a notable feature in a text that is otherwise largely written in the third person. This word suggests a shift in register. Each time this model encounters a quotation mark it continues until it finds a second quotation mark. The model then records the diction and parts of speech inside the pair of markings.
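The same sketch can be turned inward, pairing up quotation marks and recording what falls between them (again a simplification: a first-person check stands in for the fuller set of diction and part-of-speech features):

```python
import re

def quoted_spans(text):
    # Pair up quotation marks and return the text inside each pair.
    return re.findall(r'"([^"]*)"', text)

def internal_features(text):
    feats = []
    for span in quoted_spans(text):
        words = [w.strip(",.").lower() for w in span.split()]
        feats.append({
            "words": words,
            # A shift into the first person is one cue that we are
            # inside speech rather than third-person narration.
            "has_first_person": any(w in {"i", "me", "my"} for w in words),
        })
    return feats

passage = ('"I love walking in London," said Mrs. Dalloway. '
           '"Really it\'s better than walking in the country."')
spans = internal_features(passage)
```

The first quoted span registers the first-person shift; the second does not, which is itself informative: not every feature fires in every speech moment, and the classifier weighs them in aggregate.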

Each model suggests a distinct but related understanding for how sound works in the text. When I set out on this project, I had aimed to use the scripts to give me quantifiable evidence for moments of implied speech in Woolf’s work. The final step in this process, after all, is to actually use these models to identify speech: looking at texts they haven’t seen before, the scripts insert a caret marker every time they believe that a quotation mark should occur. But it quickly became apparent that the construction of the algorithms to describe such moments would be at least as interesting as any results that the project could produce. In the course of constructing them, I have had to think about the relationships among sound, text, and narrative in new ways.
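That marking pass can be pictured as a simple insertion loop, with a toy rule standing in for the trained classifier’s prediction (the function names here are mine, not the project’s):

```python
def insert_carets(tokens, predict):
    # predict(i, tokens) stands in for the trained classifier's
    # judgment that a quotation mark belongs before token i.
    out = []
    for i, tok in enumerate(tokens):
        if predict(i, tokens):
            out.append("^")
        out.append(tok)
    return " ".join(out)

def toy_predict(i, toks):
    # Toy stand-in: flag a boundary right after a speech verb.
    return i > 0 and toks[i - 1] == "said"

marked = insert_carets(
    "Mrs Dalloway said she would buy the flowers herself .".split(),
    toy_predict,
)
print(marked)  # → Mrs Dalloway said ^ she would buy the flowers herself .
```

Applied to the opening example from Mrs. Dalloway, even this toy rule places a caret exactly where the implied speech begins; the interest lies in how a trained model arrives at that judgment.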

The algorithms are each interpretative in the sense that they reflect my own assumptions about my object of study. The models also reflect assumptions about the process of reading, how it takes place, and how a reader converts graphic markers into representations of sound. In this sense, the process of preparing for and executing text analysis reflects a certain phenomenology of reading as much as it does a methodology of digital study. The scripting itself is an object of inquiry in its own right and reflects my own interpretation of what speech can be. These assumptions are worked and reworked as I craft algorithms and Python scripts, all of which are as shot through with humanistic inquiry and interpretive assumptions as any close reading.

For me, such revelations are the real reasons for pursuing digital study: attempting to describe complex humanities concepts computationally helps me to rethink basic assumptions about them that I had taken for granted. In the end, the pursuit of an algorithm to describe textual speech is nothing more or less than the pursuit of deeper and enriched theories of text and speech themselves.

Postscript

I managed to take note of the questions I got when I presented this work at HASTAC, so what follows are paraphrases from memory, along with some brief remarks that roughly reflect what I said in the moment. There may have been one other question that I cannot quite recall, but alas, such is the fallibility of the human condition.

Q: You distinguish between speech and implied speech, but do you account at all for the other types of speech in Woolf’s novels? What about speech that is remembered speech that happened in earlier timelines not reflected in the present tense of the narrative’s events?

A: I definitely encountered this during my first pass at tagging speech and implied speech in the text by hand. Instead of binaries like quoted speech/implied speech, I found myself wanting to mark a range of speech types: present, actual; remembered, might not have happened; remembered incorrectly; remembered, implied; etc. I decided that a binary was more feasible for the machine learning problems I was interested in, but the whole process reinforced how subjective any reading process is: another reader might mark things differently. If these processes shape the construction of the theories that inform the project, then they necessarily also affect the algorithms themselves as well as the results they can produce. And it quickly becomes apparent that these decisions reflect a kind of phenomenology of reading as much as anything: they illustrate my understanding of how a complicated set of markers and linguistic phenomena contribute to our sense that a passage is speech or not.

Q: Did you encounter any variations in the particular markings that Woolf was using to punctuate speech? Single quotes, etc., and how did you account for them?

A: Yes – the version of Orlando that I am working with used single quotes to notate speech, so I was forced to account for that edge case. But the question points at two larger issues: one authorial and one bibliographical. As I worked on Woolf, I was drawn to the idea of being able to run such a script against a wider corpus. Since the project seemed to impinge on how we also understand psychologized speech, it would be fascinating to be able to search for implied speech in other authors. But if you are familiar with, say, Joyce, you might remember that he hated quotation marks and used dashes to denote speech. The question is how far you can account for such edge cases; if you cannot, the study becomes one of a single author’s idiosyncrasies (which still has value). But from there the question spirals outwards. At least one of my models (the internal one) relies on quotation marks themselves as boundary markers. The model assumes that quotation marks will come in pairs, and this is not always the case. Sometimes authors, intentionally or accidentally, omit a closing quotation mark. I had to massage the data in at least half a dozen places where a missing quotation mark was causing my program to fail entirely. As textual criticism has taught us, punctuation marks are the single most likely things to be modified over time during the process of textual transmission by scribes, typesetters, editors, and authors. So in that sense, I am not doing a study of Woolf’s punctuation so much as a study of Woolf’s punctuation in these particular versions of the texts. One can imagine an exhaustive study of all versions of all of Woolf’s texts that might approach some semblance of a correct and thorough reading. For this project, however, I elected the lesser of two evils that would still allow me to work through the material: I worked with the texts that I had.
I take all of this as proof that you have to know your corpus and your own shortcomings in order to responsibly work on the materials – such knowledge helps you to validate your responses, question your results, and reframe your approaches.
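As a concrete example of that kind of corpus-checking, a small script can flag the places where the pairing assumption breaks down before the models ever run; this sketch (mine, not part of the project's actual pipeline) assumes straight double quotes and blank-line paragraph breaks:

```python
def unbalanced_paragraphs(text):
    """Return (paragraph_index, quote_count) wherever the count is odd."""
    flagged = []
    for i, para in enumerate(text.split("\n\n")):
        count = para.count('"')
        if count % 2 == 1:  # an unpaired mark: a quote opened or closed but not both
            flagged.append((i, count))
    return flagged

sample = 'He said, "Hello.\n\n"All well," she replied.'
print(unbalanced_paragraphs(sample))  # → [(0, 1)]: the first paragraph never closes
```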

Q: You talked a lot about text approaching sound, but what about the other way around – how do things like implied speech get reflected in audiobooks, for example? Is there anything in recordings of Woolf that imply a kind of punctuation that you can hear?

A: I wrote about this extensively in my dissertation, but here I will just say that I think the textual phenomenon the questioner is referencing occurs on a continuum. Some graphic markings, like pictures, shapes, and punctuation marks, do not clearly translate to sound. And the reverse is true: the sounded quality of a recording can only ever be remediated by a print text. There are no perfect analogues between different media forms. Audiobook performers might attempt to convey things like punctuation or implied speech (in the audiobook of Ulysses, for example, Jim Norton throws his voice and lowers his volume to suggest free indirect discourse). In the end, I think such moments are playing with an idea my dissertation calls audiotextuality: the idea that all texts and recordings of texts, to varying degrees, contain both sound and print elements. The two spheres may work in harmony or against each other as a kind of productive friction. The idea is a slippery one, but I think it speaks to moments like the implied punctuation mark that comes through in a particularly powerful audiobook recording.

Categories
Announcement DH Project Update Publication Research Projects

In Case You Missed the News

We’re caught up in the craziness of our four week spring term here at W&L, but we wanted to make sure you were caught up on some recent news from our DH community.

Ancient Graffiti Project wins NEH Digital Humanities Start-Up Grant

Heralded as the “epitome of liberal arts,” the Ancient Graffiti Project was recently awarded $75,000 to continue work on their database for textual and figural graffiti. Learn more from the W&L press release or the Atlantic Monthly article. Congrats to Sara Sprenkle, Rebecca Benefiel, and the rest of their team!


Stephen P. McCormick wins Mednick Fellowship from the Virginia Foundation for Independent Colleges

Stephen P. McCormick, Assistant Professor of French, has been awarded the 2016 Mednick Fellowship by the VFIC for his work on the Huon d’Auvergne project. Learn more about McCormick’s work on one of the last unpublished Franco-Italian romance epics from this article or dig into the digital edition yourself.


Joel Blecher publishes chapter on Digital Humanities pedagogy

Joel Blecher, Assistant Professor of Religion, won a DH Incentive Grant in fall 2014 for incorporating data visualization into a History of Islamic Civilization course. You can now read about this experience in a new title from De Gruyter, The Digital Humanities and Islamic & Middle East Studies. Blecher’s chapter, “Pedagogy and the Digital Humanities: Undergraduate Exploration into the Transmitters of Early Islamic Law,” is available in print or electronic form through Leyburn Library.


Look forward to reports on our summer activities coming soon. We have teams going to DHSI, ILiADS, the Oberlin Digital Scholarship Conference, and more!

Categories
DH Project Update Undergraduate Fellows

Lasso-ing the Laisses: A Digital Journey Through Annotations, Javascript, and More!

Guest post by Sarah Schaffer ’16

Introduction

Hi, my name is Sarah and I am a senior Business Administration major with a French minor. This past semester, in an independent study, I worked with Professor McCormick on his current Huon d’Auvergne project. You may be wondering, “What is a business major doing here?” But in the spirit of a liberal arts college, I’ve taken advantage of the wide variety of classes offered here. My journey with Digital Humanities began in Winter 2015, when I registered for Professor McCormick’s class French 341: La Legende Arthurienne, which included a Digital Humanities lab. It was in this class that I became fascinated with TEI and with how the Digital Humanities have transformed our interactions with various works.

Before Digital Editions

The first step of my research was to understand the importance of the work itself before it becomes a digital edition. Through reading Introduction to Manuscript Studies and On Editing Old French Texts, I began to better understand the work that Professor McCormick was doing. As someone without much background in historical manuscripts, it had never crossed my mind to consider even half the elements discussed. Each element, such as the writing support, manuscript errors, corrections, and annotations, adds to the way the document is understood and interpreted. Every new edition also bears the mark of its editor and what they chose to include or exclude. Each of these components plays a role in deciding what to display in the digital edition, which makes the choice of what to include all the more important for how the text is presented and made available for interpretation.

Theory of Digital Editions

As I moved on from my readings about the physical documents themselves, Professor McCormick and I discussed Peter Robinson’s article “Towards a Theory of Digital Editions.” Digital editions in their infancy tried to include everything, but their creators quickly found that limited resources restricted what could be included. What digital editions can do, however, is offer a new level of involvement with the document to both the reader and the editor, something that is not possible with a printed document. Unlike a primary document or editorial text, a digital edition allows the reader “to see the text of the document construct itself, layer by layer, from blank page to fully written text” (Robinson 110). The article and my discussions with Professor McCormick opened my eyes to the idea that the text-as-document is intimately linked to the text-as-work within the digital edition.

Putting Ideas Together

While learning about digital editions, I researched the ways other digital editions handled annotations, the platforms they used, and how their works were displayed. I spent a large amount of time looking through various digital editions and searching through DIRT for tools we could use for the final website. We looked into using Hypothes.is as an annotation tool, but it didn’t quite provide the functionality we were looking for. Eventually, after researching and experimenting with several platforms, we decided to build our own system using Ruby on Rails. Instead of trying to tailor a ready-made platform to the project’s needs, creating a new system allowed for the utmost customization.

Prototyping

If I could look at different examples of digital editions and click through them all day, I would, but at some point I needed to come up with some ideas of my own. Drawing on various other editions, my reading in the history and theory of digital editions, and an awareness of what Professor McCormick was looking for, I got to work. The best way to begin prototyping is simply to sit down with some blank sheets of paper and a pencil and draw out designs. So I got to work sketching several ways the website could be organized. Once I had one or two ideas down, I found more ways to organize the various laisses and show functionality as well. A laisse is best defined as a narrative unit, similar to a stanza but varying in length. Each version of Huon d’Auvergne has a large number of laisses, which makes their organization and display all the more important. Below you’ll see some basic prototypes created for the display of different versions of the Huon d’Auvergne laisses and the annotations.
[Prototype sketches: display of the laisses and annotations]

Coding

The final step of my project was to begin building the prototypes that I had created. Luckily, I’d had some experience coding in Professor McCormick’s class before, as well as in some business classes, so the task didn’t seem too daunting. I got to work learning JavaScript and jQuery through the courses on Codecademy – a website I highly recommend if you’re trying to learn a new coding skill. Once I learned the basics, I did a quick review of HTML and CSS to prep myself for creating a mock-up website. I had forgotten how intimidating it is to stare at a blank text editor, but once I got started it wasn’t nearly as bad as I’d feared.


I worked with placeholder text generated from Lorem Ipsum so I could more easily put my new coding skills to work. After setting up the basic structure of the website, I added some CSS styling. I then moved on to the JavaScript portion of the site and worked through hiding and revealing the different laisses. I struggled with this part the most because it was such a new skill. Much like a foreign language, every new programming language takes time and effort before you can work your way to a solution.

Reflecting on the Semester

Overall, this past semester has been a great learning experience. Beyond the new skills that I learned, this opportunity allowed me to take my liberal arts education beyond the classroom and apply it to a truly unique project. It was an honor to work with Professor McCormick’s team and be a part of such an incredible project.

Work Cited:

Robinson, Peter. “Towards a Theory of Digital Editions.” The European Society for Textual Scholarship 10 (2013): 105-31. Web.