Digital Classicist London, summer 2018

May 3rd, 2018 by Gabriel Bodard

Digital Classicist London 2018

Institute of Classical Studies

Fridays at 16:30 in room 234*, Senate House south block, Malet Street, London WC1E 7HU
(*except June 1 & 15, room G21A)

ALL WELCOME

Seminars will be screencast on the Digital Classicist London YouTube channel, for the benefit of those who are not able to make it in person.

Discuss the seminars on Twitter at #DigiClass.


*Jun 1 Zena Kamash (Royal Holloway) Embracing customization in post-conflict reconstruction
Jun 8 Thibault Clérice (Sorbonne) et al. CapiTainS: challenges for the generalization and adoption of open source software (abstract)
*Jun 15 Rune Rattenborg (Durham) Further and Further Into the Woods: Lessons from the Crossroads of Cuneiform Studies, Landscape Archaeology, and Spatial Humanities Research (abstract)
Jun 22 Joanna Ashe, Gabriel Bodard, Simona Stoyanova (ICS) Annotating the Wood Notebooks workshop (abstract)
Jun 29 Monica Berti, Franziska Naether (Leipzig) & Eleni Bozia (Florida) The Digital Rosetta Stone Project (abstract)
Jul 6 Emma Bridges (ICS) and Claire Millington (KCL) The Women in Classics Wikipedia Group (abstract)
Jul 13 Elizabeth Lewis (UCL), Katherine Shields (UCL) et al. Presentation and discussion of Sunoikisis Digital Classics student projects
Jul 20 Anshuman Pandey (Michigan) Tensions of Standardization and Variation in the Encoding of Ancient Scripts in Unicode (abstract)
Jul 27 Patrick J. Burns (NYU) Backoff Lemmatization for Ancient Greek with the Classical Language Toolkit (abstract)

This seminar series addresses the tension between standardisation and customisation in digital and other innovative and collaborative classics research. The topic encompasses all areas of classics, including ancient history, archaeology and reception (including cultures beyond the Mediterranean). Seminars will be pitched at a level suitable for postgraduate students or interested colleagues in Archaeology, Classics, Digital Humanities and related fields.

Digital Classicist London seminar is organized by Gabriel Bodard, Simona Stoyanova and Valeria Vitale (ICS) and Simon Mahony and Eleanor Robson (UCL).

For more information, and links to the live casts on YouTube, see http://www.digitalclassicist.org/wip/wip2018.html

Seminar: Digital Restoration of Herculaneum Papyri (Oxford, Mar 15, 2018)

March 8th, 2018 by Gabriel Bodard

The University of Kentucky’s Digital Restoration Initiative & Chellgren Center for Undergraduate Excellence present

Prof. W. Brent Seales:
Reading the Invisible Library: The Digital Restoration of Herculaneum Papyri

From the inner wraps of carbonized scrolls, to the fused and buckled pages of disintegrating books, the world’s vast invisible library can finally be made visible, thanks to technology. Join Professor Seales and his research team as they demonstrate the use of digital imaging tools to enhance the readability of Herculaneum fragment PHerc.118. The challenges, successes, and promises of digital restoration for revealing the elusive hidden texts of carbonized papyri—in a completely noninvasive, damage-free way—will be presented.

Thursday, March 15th, 3:00 PM
Merton Lecture Room, University College
10 Merton Street, Oxford
Drinks reception immediately following.

Doctoral studentship, Digital Grammar of Greek Documentary Papyri, Helsinki

February 21st, 2018 by Gabriel Bodard

Posted for Marja Vierros. Full details and application forms are available at the University of Helsinki site.

Applications are invited for a doctoral student for a fixed term of up to 4 years, starting in the fall of 2018, to work at the University of Helsinki. The selected doctoral candidate will also need to apply for acceptance in the Doctoral Programme for Language Studies at the Faculty of Arts during the fall application period. The candidate’s main duties will consist of PhD studies and writing a dissertation.

The doctoral candidate will study a topic of his/her choice within the historical development and linguistic variation of Greek in Egypt (e.g. certain morphosyntactic variation as a sign of bilingualism), by way of producing a selected, morphosyntactically annotated corpus of documentary papyri, according to Dependency Grammar. The candidate’s duties include participation in regular team meetings and presenting his/her research at seminars and academic conferences. The candidate is expected to also take part in designing the online portal that presents the results of the project.

The appointee to the position of doctoral student must hold a Master’s degree in a relevant field and must subsequently be accepted as a doctoral candidate in the Doctoral Programme mentioned above. Experience in linguistic annotation, corpus linguistic methods or programming is an asset, but not a requirement. The appointee must have the ability to conduct independent scientific research. The candidate should have excellent analytical and methodological skills, and be able to work both independently and collaboratively as part of a multidisciplinary scientific community. The successful candidate is expected to have excellent skills in written and oral English. Skills in Finnish or Swedish are not required. Relocation costs can be negotiated and the director will offer help and information about the practicalities, if needed.

CFP Digital Classicist London, summer 2018 seminar

February 16th, 2018 by Gabriel Bodard

The Digital Classicist invites proposals for the summer 2018 seminar series, which will run on Friday afternoons in June and July in the Institute of Classical Studies, Senate House, London.

We would like to see papers that address the tension between standardisation and customisation in digital and other innovative and collaborative classics research. The seminar encompasses all areas of classics, including ancient history, archaeology and reception (including cultures beyond the Mediterranean). Papers from researchers of all levels, including students and professional practitioners, are welcome.

There is a budget to assist with travel to London (usually from within the UK, but we have occasionally been able to assist international presenters to attend). To submit a paper, please email an abstract of up to 300 words as an attachment to valeria.vitale@sas.ac.uk by Monday, March 19th, 2018.

EpiDoc & EFES training workshop, London April 9–13, 2018

January 25th, 2018 by Gabriel Bodard

We invite applications for a five-day training workshop in text encoding for epigraphy and papyrology, and publication of ancient texts, at the Institute of Classical Studies, University of London, April 9–13, 2018.

The training will be offered by Gabriel Bodard (ICS), Martina Filosa (Köln), Simona Stoyanova (ICS) and Polina Yordanova (ICS/Sofia) and there will be no charge for the workshop. Thanks to the generosity of the Andrew W. Mellon Foundation, a limited number of bursaries are available to assist students and other unfunded scholars with the costs of travel and accommodation.

EpiDoc (epidoc.sf.net) is a community of practice, recommendations and tools for the digital editing and publication of ancient texts based on TEI XML. EFES (github.com/EpiDoc/EFES) is a publication platform closely geared to EpiDoc projects and designed for use by non-technical editors. No expert computing skills are required, but a working knowledge of Greek/Latin or other ancient language, epigraphy or papyrology, and the Leiden Conventions will be assumed. The workshop is open to participants of all levels, from graduate students to professors and professionals.

To apply for a place on this workshop, please email gabriel.bodard@sas.ac.uk by Feb 28, 2018, including the following information:

  1. a brief description of your reason for interest;
  2. your relevant background and experience;
  3. if you would like to request a bursary, an estimate of how much you would need.

“Digital Editions, Digital Corpora, and new Possibilities for the Humanities in the Academy and Beyond” – NEH Institute – Tufts University, July 2018

January 19th, 2018 by Monica Berti

This is a second request for applications for an NEH Institute at Tufts University in July 2018. The deadline is February 1st.

The Perseus Digital Library at Tufts University invites applications to “Digital Editions, Digital Corpora, and new Possibilities for the Humanities in the Academy and Beyond”, a two-week NEH Institute for Advanced Technology in the Digital Humanities (July 16-27, 2018). This institute will provide participants with the opportunity to spend two intensive weeks learning about a range of advanced new methods for annotating textual sources, including but not limited to Canonical Text Service Protocols, linguistic and other forms of textual annotation, and named entity analysis. By the end of the institute, participants will have concrete experience applying all of these techniques not just to provided texts and corpora but to their own source material as well.

Faculty, graduate students, and library professionals are all encouraged to apply and international participants are welcome. Applications are due by February 1, 2018.

Full information regarding the application process may be found here:
https://sites.tufts.edu/digitaleditions/applications/

For more information, please visit the institute website: https://sites.tufts.edu/digitaleditions or send an email to perseus_neh@tufts.edu

Survey on Digital Humanities collaborations

November 27th, 2017 by Gabriel Bodard

Posted for Max Kemman:

A distinguishing feature of DH is the collaboration between humanists and computational researchers. As part of my PhD research on digital history practices, I am therefore conducting an online survey to investigate the practices of collaboration. If you are part of a DH collaboration, I would kindly ask you to participate in this survey.

This survey aims to gain an overview of how collaborations in digital history and digital humanities are organised. Questions will focus on the organisation of people in the collaboration, the physical space, and the time frame of the collaboration. Filling out the survey should take about 10 minutes.

All data will be reported anonymously. The anonymous data will be made available open access later.

To participate in the survey, please follow www.maxkemman.nl/survey
To learn more about the study, please see www.maxkemman.nl/aboutsurvey

If you have any questions or comments, do not hesitate to contact me via max.kemman@uni.lu.

Max Kemman MSc
PhD Candidate
University of Luxembourg
Luxembourg Centre for Contemporary and Digital History (C²DH)

DCO 3,2 (2017) is out!

November 16th, 2017 by mromanello

A new special issue of the Digital Classics Online journal was published a few days ago, and it contains a selection of papers presented at the Digital Classicist Seminar Berlin during its first four years (2012-2015).

Table of Contents:

Editorial

M. Romanello, M. Trognitz, U. Lieberwirth, F. Mambrini, F. Schäfer. A Selection of Papers from the Digital Classicist Seminar Berlin (2012-2015), pp. 1-4. DOI: 10.11588/dco.2017.0.36870.

Articles

P. Hacıgüzeller. Collaborative Mapping in the Age of Ubiquitous Internet: An Archaeological Perspective, pp. 5-16. DOI: 10.11588/dco.2017.0.35670.

A. Trachsel. Presenting Fragments as Quotations or Quotations as Fragments, pp. 17-27. DOI: 10.11588/dco.2017.0.35671.

G. Bodard, H. Cayless, M. Depauw, L. Isaksen, K.F. Lawrence, S.P.Q. Rahtz†. Standards for Networking Ancient Person data: Digital approaches to problems in prosopographical space, pp. 28-43. DOI: 10.11588/dco.2017.0.37975.

R. Varga. Romans 1 by 1 v.1.1 New developments in the study of Roman population, pp. 44-59. DOI: 10.11588/dco.2017.0.35822.

U. Henny, J. Blumtritt, M. Schaeben, P. Sahle. The life cycle of the Book of the Dead as a Digital Humanities resource, pp. 60-79. DOI: 10.11588/dco.2017.0.35896.

K. E. Piquette. Illuminating the Herculaneum Papyri: Testing new imaging techniques on unrolled carbonised manuscript fragments, pp. 80-102. DOI: 10.11588/dco.2017.0.39417.

T. Roeder. Mapping the Words: Experimental visualizations of translation structures between Ancient Greek and Classical Arabic, pp. 103-123. DOI: 10.11588/dco.2017.0.35951.

F. Elwert, S. Gerhards, S. Sellmer. Gods, graves and graphs – social and semantic network analysis based on Ancient Egyptian and Indian corpora, pp. 124-137. DOI: 10.11588/dco.2017.0.36017.

R. Da Vela. Social Networks in Late Hellenistic Northern Etruria: From a multicultural society to a society of partial identities, pp. 138-159. DOI: 10.11588/dco.2017.0.39433.


Digital Classicist Seminar Berlin 2017/18

October 6th, 2017 by mromanello

We are delighted to announce that the programme for this year’s Digital Classicist Seminar Berlin 2017/18 is now available. You can find it online at http://de.digitalclassicist.org/berlin/seminar2017, at the bottom of this email or as a ready-to-print poster.

The seminar series will start on Oct. 16 with a talk by Rebecca Kahn (HIIG Berlin) “An Introduction to Peripleo 2 – Pelagios Commons’ Linked Data Exploration Engine”. This year’s keynote lecture will be given by Leif Scheuermann (ACDH, University Graz) on Oct. 30 and is entitled “Approaches towards a genuine digital hermeneutic”.

This year there will be a couple of organizational changes: seminars will take place on Mondays starting at 17:00 (instead of Tuesdays), and they will be held on a fortnightly basis either in Berlin-Mitte at the Humboldt University (Hausvogteiplatz 5-7, Institutsgebäude (HV 5), room 0319) or in Berlin-Dahlem at the Deutsches Archäologisches Institut (Wiegandhaus, Podbielskiallee 69-71, entrance via Peter-Lenne-Str.).

We would also like to draw your attention to the possibility for students to attend the seminar as part of their curriculum, because the seminar is also offered by the Humboldt University as a “Ringvorlesung” (see the HU’s course catalog and the DH Course Registry).

We would be very grateful if you could disseminate this email and the poster to others. We are looking forward to seeing you in the Seminar!

Programme

16.10.2017 (HU)
Rebecca Kahn et al. (HIIG)
“An Introduction to Peripleo 2 – Pelagios Commons’ Linked Data Exploration Engine”

30.10.2017 (DAI)
Leif Scheuermann (TOPOI)
“Approaches towards a genuine digital hermeneutic”

13.11.2017 (HU)
Gregory Gilles (KCL)
“Family or Faction? Using Cicero’s Letters to Map the Political, Social and Familial Relationships Between Senators During the Civil War of 49-45BC”

27.11.2017 (DAI)
Ainsley Hawthorn (LMU Munich)
“Hacking Sumerian: A Database Approach to the Analysis of Ancient Languages”

11.12.2017 (HU)
Lieve Donnellan (Uni Amsterdam)
“Network analysis as a tool for studying early urbanisation in Italy”

8.1.2018 (DAI)
Sabrina Pietrobono (Università Degli Studi Dell’Aquila)
“GIS tool for interdisciplinary landscape studies”

22.1.2018 (HU)
Simona Stoyanova & Gabriel Bodard (ICS)
“Cataloguing Open Access Classics Serials”

5.2.2018 (DAI)
Francesco Mambrini et al. (DAI)
“The iDAI.publications: from open digital publishing to text mining”

OEDUc: Exist-db mashup application

August 2nd, 2017 by pietroliuzzo

Exist-db mashup application working group

This working group has worked to develop a demo app built with eXist-db, a native XML database which uses XQuery.

The app is ugly, but it was built by reusing various bits and pieces in a little less than two days (the day of the unconference and part of the following day). It brings together, from different data sources and with different methods, useful resources for an epigraphic corpus, and it works in most cases for the examples we wanted to support. This was possible because eXist-db makes it easy, and because all the pieces were already available (eXist-db, the XSLT, the data, etc.).

The code, without data, has been copied to https://github.com/EpiDoc/OEDUc.

The app is accessible, with data from the July EDH data dumps, at http://betamasaheft.aai.uni-hamburg.de:8080/exist/apps/OEDUc/

Preliminary tweaks to the data included:

  • Adding an @xml:id to the text element, to speed up retrieval of items in eXist-db (the XQuery doing this is in the AddIdToTextElement.xql file).
  • Note that there is no Pleiades ID in the EDH XML (or in any EAGLE dataset), but there are Trismegistos Geo IDs. This is because the plan during the EAGLE project was to get all places of provenance into Trismegistos GEO and map them to Pleiades later. This mapping was started using Wikidata mix’n’match but is far from complete and is currently in need of an update.

The features

  • In the list view you can select an item. Each item can be edited normally (create, update, delete)
  • The editor that updates files reproduces, in simple XSLT, a part of the Leiden+ logic and conventions for entering or updating data. It validates the data against the tei-epidoc.rng schema after performing the changes; the plan is to have it validate before the real changes are made.
  • The search simply searches in a number of indexed elements. It is not a full-text index. There are also range indexes set to speed up the queries, besides the other indexes shipped with eXist-db.
  • You can create a new entry with the Leiden+-like editor and save it. It will first be validated, and if it is not valid you are pointed to the problems. There was not enough time to add the vocabularies and update the editor.
  • Once you view an item, you will find (in admittedly ugly tables) a first section with metadata, the text, some additional information on persons, and a map:
  • The text exploits some of the parameters of the EpiDoc Stylesheets. You can change the desired value, hit change, and see the different output.
  • The IDs of corresponding inscriptions are pulled from the EAGLE IDs API here in Hamburg, using Trismegistos data; that app will hopefully soon be moved to Trismegistos itself.
  • The EDH id is instead used to query the EDH data API and get the information about persons, which is printed below the text.
  • For each element with a @ref in the XML files you will find the name of the element and a link to the value, e.g. a link to the EAGLE vocabularies.
  • Where this is a TM Geo ID, the ID is used to query the Wikidata SPARQL endpoint and retrieve the coordinates and the corresponding Pleiades ID (where these exist); the same logic could be used for VIAF, GeoNames, etc. This is done via an HTTP request directly in the XQuery powering the app (a rough Python equivalent is sketched after this list).
  • The Pleiades ID thus retrieved (which could certainly be obtained in other ways) is then used in JavaScript to query Pelagios and print the map below (taken from the hello world example in the Pelagios repository).
  • At http://betamasaheft.aai.uni-hamburg.de/api/OEDUc/places/all and http://betamasaheft.aai.uni-hamburg.de/api/OEDUc/places/all/void, two REST XQuery functions provide the TTL files for Pelagios (but not a dump as required, although this could be done). The places annotations cover, at the moment, only the first 20 entries; see rest.xql.
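
The Wikidata lookup mentioned in the list above is done in XQuery inside the app; purely as an illustration, a rough Python equivalent might look like the sketch below. The property IDs (P1958 for Trismegistos Geo ID, P625 for coordinate location, P1584 for Pleiades ID) are assumptions to be checked against Wikidata, and the example Geo ID is a placeholder.

import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

def lookup_tm_geo(tm_geo_id):
    # Property IDs below are assumptions: P1958 = Trismegistos Geo ID,
    # P625 = coordinate location, P1584 = Pleiades ID.
    query = """
    SELECT ?place ?coords ?pleiades WHERE {
      ?place wdt:P1958 "%s" .
      OPTIONAL { ?place wdt:P625 ?coords . }
      OPTIONAL { ?place wdt:P1584 ?pleiades . }
    }
    """ % tm_geo_id
    response = requests.get(WIKIDATA_SPARQL,
                            params={"query": query, "format": "json"},
                            headers={"User-Agent": "OEDUc-demo/0.1"})
    response.raise_for_status()
    return [{name: value["value"] for name, value in row.items()}
            for row in response.json()["results"]["bindings"]]

# print(lookup_tm_geo("12345"))   # "12345" is a placeholder TM Geo ID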

Future tasks

For the purpose of having a sample app to help people get started with their projects and see some of the possibilities at work, besides making it a bit nicer, it would be useful if it could also have the following:

  • Add more data from the EDH API, especially from edh_geography_uri, which Frank has added and which holds the URI of the geo data; adding .json to this gets the JSON data for the place of finding, which has an “edh_province_uri” with the data about the province.
  • Validate before submitting
  • Add more support for parameters in the EpiDoc example XSLT (e.g. for Zotero bibliography contained in div[@type='bibliography'])
  • Improve the upconversion and the editor with more and more precise matchings
  • Provide functionality to use xpath to search the data
  • Add advanced search capabilities to filter results by id, content provider, etc.
  • Add images support
  • Include all EAGLE data (currently only EDH dumps data is in, but the system scales nicely)
  • Include query to the EAGLE media wiki of translations (api currently unavailable)
  • Show related items based on any of the values
  • Include in the editor the possibility to tag named entities
  • Sync the EpiDoc XSLT repository and the EAGLE vocabularies with a webhook

OEDUc: Disambiguating EDH person RDF working group

July 25th, 2017 by Tom Gheldof

One of the working groups at the Open Epigraphic Data Unconference (OEDUc) meeting in London (May 15, 2017) focussed on disambiguating EDH person RDF. Since the Epigraphic Database Heidelberg (EDH) has made all of its data available to download in various formats in an Open Data Repository, it is possible to extract the person data from the EDH Linked Data RDF.

A first step in enriching this prosopographic data might be to link the EDH person names with PIR and Trismegistos (TM) references. At this moment the EDH person RDF only contains links to attestations of persons, rather than unique individuals (although it attaches only one REF entry to persons who have multiple occurrences in the same text), so we cannot use the EDH person URI to disambiguate persons from different texts.

Given that EDH already contains links to PIR in its bibliography, we could start by extracting these (which should be possible using a simple Python script) and linking them to the EDH person REFs. In the case where there is only one person attested in a text, the PIR reference can be linked directly to the RDF of that EDH person attestation. If, however (and probably in most cases), there are multiple person references in a text, we should try another procedure (possibly by looking at the first letter of the EDH name and matching it to the alphabetical PIR volume).
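
A minimal, hypothetical sketch of that first step is given below; the regular expression and the record layout are illustrative guesses rather than the actual EDH citation format or export schema.

import re

# Assumed citation shape, e.g. "PIR^2 N 131"; the real EDH bibliography format may differ.
PIR_PATTERN = re.compile(r"PIR\S*\s+([A-Z])\s+(\d+)")

def pir_refs(bibliography):
    """Return (letter, number) pairs for every PIR-like citation in a bibliography string."""
    return PIR_PATTERN.findall(bibliography)

def link_pir(record):
    """Attach the PIR reference only in the unambiguous single-person, single-citation case."""
    refs = pir_refs(record.get("bibliography", ""))
    if len(record.get("persons", [])) == 1 and len(refs) == 1:
        record["persons"][0]["pir"] = "PIR %s %s" % refs[0]
    return record

example = {"bibliography": "PIR^2 N 131.",            # made-up citation for illustration
           "persons": [{"name": "Nonia Optata"}]}
print(link_pir(example))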

A second way of enriching the EDH person RDF could be done by using the Trismegistos People portal. At the moment this database of persons and attestations of persons in texts consists mostly of names from papyri (from Ptolemaic Egypt), but TM is in the process of adding all names from inscriptions (using an automated NER script on the textual data from EDCS via the EAGLE project). Once this is completed, it will be possible to use the stable TM PER ID (for persons) and TM person REF ID (for attestations of persons) identifiers (and URIs) to link up with EDH.

The recommended procedure would be similar to the one for PIR. Whenever there is a one-to-one relationship with a single EDH person reference, the TM person REF ID could be linked to it directly. In the case of multiple attestations of different names in an inscription, we could modify the TM REF dataset by first removing all double attestations, and then matching the remaining ones to the EDH RDF by making use of the order of appearance (in EDH the person that occurs first in an inscription receives a URI (?) that consists of the EDH text ID and an integer representing the place of the name in the text: e.g., http://edh-www.adw.uni-heidelberg.de/edh/person/HD000001/1 is the first person name appearing in text HD000001). Finally, we could check for mistakes by matching the first character(s) of the EDH name with the first character(s) of the TM REF name. Ultimately, by using the links from the TM REF IDs to the TM PER IDs, we could send back to EDH which REF names are to be considered the same person, and thus further disambiguate their person RDF data.
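
A rough sketch of that matching procedure, with the first-character sanity check, might look like the following; the data structures are illustrative and do not reproduce the real TM or EDH export formats.

def match_attestations(edh_persons, tm_refs):
    """Align TM person attestations with EDH person URIs by order of appearance in a text."""
    matches = []
    for position, (edh, tm) in enumerate(zip(edh_persons, tm_refs), start=1):
        agrees = edh["name"][:1].lower() == tm["name"][:1].lower()   # cheap error check
        matches.append({
            "edh_uri": "http://edh-www.adw.uni-heidelberg.de/edh/person/%s/%d"
                       % (edh["text_id"], position),
            "tm_ref_id": tm["ref_id"],
            "first_letter_agrees": agrees,
        })
    return matches

edh = [{"text_id": "HD000001", "name": "Nonia Optata"}]
tm = [{"ref_id": 123456, "name": "Nonia"}]            # ref_id is a made-up example
print(match_attestations(edh, tm))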

This process would be a good step in enhancing the SNAP:DRGN-compliant RDF produced by EDH, which was also addressed in another working group: recommendations for EDH person-records in SNAP RDF.

OEDUc: EDH and Pelagios location disambiguation Working Group

July 5th, 2017 by Valeria Vitale

From the beginning of the unconference, an interest in linked open geodata seemed to be shared by a number of participants. Moreover, attention to gazetteers and alignment appeared among the desiderata for the event expressed by the EDH developers. So, in the second part of the unconference, we had a look at what sort of geographic information can be found in the EDH and what could be added.

The discussion, of course, involved Pelagios and Pleiades and their different but related roles in establishing links between sources of geographical information. EDH is already one of the contributors to the Pelagios LOD ecosystem. Using Pleiades IDs to identify places, it was relatively easy for the EDH to make its database compatible with Pelagios and discoverable on Peripleo, Pelagios’s search and visualisation engine.

However, looking into the data available for download, we focused on a couple of things. One is that each of the epigraphic texts in the EDH has, of course, a unique identifier (the EDH text ID). The other is that each of the places mentioned also has a unique identifier (the EDH geo ID), in addition to the Pleiades ID. As one can imagine, the relationships between texts and places can be one to one or one to many (as a place can be related to more than one text and a text can be related to more than one place). All places mentioned in the EDH database have an EDH geo ID, and this information becomes especially relevant for those places that do not already have an ID in Pleiades or GeoNames. In this perspective, EDH geo IDs fill the gaps left by the other two gazetteers and meet the specific needs of the EDH.

Exploring Peripleo to see what information from the EDH can be found in it and how it gets visualised, we noticed that only the information about the texts appears as resources (identified by the diamond icon), while the EDH geo IDs do not show up as gazetteer-like references, as happens for other databases such as Trismegistos or Vici.

So we decided to do a little work on the EDH geo IDs, more specifically:

  1. To extract them and treat them as a small, internal gazetteer that could be contributed to Pelagios. Such a feature wouldn’t represent a substantial change in the way EDH is used, or in how the data are found in Peripleo, but we thought it could improve the visibility of the EDH in the Pelagios panorama and possibly act as an intermediate step for the matching of different gazetteers that focus on the ancient world.
  2. The idea of using the EDH geo IDs as bridges sounded especially interesting when thinking of the possible interaction with the Trismegistos database, so we wondered whether a closer collaboration between the two projects couldn’t benefit them both. Trismegistos, in fact, is another project with substantial geographic information: about 50,000 place-names mapped against Pleiades, Wikipedia and GeoNames. Since the last Linked Pasts conference, they have tried to align their place-names with Pelagios, but the operation was successful for only 10,000 of them. We believe that enhancing the links between Trismegistos and EDH could make them better connected to each other and both more effectively present in the LOD ecosystem around the ancient world.

With these two objectives in mind, we downloaded the GeoJSON dump from the EDH website and extracted the text IDs, the geo IDs, and their relationships. Once the lists (which can be found in the GitHub repository) had been created, it became relatively straightforward to try to match the EDH geo IDs with the Trismegistos Geo IDs. In this way, through the intermediate step of the geographical relationships between text IDs and geo IDs in EDH, Trismegistos also gains a better and more informative connection with the EDH texts.
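
As an illustration of that extraction step, a small Python sketch is given below; the property names inside the GeoJSON features are assumptions about the dump’s structure rather than its real schema, and the file names are placeholders.

import csv
import json

def extract_relations(geojson_path, out_csv):
    """Write (text ID, geo ID, Pleiades URI) rows extracted from the EDH GeoJSON dump."""
    with open(geojson_path, encoding="utf-8") as f:
        dump = json.load(f)
    rows = []
    for feature in dump.get("features", []):
        props = feature.get("properties", {})
        rows.append((
            props.get("id"),             # assumed: EDH text ID, e.g. HD000001
            props.get("edh_geo_id"),     # assumed: internal EDH geo ID
            props.get("pleiades_uri"),   # assumed: Pleiades URI, possibly empty
        ))
    with open(out_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(("text_id", "geo_id", "pleiades_uri"))
        writer.writerows(rows)

# extract_relations("edh_geodata.geojson", "edh_text_geo.csv")   # placeholder file names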

This first, quick attempt at aligning geodata using their common references might help test how good the automatic matches are, and start a conversation about how to troubleshoot mismatches and other errors. This closer look at geographical information also brought up a small bug in the EDH interface: in the internal EDH search, when there is a connection to a place that does not have a Pleiades ID, the website treats it as an error, instead of, for example, referring to the internal EDH geo IDs. This may be something worth flagging to the EDH developers, and it underlines, in a way, another benefit of treating the EDH geo IDs as a small gazetteer in their own right.

In the end, we used the common IDs (either in Pleiades or GeoNames) to do a first alignment between the Trismegistos and EDH place IDs. We didn’t have time to check the accuracy (but you are welcome to take this experiment one step further!), although we fully expect to get quite a few positive results. And we have the list of EDH geo IDs ready to be reused for other purposes, and maybe to make its debut on the Peripleo scene.

OEDUc: recommendations for EDH person-records in SNAP RDF

July 3rd, 2017 by Gabriel Bodard

At the first meeting of the Open Epigraphic Data Unconference (OEDUc) in London in May 2017, one of the working groups that met in the afternoon (we claim to have completed our brief, so do not propose to meet again) examined the person-data offered for download on the EDH open data repository, and made some recommendations for making this data more compatible with the SNAP:DRGN guidelines.

Currently, the RDF of a person-record in the EDH data (in TTL format) looks like:

<http://edh-www.adw.uni-heidelberg.de/edh/person/HD000001/1>
    a lawd:Person ;
    lawd:PersonalName "Nonia Optata"@lat ;
    gndo:gender <http://d-nb.info/standards/vocab/gnd/gender#female> ;
    nmo:hasStartDate "0071" ;
    nmo:hasEndDate "0130" ;
    snap:associatedPlace <http://edh-www.adw.uni-heidelberg.de/edh/geographie/11843> ,
        <http://pleiades.stoa.org/places/432808#this> ;
    lawd:hasAttestation <http://edh-www.adw.uni-heidelberg.de/edh/inschrift/HD000001> .

We identified a few problems with this data structure, and made recommendations as follows.

  1. We propose that EDH split the current person references in edh_people.ttl into: (a) one lawd:Person, which has the properties for name, gender, status, membership, and hasAttestation, and (b) one lawd:PersonAttestation, which has the properties dct:source (which points to the URI for the inscription itself) and lawd:Citation. Date and location etc. can then be derived from the inscription (which is where they belong). (A rough sketch of the proposed shape follows the notes below.)
  2. A few observations:
    1. Lawd:PersonalName is a class, not a property. The recommended property for a personal name as a string is foaf:name
    2. the language tag for Latin should be @la (not lat)
    3. there are currently thousands of empty strings tagged as Greek
    4. Nomisma date properties cannot be used on person, because the definition is inappropriate (and unclear)
    5. As documented, Nomisma date properties refer only to numismatic dates, not epigraphic (I would request a modification to their documentation for this)
    6. the D-N.B ontology for gender is inadequate (which is partly why SNAP has avoided tagging gender so far); a better ontology may be found, but I would suggest plain text values for now
    7. to the person record, above, we could then add dct:identifier with the PIR number (and compare discussion of plans for disambiguation of PIR persons in another working group)
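
To make the first recommendation concrete, here is a rough sketch of the proposed two-node shape built with rdflib in Python. The attestation URI scheme, the lawd namespace URI and the exact property choices (foaf:name, dct:source) are assumptions used to illustrate the split, not a finalised specification.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, FOAF, DCTERMS

LAWD = Namespace("http://lawd.info/ontology/")     # assumed namespace URI for lawd
EDH = "http://edh-www.adw.uni-heidelberg.de/edh/"

g = Graph()
g.bind("lawd", LAWD)
g.bind("foaf", FOAF)
g.bind("dct", DCTERMS)

person = URIRef(EDH + "person/HD000001/1")
attestation = URIRef(EDH + "person/HD000001/1#attestation")   # hypothetical URI scheme
inscription = URIRef(EDH + "inschrift/HD000001")

# (a) the person: name, gender, status, membership and the attestation link live here
g.add((person, RDF.type, LAWD.Person))
g.add((person, FOAF.name, Literal("Nonia Optata", lang="la")))   # @la, per the notes above
g.add((person, LAWD.hasAttestation, attestation))

# (b) the attestation: points at the inscription, which carries date, place, etc.
g.add((attestation, RDF.type, LAWD.PersonAttestation))
g.add((attestation, DCTERMS.source, inscription))

print(g.serialize(format="turtle"))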

Global Philology Workshop Week in Leipzig

June 29th, 2017 by Monica Berti

Within the framework of the BMBF funded Global Philology Planning Project, we would like to announce three workshops that will be taking place at the University of Leipzig in the next two weeks:

Head of Digital Research at the National Archives

June 21st, 2017 by Gabriel Bodard

Posted for Olivia Pinto (National Archives, Kew, UK):

Job Opportunity at The National Archives

Head of Digital Research

About the role

The National Archives has set itself the ambition of becoming a digital archive by instinct and design. The digital strategy takes this forward through the notion of a disruptive archive which positively reimagines established archival practice, and develops new ways of solving core digital challenges. You will develop a research programme to progress this vision, to answer key questions for TNA and the Archives Sector around digital archival practice and delivery. You will understand and navigate through the funding landscape, identifying key funders (RCUK and others) to build relations at a senior level to articulate priorities around digital archiving, whilst taking a key role in coordinating digitally focused research bids. You will also build key collaborative relationships with academic partners and undertake horizon scanning of the research landscape, tracking and engaging with relevant research projects nationally and internationally. You will also recognise the importance of developing an evidence base for our research into digital archiving and will lead on the development of methods for measuring impact.

About you

As someone who will be mentoring and managing a team of researchers, as well as leading on digital programming across the organisation, you’ll need to be a natural at inspiring and engaging the people you work with. You will also have the confidence to engage broadly with external stakeholders and partners. Your background and knowledge of digital research, relevant in the context of a memory institution such as The National Archives, will gain you the respect you need to deliver an inspiring digital research programme. You combine strategic leadership with a solid understanding of the digital research landscape as well as the tools and technologies that will underpin the development of a digital research programme. You will come with a strong track record in digital research, a doctorate in a discipline relevant to our digital research agenda, and demonstrable experience of relationship development at a senior level with the academic and research sectors.

Join us here in beautiful Kew, just 10 minutes walk from the Overground and Underground stations, and you can expect an excellent range of benefits. They include a pension, flexible working and childcare vouchers, as well as discounts with local businesses. We also offer well-being resources (e.g. onsite therapists) and have an on-site gym, restaurant, shop and staff bar.

To apply please follow the link: https://www.civilservicejobs.service.gov.uk/csr/jobs.cgi?jcode=1543657

Salary: £41,970

Closing date: Wednesday 28th June 2017

Pleiades sprint on Pompeian buildings

June 20th, 2017 by Valeria Vitale

Casa della Statuetta Indiana, Pompei.

On Monday the 26th of June, from 15:00 to 17:00 BST, Pleiades is organising an editing sprint to create additional URIs for Pompeian buildings, preferably looking at those located in Regio I, Insula 8.

Participants will meet remotely on the Pleiades IRC chat. Providing monument-specific IDs will enable a more efficient and granular use and organisation of Linked Open Data related to Pompeii, and will support the work of digital projects such as the Ancient Graffiti Project.

Everyone is welcome to join, but a Pleiades account is required to edit the online gazetteer.

OEDUc: EDH and Pelagios NER working group

June 19th, 2017 by Sarah Middle

Participants:  Orla Murphy, Sarah Middle, Simona Stoyanova, Núria Garcia Casacuberta

Report: https://github.com/EpiDoc/OEDUc/wiki/EDH-and-Pelagios-NER

The EDH and Pelagios NER working group was part of the Open Epigraphic Data Unconference held on 15 May 2017. Our aim was to use Named Entity Recognition (NER) on the text of inscriptions from the Epigraphic Database Heidelberg (EDH) to identify placenames, which could then be linked to their equivalent terms in the Pleiades gazetteer and thereby integrated with Pelagios Commons.

Data about each inscription, along with the inscription text itself, is stored in one XML file per inscription. In order to perform NER, we therefore first had to extract the inscription text from each XML file (contained within <ab></ab> tags), then strip out any markup from the inscription to leave plain text. There are various Python libraries for processing XML, but most of these turned out to be a bit too complex for what we were trying to do, or simply returned the identifier of the <ab> element rather than the text it contained.

Eventually, we found the Python library Beautiful Soup, which converts an XML document to structured text, from which you can identify your desired element, then strip out the markup to convert the contents of this element to plain text. It is a very simple and elegant solution with only eight lines of code to extract and convert the inscription text from one specific file. The next step is to create a script that will automatically iterate through all files in a particular folder, producing a directory of new files that contain only the plain text of the inscriptions.
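
A minimal sketch along those lines is given below; the folder names are illustrative, and the "xml" parser assumes that lxml is installed alongside Beautiful Soup.

from pathlib import Path
from bs4 import BeautifulSoup

def inscription_text(xml_path):
    """Extract the plain text of the <ab> element from one EpiDoc XML file."""
    soup = BeautifulSoup(Path(xml_path).read_text(encoding="utf-8"), "xml")
    ab = soup.find("ab")
    return " ".join(ab.get_text().split()) if ab else ""

# Iterate over a folder of EDH XML files and write one plain-text file each.
in_dir, out_dir = Path("edh_xml"), Path("edh_txt")   # illustrative folder names
out_dir.mkdir(exist_ok=True)
for xml_file in sorted(in_dir.glob("*.xml")):
    out_file = out_dir / (xml_file.stem + ".txt")
    out_file.write_text(inscription_text(xml_file), encoding="utf-8")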

Once we have a plain text file for each inscription, we can begin the process of named entity extraction. We decided to follow the methods and instructions shown in the two Sunoikisis DC classes on Named Entity Extraction:

https://github.com/SunoikisisDC/SunoikisisDC-2016-2017/wiki/Named-Entity-Extraction-I

https://github.com/SunoikisisDC/SunoikisisDC-2016-2017/wiki/Named-Entity-Extraction-II

Here is a short outline of the steps this might involve when it is done in the future (a minimal sketch of the baseline step follows the outline).

  1. Extraction
    1. Split text into tokens, make a python list
    2. Create a baseline
      1. cycle through each token of the text
      2. if the token starts with a capital letter it’s a named entity (only one type, i.e. Entity)
    3. Classical Language Toolkit (CLTK)
      1. for each token in a text, the tagger checks whether that token is contained within a predefined list of possible named entities
      2. Compare to baseline
    4. Natural Language Toolkit (NLTK)
      1. Stanford NER Tagger for Italian works well with Latin
      2. Differentiates between different kinds of entities: place, person, organization or none of the above, more granular than CLTK
      3. Compare to both baseline and CLTK lists
  2. Classification
    1. Part-Of-Speech (POS) tagging – precondition before you can perform any other advanced operation on a text, information on the word class (noun, verb etc.); TreeTagger
    2. Chunking – sub-dividing a section of text into phrases and/or meaningful constituents (which may include 1 or more text tokens); export to IOB notation
    3. Computing entity frequency
  3. Disambiguation
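
As a concrete illustration of the baseline step (1.2 above), a few lines of Python are enough; the example sentence is invented.

def baseline_entities(text):
    """Steps 1.1-1.2: tokenise the text and treat every capitalised token as a named entity."""
    tokens = text.split()                           # 1.1: split text into tokens
    return [t for t in tokens if t[:1].isupper()]   # 1.2: capitalised token => Entity

print(baseline_entities("Dis Manibus Noniae Optatae coniugi carissimae"))
# ['Dis', 'Manibus', 'Noniae', 'Optatae']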

Although we didn’t make as much progress as we would have liked, we have achieved our aim of creating a script to prepare individual files for NER processing, and have therefore laid the groundwork for future developments in this area. We hope to build on this work to successfully apply NER to the inscription texts in the EDH in order to make them more widely accessible to researchers and to facilitate their connection to other, similar resources, like Pelagios.

OEDUc: Images and Image metadata working group

June 13th, 2017 by simonastoyanova

Participants: Sarah Middle, Angie Lumezeanu, Simona Stoyanova
Report: https://github.com/EpiDoc/OEDUc/wiki/Images-and-image-metadata

The Images and Image Metadata working group met at the London meeting of the Open Epigraphic Data Unconference on May 15, 2017, and discussed the issues of copyright, metadata formats, image extraction and licence transparency in the Epigraphik Fotothek Heidelberg, the database which contains images and metadata relating to nearly forty thousand Roman inscriptions from collections around the world. Were the EDH to lose its funding and the website its support, one of the biggest and most useful digital epigraphy projects would start disintegrating. While its data is available for download, its usability would be greatly compromised. Thus, this working group focused on issues pertaining to the EDH image collection. The materials we worked with are the JPG images as seen on the website, and the image metadata files which are available as XML and JSON data dumps on the EDH data download page.

The EDH Photographic Database index page states: “The digital image material of the Photographic Database is with a few exceptions directly accessible. Hitherto it had been the policy that pictures with unclear utilization rights were presented only as thumbnail images. In 2012 as a result of ever increasing requests from the scientific community and with the support of the Heidelberg Academy of the Sciences this policy has been changed. The approval of the institutions which house the monuments and their inscriptions is assumed for the non commercial use for research purposes (otherwise permission should be sought). Rights beyond those just mentioned may not be assumed and require special permission of the photographer and the museum.”

During a discussion with Frank Grieshaber we found out that the information in this paragraph is only available on this webpage, with no individual licence details in the metadata records of the images, either in the XML or the JSON data dumps. It would be useful for this information to be included in the records, though it is not clear how to accomplish this efficiently for each photograph, since all photographers would need to be contacted first. Currently, the rights information in the XML records says “Rights Reserved – Free Access on Epigraphischen Fotothek Heidelberg”, which presumably points to the “research purposes” part of the statement on the EDH website.

All other components of EDH – inscriptions, bibliography, geography and people RDF – have been released under Creative Commons Attribution-ShareAlike 3.0 Unported license, which allows for their reuse and repurposing, thus ensuring their sustainability. The images, however, will be the first thing to disappear once the project ends. With unclear licensing and the impossibility of contacting every single photographer, some of whom are not alive anymore and others who might not wish to waive their rights, data reuse becomes particularly problematic.

One possible way of figuring out the copyright of individual images is to check the reciprocal links to the photographic archive of the partner institutions who provided the images, and then read through their own licence information. However, these links are only visible from the HTML and not present in the XML records.

Given that the image metadata in the XML files is relatively detailed and already in place, we decided to focus on the task of image extraction for research purposes, which is covered by the general licensing of the EDH image databank. We prepared a Python script for batch download of the entire image databank, available on the OEDUc GitHub repo. Each image has a unique identifier which is the same as its filename and the final string of its URL. This means that when an inscription has more than one photograph, each one has its individual record and URI, which allows for complete coverage and efficient harvesting. The images are numbered sequentially, and in the case of a missing image, the process skips that entry and continues on to the next one. Since the databank includes some 37,530-plus images, the script pauses for 30 seconds after every 200 files to avoid a timeout. We don’t have access to the high-resolution TIFF images, so this script downloads the JPGs from the HTML records.
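
The core logic of that script can be sketched as follows; the URL pattern used here is an assumption for illustration only (the real one is in the script in the OEDUc GitHub repo), and the ID range comes from the figure quoted above.

import time
from pathlib import Path
import requests

BASE = "https://edh-www.adw.uni-heidelberg.de/fotos/F{:06d}.jpg"   # assumed URL pattern
OUT = Path("edh_images")
OUT.mkdir(exist_ok=True)

def download_images(first=1, last=37530):
    fetched = 0
    for number in range(first, last + 1):
        response = requests.get(BASE.format(number))
        if response.status_code != 200:      # missing image: skip and carry on
            continue
        (OUT / "F{:06d}.jpg".format(number)).write_bytes(response.content)
        fetched += 1
        if fetched % 200 == 0:               # pause after every 200 files to avoid a timeout
            time.sleep(30)

# download_images()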

The EDH images included in the EAGLE MediaWiki are all under an open licence and link back to the EDH databank. A task for the future will be to compare the two lists to get a sense of the EAGLE coverage of EDH images and feed back their licensing information to the EDH image records. One issue is the lack of file-naming conventions in EAGLE, where some photographs carry a publication citation (CIL_III_14216,_8.JPG, AE_1957,_266_1.JPG), a random name (DR_11.jpg) and even a descriptive filename which may contain an EDH reference (Roman_Inscription_in_Aleppo,_Museum,_Syria_(EDH_-_F009848).jpeg). Matching these to the EDH databank will have to be done by cross-referencing the publication citations either in the filename or in the image record.

A further future task could be to embed the image metadata into the image files themselves. The EAGLE MediaWiki images already have Exif data (added automatically by the camera), but it might be useful to add descriptive and copyright information internally, following the IPTC data set standard (e.g. title, subject, photographer, rights, etc.). This would help bring the inscription file, image record and image itself back together in the event of data scattering after the end of the project. Currently, linkage exists between the inscription files and the image records. Embedding at least the HD number of the inscription directly into the image metadata will allow us to gradually bring the resources back together, following changes in copyright and licensing.
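
As a sketch of what such embedding could look like, one option is to shell out to ExifTool from Python; the IPTC tag names below are best-effort assumptions and should be checked against the IPTC standard and ExifTool's documentation before use.

import subprocess

def embed_iptc(image_path, hd_number, photographer, rights):
    """Write basic descriptive and rights metadata into a JPG via ExifTool."""
    subprocess.run([
        "exiftool",
        "-IPTC:ObjectName=%s" % hd_number,        # e.g. the EDH HD number of the inscription
        "-IPTC:By-line=%s" % photographer,        # photographer credit
        "-IPTC:CopyrightNotice=%s" % rights,      # rights statement
        "-overwrite_original",
        image_path,
    ], check=True)

# embed_iptc("edh_images/F009848.jpg", "HD000001", "Photographer name", "Rights reserved")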

Out of the three tasks we set out to discuss, one turned out to be impractical and unfeasible, one we accomplished (and published the code for), and one remains to be worked on in the future. Ascertaining the copyright status of all images is physically impossible, so all future experiments will be done on the EDH images in EAGLE MediaWiki. The script for extracting JPGs from the HTML is available on the OEDUc GitHub repo. We have drafted a plan for embedding metadata into the images, following the IPTC standard.

Open Epigraphic Data Unconference report

June 7th, 2017 by Gabriel Bodard

Last month, a dozen or so scholars met in London (and were joined by a similar number via remote video-conference) to discuss and work on the open data produced by the Epigraphic Database Heidelberg. (See call and description.)

Over the course of the day seven working groups were formed, two of which completed their briefs within the day, but the other five will lead to ongoing work and discussion. Fuller reports from the individual groups will follow here shortly, but here is a short summary of the activities, along with links to the pages in the Wiki of the OEDUc Github repository.

Useful links:

  1. All interested colleagues are welcome to join the discussion group: https://groups.google.com/forum/#!forum/oeduc
  2. Code, documentation, and other notes are collected in the Github repository: https://github.com/EpiDoc/OEDUc

1. Disambiguating EDH person RDF
(Gabriel Bodard, Núria García Casacuberta, Tom Gheldof, Rada Varga)
We discussed and broadly specced out a couple of steps in the process for disambiguating PIR references for inscriptions in EDH that contain multiple personal names, for linking together person references that cite the same PIR entry, and for using Trismegistos data to further disambiguate EDH persons. We haven’t written any actual code to implement this yet, but we expect a few Python scripts would do the trick.

2. Epigraphic ontology
(Hugh Cayless, Paula Granados, Tim Hill, Thomas Kollatz, Franco Luciani, Emilia Mataix, Orla Murphy, Charlotte Tupman, Valeria Vitale, Franziska Weise)
This group discussed the various ontologies available for encoding epigraphic information (LAWDI, Nomisma, EAGLE Vocabularies) and ideas for filling the gaps between them. This is a long-standing desideratum of the EpiDoc community, and will be an ongoing discussion (perhaps the most important of the workshop).

3. Images and image metadata
(Angie Lumezeanu, Sarah Middle, Simona Stoyanova)
This group attempted to write scripts to track down copyright information on images in EDH (too complicated, but EAGLE may have more of this), download images and metadata (scripts in Github), and explored the possibility of embedding metadata in the images in IPTC format (in progress).

4. EDH and SNAP:DRGN mapping
(Rada Varga, Scott Vanderbilt, Gabriel Bodard, Tim Hill, Hugh Cayless, Elli Mylonas, Franziska Weise, Frank Grieshaber)
In this group we reviewed the status of the SNAP:DRGN recommendations for person-data in RDF, and then looked in detail at the person list exported from the EDH data. A list of suggestions for improving this data was produced for EDH to consider. This task was considered to be complete. (Although Frank may have feedback or questions for us later.)

5. EDH and Pelagios NER
(Orla Murphy, Sarah Middle, Simona Stoyanova, Núria Garcia Casacuberta, Thomas Kollatz)
This group explored the possibility of running automated named entity extraction on the Latin texts of the EDH inscriptions, in two stages: extracting plain text from the XML (code in Github); and applying CLTK/NLTK scripts to identify entities (in progress).

6. EDH and Pelagios location disambiguation
(Paula Granados, Valeria Vitale, Franco Luciani, Angie Lumezeanu, Thomas Kollatz, Hugh Cayless, Tim Hill)
This group aimed to work on disambiguating location information in the EDH data export, for example making links between GeoNames place identifiers, TM Geo places, Wikidata and Pleiades identifiers, via the Pelagios gazetteer or other linking mechanisms. A pathway for resolving these was identified, but work is still ongoing.

7. Exist-db mashup application
(Pietro Liuzzo)
This task, which Dr Liuzzo carried out alone, since his network connection didn’t allow him to join any of the discussion groups on the day, was to create an implementation of existing code for displaying and editing epigraphic editions (using eXist-db, Leiden+, etc.) and to offer a demonstration interface by which the EDH data could be served up to the public and contributions and improvements invited. (A preview of “epigraphy.info”, perhaps?)

Digital Classicist London seminar 2017 programme

May 23rd, 2017 by Gabriel Bodard

Institute of Classical Studies

Senate House, Malet Street, London WC1E 7HU

Fridays at 16:30 in room 234*

Jun 2 Sarah Middle (Open University) Linked Data and Ancient World Research: studying past projects from a user perspective
Jun 9 Donald Sturgeon (Harvard University) Crowdsourcing a digital library of pre-modern Chinese
Jun 16* Valeria Vitale et al. (Institute of Classical Studies) Recogito 2: linked data without the pointy brackets
Jun 23* Dimitar Iliev et al. (University of Sofia “St. Kliment Ohridski”) Historical GIS of South-Eastern Europe
Jun 30 Lucia Vannini (Institute of Classical Studies) The role of Digital Humanities in Papyrology: Practices and user needs in papyrological research
& Paula Granados García (Open University) Cultural Contact in Early Roman Spain through Linked Open Data resources

Jul 7 Elisa Nury (King’s College London) Collation Visualization: Helping Users to Explore Collated Manuscripts
Jul 14 Sarah Ketchley (University of Washington) Re-Imagining Nineteenth Century Nile Travel & Excavation for a Digital Age: The Emma B. Andrews Diary Project
Jul 21 Dorothea Reule & Pietro Liuzzo (Hamburg University) Issues in the development of digital projects based on user requirements. The case of Beta maāḥǝft
Jul 28 Rada Varga (Babeș-Bolyai University, Cluj-Napoca) Romans 1by1: Transferring information from ancient people to modern users

*Except Jun 16 & 23, room G34

digitalclassicist.org/wip/wip2017.html

This series is focussed on the user and reader needs of digital projects and resources, and assumes a wide definition of classics, including the whole ancient world rather than only the Greco-Roman Mediterranean. The seminars will be pitched at a level suitable for postgraduate students or interested colleagues in Archaeology, Classics, Digital Humanities and related fields.

Digital Classicist London seminar is organized by Gabriel Bodard, Simona Stoyanova and Valeria Vitale (ICS) and Simon Mahony and Eleanor Robson (UCL).

ALL WELCOME

Unlocking Sacred Landscapes, by Giorgos Papantoniou

May 5th, 2017 by Gabriel Bodard

Project report by Dr Giorgos Papantoniou, papantog@uni-bonn.de

Previous multi-dimensional approaches to the study of ancient Mediterranean societies have shown that social, economic and religious lives were closely entwined. In attempting to engage with Cyprus’s multiple identities – and the ways in which islanders may have negotiated, performed or represented their identities – several material vectors related to ritual and sacred space must be taken into consideration. The sharp modern distinction between sacred and profane is not applicable to antiquity, and the terms ritual, cult and sacred space in Unlocking Sacred Landscapes: Cypriot Sanctuaries as Economic Units are used broadly to include the domestic and funerary spheres of life as well as formally constituted sanctuaries. Perceiving ritual space as instrumental in forming both power relations and the worldview of ancient people, and taking ancient Cyprus as a case study, the Project aims at elucidating how meanings and identities were diachronically expressed in, or created by, the topographical and economic setting of ritual and its material depositions and dedications.

The evidence of cult or sacred space is very limited and ambiguous before the Late Bronze Age. During the Late Bronze Age (ca. 1700-1125/1100 BC), however ritual spaces were closely linked to industrial activities; the appropriation, distribution, and consumption of various resources (especially copper), labour and land was achieved by the elite through exploitation of supernatural knowledge. The Early Iron Age (ca. 1125/1100-750 BC) landscapes are very difficult to approach. We can, however, identify sanctuary sites in the countryside towards the end of this period. This phenomenon might well relate to the consolidation of the Iron Age Cypriot polities (known in the archaeological literature as Cypriot city-kingdoms) and their territories. While urban sanctuaries become religious communal centres, where social, cultural and political identities are affirmed, an indication of the probable use of extra-urban sanctuaries in the political establishment of the various polities of the Cypro-Archaic (ca. 750-480 BC) and Cypro-Classical (ca. 480-310 BC) periods has recently been put forward.

During the Hellenistic period (ca. 310-30 BC), a process of official neglect of the extra-urban sanctuaries signals a fundamental transformation in the social perception of the land. After the end of the city-kingdoms, and the movement from many political identities to a single identity, extra-urban sanctuaries were important mainly to the local extra-urban population. By the Roman period (ca. 30 BC-330 AD), the great majority of Hellenistic extra-urban sanctuaries are ‘dead’. When the social memory, elite or non-elite, that kept them alive ‘dies’, they ‘die’ with it; what usually distinguishes the surviving sites is what the defunct sites lacked: political scale and significance. As the topography of Roman sanctuary sites reveals, this is not to say that extra-urban sanctuaries did not exist anymore. Over time, however, they started to become primarily the concern of local audiences. The annexation and ‘provincialisation’ of Cyprus, with all the consequent developments, were accompanied by changes in memorial patterns, with less focus on regional or local structures, and more intense emphasis on stressing an ideology which created a more widely recognisable ‘pan-Cypriot’ myth-history, which was eventually related to Ptolemaic, and later to Roman imperial power and ideology.

This Project puts together a holistic, inter-disciplinary approach to the diachronic study of ancient Cypriot ritual and cult. While it aims at bringing together textual, archaeological, epigraphic, art-historical, and sociological/anthropological evidence, for the first time it incorporates ‘scientific’ spatial analysis and more agent-centred computational models into the study of ancient Cypriot sanctuaries and religion. By inserting the Cypriot sanctuary sites into a GIS environment, the relation of sacred landscapes to the politico-economic geography put forward above is tested at both the regional and the island-wide level.

The Project falls under the umbrella of a larger Research Network entitled Unlocking Sacred Landscapes.

For further information: http://www.ucy.ac.cy/unsala/

Dr Giorgos Papantoniou
Research Training Group 1878: Archaeology of Pre-Modern Economies
Rheinische Friedrich-Wilhelms-Universität Bonn
Institut für Archäologie und Kulturanthropologie
Abteilung für Klassische Archäologie
Lennéstr. 1
D-53113, Bonn
Germany

Job advertisement: Postdoctoral Research Associate (KCL)

May 3rd, 2017 by Gabriel Bodard

Posted on behalf of Will Wootton (to whom enquiries should be addressed):

Training in Action: From Documentation to Protection of Cultural Heritage in Libya and Tunisia

As part of this new project funded by the British Council’s Cultural Heritage Protection Fund, a Post-Doctoral Research Associate will be employed at King’s College London. The Research Associate will work on the project initially for 10 months, with the contract likely to be renewed for a further 12 months.

The deadline for applications is 10th May. For further information, see here:
https://www.hirewire.co.uk/HE/1061247/MS_JobDetails.aspx?JobID=76726
And here:
http://www.jobs.ac.uk/job/BAY043/post-doctoral-research-associate-on-training-in-action-from-documentation-to-protection-of-cultural-heritage-in-libya-and-tunisia/

We would be most grateful if you could circulate this email to interested parties as the deadline is imminent.

Dr Will Wootton
King’s College London, London WC2R 2LS.
Tel. +44 (0)207 848 1015
Fax +44 (2)07 848 2545

Open Epigraphic Data Unconference, London, May 15, 2017

May 2nd, 2017 by Gabriel Bodard

Open Epigraphic Data Unconference
10:00–17:00, May 15, 2017, Institute of Classical Studies

This one-day workshop, or “unconference,” brings together scholars, historians and data scientists with a shared interest in classical epigraphic data. The event involves no speakers or set programme of presentations, but rather a loose agenda, to be further refined in advance or on the day, which is to use, exploit, transform and “mash-up” with other sources the Open Data recently made available by the Epigraphic Database Heidelberg under a Creative Commons license. Both present and remote participants with programming and data-processing experience, and those with an interest in discussing and planning data manipulation and aggregation at a higher level, are welcomed.

Places at the event in London are limited; please contact <gabriel.bodard@sas.ac.uk> if you would like to register to attend.

There will also be a Google Hangout opened on the day, for participants who are not able to attend in person. We hope this event will only be the beginning of a longer conversation and project to exploit and disseminate this invaluable epigraphic dataset.

Historia Ludens: Conference on History and Gaming, 19 May 2017

April 28th, 2017 by Gabriel Bodard

Posted on behalf of Alexander von Lünen (to whom queries should be addressed):

University of Huddersfield
19 May 2017

This conference follows up on the workshop “Playing with History” held in November 2015 in Huddersfield. Gaming and History is gaining more and more traction, whether as a means to “gamify” history education or museum experiences, or through computer games as a prism into history, as in the popular History Respawned podcast series (http://www.historyrespawned.com/).

Besides discussing gamification or using (computer) games, we also want to explore gaming and playing in a broader historical-cultural sense. Can “playing” be used as a category for historical scholarship, maybe alongside other categories such as gender, space or class? The historian Johan Huizinga’s Homo Ludens from 1938 looked at play and its importance for human culture. Can historians make similar cases for more specific histories? In recent publications historians have pointed to the connection between cities and play. Simon Sleight, for example, has worked on the history of childhood and urban history, i.e. young people appropriating public urban spaces for their ludic activities and their struggle with authorities over this. Archaeologists, as another example, have shown that much of the urban infrastructure of Ancient Rome was dedicated to games, playing and gambling, as these had such a big role in Roman life.

The conference will thus discuss “gaming”, “playing” and “history” in broad terms. There will be academic papers in the morning and round-table sessions in the afternoon for networking and demos.

Tickets (£10) are available via the University of Huddersfield web shop. Please note: there are travel/conference bursaries for postgraduate students available on request; please contact Dr Alexander von Lünen (a.f.vonlunen@hud.ac.uk) for details.

Full details and programme at https://hudddighum.wordpress.com/2017/03/06/historia-ludens-conference-on-history-and-gaming-19-may-2017/

CFP: Cyborg Classics: An Interdisciplinary Symposium

April 25th, 2017 by Gabriel Bodard

Forwarded on behalf of Silvie Kilgallon (to whom enquiries should be addressed):

We are pleased to announce a one-day symposium, sponsored by BIRTHA (The Bristol Institute for Research in the Humanities and Arts) to be held at the University of Bristol, on Friday July 7th 2017.

Keynote speakers:

  • Dr Kate Devlin (Goldsmiths)
  • Dr Genevieve Liveley (Bristol)
  • Dr Rae Muhlstock (NYU)

The aim of the day is to bring together researchers from different disciplines – scholars in Archaeology & Anthropology, Classics, English, History, and Theology as well as in AI, Robotics, Ethics, and Medicine – to share their work on automata, robots, and cyborgs. Ultimately, the aim is an edited volume and the development of further collaborative research projects.

Indicative key provocations include:

  • To what extent do myths and narratives about automata, robots, and cyborgs raise questions that are relevant to contemporary debates concerning robot, cyborg, and AI product innovation?
  • To what extent, and how, can contemporary debate concerning robot, cyborg, and AI product innovation rescript ancient myths and narratives about automata, robots, and cyborgs?
  • Can interdisciplinary dialogues between the ‘soft’ humanities and the ‘hard’ sciences of robotics and AI be developed? And to what benefit?
  • How might figures such as Pandora, Pygmalion’s statue, and Talos help inform current polarized debates concerning robot, cyborg, and AI ethics?
  • What are the predominant narrative scripts and frames that shape the public understanding of robotics and AI? How could these be re-coded?

We invite scholars working across the range of Classics and Ancient History (including Classical Reception) and across the Humanities more widely to submit expressions of interest and/or a title and abstract (of no more than 250 words) to the symposium coordinator, Silvie Kilgallon (silvie.kilgallon@bristol.ac.uk). PhD students are warmly encouraged to contribute. The deadline for receipt of abstracts is May 31st, 2017.