New Entry

In the search box above you can use Lucene query syntax, e.g. wildcards (`te?t`, `test*`), fuzzy matching (`test~`), and the boolean operators AND, OR, and NOT.

This is a test app built on eXist-db, a native XML database which uses XQuery. It combines different data sources, with different methods, to bring together useful resources for an epigraphic corpus.

Preliminary tweaks to the data included:

  • adding an xml:id to the text element to speed up retrieval of items in eXist.
  • note that there are no Pleiades IDs in the XML, but there are Trismegistos Geo IDs!
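The xml:id tweak above can be sketched as follows. This is a minimal illustration using Python's ElementTree (the actual tweak was presumably done in XQuery or XSLT inside eXist, and the sample document and ID here are hypothetical):

```python
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"
# xml:id lives in the predeclared XML namespace
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"
ET.register_namespace("", TEI_NS)

def add_text_id(doc: str, new_id: str) -> str:
    """Add an @xml:id to tei:text so eXist can retrieve the item quickly."""
    root = ET.fromstring(doc)
    text = root.find(f"{{{TEI_NS}}}text")
    if text is not None and XML_ID not in text.attrib:
        text.set(XML_ID, new_id)
    return ET.tostring(root, encoding="unicode")

sample = '<TEI xmlns="http://www.tei-c.org/ns/1.0"><text><body/></text></TEI>'
print(add_text_id(sample, "HD000001"))
```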

The features:

  • In the list view you can select an item. Each item can be edited with the usual operations (create, update, delete).
  • The editor that updates files reproduces, in simple XSLT, part of the Leiden+ logic and conventions so you can enter new data or update existing data. After performing the changes it validates the data against the tei-epidoc.rng schema; the plan is to have it validate before it makes the real changes.
  • The search simply searches in a limited number of elements; it is not a full-text index. Range indexes are also configured to speed up the queries.
  • You can create a new entry with the Leiden+ editor and save it. It will first be validated, and if it is not valid you are pointed to the problems. I did not yet have time to add the vocabularies and update the editor here.
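In eXist, the Lucene and range indexes mentioned above are configured per collection in a collection.xconf file. A minimal sketch (the qnames here are illustrative, not the app's actual configuration) might look like:

```xml
<collection xmlns="http://exist-db.org/collection-config/1.0">
    <index xmlns:tei="http://www.tei-c.org/ns/1.0">
        <!-- Lucene index on the elements the simple search targets -->
        <lucene>
            <text qname="tei:ab"/>
            <text qname="tei:title"/>
        </lucene>
        <!-- range index to speed up lookups by identifier -->
        <range>
            <create qname="tei:idno" type="xs:string"/>
        </range>
    </index>
</collection>
```

After changing the configuration, the collection has to be reindexed for the new indexes to take effect.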

Once you view an item you will find, in admittedly ugly tables, a first section with metadata, the text, some additional information on persons, and a map:

  • The text exploits some of the parameters of the EpiDoc Stylesheets. You can change the desired value, hit "change", and see the difference.
  • The IDs of corresponding inscriptions are pulled from the EAGLE IDs API here in Hamburg, using Trismegistos data; this will soon be moved to Trismegistos itself.
  • The EDH ID is instead used to query the EDH API and get the information about persons, which is printed below the text.
  • For each element with a @ref in the XML files you will find the name of the element and a link to the value, e.g. a link to the EAGLE vocabularies.
  • If the value is a TM Geo ID, the ID is used to query the Wikidata SPARQL endpoint and retrieve coordinates and the corresponding Pleiades ID (where present). The same logic could be used for VIAF, GeoNames, etc. The IDs were uploaded last year, and an attempt to align the unmatched Pleiades and Trismegistos IDs was also made in 2015/16. This task is done via an HTTP request directly in the XQuery powering the app.
  • The Pleiades ID thus retrieved (which could certainly be retrieved in other ways) is then used in JavaScript to query Pelagios and print the map below (taken from the "hello world" example in the Pelagios repository).
  • At http://betamasaheft.aai.uni-hamburg.de/api/OEDUc/places/all and http://betamasaheft.aai.uni-hamburg.de/api/OEDUc/places/all/void, two RESTXQ functions provide the TTL files (the places annotations) for Pelagios, at the moment only for the first 20 entries. See rest.xql.
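The TM Geo ID lookup described above can be sketched roughly as follows. This is a Python illustration of the same logic (the app does it via an HTTP request in XQuery); I assume the Wikidata properties P1958 (Trismegistos Geo ID), P625 (coordinate location), and P1584 (Pleiades ID) are the ones involved:

```python
import json
import urllib.parse
import urllib.request

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

def build_query(tm_geo_id: str) -> str:
    """SPARQL: find the place carrying this Trismegistos Geo ID (P1958),
    plus its coordinates (P625) and Pleiades ID (P1584), if present."""
    return f"""
    SELECT ?place ?coords ?pleiades WHERE {{
      ?place wdt:P1958 "{tm_geo_id}" .
      OPTIONAL {{ ?place wdt:P625 ?coords . }}
      OPTIONAL {{ ?place wdt:P1584 ?pleiades . }}
    }}"""

def lookup(tm_geo_id: str) -> dict:
    """Run the query against the Wikidata endpoint (requires network)."""
    url = WIKIDATA_SPARQL + "?" + urllib.parse.urlencode(
        {"query": build_query(tm_geo_id), "format": "json"})
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Example (network call, so left commented out):
# print(lookup("12345"))
print(build_query("12345"))
```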

Besides making it a bit nicer, I think it would be useful if this also had the following features, which I did not manage to implement:

  • validate before submitting
  • add more support for parameters in the EpiDoc example xslt (e.g. for Zotero bibliography contained in div[@type='bibliography'])
  • improve the up-conversion and the editor with more, and more precise, matchings
  • provide functionality to use xpath to search the data
  • add advanced search capabilities to filter results by id, content provider, etc.
  • add images support
  • include all EAGLE data (currently only data from the EDH dumps is in)
  • include query to the EAGLE media wiki of translations (api currently unavailable)
  • show related items based on any of the values
  • include in the editor the possibility to tag named entities
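For the XPath search item in the list above, one option would be to lean on eXist's built-in REST interface, which accepts an XQuery/XPath expression in a `_query` URL parameter. A rough sketch (the host and collection path here are hypothetical):

```python
import urllib.parse

# Hypothetical eXist collection holding the corpus
BASE = "http://localhost:8080/exist/rest/db/inscriptions"

def xpath_search_url(xpath: str, howmany: int = 10) -> str:
    """Build a GET URL that runs an XPath query via eXist's REST interface.

    _query holds the expression; _howmany limits the number of hits returned.
    """
    params = urllib.parse.urlencode({"_query": xpath, "_howmany": howmany})
    return f"{BASE}?{params}"

print(xpath_search_url("//tei:idno[. = 'HD000001']"))
```

The same interface also supports `_start` for paging through results, so a simple search form could pass the user's expression straight through (with some sanitisation).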