CorpusReader

Summary

CorpusReader (CR) is a tool for extracting subcorpora, KWIC concordances and quantitative information from arbitrarily large corpora in the TEI vocabulary. It aims to provide ways of processing corpora containing milestone-based annotation, and it provides a mechanism for merging several XML documents together.

Background

CorpusReader was developed for quantitative corpus linguistics. Its original goal was to provide a way of extracting quantitative information (such as co-occurrence matrices) using all the information expressed in the XML infoset.

As a quantitative corpus linguistics tool, CorpusReader has no linguistic or statistical skills of its own... but it provides help with:

  • importing the outputs of existing linguistic tools (taggers, parsers, etc.) into a corpus,
  • exporting quantitative data in the formats of statistical tools (towards Matlab, R and DTM, for instance).

The rationale is that existing linguistic and statistical tools should be reused and made easy to use in the context of a TEI corpus. CorpusReader thus tries to be a bridge between linguistic and statistical tools for creating and exploring empirically complex data.

Technical features

CR is written in Java and works with Java 1.4 and later. It relies on numerous high-quality external open source libraries.

It runs at the command line.

It is released under the BSD licence.

There is a web site (see the External Links at the bottom of the page).

Properties

Functions are filters

The program relies on a "streaming API": the document is processed as a stream rather than as a tree. The "functions" of the program are implemented as "filters" applied to this stream. The program is mainly a collection of filters plus a mechanism for plugging the filters into a pipeline.

Each filter is specialized in one precise task and takes few arguments, which makes for modularity and reusability. While each filter performs a simple task, a pipeline of filters may achieve complex ones.

A query document looks like:

<query>
  <header>
    <name></name>
    <date></date>
    <desc></desc>
  </header>
 
  <corpus inURI="corpus.xml"
          outURI="sample-manuel-1.out"
          />
  <filterList>
    <filter name="myFilterName" javaClass="java.class.qualified.Name">
      <args>
        <!--  Argument subtree, passed to the filter if any, after
              validation if a schema is known for this filter.
        -->
      </args>
    </filter>
    <!--  etc.: as many filters as needed  -->
  </filterList>
</query>

In some cases, filters can communicate directly with each other (in addition to communicating through the stream of events).
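
Under the hood this corresponds to the standard SAX notion of chained filters. The sketch below is an illustration of that general wiring in plain JAXP, with pass-through XMLFilterImpl instances standing in for real filters; it is not CR's actual internals. Each stage takes the previous one as its event source, and parsing the last stage pulls the document through the whole chain.

import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.XMLFilterImpl;

public class PipelineSketch {
    public static void main(String[] args) throws Exception {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        factory.setNamespaceAware(true);
        XMLReader parser = factory.newSAXParser().getXMLReader();

        // Stand-ins for real filters; XMLFilterImpl forwards events unchanged.
        XMLFilterImpl first = new XMLFilterImpl();
        XMLFilterImpl second = new XMLFilterImpl();
        first.setParent(parser);   // first filter reads events from the parser
        second.setParent(first);   // second filter reads events from the first

        // Parsing the last stage drives the whole pipeline.
        second.parse(new InputSource("corpus.xml"));
    }
}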

Using high-level languages on large documents (XPath, XSLT, XQuery)

High-level languages (XPath, XSLT and XQuery) are made available: the stream of XML events is buffered, transformed or queried, and thrown back to the next filter in the pipeline as a stream of XML events.
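
As an illustration of the general technique (a sketch using the standard JAXP API, not necessarily how CR implements it), an XSLT transformation can itself be wrapped as a SAX filter: the returned filter buffers the incoming events, applies the stylesheet, and re-emits the result downstream as SAX events.

import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXTransformerFactory;
import javax.xml.transform.stream.StreamSource;
import org.xml.sax.XMLFilter;

public class XsltFilterSketch {
    // Wraps a stylesheet as a SAX filter that can be chained like any other.
    public static XMLFilter xsltAsFilter(String stylesheetPath) throws Exception {
        TransformerFactory tf = TransformerFactory.newInstance();
        if (!tf.getFeature(SAXTransformerFactory.FEATURE_XMLFILTER)) {
            throw new UnsupportedOperationException("no XMLFilter support");
        }
        SAXTransformerFactory stf = (SAXTransformerFactory) tf;
        return stf.newXMLFilter(new StreamSource(stylesheetPath));
    }
}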

When the corpus does not fit in memory, a mechanism allows the subtrees to be buffered and transformed successively, one by one, as separate documents. The split element in the query document divides the stream into sub-documents. Each filter inside a split element sees the corpus as several documents rooted at the elements selected by split/@localName:

<query>

  <header>
    <name></name>
    <date></date>
    <desc></desc>
  </header>

  <corpus inURI="path/to/corpus" outURI="path/to/output"></corpus>

  <filterList>
    <split localName="TEI">
      <filterList>
        <filter name="transform_my_div" javaClass="tei.cr.filters.XSLT">
          <args>
            <stylesheet URI="path/to/stylesheet"></stylesheet>
          </args>
        </filter>
      </filterList>
    </split>
  </filterList>
</query>

There is also a way of addressing elements in the stream through an XPath expression evaluated against each element, one at a time, as if it were a stand-alone document. For instance, the query document above could be written as:

<query>

  <header>
    <name></name>
    <date></date>
    <desc></desc>
  </header>

  <corpus inURI="path/to/corpus" outURI="path/to/output"></corpus>

  <filterList>
    <split elxpath="*[namespace-uri()='http://www.tei-c.org/ns/1.0' 
                      and local-name()='div']">
      <filterList>
        <filter name="transform_my_div" javaClass="tei.cr.filters.XSLT">
          <args>
            <stylesheet URI="path/to/stylesheet"></stylesheet>
          </args>
        </filter>
      </filterList>
    </split>
  </filterList>
</query>

Several filters allow XPath syntax for addressing nodes.

Merging documents

CR contains a mechanism for merging an external document into an already annotated corpus without breaking well-formedness. This is useful for reusing the outputs of existing linguistic annotation tools. Documentation exists (in French, and already somewhat dated).

Dealing with intersecting hierarchies

[TODO]

Using a low-level API

I tried to make the program stand between a "tool" and an "API": it may be seen as a framework facilitating the use of a low-level API. Thus, there are different ways of using it:

  • it provides ready-to-use functions for creating KWIC concordances, extracting subcorpora, computing co-occurrence matrices, merging markup, etc.
  • it may be used more generally as a way of applying XSLT / XQuery to big corpora, whatever the vocabulary.
  • it may be used for plugging in arbitrary SAX filters, reducing the complexity of SAX by letting you write only the SAX handler code while the program manages the parser, the pipeline, and the serialisation of the pipeline's output back to disk. Any class implementing the XMLFilter interface may be plugged into the pipeline by giving the qualified name of the Java class in the query document (the class must be on the CLASSPATH), and it can interact with the already existing filters (see the sketch after this list).
  • it may be used for prototyping Java code by embedding a SAX filter definition in the pipeline; see http://panini.u-paris10.fr/~sloiseau/CR/filtres/Script.html or http://panini.u-paris10.fr/~sloiseau/CR/exemples.html#embeddedJava
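
As a concrete example of the third point, the class below is a complete SAX filter that counts TEI <w> elements while passing every event through unchanged. The class and package names are made up for the illustration; a filter like this could be referenced from a query document via javaClass="my.pkg.WordCountFilter".

import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.XMLFilterImpl;

public class WordCountFilter extends XMLFilterImpl {
    private static final String TEI_NS = "http://www.tei-c.org/ns/1.0";
    private int count = 0;

    public void startElement(String uri, String localName,
                             String qName, Attributes atts)
            throws SAXException {
        if (TEI_NS.equals(uri) && "w".equals(localName)) {
            count++;
        }
        // Forward the event unchanged to the next stage of the pipeline.
        super.startElement(uri, localName, qName, atts);
    }

    public void endDocument() throws SAXException {
        System.err.println("w elements seen: " + count);
        super.endDocument();
    }
}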

The goal was to make the use of a low-level API easy. "CorpusReader" is named after "XMLReader", the interface that parses a document in the Java SAX API. It is intended to be a "layer" on top of SAX for the TEI vocabulary.

Intended for documents in the TEI scheme

The program tries to rely on the TEI scheme: some structural properties of TEI documents are sometimes assumed, but I try not to make it rely on a specific TEI customisation. (I would like to develop a mechanism for using the "TEI customization" document produced by Roma to override the default vocabulary.)

External Links
