
Version 4 (modified by Roland Schäfer, 9 years ago)


10th Web as Corpus Workshop (WAC-X) and EmpiriST Shared Task

We are happy to announce that WAC-X will be co-located with ACL 2016 in Berlin. More information and a call for papers will be published in due time. There will be a tightly packed one-day schedule with the main workshop, a shared task final workshop, and a panel discussion.

Details

Organizers

Program committee (preliminary)

The workshop organizers plus:

  • Adrien Barbaresi, ÖAW (AT)
  • Silvia Bernardini, University of Bologna (IT)
  • Douglas Biber, Northern Arizona University (US)
  • Felix Bildhauer, Institut für Deutsche Sprache Mannheim (DE)
  • Katrien Depuydt, INL, Leiden (NL)
  • Jesse de Does, INL, Leiden (NL)
  • Cédrick Fairon, UC Louvain (BE)
  • William H. Fletcher, U.S. Naval Academy (US)
  • Iztok Kosem, Trojina, Institute for Applied Slovene Studies (SI)
  • Simon Krek, Jožef Stefan Institute (SI)
  • Lothar Lemnitzer, BBAW (DE)
  • Nikola Ljubešić, Sveučilišta u Zagrebu (HR)
  • Siva Reddy, University of Edinburgh (UK)
  • Steffen Remus, TU Darmstadt (DE)
  • Pavel Rychly, Masaryk University (CZ)
  • Kevin Scannell, Saint Louis University (US)
  • Serge Sharoff, University of Leeds (UK)
  • Klaus Schulz, LMU München (DE)
  • Kay-Michael Würzner, BBAW (DE)
  • Torsten Zesch, University of Duisburg-Essen (DE)
  • Pierre Zweigenbaum, LIMSI (FR)

WAC-X main workshop

The World Wide Web has become increasingly popular as a source of linguistic data, not only within the NLP communities, but also with theoretical linguists facing problems of data sparseness or data diversity. Accordingly, web corpora continue to gain importance, given their size and their diversity in terms of genres/text types. The field is still new, though, and a number of issues in web corpus construction need much additional research, both fundamental and applied. These issues range from questions of corpus design (e.g., corpus composition assessment, sampling strategies and their relation to crawling algorithms, handling of duplicated material) to more technical aspects (e.g., efficient implementation of individual post-processing steps in document cleansing and linguistic annotation, or large-scale parallelization to achieve web-scale corpus construction). Similarly, the systematic evaluation of web corpora, for example in the form of task-based comparisons to traditional corpora, has only recently shifted into focus.

For almost a decade, the ACL SIGWAC (http://www.sigwac.org.uk/), and especially the highly successful Web as Corpus (WAC) workshops, have served as a platform for researchers interested in the compilation, processing, and application of web-derived corpora. Past workshops were co-located with major conferences on computational linguistics and/or corpus linguistics (such as EACL, NAACL, LREC, WWW, and Corpus Linguistics). As in previous years, the 10th Web as Corpus workshop (WAC-X) invites contributions pertaining to all aspects of web corpus creation, including but not restricted to:

  • data collection (both for large web corpora and smaller custom web corpora)
  • cleaning/handling of noise
  • duplicate removal/document filtering
  • linguistic post-processing (including non-standard data)
  • automatic generation of meta data (including register, genre, etc.)
  • corpus evaluation (quality of text and annotations, comparison to other corpora, etc.)
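To make the duplicate-removal topic above concrete, here is a minimal illustrative sketch of near-duplicate detection via word n-gram "shingling" and Jaccard similarity, one common approach in web corpus cleaning. The shingle size and the similarity threshold mentioned in the comment are arbitrary example values, not recommendations from the workshop organizers.

```python
# Near-duplicate detection sketch: represent each document as a set of
# word 5-grams ("shingles") and compare the sets with Jaccard similarity.
# Shingle size and threshold are illustrative choices only.

def shingles(text, n=5):
    """Return the set of word n-grams of a document."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of two sets: |a & b| / |a | b|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

doc1 = "the quick brown fox jumps over the lazy dog near the river bank"
doc2 = "the quick brown fox jumps over the lazy cat near the river bank"

sim = jaccard(shingles(doc1), shingles(doc2))
print(f"Jaccard similarity: {sim:.2f}")
# A pair above some chosen threshold (e.g. 0.8) would be flagged
# as near-duplicates; for web-scale data, MinHash sketches are
# typically used instead of exact set comparison.
```

At web scale, the exact set comparison shown here is usually replaced by locality-sensitive hashing (e.g., MinHash), but the underlying similarity notion is the same.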

Furthermore, aspects of usability and availability of web-derived corpora are highly relevant in the context of WAC-X:

  • development of interfaces
  • visualization techniques
  • tools for statistical analysis of very large (e.g., web-derived) corpora
  • long-term archiving
  • documentation and standardization
  • legal issues

Finally, reports on the use of web corpora in language technology and linguistics are welcome, for example:

  • information extraction and opinion mining

  • language modeling, distributional semantics
  • machine translation
  • linguistic studies of web-specific forms of communication
  • linguistic studies of rare phenomena
  • web-specific lexicography, grammaticography, and language documentation

EmpiriST 2015 shared task

The EmpiriST 2015 shared task aims to encourage the developers of NLP applications to adapt their tools and resources to the processing of German discourse in genres of computer-mediated communication (CMC), including both dialogical (chat, SMS, social networks, etc.) and monological (web pages, blogs, etc.) texts. Since there has been relatively little work in this area for German so far, the shared task focuses on tokenization and part-of-speech tagging as the core annotation steps required by virtually all NLP applications. While we have a particular interest in robust tools that can be applied to dialogical CMC and web corpora alike, participants may use different systems for the two subsets or submit results for one subset only.

A substantial number of teams from German-speaking countries have already expressed their interest in participating in EmpiriST 2015. Knowledge of German is not essential for participation, though, since sufficient amounts of manually annotated training data (at least 10,000 tokens) are provided and key documents are available in English.
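To illustrate why CMC tokenization is a distinct problem, the following is a naive regex-based sketch that keeps CMC-specific units (URLs, @-mentions, hashtags, ASCII emoticons, and asterisk-delimited action words) as single tokens, which standard newswire tokenizers typically split apart. This is not the EmpiriST reference tokenizer or any participant's system; the patterns are deliberately simplified examples.

```python
import re

# Illustrative CMC-aware tokenizer sketch. Alternatives are tried in
# order, so CMC-specific patterns take precedence over ordinary words.
TOKEN_RE = re.compile(r"""
      https?://\S+                # URLs
    | [@#]\w+                     # @-mentions and hashtags
    | [:;=][-o*']?[()\[\]dDpP/]   # common ASCII emoticons such as :-) or ;D
    | \*\w+\*                     # action words ("Inflektive") like *freu*
    | \w+(?:-\w+)*                # ordinary words, incl. hyphenated ones
    | \S                          # any remaining single character
""", re.VERBOSE)

def tokenize(text):
    """Return the list of tokens found by the CMC-aware pattern."""
    return TOKEN_RE.findall(text)

print(tokenize("Hallo @anna , das war super :-) *freu* #WACX"))
# → ['Hallo', '@anna', ',', 'das', 'war', 'super', ':-)', '*freu*', '#WACX']
```

A tokenizer trained on newspaper text would typically break ":-)" and "*freu*" into several tokens, which in turn derails downstream part-of-speech tagging; handling such units robustly is exactly what the shared task evaluates.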

The final workshop of EmpiriST 2015 will be co-located with WAC-X. It will include a detailed presentation of the task and results, a poster session with all participating systems, oral presentations of selected systems, and a plenary discussion about the challenges of CMC in general as well as German CMC genres in particular.

Panel discussion "Corpora, open science, and copyright reforms"

As part of the 10th Web as Corpus workshop (WAC-X), a panel discussion will be organized. Web corpus designers are probably those who are most affected by the issues and uncertainties of copyright legislation and intellectual property rights, especially in the EU. While in some countries, such as the U.S., a Fair Use doctrine allows the use of data for non-commercial research purposes, the situation in Europe is more problematic. For example, German copyright law ("Urheberrecht") requires that any re-use of a work which reaches a certain threshold of creativity be explicitly approved by the author. This poses numerous problems for any corpus creator, and it is completely infeasible for large web corpora containing texts written by millions of different authors. As a result, corpora are re-distributed in crippled form as sentence shuffles (e.g., COW and the Leipzig Corpora Collection), and it is not even clear whether there really is a reliable legal exemption for single sentences. In the famous Infopaq case, which originated in Denmark, the Court of Justice of the EU decided that even snippets of 11 words might be protected under EU copyright law (http://bit.ly/1GYTDjR).

This situation is highly undesirable. Large web corpora have been shown to be indispensable for many tasks in computational linguistics, in the documentation of standard and non-standard language, and in empirically oriented theoretical linguistics. Reports written by legal experts – such as the one recently commissioned by the German Research Council (http://bit.ly/1PG4Gq6) – only provide an interpretation of the current legal situation. Only active lobbying in favor of a reasonable copyright reform will eventually bring about the changes necessary for researchers to build corpus resources and share them freely for academic purposes.
Therefore, the goal of this panel discussion is to bring together corpus creators, active users of web corpora, and open science activists in order to share and discuss views on the copyright problem as a political rather than a legal problem. Ideally, a first draft of a joint declaration might come out of this discussion. With such a declaration, the (web) corpus community could make sure that its voice is heard, especially in the ongoing discussion about reforms of European copyright legislation.
