wiki:WAC-XI


11th Web as Corpus Workshop (WAC-XI)

at Corpus Linguistics 2017, Birmingham
featuring the First CleanerEval Shared Task panel discussion

24 July 2017

Endorsed by the Special Interest Group of the ACL on Web as Corpus (SIGWAC)

Contact: wacxi2017@gmail.com

Organizers

Workshop description, call for papers, and details

Workshop description

For almost a decade, the ACL SIGWAC, and most notably the Web as Corpus (WAC) workshops, have served as a platform for researchers interested in the compilation, processing and use of web-derived corpora as well as computer-mediated communication. Past workshops were co-located with major conferences on corpus linguistics and/or computational linguistics (such as ACL, EACL, Corpus Linguistics, LREC, NAACL, WWW). The eleventh Web as Corpus workshop (WAC-XI) emphasises the linguistic aspects of web corpus research more than the technological aspects while keeping in mind that the two are inseparable.

The World Wide Web has become increasingly popular as a source of linguistic evidence, not only within the computational linguistics community, but also with theoretical linguists facing problems such as data sparseness or the lack of variation in traditional corpora of written language. Accordingly, web corpora continue to gain relevance, given their size and diversity in terms of genres and text types. In lexicography, web data have become a major and well-established resource, with dedicated research and environments such as the SketchEngine. In other areas of linguistics, the adoption rate of web corpora has been slower but steady. Furthermore, some areas of research dealing exclusively with web (or similar) data have emerged, such as the construction and exploitation of corpora based on short messages. Another example is the (manual or automatic) classification of web texts by genre, register, or – more generally speaking – text type, as well as topic area. Similarly, the areas of corpus evaluation and corpus comparison have been advanced greatly through the rise of web corpora, mostly because web corpora (especially larger ones in the region of several billion tokens) are often created by downloading texts from the web unselectively with respect to their text type or content. While the composition (or stratification) of such corpora cannot be determined before their construction, it is desirable at least to evaluate it afterwards. Also, comparing web corpora to corpora that have been compiled in a traditional way is key to determining the quality of web corpora with respect to a given research question.

Call for papers

The eleventh Web as Corpus workshop (WAC-XI) takes a (corpus) linguistic look at the state of the art in all these areas. More specifically, in linguistic publications presenting case studies based on web data, some authors explicitly discuss and/or defend the validity of web corpus data for a specific type of research question, while others simply take web corpora as a new or complementary source of data without discussing fundamental questions of data quality and the appropriateness of web data for specific research questions. We think it is vital to discuss such fundamental questions, and therefore ask researchers to present and discuss:

  • case studies in corpus or computational linguistics where web data have been used,
  • research specifically related to the validity of web data in corpus, computational, and theoretical linguistics,
  • research on the technical aspects of web corpus construction which have a strong influence on theoretical aspects of corpus design.

For example, presentations could address questions such as the following (either as part of a case study or in the form of primary research):

  • Are there substantial differences in theoretical inferences when web data are used instead of data from traditionally compiled corpora? If so: Why? Are they expected?
  • Do findings from traditionally compiled corpora and web corpora converge when compared with evidence from other sources (such as psycholinguistic experiments)? If not: Which type of data matches the external findings better?
  • Is it possible to analyse lectal variation with web corpora, given the frequent lack of relevant metadata?
  • How good is the quality of the (automatic) linguistic annotation of web data compared to traditionally compiled corpora? How does this affect empirical linguistic research with web corpora? What could corpus designers do to improve it?
  • Are there differences with regard to the dispersion of linguistic entities in web corpora compared to traditionally compiled corpora? If so: Why? Does it matter? How can we deal with it or even profit from it?
  • How do very large web corpora compare to smaller, more intentionally stratified web corpora created for a specific task? How can it be decided which type of corpus is better for a given research question?

Submission format

We call for anonymous extended abstracts of 1,000–1,500 words (excluding references, tables, and figures). Submissions must be in PDF format. Authors of accepted papers will receive minimal formatting instructions in due time for the publication of the abstracts on the WAC-XI website. There will be no proceedings volume, but a successful workshop might lead to a special issue or edited volume on web (and similar) data in linguistics (with a new round of peer reviewing), for which a separate call for (full) papers would be published after the workshop.

Submission website

Please submit exclusively via our EasyChair installation.

Important dates

  • 16 February 2017: First call for workshop papers
  • 13 March 2017: Second call for workshop papers
  • 16 April 2017: Abstract due date (23:59 GMT)
  • 5 June 2017: Notification of acceptance
  • 24 July 2017: Workshop day

First CleanerEval panel discussion

As part of the workshop and consistent with its general theme, we plan to organise a panel discussion as the first meeting of the CleanerEval shared task on combined paragraph and document quality detection for (web) documents. The CleanerEval shared task follows the successful CleanEval shared task organised by SIGWAC in 2006. While CleanEval focused specifically on boilerplate removal (the removal of automatically inserted and frequently repeated non-corpus material from web pages), CleanerEval goes beyond this basic task. Participating systems should be able to determine the linguistic quality of paragraphs and whole documents in an automatic fashion, such that corpus designers and/or users can decide whether to include them in their corpus or not. In the CleanerEval setting, boilerplate paragraphs are paragraphs with low quality, but there might be other, non-boilerplate paragraphs with low quality as well.

CleanerEval was proposed by the organisers of WAC-XI during the final discussion of WAC-X, where the proposal was met with great interest. The WAC-XI panel discussion is intended to serve as a platform for the development of the operationalisation of the notions of paragraph and document quality, the annotation guidelines, and the final schedule for the shared task. There can be no doubt that corpus linguists should define what counts as good corpus material and what does not; it would be misguided to treat this question as a purely technical one. The final meeting of the shared task is planned to be part of WAC-XII in 2018.
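To make the task more concrete: a participating system could, for instance, assign each paragraph a quality score based on surface cues and let a threshold decide inclusion. The following is a minimal illustrative sketch in Python, not a CleanerEval baseline or reference implementation; the features, stopword list, and thresholds are our own assumptions, loosely inspired by classic boilerplate detectors.

{{{#!python
# Minimal illustrative sketch of heuristic paragraph-quality scoring.
# NOT an official CleanerEval baseline; all features and thresholds
# here are illustrative assumptions only.
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "for", "on", "with", "as", "was", "at", "by"}

def quality_score(paragraph: str) -> float:
    """Return a rough quality score in [0, 1] for an English paragraph.

    Combines three surface cues often used in boilerplate detection:
    paragraph length, stopword density, and sentence-final punctuation.
    """
    tokens = re.findall(r"[\w']+", paragraph.lower())
    if not tokens:
        return 0.0
    # Cue 1: very short paragraphs (menus, captions) are suspect.
    length_cue = min(len(tokens) / 30.0, 1.0)
    # Cue 2: running text has a high proportion of function words.
    stopword_cue = sum(t in STOPWORDS for t in tokens) / len(tokens)
    # Cue 3: real prose tends to end in sentence punctuation.
    punct_cue = 1.0 if paragraph.rstrip().endswith((".", "!", "?")) else 0.3
    return (length_cue + stopword_cue + punct_cue) / 3.0

if __name__ == "__main__":
    print(quality_score("Home | About | Contact"))  # low: likely boilerplate
    print(quality_score("The corpus was compiled from web pages crawled "
                        "in 2016. It contains several billion tokens."))  # higher
}}}

An actual participating system would presumably replace such hand-tuned heuristics with a classifier trained and evaluated against the annotation guidelines to be developed at the panel.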
