The World Wide Web has become increasingly popular as a source of linguistic evidence, not only within the computational linguistics community, but also among theoretical linguists facing problems such as data sparseness or the lack of variation in traditional corpora of written language. Accordingly, web corpora continue to gain relevance, given their size and their diversity in terms of genres and text types. In lexicography, web data have become a major and well-established resource, with dedicated research data and environments such as the !SketchEngine. In other areas of linguistics, the adoption of web corpora has been slower but steady. Furthermore, some areas of research dealing exclusively with web (or similar) data have emerged, such as the construction and exploitation of corpora based on short messages. Another example is the (manual or automatic) classification of web texts by genre, register, or – more generally speaking – text type, as well as by topic area. Similarly, the areas of corpus evaluation and corpus comparison have been advanced greatly through the rise of web corpora, mostly because web corpora (especially larger ones in the region of several billion tokens) are often created by downloading texts from the web unselectively with respect to their text type or content. While the composition (or stratification) of such corpora cannot be determined before their construction, it is desirable to evaluate it at least afterwards. Also, comparing web corpora to corpora that have been compiled in a traditional way is key to determining the quality of web corpora with respect to a given research question.
The eleventh Web as Corpus workshop (WAC-XI) takes a (corpus) linguistic look at the state of the art in all these areas. More specifically, in linguistic publications presenting case studies based on web data, some authors explicitly discuss and/or defend the validity of web corpus data for a specific type of research question – while others simply take web corpora as a new or complementary source of data, without discussing fundamental questions of data quality and the appropriateness of web data for specific research questions. We think it is vital to discuss such fundamental questions, and we therefore ask researchers to present and discuss:

* case studies in corpus or computational linguistics where web data have been used
* research specifically related to the validity of web data in corpus, computational, and theoretical linguistics
* research on the technical aspects of web corpus construction which have a strong influence on theoretical aspects of corpus design

For example, presentations could address questions such as the following (either as part of a case study or in the form of primary research):

* Are there substantial differences in theoretical inferences when web data are used instead of data from traditionally compiled corpora? If so: Why? Are they expected?
* Do findings from traditionally compiled corpora and web corpora converge when compared with evidence from other sources (such as psycholinguistic experiments)? If not: Which type of data matches the external findings better?
* Is it possible to analyse lectal variation with web corpora, given the frequent lack of relevant metadata?
* How good is the quality of the (automatic) linguistic annotation of web data compared to traditionally compiled corpora? How does this affect empirical linguistic research with web corpora? What could corpus designers do to improve it?
* Are there differences with regard to the dispersion of linguistic entities in web corpora compared to traditionally compiled corpora? If so: Why? Does it matter? How can we deal with it or even profit from it?
* How do very large web corpora compare to smaller, more intentionally stratified web corpora created for a specific task? How can it be decided which type of corpus is better for a given research question?

== !CleanerEval first panel discussion == #cleanereval

As part of the workshop and consistent with its general theme, we plan to organise a panel discussion as the first meeting of the !CleanerEval shared task on combined paragraph and document quality detection for (web) documents. The !CleanerEval shared task follows the successful !CleanEval shared task organised by SIGWAC in 2006. While !CleanEval focused specifically on boilerplate removal (the removal of automatically inserted and frequently repeated non-corpus material from web pages), !CleanerEval goes beyond this basic task. Participating systems should be able to determine the linguistic quality of paragraphs and whole documents in an automatic fashion, such that corpus designers and/or users can decide whether to include them in their corpus or not. In the !CleanerEval setting, boilerplate paragraphs are paragraphs of low quality, but there might be other, non-boilerplate paragraphs of low quality as well. !CleanerEval was proposed by the organisers of WAC-XI during the final discussion of WAC-X, where the proposal was met with great interest. The WAC-XI panel discussion is intended to serve as a platform for developing the operationalisation of the notions of paragraph and document quality, the annotation guidelines, and the final schedule for the shared task. There can be no doubt that corpus linguists should define what counts as good corpus material and what does not; it would be misguided to treat this question as a purely technical one. The final meeting of the shared task is planned to be part of WAC-XII in 2018.
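To give a rough, purely illustrative idea of what such a quality-detection system might do, the following sketch scores a paragraph with simple surface heuristics (stopword density, length, and average token length), loosely in the spirit of jusText-style boilerplate detection. All features, thresholds, and the stopword list are hypothetical assumptions for illustration only; they are not part of the shared task definition, which the panel discussion is meant to operationalise.

```python
# Hypothetical sketch of heuristic paragraph quality scoring.
# Features and thresholds are illustrative assumptions, not the
# CleanerEval operationalisation (which is yet to be defined).
import re

# Tiny illustrative stopword list (a real system would use a full one).
STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "is", "for", "on"}

def paragraph_quality(text: str) -> float:
    """Return a crude quality score in [0, 1] for a paragraph."""
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    if not tokens:
        return 0.0
    stopword_ratio = sum(t in STOPWORDS for t in tokens) / len(tokens)
    avg_token_len = sum(map(len, tokens)) / len(tokens)
    # Very short paragraphs are suspect (navigation, headings, captions).
    length_score = min(len(tokens) / 25.0, 1.0)
    # Running prose contains many function words; boilerplate such as
    # "Home | Login | Share" contains few or none.
    prose_score = min(stopword_ratio / 0.3, 1.0)
    # Penalise unusual average token lengths (e.g. code, tag soup).
    shape_score = 1.0 if 3.0 <= avg_token_len <= 8.0 else 0.5
    return length_score * prose_score * shape_score

good = paragraph_quality("The web has become a popular source of linguistic "
                         "evidence for researchers in many areas of linguistics.")
bad = paragraph_quality("Home | Login | Share | Privacy Policy")
assert good > bad  # prose outscores navigation boilerplate
```

A real participating system would of course go far beyond such heuristics (e.g. supervised models trained on annotated paragraphs), and could assign low scores to non-boilerplate paragraphs as well, as envisaged above.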