Quantifying the impact of context on the quality of manual hate speech annotation. Natural Language Engineering

Sep 5, 2022 · 42m 2s
Description

The quality of annotations in manually annotated hate speech datasets is crucial for automatic hate speech detection. This contribution focuses on the positive effects of manually annotating online comments for hate speech within the context in which the comments occur. We quantify the impact of context availability by meticulously designing an experiment: Two annotation rounds are performed, one in-context and one out-of-context, on the same English YouTube data (more than 10,000 comments), by using the same annotation schema and platform, the same highly trained annotators, and quantifying annotation quality through inter-annotator agreement. Our results show that the presence of context has a significant positive impact on the quality of the manual annotations. This positive impact is more noticeable among replies than among comments, although the former is harder to consistently annotate overall. Previous research reporting that out-of-context annotations favor assigning non-hate-speech labels is also corroborated, showing further that this tendency is especially present among comments inciting violence, a highly relevant category for hate speech research and society overall. We believe that this work will improve future annotation campaigns even beyond hate speech and motivate further research on the highly relevant questions of data annotation methodology in natural language processing, especially in the light of the current expansion of its scope of application.

Nikola Ljubešić, Jožef Stefan Institute, Ljubljana, Slovenia; Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia

Igor Mozetič, Jožef Stefan Institute, Ljubljana, Slovenia

Petra Kralj Novak, Jožef Stefan Institute, Ljubljana, Slovenia; Central European University, Vienna, Austria


Corresponding author: Nikola Ljubešić. E-mail: nikola.ljubesic@ijs.si

This is an Open Access article, distributed under the terms of the Creative Commons Attribution license, which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

Voice by voicemaker.in

This was produced by Brandon Casturo

Ljubešić, N., Mozetič, I., & Kralj Novak, P. (2022). Quantifying the impact of context on the quality of manual hate speech annotation. Natural Language Engineering, 1-14. doi:10.1017/S1351324922000353

https://www.cambridge.org/core/journals/natural-language-engineering/article/quantifying-the-impact-of-context-on-the-quality-of-manual-hate-speech-annotation/B6E813E528CE094DBE489ABD3A047D8A

Tags: Hate speech
Author: Miranda Casturo
