Poster presentation of the v-trel vocabulary trainer @ the RANLP conference
by Verena Lyding
The 2019 edition of the RANLP conference was organized in beautiful Varna, Bulgaria. Situated on the coast of the Black Sea, the city was still buzzing with late-summer tourists, while on the first floor of the “Cherno More” (translation: Black Sea) hotel, a record number of participants listened to over 60 presentations in three parallel tracks, covering the full range of NLP subjects from part-of-speech tagging and word sense disambiguation to word embeddings, including a small track on NLP for educational applications. In addition, five poster sessions provided room for discussing all kinds of recent research within the NLP community, among them the v-trel vocabulary trainer.
The work on this early-stage but functional vocabulary trainer application started during the 1st Crowdfest of enetCollect, organized in January 2019 in Brussels, Belgium. Within an interdisciplinary team of 8 people, we built a generic architecture implementing the implicit crowdsourcing approach and created a first vocabulary trainer application with two user interfaces to operate it. Exercises for training semantic word relations are automatically generated from the commonsense ontology ConceptNet and are presented to learners via a Telegram bot and a Bootstrap-based web interface. In the background, the crowdsourcing architecture takes care of evaluating the learners’ input against the knowledge encoded in ConceptNet, as well as of generating new candidate word relations for extending ConceptNet. Since then, we have continuously consolidated and extended this work, involving more enetCollect members, and managed to prepare a first experiment to evaluate the validity of the proposed approach and its use from an educational perspective. All this work made up the content of our poster “v-trel: Vocabulary Trainer for Tracing Word Relations – An Implicit Crowdsourcing Approach” [full paper].
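To give a rough idea of the implicit crowdsourcing loop described above, here is a minimal sketch in Python. It is not the actual v-trel code: the relation triples, function names, and the two-way "confirmed/candidate" outcome are illustrative assumptions, standing in for real ConceptNet edges and the trainer's evaluation logic.

```python
# Hypothetical sketch of the v-trel exercise loop, using a few
# hard-coded ConceptNet-style triples (start, relation, end)
# instead of querying ConceptNet itself.

EDGES = [
    ("dog", "IsA", "animal"),
    ("cat", "IsA", "animal"),
    ("wheel", "PartOf", "car"),
    ("engine", "PartOf", "car"),
]

def make_exercise(concept, relation, edges):
    """Build a prompt plus the set of answers the ontology already knows."""
    accepted = {start for start, rel, end in edges
                if rel == relation and end == concept}
    prompt = f"Name something that stands in the '{relation}' relation to '{concept}'."
    return prompt, accepted

def check_answer(answer, accepted):
    """Known answers confirm an existing edge; unknown ones become
    candidate word relations for extending the ontology."""
    if answer.lower() in accepted:
        return "confirmed"
    return "candidate"

prompt, accepted = make_exercise("animal", "IsA", EDGES)
print(prompt)
print(check_answer("dog", accepted))    # confirmed: edge already in the data
print(check_answer("horse", accepted))  # candidate: possible new edge
```

In the real system, the prompt would be delivered through the Telegram bot or the web interface, and "candidate" answers would be aggregated across learners before being proposed as extensions to ConceptNet.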
At the well-populated poster session on the ground floor of the “Cherno More”, the presentation of our vocabulary trainer was a bit of an exotic fellow: crowdsourcing language resources by relying on the brains of learners feels somewhat like the inversion of neural networks and deep learning built on language resources. However, the exotic fellow was well received, and we collected a number of curious questions, encouraging remarks, and pointers and suggestions on how to continue on our path!
In fact, the interested reader is invited to follow up on an extended user study on the updated version of the vocabulary trainer, soon to be presented and published at LREC 2020 under the title “Using Crowdsourced Exercises for Vocabulary Training to Expand ConceptNet”.