I develop computational models of natural language learning, understanding and generation in people and machines, and my research focuses on basic scientific problems related to these models. I am especially interested in modeling the rich diversity of linguistic phenomena across the world’s languages.
I am a Reader in the Institute for Language, Cognition, and Computation in the School of Informatics at the University of Edinburgh. My research group is part of the larger Edinburgh natural language processing group, and we collaborate with many people in Edinburgh and more widely. I am the co-director of the Centre for Doctoral Training in Natural Language Processing, which welcomed its first students in September 2019, and the outgoing co-director of the Centre for Doctoral Training in Data Science.
In September 2019, Arabella Jane Sinclair successfully defended her PhD thesis on Modelling Speaker Adaptation in Second Language Learner Dialogue. Arabella’s thesis shows that students and tutors adapt to each other in conversation across many different dimensions, and that the nature of this adaptation depends on the student’s ability. She is starting a postdoc at the University of Amsterdam.
In September 2019, Clara Vania successfully defended her PhD thesis, On Understanding Character-level Models for Representing Morphology. Clara’s thesis carefully teases apart cases in which neural character-level models successfully mimic the relationship between form and function in morphology, and cases where they don’t, across a variety of typologically different languages. She is starting a postdoc at New York University.
In September 2019, the Centre for Doctoral Training in Natural Language Processing welcomed its first students. The centre, directed by my colleague Mirella Lapata, is a collaboration between informatics, linguistics, psychology, and design. It offers an innovative PhD programme that integrates research and teaching across these disciplines, and is supported in part by a multi-million pound training grant from the UK government. I am the co-director.
In April 2019, Nikolay Bogoychev successfully defended his PhD thesis on Fast Machine Translation on Parallel and Massively Parallel Hardware. Nick’s thesis shows how careful thinking about memory accesses yields simple and very effective algorithms for MT. He is now a postdoc at the University of Edinburgh.
In January 2019, Sorcha Gilroy successfully defended her PhD thesis on Probabilistic Graph Formalisms for Meaning Representations. During her time as a student, she received an outstanding paper award at NAACL 2018 and was a finalist in the University of Edinburgh’s 3-minute thesis competition, along with many other accomplishments. She is now a data scientist at Peak.ai.
Recent Research Highlights
- Semantic graph parsing with recurrent neural network DAG grammars
Federico Fancellu, Sorcha Gilroy, Adam Lopez, and Mirella Lapata. In Proceedings of EMNLP. 2019.
- A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages
Clara Vania, Yova Kementchedjhieva, Anders Søgaard, and Adam Lopez. In Proceedings of EMNLP. 2019.
- The problem with probabilistic DAG automata for semantic graphs
Ieva Vasiljeva, Sorcha Gilroy, and Adam Lopez. In Proceedings of NAACL-HLT. 2019.
- Understanding learning dynamics of language models with SVCCA
Naomi Saphra and Adam Lopez. In Proceedings of NAACL-HLT. 2019.
- Pre-training on high-resource speech recognition improves low-resource speech-to-text translation
Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. In Proceedings of NAACL-HLT. 2019.
- What do character-level models learn about morphology? The case of dependency parsing
Clara Vania, Andreas Grivas, and Adam Lopez. In Proceedings of EMNLP. 2018.
- Indicatements that character language models learn English morpho-syntactic units and regularities
Yova Kementchedjhieva and Adam Lopez. In Proceedings of the Workshop on analyzing and interpreting neural networks for NLP. 2018.