I develop computational models of natural language learning, understanding and generation in people and machines, and my research focuses on basic scientific problems related to these models. I am especially interested in modeling the rich diversity of linguistic phenomena across the world’s languages.
I am a Reader in the Institute for Language, Cognition, and Computation in the School of Informatics at the University of Edinburgh. My research group is part of the larger Edinburgh natural language processing group, and we collaborate with many people in Edinburgh and more widely. I will be co-director of the new Centre for Doctoral Training in Natural Language Processing, which will welcome its first students in September 2019. Currently, I am co-director of the Centre for Doctoral Training in Data Science.
In April 2019, Nikolay Bogoychev successfully defended his PhD thesis on Fast Machine Translation on Parallel and Massively Parallel Hardware. Nick’s thesis shows how careful thinking about memory accesses yields simple and very effective algorithms for MT.
In February 2019, the UK government announced its multi-million pound investment in a new Centre for Doctoral Training in Natural Language Processing in Edinburgh, directed by my colleague Mirella Lapata. The centre is a collaboration between informatics, linguistics, psychology, and design. It will offer an innovative new PhD programme that integrates research and teaching across these disciplines. I will be co-director.
In January 2019, Sorcha Gilroy successfully defended her PhD thesis on Probabilistic Graph Formalisms for Meaning Representations. During her time as a student, she received an outstanding paper award at NAACL 2018 and was a finalist in the University of Edinburgh's 3-minute thesis competition, along with many other accomplishments.
Current Research Highlights
- The problem with probabilistic DAG automata for semantic graphs
Ieva Vasiljeva, Sorcha Gilroy, and Adam Lopez. In Proceedings of NAACL-HLT. 2019.
- Understanding learning dynamics of language models with SVCCA
Naomi Saphra and Adam Lopez. In Proceedings of NAACL-HLT. 2019.
- Pre-training on high-resource speech recognition improves low-resource speech-to-text translation
Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. In Proceedings of NAACL-HLT. 2019.
- What do character-level models learn about morphology? The case of dependency parsing
Clara Vania, Andreas Grivas, and Adam Lopez. In Proceedings of EMNLP. 2018.
- Indicatements that character language models learn English morpho-syntactic units and regularities
Yova Kementchedjhieva and Adam Lopez. In Proceedings of the Workshop on Analyzing and Interpreting Neural Networks for NLP. 2018.