I develop computational models of natural language learning, understanding and generation in people and machines, and my research focuses on basic scientific problems related to these models. I am especially interested in modeling the rich diversity of linguistic phenomena across the world’s languages.
I am a Reader in the Institute for Language, Cognition and Computation in the School of Informatics at the University of Edinburgh. My research group is part of the larger Edinburgh natural language processing group, and we collaborate with many people in Edinburgh and beyond. I am also co-director of the Centre for Doctoral Training in Natural Language Processing.
- In 2019, five of my PhD students successfully defended their theses:
- Sameer Bansal, whose thesis was on Low-Resource Speech Translation.
- Arabella Jane Sinclair, whose thesis was on Modelling Speaker Adaptation in Second Language Learner Dialogue. She is now a postdoctoral researcher at the University of Amsterdam.
- Clara Vania, whose thesis was on Understanding Character-level Models for Representing Morphology. She is now a postdoctoral researcher at New York University.
- Nikolay Bogoychev, whose thesis was on Fast Machine Translation on Parallel and Massively Parallel Hardware. He is now a postdoctoral researcher at the University of Edinburgh.
- Sorcha Gilroy, whose thesis was on Probabilistic Graph Formalisms for Meaning Representations. She is now a data scientist at Peak.ai.
- In September 2019, the Centre for Doctoral Training in Natural Language Processing welcomed its first students. The centre, directed by my colleague Mirella Lapata, is a collaboration between informatics, linguistics, psychology, and design. It offers an innovative PhD programme that integrates research and teaching across these disciplines, and is supported in part by a multi-million-pound training grant from the UK government. I serve as its co-director.
Recent Research Highlights
- Semantic graph parsing with recurrent neural network DAG grammars
Federico Fancellu, Sorcha Gilroy, Adam Lopez, and Mirella Lapata. In Proceedings of EMNLP. 2019.
- A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages
Clara Vania, Yova Kementchedjhieva, Anders Søgaard, and Adam Lopez. In Proceedings of EMNLP. 2019.
- The problem with probabilistic DAG automata for semantic graphs
Ieva Vasiljeva, Sorcha Gilroy, and Adam Lopez. In Proceedings of NAACL-HLT. 2019.
- Understanding learning dynamics of language models with SVCCA
Naomi Saphra and Adam Lopez. In Proceedings of NAACL-HLT. 2019.
- Pre-training on high-resource speech recognition improves low-resource speech-to-text translation
Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. In Proceedings of NAACL-HLT. 2019.
- What do character-level models learn about morphology? The case of dependency parsing
Clara Vania, Andreas Grivas, and Adam Lopez. In Proceedings of EMNLP. 2018.
- Indicatements that character language models learn English morpho-syntactic units and regularities
Yova Kementchedjhieva and Adam Lopez. In Proceedings of the Workshop on analyzing and interpreting neural networks for NLP. 2018.