Deep learning is increasingly being used for NLP applications in academia and industry. Many NLP prediction tasks using neural and other probabilistic methods involve assigning values to mutually dependent variables. For example, when designing a model to automatically analyze the structure of a sentence, document or conversation (e.g., parsing, semantic role labeling, discourse analysis or dialogue slot filling), it is crucial to model the correlations between labels. Many other NLP tasks, such as machine translation, textual entailment, information extraction and summarization, can also be modeled as structured prediction problems.

In order to tackle such problems, various structured prediction approaches have been proposed, and their effectiveness has been demonstrated. Studying structured prediction is interesting from both NLP and machine learning (ML) perspectives. From the NLP perspective, the syntax and semantics of natural language are clearly structured, and advances in this area will enable researchers to better understand the linguistic structure of data. From the ML perspective, the large amount of available text and graph/relational data, together with complex linguistic structures, poses challenges to the learning community. Designing expressive yet tractable models and studying efficient learning and inference algorithms thus become important issues.

This workshop follows the four previous successful editions in 2020, 2019, 2017 and 2016 on Structured Prediction for NLP, as well as the closely related ICML 2017 Workshop on Deep Structured Prediction. It is very timely, as there has been renewed interest in structured prediction among NLP researchers due to recent advances in methods that use continuous representations, learn with task-level supervision, or model latent linguistic structure.

Topics will include, but are not limited to, the following:

  • Deep learning for structured prediction in NLP
  • Multi-task learning for structured output tasks
  • Reinforcement learning and imitation learning for structured learning in NLP
  • Deep learning on graphs & relational data (graph neural networks)
  • Graph embedding methods for Knowledge Graphs
  • Learning structured representations (e.g., relations, graphs) from language data
  • Reasoning with structured data for NLP tasks
  • Latent structured variable models
  • Structured deep generative models
  • Integer linear programming and other modeling techniques
  • Approximate inference for structured prediction
  • Structured training for non-linear models
  • Structured prediction software
  • Structured prediction applications in NLP

Invited Speakers

  • Angela Fan
  • Wilker Aziz
  • Sebastian Riedel
  • Nicola De Cao
  • Siva Reddy
  • Albert Gu

Organizers

Program Committee

  • Manling Li, University of Illinois, Urbana-Champaign, USA
  • Sha Li, University of Illinois, Urbana-Champaign, USA
  • Julius Cheng, University of Cambridge, UK
  • Pietro Lesci, University of Cambridge, UK
  • Moy Yuan, University of Cambridge, UK
  • Zhijiang Guo, University of Cambridge, UK
  • Ignacio Iacobacci, Huawei Noah’s Ark Lab, UK
  • Philip John Gorinski, Huawei Noah’s Ark Lab, UK
  • Parag Jain, University of Edinburgh, UK
  • Vivek Srikumar, University of Utah, USA
  • Michail Korakakis, University of Cambridge, UK
  • Parisa Kordjamshidi, Michigan State University, USA
  • Tatsuya Hiraoka, Tokyo Institute of Technology, Japan
  • Naoaki Okazaki, Tokyo Institute of Technology, Japan
  • Youmi Ma, Tokyo Institute of Technology, Japan
  • Pedro Henrique Martins, Instituto Superior Técnico, Portugal
  • Yangfeng Ji, University of Virginia, USA
  • Zhen Han, Institut für Informatik, Germany
  • Guirong Fu, Bytedance
  • Patrick Fernandes, Carnegie Mellon University, USA
  • Yusuke Miyao, University of Tokyo, Japan
  • Daniel Daza, Vrije Universiteit Amsterdam, Netherlands
  • Marcos Vinicius Treviso, Instituto Superior Técnico, Portugal

Submissions

We invite submissions of the following kinds:

  • Research papers
  • Position papers
  • Tutorial/overview papers

Long papers should consist of eight pages of content and short papers of four pages, plus unlimited pages for the bibliography. Submissions must be in PDF format, follow the ACL 2022 templates, and be anonymized for review according to the ACL guidelines. Papers can be submitted as non-archival, so that their content can be reused for other venues; in that case, add “(NON-ARCHIVAL)” to the title of the submission. Non-archival papers will be linked from this webpage if their authors wish. Previously published work can also be submitted as non-archival in the same way, with the additional requirement that the original publication be stated on the first page.

Submission is electronic at https://openreview.net/group?id=aclweb.org/ACL/2022/Workshop/SPNLP

Reviewing will be double-blind; no author information should be included in the papers, and self-references should likewise be avoided. Each submission will be reviewed by at least two program committee members.

Important Dates

  • 28 February 2022: Due date for submissions made directly via OpenReview for review
  • 12 March 2022: Due date for submissions already reviewed via ARR
  • 31 March 2022: Notification of acceptance
  • 10 April 2022: Camera-ready papers due
  • 27 May 2022: Workshop date