Deep learning is increasingly used for NLP applications in academia and industry. Many NLP prediction tasks using neural and other probabilistic methods involve assigning values to mutually dependent variables. For example, when designing a model to automatically analyze the structure of a sentence, document or conversation (e.g., parsing, semantic role labeling, discourse analysis or dialogue slot filling), it is crucial to model the correlations between labels. Many other NLP tasks, such as machine translation, textual entailment, information extraction and summarization, can also be modeled as structured prediction problems.

To tackle such problems, various structured prediction approaches have been proposed, and their effectiveness has been demonstrated. Studying structured prediction is interesting from both NLP and machine learning (ML) perspectives. From the NLP perspective, the syntax and semantics of natural language are clearly structured, and advances in this area will enable researchers to better understand the linguistic structure of data. From the ML perspective, the large amount of available text and graph/relational data, together with complex linguistic structures, poses challenges to the learning community. Designing expressive yet tractable models and developing efficient learning and inference algorithms thus become important issues.

This workshop follows the four previous successful editions in 2020, 2019, 2017 and 2016 on Structured Prediction for NLP, as well as the closely related ICML 2017 Workshop on Deep Structured Prediction. It is very timely, as there has been renewed interest in structured prediction among NLP researchers due to recent advances in methods that use continuous representations, learn with task-level supervision, or model latent linguistic structure.

Topics will include, but are not limited to, the following:

  • Deep learning for structured prediction in NLP
  • Multi-task learning for structured output tasks
  • Reinforcement learning and imitation learning for structured learning in NLP
  • Deep learning on graphs & relational data (graph neural networks)
  • Graph embedding methods for Knowledge Graphs
  • Learning structured representations (e.g., relations, graphs) from language data
  • Reasoning with structured data for NLP tasks
  • Latent structured variable models
  • Structured deep generative models
  • Integer linear programming and other modeling techniques
  • Approximate inference for structured prediction
  • Structured training for non-linear models
  • Structured prediction software
  • Structured prediction applications in NLP

We invite submissions of the following kinds:

  • Research papers
  • Position papers
  • Tutorial/overview papers

Invited Speakers

  • Heng Ji, University of Illinois Urbana-Champaign
  • Rada Mihalcea, University of Michigan
  • Scott Wen-tau Yih, Facebook AI Research
  • Carolin Lawrence, NEC Labs Europe
  • Iryna Gurevych, Technical University of Darmstadt


Program Committee

  • Heng Ji, University of Illinois Urbana-Champaign, USA
  • Vivek Srikumar, University of Utah, USA
  • Roi Reichart, Technion - Israel Institute of Technology, Israel
  • Kevin Gimpel, TTI Chicago, USA
  • Ivan Titov, University of Edinburgh, UK
  • Wilker Aziz, University of Amsterdam, Netherlands
  • Hiko Schamoni, Heidelberg University, Germany
  • Arzoo Katiyar, Cornell University, USA
  • Tianze Shi, Cornell University, USA
  • Sabrina Mielke, Johns Hopkins University, USA
  • Pranava Madhyastha, Imperial College London, UK
  • Amr Sharaf, University of Maryland, USA
  • Tim Vieira, Johns Hopkins University, USA
  • Parag Jain, University of Edinburgh, UK
  • Sean Welleck, New York University, USA
  • Michail Korakakis, University of Cambridge, UK
  • Ran Zmigrod, University of Cambridge, UK
  • Chunchuan Lyu, University of Edinburgh, UK
  • Parisa Kordjamshidi, Michigan State University, USA


Submission Details

Long/short papers should consist of eight/four pages of content, plus unlimited pages for the bibliography. Submissions must be in PDF format, follow the ACL-IJCNLP 2021 templates, and be anonymized for review. Papers can be submitted as non-archival, so that their content can be reused for other venues; add “(NON-ARCHIVAL)” to the title of the submission. Non-archival papers will be linked from this webpage if their authors wish. Previously published work can also be submitted as non-archival in the same way, with the additional requirement of stating the original publication venue on the first page.

Submission is electronic and is managed by the START conference management system.

Reviewing will be double-blind; submissions must therefore not include author information, and self-references should be avoided as well. Each submission will be reviewed by at least two program committee members.

Important Dates

  • April 26, 2021: Workshop Paper Due Date
  • May 28, 2021: Notification of Acceptance
  • June 7, 2021: Camera-ready papers due
  • August 5-6, 2021: Workshop Dates