Salvatore Postiglione, Motif from the Narrative of the Decameron (Il Decamerone) by Giovanni Boccaccio

Storytelling Workshop

Co-located with NAACL 2018

June 5th or 6th, 2018. New Orleans, Louisiana

2 Feb 2018: Storytelling Challenge begins
2 March 2018: Long, Short, Demo papers due
2 April 2018: Notification of acceptance
16 April 2018: Camera-ready papers due
16 May 2018: Storytelling challenge ends
5-6 June 2018: Workshop!

Storytelling

Human storytelling has existed as far back as we can trace, predating writing. Humans have used stories for entertainment, education, and cultural preservation; to convey experiences, history, lessons, and morals; and to share the human experience.

Part of grounding artificial intelligence work in human experience can involve the generation, understanding, and sharing of stories. This workshop highlights the diverse work being done in storytelling and AI across different fields.

The Workshop

This one-day, multimodal, interdisciplinary workshop will bring together researchers and practitioners in NLP, Computer Vision, and storytelling. The focus will be on human storytelling: what storytelling is, its structure and components, and how it is expressed, connecting it to the state of the art in NLP and related ML/AI areas:
  1. What we can understand from stories (natural language understanding)
  2. What we can generate to create human-like stories (natural language generation)
  3. What we can recognize multimodally for story understanding and generation (e.g., with computer vision)
The workshop will consist of:
  1. Contributed talks and posters.
  2. A visual storytelling challenge.
  3. Invited talks given by researchers in NLP, Computer Vision, and Storytelling.

Call For Papers

We invite work involving human storytelling with respect to machine learning, natural language processing, computer vision, speech, and other ML/AI areas.
This spans a variety of research, including work on creating timelines, detecting content to be used in a story, generating long-form text, and related multimodal work.
Data input sources may include professional and social-media content.

We also encourage ideas about how to evaluate user experiences in terms of coherence, composition, story comprehensiveness, and other aspects related to the creation of stories.

Paper topics may include, but are not limited to:

Papers should follow the NAACL 2018 style guidelines and are due on Softconf by 2 March 2018.

Visual Storytelling Challenge

This challenge begins to scratch the surface of how well artificial intelligence can share in this cultural human experience.
Participants are encouraged to build AI systems that generate stories of their own, sharing the human experience that they see and begin to understand.
See the VIST dataset page for more about the dataset.
Participants may submit to two different tracks: the Internal track and the External track.
Submissions are evaluated on how well they generate human-like stories given a sequence of images as input.
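For concreteness, one common approach to this setting (a design choice on our part, not prescribed by the challenge) is to encode each image with a pretrained CNN and decode a story with a recurrent language model conditioned on the pooled image-sequence features. A minimal PyTorch sketch, where all module choices and dimensions are illustrative assumptions:

```python
# Minimal encoder-decoder sketch for visual storytelling (illustrative only).
# All hyperparameters and architecture choices are placeholder assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class StoryTeller(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(pretrained=True)                     # image encoder
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])   # drop classifier head
        self.img_proj = nn.Linear(512, hidden_dim)                 # CNN feature -> LSTM state
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, tokens):
        # images: (batch, 5, 3, 224, 224) -- a five-image sequence, as in VIST
        b, n = images.shape[:2]
        feats = self.encoder(images.flatten(0, 1)).flatten(1)      # (b*n, 512)
        ctx = self.img_proj(feats.view(b, n, -1).mean(1))          # pool over the sequence
        h0 = ctx.unsqueeze(0)
        c0 = torch.zeros_like(h0)
        hidden, _ = self.decoder(self.embed(tokens), (h0, c0))
        return self.out(hidden)                                    # (batch, T, vocab)

model = StoryTeller(vocab_size=10000)
logits = model(torch.randn(2, 5, 3, 224, 224),
               torch.randint(0, 10000, (2, 20)))
print(logits.shape)  # torch.Size([2, 20, 10000])
```

Mean-pooling the five image features is the simplest conditioning scheme; sequence-aware variants (e.g., feeding per-image features step by step) are a natural refinement.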

Dates

2 Feb 2018: Training set augmented with additional stories
16 May 2018: Submissions due on EvalAI. (You will need to create an account to view the challenge)
30 May 2018: Results announced

Submission Tracks

Internal Track
For an apples-to-apples comparison, all participants should submit to the Internal track. In this track, the only allowable training data is:
  1. Any of the VIST storytelling data (SIS, DII, and/or the non-annotated album images), available from the VIST dataset page (see the loading sketch below this list).
  2. Pre-training data from any version of the ImageNet ILSVRC Challenge (common in computer vision).
  3. Pre-training data from any version of the Penn Treebank (common in natural language processing).
If you wish to use any other sources of data/labels or pre-training, please submit to the External track.
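As an illustration of working with the SIS data, here is a sketch that groups annotations into ordered five-sentence stories. The JSON field names used (`annotations`, `story_id`, `text`, `photo_flickr_id`, `worker_arranged_photo_order`) and the example filename are our assumptions about the released files and should be checked against the actual data:

```python
# Sketch: assemble VIST SIS annotations into ordered (photo, sentence) stories.
# Field names are assumptions based on the released JSON; verify against the data.
import json
from collections import defaultdict

def load_stories(path):
    with open(path) as f:
        sis = json.load(f)
    stories = defaultdict(list)
    for group in sis["annotations"]:   # each entry wraps a single annotation dict
        ann = group[0]
        stories[ann["story_id"]].append(
            (int(ann["worker_arranged_photo_order"]),
             ann["photo_flickr_id"],
             ann["text"]))
    # sort each story's (photo, sentence) pairs by their position in the story
    return {sid: [(photo, text) for _, photo, text in sorted(items)]
            for sid, items in stories.items()}

# stories = load_stories("train.story-in-sequence.json")  # hypothetical filename
```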

External Track
Participants may use any data or method they wish during training (including humans in the loop), but all data must be publicly available or made publicly available. At test time, systems must be stand-alone (no human intervention). Possible datasets include data from ICCV/CVPR workshops, such as LSMDC, and other vision-and-language datasets, such as COCO and VQA.

Evaluation

Evaluation will have two parts:
  1. Automatic: on EvalAI, using the METEOR metric (a local scoring sketch follows this list).
  2. Human: a crowdsourced survey of the quality of the stories.
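As a rough local check before submitting, generated stories can be scored with an off-the-shelf METEOR implementation such as NLTK's; note that NLTK's scorer may differ from the challenge's official METEOR setup, so treat the numbers as indicative only:

```python
# Sketch: score a generated story sentence against a reference with NLTK's METEOR.
# NLTK's implementation may differ from the challenge's official scorer.
import nltk
from nltk.translate.meteor_score import meteor_score

nltk.download("wordnet", quiet=True)  # METEOR uses WordNet for synonym matching

reference = "we took the kids to the fair and they loved the rides".split()
hypothesis = "the kids enjoyed the rides at the fair".split()

print(meteor_score([reference], hypothesis))
```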

Submission Instructions

Please follow the instructions listed on the challenge webpage on EvalAI.

Organizers

Margaret Mitchell
Google Research
margarmitchell@gmail.com
Ishan Misra
Carnegie Mellon University
ishan@cmu.edu
Ting-Hao 'Kenneth' Huang
Carnegie Mellon University
tinghaoh@cs.cmu.edu
Frank Ferraro
University of Maryland, Baltimore County
ferraro@umbc.edu