Tutorial: Describing images in natural language

Sunday 27 April, morning

Presenter: Julia Hockenmaier

Tutorial contents

The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. The purpose of this tutorial is to give researchers in natural language processing an overview of the issues involved in automatic image description, and to introduce them to vision tools and ideas they can use for this purpose.

Outline

Part 1: Image description (1.5 hours)

This part will introduce the audience to the task of sentence-based image description.

  • What does it mean to describe an image?
  • What is the state of the art in image description?
  • Evaluation of image description systems
  • Ranking-based image description: a proposal for a shared task

Part 2: A brief introduction to computer vision for NLP (1.5 hours)

This part will provide an overview of standard computer vision tools and data sets that can be useful for automatic image description, including low-level visual features, kernel methods, image segmentation, object recognition, and scene recognition.

Biography

Julia Hockenmaier is an assistant professor of Computer Science at the University of Illinois at Urbana-Champaign. She has been working on language-based image description for a number of years. Her group's UIUC PASCAL data set has been widely used for sentence-based image description. She is an NSF CAREER award recipient, and her PhD thesis was a runner-up for the British Computer Society Distinguished Dissertation Award.