EMNLP 2013: Conference on Empirical Methods in Natural Language Processing — October 18–21, 2013 — Seattle, USA.


Invited talks

We are delighted to announce our invited speakers for EMNLP 2013.

The Online Revolution: Education for Everyone

Dr Andrew Ng, Co-CEO and Co-founder of Coursera

In 2011, Stanford University offered three online courses, which anyone in the world could enroll in and take for free. Together, these three courses had enrollments of around 350,000 students, making this one of the largest experiments in online education ever performed.

Since the beginning of 2012, we have transitioned this effort into a new venture, Coursera, a social entrepreneurship company whose mission is to make high-quality education accessible to everyone by allowing the best universities to offer courses to everyone around the world, for free. Coursera classes provide a real course experience to students, including video content; interactive exercises with meaningful feedback, using both auto-grading and peer-grading; and rich peer-to-peer interaction around the course materials.

Currently, Coursera partners with 80 universities and other institutions, and has 3.6 million students enrolled in its nearly 400 courses. These courses span a range of topics including computer science, business, medicine, science, humanities, social sciences, and more.

In this talk, I'll report on this far-reaching experiment in education, and on why we believe this model can provide both an improved classroom experience for our on-campus students, via a flipped classroom model, and a meaningful learning experience for the millions of students around the world who would otherwise never have access to education of this quality.

Meaning in the Wild

Dr Fernando Pereira, Research Director at Google

This meeting was founded on the premise that analytical approaches to computational linguistics could be beneficially replaced by machine learning from large corpora exhibiting the linguistic behaviors of interest. The successes of that program have been most notable in speech recognition and machine translation, where the behavior of interest is plentiful in the wild: people transcribe speech and translate texts for practical reasons, creating a voluminous record from which our algorithms can learn.

However, when people understand what they are told or what they read, the output of the process is a hidden mental state change, only partially accessible through whatever observable actions the change triggers: the desired input-output behavior is not available in the wild, even accepting the dubious assumption that the output can be abstracted away from the mental state in which it came about. The standard escape route of enlisting linguists to create annotated training data is hard enough for parsing, and it quickly falls apart for semantics, even for seemingly "constrained" tasks like coreference.

Nevertheless, meaning is all around us, in how people ask and respond to search queries, in how they write text so that it can be understood by others, in how they annotate their text with hyperlinks, and in many other common behaviors that are to some extent observable in the Web. We also have a growing body of structured text organized so that computers can use it meaningfully, such as WordNet and Freebase. I will take you on a tour of examples from search, coreference, and information extraction that show small successes and big failures in understanding, asking questions about the potential and limitations of our current approaches along the way. I won't give you a complete recipe for machine understanding, but I hope you'll find the examples and research questions fun and useful.