9th (Thu)

09:00 - 09:10 | Opening

Key-Sun Choi

??:?? - ??:?? | Talk-1: DeepPavlov: Open-Source Library for Dialogue Systems.

Mikhail S. Burtsev

??:?? - ??:?? | Talk-2: Deep neural network-based video summarization.

Yuta Nakashima

??:?? - ??:?? | Talk-3: Effective Semantics for Engineering NLP Systems.

André Freitas

Keynote Speakers

Talk-1: Mikhail S. Burtsev

Head of the Neural Networks and Deep Learning Lab.


DeepPavlov: Open-Source Library for Dialogue Systems.


Adoption of messaging communication and voice assistants has grown rapidly in recent years, creating a demand for tools that speed up the prototyping of feature-rich dialogue systems. The open-source library DeepPavlov is tailored for the development of conversational agents. The library prioritises efficiency, modularity, and extensibility, with the goal of making it easier to develop dialogue systems from scratch and with limited data available. It supports both modular and end-to-end approaches to the implementation of conversational agents. A conversational agent consists of skills, and every skill can be decomposed into components. Components are usually models that solve typical NLP tasks, such as intent classification or named entity recognition, or resources such as pre-trained word vectors. A sequence-to-sequence chit-chat skill, a question answering skill, or a task-oriented skill can be assembled from the components provided in the library.
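The skill/component decomposition described above can be sketched in a few lines of plain Python. This is a hypothetical illustration of the idea, not DeepPavlov's actual API; the class and function names here are invented for the example.

```python
# Minimal sketch of the "agent = skills, skill = components" idea.
# All names are hypothetical; real DeepPavlov components are trained models.

class Component:
    """A component solves one NLP sub-task (e.g. intent classification)."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def __call__(self, text):
        return self.fn(text)

class Skill:
    """A skill chains components into one conversational capability."""
    def __init__(self, components):
        self.components = components

    def __call__(self, utterance):
        # Run every component on the utterance and collect the results.
        result = {"utterance": utterance}
        for c in self.components:
            result[c.name] = c(utterance)
        return result

# Toy components standing in for trained models.
intent = Component("intent", lambda t: "greet" if "hello" in t.lower() else "other")
entities = Component("entities", lambda t: [w for w in t.split() if w.istitle()])

skill = Skill([intent, entities])
print(skill("Hello Alice"))
```

In the real library, components would be swapped for pretrained models and several such skills would be combined into a full agent.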

Talk-2: Yuta Nakashima

Associate Professor at Osaka University.


Deep neural network-based video summarization


Video summarization is a technique for compactly representing a long and redundant video, and various methods have been proposed so far. In recent years, a new approach has emerged that uses deep neural networks to “understand” a given video. In this talk, I will give an overview of this recent approach and introduce our video summarization method, which maps a sentence and a video into the same space in order to extract higher-level semantics from the video. I would also like to discuss the difficulty of handling sequential data (e.g., videos and text) with deep neural networks.
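The mapping of a sentence and a video into a shared space can be illustrated with a toy selection step: score each video segment against the sentence embedding and keep the best-matching segments as the summary. This is a sketch of the general joint-embedding idea only, not the method presented in the talk; the embeddings below are random stand-ins for the outputs of trained text and video encoders.

```python
import numpy as np

# Toy sketch: pick video segments closest to a sentence in a shared
# embedding space. Random vectors stand in for trained encoder outputs.
rng = np.random.default_rng(0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sentence_emb = rng.normal(size=128)        # embedded query sentence
segment_embs = rng.normal(size=(20, 128))  # 20 embedded video segments

# Score every segment against the sentence and keep the top 3,
# restored to temporal order, as the summary.
scores = [cosine(sentence_emb, s) for s in segment_embs]
summary_idx = sorted(int(i) for i in np.argsort(scores)[-3:])
print(summary_idx)
```

In an actual system, the two encoders would be trained jointly so that semantically related sentences and segments land near each other in the shared space.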

Talk-3: André Freitas

Lecturer at the University of Manchester.


Effective Semantics for Engineering NLP Systems


At the center of many Natural Language Processing (NLP) applications is the requirement to capture and interpret commonsense and domain-specific knowledge at scale. The selection of the right semantic and knowledge representation model plays a strategic role in building NLP systems (e.g., Question Answering, Sentiment Analysis, Semantic Search) that work effectively with real data.

In this talk, we will provide an overview of emerging trends in semantic representation for building NLP systems that can cope with large-scale and heterogeneous textual data. Based on empirical evidence, we will describe the strengths and weaknesses of different representation perspectives, aiming towards a synthesis: ‘a semantic model to rule them all’.