Language Pre-Training and Auxiliary Tasks for Vision and Language Navigation
Abstract
The Vision and Language Navigation task arose from the idea that we can build a robot or autonomous system that can be instructed in human language and will navigate by following the instructions it is given. For example, we tell the agent to “Go down past some room dividers toward a glass top desk and turn into the dining area. Wait next to the large glass dining table,” and it not only reaches the goal state but follows the instructions while navigating. With current developments this no longer seems like a distant problem, and in recent years a number of systems have been developed that attempt to address this task.
To accomplish this task, the artificial agent must understand the two modalities with which humans perceive the world, vision and language, and then translate them into actions. While significant progress has been made in recent years toward systems capable of performing this task, these systems still fail in a significant number of cases. To investigate the reasons and potential ways to overcome them, this thesis explores several ways in which the navigation task can be grounded across multiple modalities and aligned temporally and visually.
This thesis analyzes the failures of the previously used Environment Drop method with back translation and investigates what happens when pre-trained embeddings, as well as auxiliary tasks, are used with it. In particular, it proposes an augmentation to the architecture for the Vision and Language Navigation task with pre-trained language tokens and a navigator with reasoning that oversees progress and co-grounds vision and language, rather than relying only on a temporal attention mechanism. The underlying base architecture on which the modifications were implemented is a highly successful method that uses Environment Drop with back translation. While the modified architecture and proposed improvements did not significantly increase the success rate of the chosen base architecture, the analysis of the results provided valuable insights that help determine the direction of potential further research.
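The co-grounding idea mentioned above can be illustrated with a small sketch: the agent's state first attends over the instruction tokens, and the resulting grounded text summary then attends over the visual views, so the two modalities are grounded jointly instead of through a temporal attention mechanism alone. This is a minimal NumPy illustration under assumed shapes and names; it is not the thesis's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_ground(state, text_feats, vis_feats):
    """Jointly ground language and vision on the agent state.

    state:      (d,)   current agent state (hypothetical)
    text_feats: (T, d) instruction token embeddings
    vis_feats:  (V, d) features of the visible views
    """
    # 1. Attend over instruction tokens with the agent state.
    a_text = softmax(text_feats @ state)          # (T,)
    grounded_text = a_text @ text_feats           # (d,) weighted token summary

    # 2. Attend over views with the *grounded* text, not the raw state,
    #    so the visual attention is conditioned on the language grounding.
    a_vis = softmax(vis_feats @ grounded_text)    # (V,)
    grounded_vis = a_vis @ vis_feats              # (d,)

    return grounded_text, grounded_vis, a_text, a_vis

# Toy usage: 4-dim features, 3 tokens, 5 candidate views.
rng = np.random.default_rng(0)
gt, gv, at, av = co_ground(rng.normal(size=4),
                           rng.normal(size=(3, 4)),
                           rng.normal(size=(5, 4)))
```

Both attention distributions sum to one, and the grounded summaries live in the same feature space as the state, so they can feed a downstream action predictor.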
Related items
Showing items related by title, author, creator and subject.
- Representation of verbal event structure in sign languages
  Malaia, Evie; Wilbur, Ronnie B. (Department of Curriculum and Instruction, The University of Texas at Arlington, 2010-11-05) **Please note that the full text is embargoed** ABSTRACT: Sign languages recruit physical properties of visual motion to convey linguistic information. The present experiment investigated the effect of sign position and ...
- Commencement Video 2013 December: College of Liberal Arts
  Unknown author (University of Texas at Arlington, 2013-12)
- NATIVE LANGUAGE AND NON-LINGUISTIC INFLUENCES ON THE PRODUCTION OF ENGLISH VOWELS BY SPEAKERS OF KOREAN: AN ACOUSTIC STUDY
  Kim, Ji-Eun (University of Texas at Arlington, 2004-08) This study investigates the production of Korean and English front vowels by ninety-one Koreans, based on their age of arrival in the U.S., length of residence in the U.S., and degree of motivation. Subjects' Korean and English ...