EU-Cognition

This research was also supported by EU-Cognition, and during the first three months I completely reworked the 3D Mars rover model as well as the whole simulator. The main focus of this research was to explore the possibility of using active vision on the rover model to provide obstacle avoidance. The idea was to replace all infrared sensors with a pan/tilt active vision camera system, so that the rover can move the camera freely and acquire the information it needs from its environment.

Active computer vision systems are inspired by the way mammals and insects gather information. Such systems can greatly reduce computational complexity because they use only the information from the environment that is necessary to solve a given task, while the rest is ignored. Past research in this field demonstrated that it is possible to combine an active vision system with feature selection to acquire and integrate information from an environment in order to solve a specific task. Hence, our future goal is to use the active vision system together with the current system to achieve complex, robust and reliable, yet computationally cheap behaviours.


Neural network architecture


The active vision system is based on a discrete-time recurrent artificial neural network (ANN). The recurrent connections are implemented using 4 memory units that maintain a copy of the activations of the output units from the previous sensory-motor cycle [23]. A set of 25 visual neurons receives activation from an artificial retina composed of a 5x5 matrix of visual (foveal) cells, whose receptive fields receive input from a grey-level image of a limited area (100x100 pixels) of the whole image (640x480 pixels). The foveal activations, together with the proprioceptive information (motor speed, steering and pan/tilt positions), are fed into the neural network. Both visual and proprioceptive neurons are fully connected to 4 output neurons that modulate the level of force applied to the actuators, and are thus directly responsible for the rover's speed, steering and camera direction. The output neurons have a sigmoid activation function with output in the range [0, 1]. Biases are implemented as weights from input neurons whose activation values are set to -1. The ANN does not have a hidden layer, as our previous experiments showed that it was redundant and did not help to achieve higher fitness. This simple architecture greatly reduces the computational demand of the control system, which is one of the most important requirements for designing a planetary rover.
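To make the architecture concrete, the sensory-motor cycle described above can be sketched in a few lines of NumPy. This is only an illustrative sketch: the class and function names, the random weight initialisation, and the uniform 20x20 average-pooling of the 100x100 patch into the 5x5 fovea are our assumptions. In the actual system the connection weights are not hand-set but obtained through training, and the foveal sampling may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fovea_activations(gray_image, cx, cy, size=100, grid=5):
    """Sample a 5x5 fovea from a 100x100 patch of a grey-level image.

    Assumption: each foveal cell averages a 20x20 block of pixels,
    normalised to [0, 1]."""
    half = size // 2
    patch = gray_image[cy - half:cy + half, cx - half:cx + half].astype(float)
    cell = size // grid
    return patch.reshape(grid, cell, grid, cell).mean(axis=(1, 3)) / 255.0

class ActiveVisionController:
    """Discrete-time recurrent ANN controller (illustrative sketch).

    Inputs per cycle: 25 foveal activations, 4 proprioceptive values
    (motor speed, steering, pan, tilt), 4 memory units holding the
    previous cycle's outputs, and 1 bias unit fixed at -1. There is no
    hidden layer: all inputs connect directly to the 4 sigmoid output
    neurons driving speed, steering and camera pan/tilt."""

    N_VISUAL, N_PROPRIO, N_OUT = 25, 4, 4

    def __init__(self, rng=None):
        rng = rng or np.random.default_rng(0)
        n_in = self.N_VISUAL + self.N_PROPRIO + self.N_OUT + 1  # +1 bias
        # Placeholder weights; the real ones come from training.
        self.W = rng.normal(scale=0.1, size=(self.N_OUT, n_in))
        self.memory = np.zeros(self.N_OUT)  # previous outputs

    def step(self, fovea_5x5, proprio):
        # Concatenate visual, proprioceptive, memory and bias inputs.
        x = np.concatenate([np.ravel(fovea_5x5), proprio,
                            self.memory, [-1.0]])
        out = sigmoid(self.W @ x)   # 4 motor commands in [0, 1]
        self.memory = out.copy()    # recurrent copy for the next cycle
        return out
```

One design point worth noting is how cheap each cycle is: a single 4x34 matrix-vector product plus a sigmoid, which is what makes the hidden-layer-free architecture attractive for a power-constrained rover.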


Preliminary results

[Two embedded YouTube videos demonstrating preliminary results]