Project: Evolutionary robotics in navigation
Researchers: Kassahun Y., Sommer G.
The aim of the project is to design autonomous agents that use their vision sensors as the main source of information for navigating in their environment. The agents are expected to learn and accumulate knowledge about their environment as they continue to operate in it. The accumulated knowledge will then be used by the agents to navigate optimally in their environment. Neural networks, reinforcement learning and evolutionary methods will be used by the agents for processing the vision data, for learning about their environment and for acting optimally to achieve a certain predefined goal.

Please click on the left image to watch a video of our model-based evolutionary object recognition system being used to visually control a B21 robot. The system has a database of automatically acquired contour models of objects. The recognition algorithm determines the type of an object and its pose relative to the image plane of the camera. The recognition is invariant to translation, rotation and scaling. When the robot sees the letter "A" it moves forward, when it sees "B" it moves backward, and when it sees the "Stop" sign it tries to stop immediately.
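As an illustration, a minimal Python sketch of the symbol-to-action mapping described above might look as follows; the recognizer and robot interfaces (recognize, move_forward, move_backward, stop) are hypothetical placeholders, not the actual B21 control code.

# Minimal sketch of the symbol-to-action mapping described above.
# The recognizer interface (returning a label and a pose) and the robot
# commands are illustrative assumptions, not the actual system.
def act_on_recognition(image, recognizer, robot):
    label, pose = recognizer.recognize(image)   # contour-model based recognition
    if label == "A":
        robot.move_forward()
    elif label == "B":
        robot.move_backward()
    elif label == "Stop":
        robot.stop()                            # try to stop immediately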
The two pictures on the left-hand side show the diagrams of the neural controllers found by our evolutionary acquisition of neural topologies algorithm (EANT) for solving the double pole balancing problems. The method starts with networks of minimal structure and complexifies them along the evolution path. It stops searching for neural structures when a solution with satisfactory performance is obtained. If you click on the images you will get videos showing the controllers in action. The first video shows a double pole balancing problem where the controller perceives the full state of the environment: the position and velocity of the cart, and the angular positions from the vertical and the angular speeds of both poles hinged to the cart. The second video shows the performance of the controller in a double pole balancing problem where the environment is partially observable. Here the controller perceives only the position of the cart and the angular positions of both poles from the vertical. In the videos, the left oscilloscope shows the waveform of the force generated by the controller. The right oscilloscope shows the angles of both poles from the vertical: the red trace corresponds to the shorter pole and the blue trace to the longer pole. Note the clever solutions obtained by the algorithm. In the first case the controller found has only one output neuron, while in the second case it has one output and one hidden neuron, both of which have a recurrent connection onto themselves. Self-organization, or the emergence of optimal structure, is an inherent property of the system.
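For illustration, a minimal Python sketch of the kind of controller found for the partially observable task (one hidden and one output neuron, each with a self-recurrent connection) is given below; the tanh activations, the weight names and the force scaling are assumptions made for this sketch, not details taken from EANT itself.

import numpy as np

# Sketch (not the authors' code) of a tiny recurrent controller with one
# hidden and one output neuron, each with a self-recurrent connection.
# The weights would be found by evolution; activations are assumed tanh.
class TinyRecurrentController:
    def __init__(self, w_in_h, w_in_o, w_h_o, w_hh, w_oo):
        self.w_in_h = np.asarray(w_in_h)  # input -> hidden weights (3 inputs)
        self.w_in_o = np.asarray(w_in_o)  # input -> output weights
        self.w_h_o = w_h_o                # hidden -> output weight
        self.w_hh = w_hh                  # hidden self-recurrent weight
        self.w_oo = w_oo                  # output self-recurrent weight
        self.h = 0.0                      # hidden activation from previous step
        self.o = 0.0                      # output activation from previous step

    def step(self, obs):
        # obs = [cart position, angle of pole 1, angle of pole 2]
        obs = np.asarray(obs)
        self.h = np.tanh(self.w_in_h @ obs + self.w_hh * self.h)
        self.o = np.tanh(self.w_in_o @ obs + self.w_h_o * self.h + self.w_oo * self.o)
        return self.o                     # scaled to a push force, e.g. 10.0 * self.o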
The neural controller shown on the left side was found by EANT while solving the problem of learning to move forward for a crawling robotic insect. The robot has one arm with two joints, each controlled by a servo motor. The arm is equipped with a touch sensor which detects whether the tip of the arm is touching the ground. The robotic insect is expected to move forward as fast as possible.
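A minimal sketch of how a candidate controller could be evaluated on this task is given below; the simulator interface and the observation layout are hypothetical, and only the task setup (two servo-driven joints, one touch sensor, fitness proportional to the distance covered) comes from the description above.

# Sketch of a fitness evaluation for the crawling insect. The simulator
# interface (sim) and controller interface are hypothetical placeholders.
def evaluate_controller(controller, sim, steps=1000):
    sim.reset()
    for _ in range(steps):
        touch = sim.touch_sensor()          # 1.0 if the arm tip touches the ground
        angles = sim.joint_angles()         # current angles of the two joints
        cmd1, cmd2 = controller.step([touch, *angles])
        sim.set_servo_targets(cmd1, cmd2)   # drive the two servo motors
        sim.advance()
    return sim.distance_travelled()         # fitness: how far the robot crawled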
If you click the left image, you will see a video demonstrating the application of a behavior-based system. The agent follows a blue object and grabs it when the object comes near. This work was done by Andreas Bunten. We extended the system so that the agent can also grab the object when it comes near.
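A minimal sketch of a behavior-based arbitration scheme in the spirit of this demo is given below; the percept fields, thresholds and command format are illustrative assumptions, not the actual system.

from dataclasses import dataclass

# Sketch of behavior-based control: a "follow" behavior steers toward the
# blue object and a higher-priority "grab" behavior takes over when the
# object is close. All numbers and field names are illustrative.
@dataclass
class Percept:
    object_visible: bool   # is the blue object detected in the image?
    bearing: float         # horizontal offset of the object, -1 (left) .. 1 (right)
    distance: float        # estimated distance to the object in metres

def follow(p: Percept):
    if p.object_visible:
        return {"turn": -0.5 * p.bearing, "forward": 0.2}   # steer toward object
    return {"turn": 0.3, "forward": 0.0}                    # search by turning

def grab(p: Percept):
    if p.object_visible and p.distance < 0.3:               # object is near
        return {"turn": 0.0, "forward": 0.0, "gripper": "close"}
    return None                                             # behavior not applicable

def arbitrate(p: Percept):
    # The higher-priority behavior (grab) suppresses the lower one (follow).
    return grab(p) or follow(p)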
Further information:
Yohannes Kassahun