Publications
Journal article: Koos S, Cully A, Mouret J-B, 2013, "Fast damage recovery in robotics with the T-resilience algorithm", The International Journal of Robotics Research, Vol: 32, Pages: 1700-1723, ISSN: 0278-3649

Abstract: Damage recovery is critical for autonomous robots that need to operate for a long time without assistance. Most current methods are complex and costly because they require anticipating potential damage in order to have a contingency plan ready. As an alternative, we introduce the T-resilience algorithm, a new algorithm that allows robots to quickly and autonomously discover compensatory behaviors in unanticipated situations. This algorithm equips the robot with a self-model and discovers new behaviors by learning to avoid those that perform differently in the self-model and in reality. Our algorithm thus does not identify the damaged parts; it implicitly searches for efficient behaviors that do not use them. We evaluate the T-resilience algorithm on a hexapod robot that needs to adapt to leg removal, broken legs and motor failures; we compare it to stochastic local search, policy gradient and the self-modeling algorithm proposed by Bongard et al. The behavior of the robot is assessed on board using an RGB-D sensor and a SLAM algorithm. Using only 25 tests on the robot and an overall running time of 20 minutes, T-resilience consistently leads to substantially better results than the other approaches.
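The core idea described in the abstract, discounting the self-model's predicted performance by how well a few real-world tests transfer, can be illustrated with a toy sketch. This is not the authors' exact algorithm; the scoring rule, the transferability estimate, and the toy "damaged robot" below are all illustrative assumptions.

```python
def select_t_resilient(candidates, sim_perf, real_perf, tested):
    """Illustrative sketch: score each candidate controller by its
    self-model (simulated) performance, discounted by a transferability
    estimate built from a handful of costly tests on the real robot."""
    # Transferability of each tested controller: 1.0 when the self-model
    # matches reality, dropping toward 0 as the sim/real gap grows.
    # The 2.0 penalty factor is an arbitrary choice for this toy example.
    transfer = {c: max(0.0, 1.0 - 2.0 * abs(sim_perf(c) - real_perf(c)))
                for c in tested}

    def score(c):
        # Borrow the transferability of the closest tested controller
        # (closeness measured here in predicted-performance space).
        nearest = min(tested, key=lambda t: abs(sim_perf(t) - sim_perf(c)))
        return sim_perf(c) * transfer[nearest]

    return max(candidates, key=score)

# Toy damaged robot: behaviors with sim_perf > 0.5 rely on a broken leg,
# so the self-model overestimates them and they achieve nothing in reality.
sim = lambda c: c
real = lambda c: c if c <= 0.5 else 0.0
best = select_t_resilient([0.2, 0.4, 0.9, 1.0], sim, real, tested=[0.2, 0.9])
# best is 0.4: the highest-scoring behavior that still transfers to reality.
```

The point of the sketch is that the damaged parts are never identified explicitly; behaviors that depend on them are simply down-weighted because their self-model predictions fail to transfer.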
Conference paper: Cully A, Mouret J-B, 2013, "Behavioral repertoire learning in robotics", Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO), Publisher: ACM, Pages: 175-182

Abstract: Learning in robotics typically involves choosing a simple goal (e.g. walking) and assessing the performance of each controller with regard to this task (e.g. walking speed). However, learning advanced, input-driven controllers (e.g. walking in each direction) requires testing each controller on a large sample of the possible input signals. This costly process makes it difficult to learn useful low-level controllers in robotics. Here we introduce BR-Evolution, a new evolutionary learning technique that generates a behavioral repertoire by taking advantage of the candidate solutions that are usually discarded. Instead of evolving a single, general controller, BR-Evolution evolves a collection of simple controllers, one for each variant of the target behavior; to distinguish similar controllers, it uses a performance objective that allows it to produce a collection of diverse but high-performing behaviors. We evaluated this new technique by evolving gait controllers for a simulated hexapod robot. Results show that a single run of the EA quickly finds a collection of controllers that allows the robot to reach each point of the reachable space. Overall, BR-Evolution opens a new kind of learning algorithm that simultaneously optimizes all the achievable behaviors of a robot.
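The repertoire-building idea in this abstract, keeping otherwise-discarded solutions as entries in a collection indexed by the behavior they produce, can be sketched with a simple archive loop. This is a generic sketch in the spirit of BR-Evolution, not the paper's actual operators; the genome encoding, the discretization into cells, and the toy evaluation function are all assumptions made for illustration.

```python
import random

def evolve_repertoire(evaluate, random_genome, mutate, cells, iterations):
    """Illustrative sketch: instead of keeping only the single best
    solution, keep the best controller seen so far for every reachable
    behavior cell, so 'discarded' candidates populate the repertoire."""
    archive = {}  # behavior cell -> (fitness, genome)
    for _ in range(iterations):
        if archive:
            # Mutate a randomly chosen elite from the archive.
            g = mutate(random.choice(list(archive.values()))[1])
        else:
            g = random_genome()
        behavior, fitness = evaluate(g)
        cell = cells(behavior)
        # A candidate is kept if its cell is empty or it beats the elite.
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, g)
    return archive

# Toy setup: the genome *is* the endpoint the gait reaches, and fitness
# rewards short displacements; the archive still covers many endpoints.
random.seed(0)
arch = evolve_repertoire(
    evaluate=lambda g: (g, -abs(g[0]) - abs(g[1])),
    random_genome=lambda: (random.uniform(-1, 1), random.uniform(-1, 1)),
    mutate=lambda g: (g[0] + random.gauss(0, 0.2), g[1] + random.gauss(0, 0.2)),
    cells=lambda b: (round(b[0], 1), round(b[1], 1)),
    iterations=200,
)
```

A single run fills many cells at once, which mirrors the abstract's claim that one evolutionary run yields controllers covering the reachable space rather than a single general controller.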
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.