Download a PDF with the full list of our publications: Robot-Intelligence-Lab-Publications-2021.pdf

A comprehensive list can also be found on Google Scholar, or by searching for publications by the author Kormushev, Petar.

  • Conference paper
    Kormushev P, Nenchev DN, Calinon S, Caldwell DG et al., 2011,

    Upper-body Kinesthetic Teaching of a Free-standing Humanoid Robot

    , Pages: 3970-3975
  • Journal article
    Kormushev P, Nomoto K, Dong F, Hirota K et al., 2011,

    Time Hopping Technique for Faster Reinforcement Learning in Simulations

    , International Journal of Cybernetics and Information Technologies, Vol: 11, Pages: 42-59
  • Conference paper
    Kormushev P, Calinon S, Caldwell DG, 2010,

    Approaches for Learning Human-like Motor Skills which Require Variable Stiffness During Execution

  • Conference paper
    Kormushev P, Calinon S, Saegusa R, Metta G et al., 2010,

    Learning the skill of archery by a humanoid robot iCub

    , Pages: 417-423
  • Conference paper
    Kormushev P, Calinon S, Caldwell DG, 2010,

    Robot Motor Skill Coordination with EM-based Reinforcement Learning

    , Pages: 3232-3237
  • Conference paper
    Sato F, Nishii T, Takahashi J, Yoshida Y, Mitsuhashi M, Kormushev P, Kanamiya Y et al., 2010,

    Whiteboard Cleaning Task Realization with HOAP-2

    , Pages: 426-429
  • Journal article
    Kormushev P, Nomoto K, Dong F, Hirota K et al., 2009,

    Eligibility Propagation to Speed up Time Hopping for Reinforcement Learning

    , Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol: 13, No. 6
  • Conference paper
    Kormushev P, Dong F, Hirota K, 2009,

    Probability redistribution using time hopping for reinforcement learning

  • Journal article
    Kormushev P, Nomoto K, Dong F, Hirota K et al., 2008,

    Time manipulation technique for speeding up reinforcement learning in simulations

    , Cybernetics and Information Technologies, Vol: 8, Pages: 12-24, ISSN: 1311-9702

    A technique for speeding up reinforcement learning algorithms by using time manipulation is proposed. It is applicable to failure-avoidance control problems running in a computer simulation. Turning the time of the simulation backwards on failure events is shown to speed up the learning by 260% and improve the state space exploration by 12% on the cart-pole balancing task, compared to the conventional Q-learning and Actor-Critic algorithms. (A minimal sketch of this rewind-on-failure idea appears after the publication list below.)

  • Conference paper
    Yamazaki Y, Dong F, Masuda Y, Uehara Y, Kormushev P, Vu HA, Le PQ, Hirota K et al., 2007,

    Intent expression using eye robot for mascot robot system

  • Conference paper
    Yamazaki Y, Dong F, Masuda Y, Uehara Y, Kormushev P, Vu HA, Le PQ, Hirota K et al., 2007,

    Fuzzy inference based mentality estimation for eye robot agent

  • Journal article
    Agre G, Kormushev P, Dilov I, 2006,

    INFRAWEBS Axiom Editor - A graphical ontology-driven tool for creating complex logical expressions

    , International Journal of Information Theories and Applications, Vol: 13, Pages: 169-178
  • Conference paper
    Agre G, Kormushev P, Dilov I, 2005,

    INFRAWEBS Capability Editor - A graphical ontology-driven tool for creating capabilities of Semantic Web Services

    , Pages: 228-228
  • Journal article
    Chappell D, Wang K, Kormushev P,

    Asynchronous Real-Time Optimization of Footstep Placement and Timing in Bipedal Walking Robots

    Online footstep planning is essential for bipedal walking robots to be able to walk in the presence of disturbances. Until recently this has been achieved by only optimizing the placement of the footstep, keeping the duration of the step constant. In this paper we introduce a footstep planner capable of optimizing footstep placement and timing in real-time by asynchronously combining two optimizers, which we refer to as asynchronous real-time optimization (ARTO). The first optimizer, which runs at approximately 25 Hz, utilizes a fourth-order Runge-Kutta (RK4) method to accurately approximate the dynamics of the linear inverted pendulum (LIP) model for bipedal walking, then uses non-linear optimization to find optimal footsteps and duration at a lower frequency. The second optimizer, which runs at approximately 250 Hz, uses analytical gradients derived from the full dynamics of the LIP model and constraint penalty terms to perform gradient descent, which finds approximately optimal footstep placement and timing at a higher frequency. By combining the two optimizers asynchronously, ARTO has the benefits of fast reactions to disturbances from the gradient descent optimizer, accurate solutions that avoid local optima from the RK4 optimizer, and an increased probability that a feasible solution will be found. Experimentally, we show that ARTO is able to recover from considerably larger pushes and produce feasible solutions to larger reference velocity changes than a standard footstep location optimizer, and outperforms using the RK4 optimizer alone.
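The ARTO abstract above describes an architecture rather than a single algorithm: a slow, accurate optimizer and a fast gradient-descent optimizer running at different rates and sharing one footstep plan. The following is a minimal structural sketch of that idea in Python. The two loop rates are taken from the abstract, but everything else is an illustrative assumption: a toy quadratic cost stands in for the LIP footstep optimization, and names such as slow_optimizer and fast_optimizer are ours, not the paper's.

```python
# Structural sketch of the ARTO idea: two optimizers run asynchronously
# at different rates and share the most recent footstep plan.
# The LIP dynamics are replaced by a toy quadratic cost (an assumption).
import threading
import time

import numpy as np

# Shared state: current estimate of (step_x, step_y, step_duration).
plan = np.array([0.3, 0.1, 0.5])
plan_lock = threading.Lock()
stop = threading.Event()

TARGET = np.array([0.35, 0.05, 0.45])  # illustrative optimum

def cost(p):
    """Toy stand-in for the footstep cost under LIP dynamics."""
    return float(np.sum((p - TARGET) ** 2))

def grad(p):
    """Analytical gradient of the toy cost (stands in for LIP gradients)."""
    return 2.0 * (p - TARGET)

def slow_optimizer(rate_hz=25.0):
    """Accurate but slow loop (stands in for the RK4 + non-linear program)."""
    global plan
    while not stop.is_set():
        with plan_lock:
            p = plan.copy()
        for _ in range(200):        # pretend to solve an NLP to convergence
            p -= 0.05 * grad(p)
        with plan_lock:
            plan = p                # publish the accurate solution
        time.sleep(1.0 / rate_hz)

def fast_optimizer(rate_hz=250.0):
    """Fast loop: one gradient-descent step per tick from the latest plan."""
    global plan
    while not stop.is_set():
        with plan_lock:
            plan = plan - 0.1 * grad(plan)  # quick reactive update
        time.sleep(1.0 / rate_hz)

threads = [threading.Thread(target=slow_optimizer),
           threading.Thread(target=fast_optimizer)]
for t in threads:
    t.start()
time.sleep(0.5)                     # let both loops run briefly
stop.set()
for t in threads:
    t.join()
with plan_lock:
    print("final plan (x, y, duration):", plan, "cost:", cost(plan))
```

The point of the asynchronous split is visible in the loop bodies: the fast loop reacts within milliseconds to whatever the current plan is, while the slow loop periodically replaces it with a more carefully optimized solution.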
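Similarly, the 2008 paper "Time manipulation technique for speeding up reinforcement learning in simulations" (abstract above) turns simulation time backwards on failure events so the agent re-experiences the states just before failure. Below is a minimal, hypothetical sketch of that rewind-on-failure idea for tabular Q-learning on a toy 1-D failure-avoidance task; the task, the constants, and the rewind depth REWIND_STEPS are illustrative assumptions, not the paper's cart-pole setup.

```python
# Sketch of rewind-on-failure: on a failure event, turn simulation time
# backwards a few steps to a saved state instead of restarting the episode.
# Toy 1-D failure-avoidance task; all constants are illustrative.
import random
from collections import deque

N_STATES = 10          # positions 0..9; 0 is failure, 9 is the goal
ACTIONS = (-1, +1)     # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.95, 0.1
REWIND_STEPS = 3       # how far to turn time backwards on failure

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Toy dynamics with a small chance of slipping toward failure."""
    s2 = s + (a if random.random() > 0.1 else -1)
    if s2 <= 0:
        return 0, -1.0, True             # failure event
    if s2 >= N_STATES - 1:
        return N_STATES - 1, 1.0, True   # goal reached
    return s2, 0.0, False

def policy(s):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for episode in range(200):
    s = N_STATES // 2
    history = deque(maxlen=REWIND_STEPS)  # recently visited sim states
    done = False
    while not done:
        history.append(s)
        a = policy(s)
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        if done and r < 0 and len(history) == REWIND_STEPS:
            # Failure: rewind time to a saved state and keep learning,
            # instead of resetting the whole episode.
            s, done = history[0], False
            history.clear()
        else:
            s = s2

print("greedy action per state:",
      {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, N_STATES - 1)})
```

The saved-state buffer is what makes the technique specific to simulations: rewinding requires restoring an earlier simulator state, which is cheap in software but impossible on physical hardware.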

