
Learning and Parameterization of Motion Primitives from Human Observation

Course Motions in Man and Machine
Authors Kevin Nicholas Arbai, Monika Heringhaus, Dominic Hillerkuss, Mona Gut, David Rieserer, Jannika Salzmann, Minjie Wu
Last modified 02.07.2016


Introduction

This project is inspired by the Master Motor Map (MMM)[3], a framework and toolkit used in combination with the KIT Whole-Body Human Motion Database[2] to capture, represent, and reproduce human motions.

The purpose of this project is to explore how machines, or in our case humanoid robots, can learn human movements and generalize them in the form of Dynamic Movement Primitives (DMPs)[1]. The generalization process includes parameterizing the movements, which then allows the robots to synthesize similar movements from their DMP counterparts.
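For orientation, the standard discrete DMP formulation from Ijspeert et al. [1] couples a goal-directed second-order system with a learned forcing term; the exact variant implemented by the DMP library used here may differ:

    \tau \dot{z} = \alpha_z (\beta_z (g - y) - z) + f(x)
    \tau \dot{y} = z
    \tau \dot{x} = -\alpha_x x
    f(x) = \frac{\sum_i \psi_i(x) w_i}{\sum_i \psi_i(x)} \, x \, (g - y_0)

Learning fits the weights w_i of the Gaussian basis functions \psi_i to a demonstrated trajectory; parameterization then amounts to changing the goal g, the start y_0, or the time constant \tau, which is what lets similar but adapted movements be synthesized.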

The project required us to simulate the DMP learning and parameterization process in C++. A set of predefined libraries was given to us at the start, and we were allowed to modify the DMPMainWindow class for this task. Using these tools, we investigated the different behaviors of our input motion models. The actual procedure and evaluation are given in the next sections.

Procedure

We divided our group into three sub-groups. The following is a list of what each sub-group did:

  1. Simulated, compared, and evaluated different DMP learning algorithms.
    • Downloaded "high five", "wave", and "waltz" movement .xml files from the KIT Whole-Body Human Motion Database.
    • Reproduced and simulated the movements using two DMP learning algorithms: BasicDMP and SimpleEndVeloDMP.
    • Analyzed the simulation results.
  2. Simulated, compared, and evaluated different DMP learning algorithms.
    • Downloaded "high five", "wave", and "waltz" movement .xml files from the KIT Whole-Body Human Motion Database.
    • Reproduced and simulated the movements using two DMP learning algorithms: ForceFieldDMP and EndVeloForceFieldDMP.
    • Analyzed the simulation results.
  3. Implemented a complex movement and created a simulation of it. A complex movement is a movement composed of several simple movements.
    • Decided on what kind of movement the robot should perform. The one we chose for the project alternates twice between right- and left-arm punches (with different target coordinates); see the Podcast section.
    • Segmented the needed basic movements from the raw movement files. In total, four basic movements comprise the complex movement: "punch R-Arm", "punch L-Arm", "retrieve R-Arm", and "retrieve L-Arm". Snippet 0 shows which basic movements are segmented and used; how the movements are segmented is shown in Snippet 1.
    • Decided on a DMP learning algorithm to use on the movements. In our case, we used ForceFieldDMP.
    • Generated the trajectories to reach the desired positions with the respective body parts (Snippet 2).
    • Executed and visualized the trajectories. The desired positions are also visualized as small red boxes so that we can compare the deviation between the actual movements and the ground truth (Snippet 2); a sketch of such a deviation check follows this list.
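As a minimal sketch of such a deviation check (our own illustration, not part of the provided libraries; the member names allGoals and nodeSet refer to the DMPMainWindow listings below):

#include <Eigen/Core>
#include <iostream>
 
// Hypothetical helper: Euclidean distance between the position actually reached by
// the tracked RobotNode and its visualized goal, both in millimeters (the viewer is
// set up with SoUnits::MILLIMETERS).
float goalDeviation(const Eigen::Vector3f& reached, const Eigen::Vector3f& goal)
{
    return (reached - goal).norm();
}
 
// Possible use inside DMPMainWindow after playback, for a goal index i:
//   Eigen::Vector3f reached = nodeSet->getRobot()->getRobotNode(robotNodeName)
//                                    ->getGlobalPose().block<3,1>(0,3);
//   std::cout << "Deviation: " << goalDeviation(reached, allGoals[i]) << " mm" << std::endl;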

Snippets

DMPMainWindow.hpp

Below is a list of all the functions and attributes of the DMPMainWindow class.

<spoiler>

class DMPMainWindow : public QMainWindow
{
public:
    explicit DMPMainWindow(QWidget *parent = 0);
    ~DMPMainWindow();
    int main();
 
    void calculateDMP();
 
    void playMotion(MMM::MotionPtr motion);
    void playMotions(std::vector<std::pair<MMM::MotionPtr, std::string>> motionPairs);
 
    void drawTrajectoryPoint(std::string robotNodeName);
 
    void moveTCP(Eigen::Vector3f offset);
    void moveNode(Eigen::Vector3f offset, VirtualRobot::RobotNodePtr robotNode);
    void buildModel(MMM::MotionPtr motion);
 
    DMP::Vec<DMP::DMPState> getStartConfiguration(const DMP::SampledTrajectoryV2 &trajectory);
 
private:
    Ui::DMPMainWindow *ui;
    VirtualRobot::RobotPtr mmm;
    VirtualRobot::RobotNodeSetPtr nodeSet;
    std::vector<Eigen::Vector3f> allGoals;
    bool firstTime = true;
    int timerID;
    std::map<std::pair<std::string, std::pair<int, int>>, DMP::ForceFieldDMP> dmpMap;
 
    SoSeparator* mainSep;
    SoSeparator* robotSep;
    SoSeparator* trajSep;
    std::map<int, SoSeparator*> trajPointSeparators;
    SoQtExaminerViewer* viewer; /*!< Viewer to display the 3D model of the robot and the environment. */
    MMM::MotionPtr motion;
    std::vector<std::pair<MMM::MotionPtr, std::string>> motionPairs;
    int currentMotionIndex = 0;
    int currentMotionFrame = 0;
    int drawIndex = 0;
    bool forwardPlaying = true;
    MMM::MotionPtr getMotion(std::string filename);
    void drawPoint(Eigen::Vector3f vector, int i);
    void addMotion(MMM::MotionPtr motion, MMM::MotionPtr nextMotion);
 
    std::pair<std::pair<MMM::MotionPtr, std::string>, DMP::SampledTrajectoryV2> createNewDMPMotion(MMM::MotionPtr originalMotion, std::string motionName, std::string robotNodeName, int beginFrame, int endFrame, Eigen::Vector3f relativeNewTCPPosition, double stepSize,
                                      DMP::SampledTrajectoryV2 oldTrajectory);
    std::pair<std::pair<MMM::MotionPtr, std::string>, DMP::SampledTrajectoryV2> createNewDMPMotion(std::pair<std::string, std::string> fileJointPair, int beginFrame, int endFrame, Eigen::Vector3f relativeNewTCPPosition, double stepSize,
                                      DMP::SampledTrajectoryV2 oldTrajectory);
    DMP::Vec<DMP::DMPState> getEndConfiguration(const DMP::SampledTrajectoryV2& trajectory);
protected:
    void timerEvent(QTimerEvent *);
};

</spoiler>

Snippet 0

The following snippet shows the main function for performing the complex movement (can also be used for basic movements).

calculateDMP()

void DMPMainWindow::calculateDMP()
{
    double globalStepSize = 0.02;
    SampledTrajectoryV2 empty;
 
    /*
     * The original files taken from the KIT Whole-Body Human Motion Database are paired with their respective RobotNode names (TCPs).
     * This is done so that we can track the relevant movements of the robot.
     * It could be improved further by changing the map value to a list of strings, which would allow us to
     * track more than one RobotNode during the simulation.
     */
    std::pair<std::string, std::string> rightHandFileJointPair = std::make_pair("punch_right01.xml", "RWsegment_joint");
    std::pair<std::string, std::string> leftHandFileJointPair = std::make_pair("punch_left01.xml", "LWsegment_joint");
 
    /*
     * This is the part where the resulting movements and trajectories are calculated.
     * First, the original movements are segmented. Then they are turned into DMPs.
     * Similar movements, for example "punch R-Arm", may be used more than once. Even though their goal positions
     * may differ, only one DMP is learned and reused for all of them.
     * The DMPs can then calculate a trajectory for any given goal position.
     *
     * Here we can also define the start configuration of each trajectory by referencing the previous movement's SampledTrajectory.
     * This way, the complex movement transitions smoothly between its parts.
     */
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> punch1 = createNewDMPMotion(rightHandFileJointPair, 100, 195, Vector3f(-100, 0, 200), globalStepSize, empty);
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> retreat = createNewDMPMotion(rightHandFileJointPair, 195, 260, Vector3f(0, 0, 0), globalStepSize, punch1.second);
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> punch2 = createNewDMPMotion(leftHandFileJointPair, 125, 195, Vector3f(-200, 100, 200), globalStepSize, retreat.second);
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> retreat2 = createNewDMPMotion(leftHandFileJointPair, 195, 260, Vector3f(0, 0, 0), globalStepSize, punch2.second);
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> punch3 = createNewDMPMotion(rightHandFileJointPair, 100, 195, Vector3f(-100, 0, -800), globalStepSize, retreat2.second);
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> retreat3 = createNewDMPMotion(rightHandFileJointPair, 195, 260, Vector3f(0, 0, 0), globalStepSize, punch3.second);
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> punch4 = createNewDMPMotion(leftHandFileJointPair, 125, 195, Vector3f(300, 0, -800), globalStepSize, retreat3.second);
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> retreat4 = createNewDMPMotion(leftHandFileJointPair, 195, 260, Vector3f(0, 0, 0), globalStepSize, punch4.second);
 
    /*
     * After all the MotionPtrs have been initialized, they are stored in a list in their execution order.
     * This list is then passed to playMotions(..), which starts a timer that loops the
     * motions' execution endlessly.
     */ 
    std::vector<std::pair<MMM::MotionPtr, std::string>> motions;
    motions.push_back(punch1.first);
    motions.push_back(retreat.first);
    motions.push_back(punch2.first);
    motions.push_back(retreat2.first);
    motions.push_back(punch3.first);
    motions.push_back(retreat3.first);
    motions.push_back(punch4.first);
    motions.push_back(retreat4.first);
 
    playMotions(motions);
}
Snippet 1

This subsection shows a snippet of how the basic movements are segmented from the raw data.

createNewDMPMotion(MMM::MotionPtr originalMotion, std::string motionName, std::string robotNodeName,
                                                                                      int beginFrame, int endFrame, Eigen::Vector3f relativeNewTCPPosition,
                                                                                      double stepSize, DMP::SampledTrajectoryV2 oldTrajectory) {
    ...
 
    originalMotion = originalMotion->getSegmentMotion(beginFrame, endFrame);
 
    ...
}

The segmented motion is then used in Snippet 2 to create the DMP.

Snippet 2

The motion we segmented before is learned by the DMP here. Afterwards, the DMP calculates the trajectory of the new movement, given a start configuration and a goal position.

createNewDMPMotion(MMM::MotionPtr originalMotion, std::string motionName, std::string robotNodeName,
                                                                                      int beginFrame, int endFrame, Eigen::Vector3f relativeNewTCPPosition,
                                                                                      double stepSize, DMP::SampledTrajectoryV2 oldTrajectory) {
    ...
 
    auto mmmTrajectories = MMMConverter::fromMMMJoints(originalMotion);
    std::pair<std::string, std::pair<int, int>> dmpKey = std::make_pair(motionName, std::make_pair(beginFrame, endFrame));
    ForceFieldDMP dmp;
 
    /*
     * This part ensures that only one DMP will be learned for all similar motions.
     */ 
    auto it = dmpMap.find(dmpKey);
    if (it != dmpMap.end()) {
        dmp = it->second;
    } else {
        dmp.learnFromTrajectory(mmmTrajectories);
        dmpMap[dmpKey] = dmp;
    }
 
    // set joint values to end configuration of the trajectory to calculate a new position relative to it
    if (mmm)
        mmm->setJointValues(nodeSet, originalMotion->getMotionFrame(originalMotion->getNumFrames()-1)->joint);
    else
        std::cerr << "human/robot model is NULL" << std::endl;
 
    /*
     * Moves the robotNode that was passed to the function to the goal position.
     */ 
    moveNode(relativeNewTCPPosition, relevantRobotNode);
    auto globalPose = relevantRobotNode->getGlobalPose();
 
    /*
     * allGoals contains all the goals that are going to be visualized during the simulation.
     */ 
    allGoals.push_back(Eigen::Vector3f(globalPose(0, 3), globalPose(1, 3), globalPose(2, 3)));
    auto jointValues = nodeSet->getJointValues();
    DVec goals;
    goals.assign(jointValues.begin(), jointValues.end());
 
    auto timestamps = SampledTrajectoryV2::generateTimestamps(0, originalMotion->getNumFrames() / 100.0, stepSize);
    /*
     * The starting configuration of the new trajectory. We have the option to pass any trajectory to the function;
     * its end joint configuration is then used as the start configuration.
     */ 
    Vec<DMPState> startConfiguration;
    if (oldTrajectory.size() > 0) {
        startConfiguration = getEndConfiguration(oldTrajectory);
    } else {
        startConfiguration = getStartConfiguration(mmmTrajectories);
    }
    auto newTrajectory = dmp.calculateTrajectory(timestamps, goals, startConfiguration, 1.0);
 
    return std::make_pair(std::make_pair(MMMConverter::toMMMJoints(newTrajectory, originalMotion), robotNodeName), newTrajectory);
}
DMPMainWindow.cpp

<spoiler>

using namespace VirtualRobot;
 
using namespace DMP;
 
DMPMainWindow::DMPMainWindow(QWidget *parent) :
                             QMainWindow(parent),
                             ui(new Ui::DMPMainWindow) {
    ui->setupUi(this);
 
    calculateDMP();
 
    robotSep = new SoSeparator;
    robotSep->ref();
 
    trajSep = new SoSeparator;
    trajSep->ref();
 
    auto unit = new SoUnits;
    unit->units = SoUnits::MILLIMETERS;
 
    mainSep = new SoSeparator;
    mainSep->addChild(unit);
    mainSep->addChild(robotSep);
    mainSep->addChild(trajSep);
 
    if(mmm)
    {
        auto visualization = mmm->getVisualization<CoinVisualization>();
        SoNode* visualisationNode = NULL;
 
        if (visualization)
        {
            visualisationNode = visualization->getCoinVisualization();
        }
 
        if (visualisationNode)
        {
            robotSep->addChild(visualisationNode);
        }
    }
 
 
    viewer = new SoQtExaminerViewer(ui->centralwidget, "", TRUE, SoQtExaminerViewer::BUILD_POPUP);
 
    // setup
    viewer->setBackgroundColor(SbColor(1.0f, 1.0f, 1.0f));
    viewer->setAccumulationBuffer(true);
    viewer->setAntialiasing(true, 4);
 
    viewer->setGLRenderAction(new SoLineHighlightRenderAction);
    viewer->setTransparencyType(SoGLRenderAction::SORTED_OBJECT_BLEND);
    viewer->setFeedbackVisibility(true);
    viewer->setSceneGraph(mainSep);
    viewer->viewAll();
}
 
DMPMainWindow::~DMPMainWindow() {
    robotSep->unref();
    trajSep->unref();
    delete ui;
}
 
int DMPMainWindow::main() {
    SoQt::show(this);
    SoQt::mainLoop();
    return 0;
}
 
void DMPMainWindow::calculateDMP()
{
    double globalStepSize = 0.02;
    SampledTrajectoryV2 empty;
 
    std::pair<std::string, std::string> rightHandFileJointPair = std::make_pair("punch_right01.xml", "RWsegment_joint");
    std::pair<std::string, std::string> leftHandFileJointPair = std::make_pair("punch_left01.xml", "LWsegment_joint");
 
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> punch1 = createNewDMPMotion(rightHandFileJointPair, 100, 195, Vector3f(-100, 0, 200), globalStepSize, empty);
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> retreat = createNewDMPMotion(rightHandFileJointPair, 195, 260, Vector3f(0, 0, 0), globalStepSize, punch1.second);
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> punch2 = createNewDMPMotion(leftHandFileJointPair, 125, 195, Vector3f(-200, 100, 200), globalStepSize, retreat.second);
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> retreat2 = createNewDMPMotion(leftHandFileJointPair, 195, 260, Vector3f(0, 0, 0), globalStepSize, punch2.second);
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> punch3 = createNewDMPMotion(rightHandFileJointPair, 100, 195, Vector3f(-100, 0, -800), globalStepSize, retreat2.second);
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> retreat3 = createNewDMPMotion(rightHandFileJointPair, 195, 260, Vector3f(0, 0, 0), globalStepSize, punch3.second);
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> punch4 = createNewDMPMotion(leftHandFileJointPair, 125, 195, Vector3f(300, 0, -800), globalStepSize, retreat3.second);
    std::pair<std::pair<MMM::MotionPtr, std::string>, SampledTrajectoryV2> retreat4 = createNewDMPMotion(leftHandFileJointPair, 195, 260, Vector3f(0, 0, 0), globalStepSize, punch4.second);
 
 
//    addMotion(punch1.first.first, retreat.first.first);
//    addMotion(punch1.first.first, punch2.first.first);
//    addMotion(punch1.first.first, retreat2.first.first);
//    addMotion(punch1.first.first, punch3.first.first);
//    addMotion(punch1.first.first, retreat3.first.first);
//    addMotion(punch1.first.first, punch4.first.first);
//    addMotion(punch1.first.first, retreat4.first.first);
 
//    playMotion(punch1.first);
 
    std::vector<std::pair<MMM::MotionPtr, std::string>> motions;
    motions.push_back(punch1.first);
    motions.push_back(retreat.first);
    motions.push_back(punch2.first);
    motions.push_back(retreat2.first);
    motions.push_back(punch3.first);
    motions.push_back(retreat3.first);
    motions.push_back(punch4.first);
    motions.push_back(retreat4.first);
 
    playMotions(motions);
}
 
std::pair<std::pair<MMM::MotionPtr, std::string>, DMP::SampledTrajectoryV2> DMPMainWindow::createNewDMPMotion(std::pair<std::string, std::string> fileJointPair,
                                                                                      int beginFrame, int endFrame, Eigen::Vector3f relativeNewTCPPosition,
                                                                                      double stepSize, DMP::SampledTrajectoryV2 oldTrajectory) {
    return createNewDMPMotion(getMotion(fileJointPair.first), fileJointPair.first, fileJointPair.second, beginFrame, endFrame, relativeNewTCPPosition, stepSize, oldTrajectory);
}
 
std::pair<std::pair<MMM::MotionPtr, std::string>, DMP::SampledTrajectoryV2> DMPMainWindow::createNewDMPMotion(MMM::MotionPtr originalMotion, std::string motionName, std::string robotNodeName,
                                                                                      int beginFrame, int endFrame, Eigen::Vector3f relativeNewTCPPosition,
                                                                                      double stepSize, DMP::SampledTrajectoryV2 oldTrajectory) {
    // Build model for inverse kinematics and visualization
    buildModel(originalMotion);
    RobotNodePtr relevantRobotNode = nodeSet->getRobot()->getRobotNode(robotNodeName);
 
    originalMotion = originalMotion->getSegmentMotion(beginFrame, endFrame);
 
    auto mmmTrajectories = MMMConverter::fromMMMJoints(originalMotion);
    std::pair<std::string, std::pair<int, int>> dmpKey = std::make_pair(motionName, std::make_pair(beginFrame, endFrame));
    ForceFieldDMP dmp;
    auto it = dmpMap.find(dmpKey);
    if (it != dmpMap.end()) {
        //std::cout << "Found one: " << dmpKey.first << std::endl;
        dmp = it->second;
    } else {
        //std::cout << "Learned a new move: " << dmpKey.first << std::endl;
        dmp.learnFromTrajectory(mmmTrajectories);
        dmpMap[dmpKey] = dmp;
    }
 
    // set joint values to end configuration of the trajectory to calculate a new position relative to it
    if (mmm)
        mmm->setJointValues(nodeSet, originalMotion->getMotionFrame(originalMotion->getNumFrames()-1)->joint);
    else
        std::cerr << "human/robot model is NULL" << std::endl;
 
    // Move the robotNode that was passed to the function to the new goal position
    moveNode(relativeNewTCPPosition, relevantRobotNode);
    auto globalPose = relevantRobotNode->getGlobalPose();
//    allGoals.push_back(relevantRobotNode->getPositionInRootFrame());
    allGoals.push_back(Eigen::Vector3f(globalPose(0, 3), globalPose(1, 3), globalPose(2, 3)));
    auto jointValues = nodeSet->getJointValues();
    DVec goals;
    goals.assign(jointValues.begin(), jointValues.end());
 
    auto timestamps = SampledTrajectoryV2::generateTimestamps(0, originalMotion->getNumFrames() / 100.0, stepSize);
    Vec<DMPState> startConfiguration;
    if (oldTrajectory.size() > 0) {
        startConfiguration = getEndConfiguration(oldTrajectory);
    } else {
        startConfiguration = getStartConfiguration(mmmTrajectories);
    }
    auto newTrajectory = dmp.calculateTrajectory(timestamps, goals, startConfiguration, 1.0);
 
    return std::make_pair(std::make_pair(MMMConverter::toMMMJoints(newTrajectory, originalMotion), robotNodeName), newTrajectory);
}
 
void DMPMainWindow::addMotion(MMM::MotionPtr motion, MMM::MotionPtr nextMotion) {
    auto nextMotionFrames = nextMotion->getMotionFrames();
    for (int i = 0; i < nextMotionFrames.size(); i++) {
        motion->addMotionFrame(nextMotionFrames[i]);
    }
}
 
MMM::MotionPtr DMPMainWindow::getMotion(std::string filename) {
    std::string path = MMMTOOLS_DATA_DIR;
    std::string MMMRelativePath = "/Model/Winter/";
    MMM::MotionReaderXML reader;
    path += MMMRelativePath + filename;
    return reader.loadMotion(path);
}
 
void DMPMainWindow::moveNode(Eigen::Vector3f offset, VirtualRobot::RobotNodePtr robotNode) {
    Eigen::Vector3f newPosition = robotNode->getPositionInRootFrame() + offset;
    std::cout << "New position: " << newPosition << std::endl;
    VirtualRobot::ConstraintPtr posConstraint(new VirtualRobot::PositionConstraint(mmm, nodeSet, robotNode, newPosition));
    Eigen::VectorXf jointConfig;
    nodeSet->getJointValues(jointConfig);
    VirtualRobot::ConstraintPtr referenceConfigConstraint(new VirtualRobot::ReferenceConfigurationConstraint(mmm, nodeSet, jointConfig));
    referenceConfigConstraint->setOptimizationFunctionFactor(0.5);
 
    // Instantiate solver and generate IK solution
    VirtualRobot::ConstrainedOptimizationIK ikSolver(mmm, nodeSet);
    ikSolver.addConstraint(posConstraint);
    ikSolver.addConstraint(referenceConfigConstraint);
 
    ikSolver.initialize();
    if (!ikSolver.solve())
        std::cerr << "IK solver could not reach the desired position" << std::endl;
}
void DMPMainWindow::moveTCP(Eigen::Vector3f offset) {
    moveNode(offset, nodeSet->getTCP());
}
 
void DMPMainWindow::buildModel(MMM::MotionPtr motion) {
    mmm = MMM::SimoxTools::buildModel(motion->getModel());
    std::vector<std::string> jointNames = motion->getJointNames();
    if (jointNames.size()>0) {
        std::string rnsName("MMMViewerRNS");
        if (mmm->hasRobotNodeSet(rnsName))
            mmm->deregisterRobotNodeSet(mmm->getRobotNodeSet(rnsName));
        nodeSet = VirtualRobot::RobotNodeSet::createRobotNodeSet(mmm, rnsName, jointNames, "", "RWy_joint", true);
    }
}
 
DMP::Vec<DMP::DMPState> DMPMainWindow::getStartConfiguration(const SampledTrajectoryV2& trajectory) {
    Vec<DMPState> initialStates;
    for (int dim = 0; dim < trajectory.dim(); ++dim) {
        DMPState initialState;
        initialState.pos = trajectory.begin()->getPosition(dim);
        initialState.vel = trajectory.begin()->getDeriv(dim, 1);
        initialStates.push_back(initialState);
    }
    return initialStates;
}
 
DMP::Vec<DMP::DMPState> DMPMainWindow::getEndConfiguration(const SampledTrajectoryV2& trajectory) {
    Vec<DMPState> initialStates;
    for (int dim = 0; dim < trajectory.dim(); ++dim) {
        DMPState initialState;
        initialState.pos = trajectory.rbegin()->getPosition(dim);
        initialState.vel = trajectory.rbegin()->getDeriv(dim, 1);
        initialStates.push_back(initialState);
    }
    return initialStates;
}
 
 
 
void DMPMainWindow::playMotion(MMM::MotionPtr motion) {
    this->motion = motion;
    timerID = startTimer(10);
}
 
void DMPMainWindow::playMotions(std::vector<std::pair<MMM::MotionPtr, std::string>> motionPairs) {
    for (int i = 0; i < motionPairs.size(); i++) {
        this->motionPairs.push_back(motionPairs[i]);
    }
    timerID = startTimer(10);
}
 
 
 
void DMPMainWindow::drawTrajectoryPoint(std::string robotNodeName) {
    int tempFrame = drawIndex + static_cast<int>(allGoals.size());
    auto it = trajPointSeparators.find(tempFrame);
 
    if (it == trajPointSeparators.end()) {
        SoSeparator* sep = new SoSeparator;
        SoMaterial* mat = new SoMaterial;
        mat->ambientColor.setValue(1,0,0);
        mat->diffuseColor.setValue(1,1,0);
        mat->transparency.setValue(0.7);
        sep->addChild(mat);
 
        SoTransform* tr = new SoTransform;
        Eigen::Vector3f position = nodeSet->getRobot()->getRobotNode(robotNodeName)->getGlobalPose().block<3,1>(0,3);
        tr->translation.setValue(position[0], position[1], position[2]);
        sep->addChild(tr);
 
        SoCube* cube = new SoCube;
        cube->width = cube->height = cube->depth = 10;
        sep->addChild(cube);
        trajSep->addChild(sep);
        trajPointSeparators[tempFrame] = sep;
    } else {
        SoSeparator* sep = it->second;
        auto transform = dynamic_cast<SoTransform*>(sep->getChild(1));
        Eigen::Vector3f position = nodeSet->getRobot()->getRobotNode(robotNodeName)->getGlobalPose().block<3,1>(0,3);
        transform->translation.setValue(position[0], position[1], position[2]);
    }
    if (firstTime) {
        for (int i = 0; i <  allGoals.size(); i++) {
            auto currentGoal = allGoals[i];
            drawPoint(currentGoal, i);
        }
        firstTime = false;
    }
}
 
void DMPMainWindow::drawPoint(Eigen::Vector3f vector, int i) {
    SoSeparator* sep = new SoSeparator;
    SoMaterial* mat = new SoMaterial;
    mat->ambientColor.setValue(1,0,0);
    mat->diffuseColor.setValue(1,0,1);
    mat->transparency.setValue(0.8);
    sep->addChild(mat);
 
    SoTransform* tr = new SoTransform;
    Eigen::Vector3f position = Vector3f(vector[0], vector[1], vector[2]);
    tr->translation.setValue(position[0], position[1], position[2]);
    sep->addChild(tr);
 
    SoCube* cube = new SoCube;
    cube->width = cube->height = cube->depth = 50;
    sep->addChild(cube);
    trajSep->addChild(sep);
    trajPointSeparators[i] = sep;
 
}
 
void DMPMainWindow::timerEvent(QTimerEvent *) {
    if(currentMotionFrame < 0) {
        if (!forwardPlaying) {
            currentMotionIndex--;
        }
        if (currentMotionIndex >= 0) {
            currentMotionFrame = motionPairs[currentMotionIndex].first->getNumFrames() - 1;
        } else {
            if (!forwardPlaying) {
                currentMotionIndex++;
                drawIndex = 0;
            }
            currentMotionFrame = 0;
            forwardPlaying = true;
        }
 
 
    } else if(currentMotionFrame >= motionPairs[currentMotionIndex].first->getNumFrames()) {
//        killTimer(timerID);
        if (forwardPlaying) {
            currentMotionIndex++;
        }
        if (currentMotionIndex >= motionPairs.size()) {
            if (forwardPlaying) {
                currentMotionIndex--;
                currentMotionFrame = motionPairs[currentMotionIndex].first->getNumFrames() - 1;
            }
            currentMotionFrame--;
            forwardPlaying = false;
        } else {
            currentMotionFrame = 0;
        }
    }
 
    MMM::MotionFramePtr frame = motionPairs[currentMotionIndex].first->getMotionFrame(currentMotionFrame);
    mmm->setJointValues(nodeSet, frame->joint);
    mmm->setGlobalPose(frame->getRootPose());
 
    drawTrajectoryPoint(motionPairs[currentMotionIndex].second);
 
    if (forwardPlaying) {
        drawIndex++;
        currentMotionFrame++;
    } else {
        drawIndex--;
        currentMotionFrame--;
    }
}

</spoiler>

Evaluation

Basic Movements

We see in the podcast that both BasicDMP and SimpleEndVeloDMP have scalability problems when the number of joint movements becomes too large: their movements become somewhat clunky and unnatural. ForceFieldDMP and EndVeloForceFieldDMP, on the other hand, do not have this problem and can therefore be used to generate more natural movements.
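Since all four learner types are exercised the same way, the comparison can be expressed compactly. The following sketch is our own illustration: it assumes that BasicDMP, SimpleEndVeloDMP, ForceFieldDMP, and EndVeloForceFieldDMP all expose the learnFromTrajectory/calculateTrajectory interface used in Snippet 2; the exact signatures in the DMP library may differ.

// Reproduce one demonstrated trajectory with an arbitrary DMP learner type.
// The parameter types mirror the calls in Snippet 2 and are assumptions.
template <typename DMPType>
DMP::SampledTrajectoryV2 reproduce(const DMP::SampledTrajectoryV2& demonstration,
                                   const DMP::DVec& timestamps,
                                   const DMP::DVec& goals,
                                   const DMP::Vec<DMP::DMPState>& startConfiguration)
{
    DMPType dmp;
    dmp.learnFromTrajectory(demonstration);   // fit the DMP to the demonstration
    return dmp.calculateTrajectory(timestamps, goals, startConfiguration, 1.0);
}
 
// The resulting trajectories, e.g. reproduce<DMP::BasicDMP>(...) versus
// reproduce<DMP::ForceFieldDMP>(...), can then be played back and compared.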

Complex Movements

In the podcast we see that we are able to create a complex movement by combining several DMPs. However, some of the red boxes (ground truth) could not be reached by the robot. This is because the DMPs the robot uses define how the motions are executed, the order of the motions, which joints move, and so on. To solve this problem, we could either use a different basic movement or a different DMP learning algorithm.

Podcast

The following podcast shows the results we have achieved in the project.

Note: a correction to a mistake we made in the podcast:

  • TCP = Tool Center Point



Summary

Here is what we learned from this project:

  1. How robots can generalize a movement.
  2. How the different DMP learning algorithms affect the reproduced movements.
  3. How the movements learned by the DMPs can be used to generate similar movements with different start configurations and goal positions.
  4. How several DMPs can be connected sequentially to create a complex movement.

DMPs offer a convenient way to generalize movements and to plan robust motions.

References

  • [1] Ijspeert, A. J. et al. (2013). Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors. Neural Computation, 25(2).
  • [2] Mandery, C. et al. (2015). The KIT Whole-Body Human Motion Database. In: International Conference on Advanced Robotics (ICAR).
  • [3] Terlemez, Ö. et al. (2014). Master Motor Map (MMM) – Framework and Toolkit for Capturing, Representing, and Reproducing Human Motion on Humanoid Robots. In: IEEE-RAS International Conference on Humanoid Robots.
