
3 Haptic Control Design

In order to investigate learning and control of locomotion, a virtual simulation environment is needed (Rossi, 2015, p. 13). Two environments are used for the simulation: LabVIEW (National Instruments) and MATLAB (MathWorks) with its built-in Simulink module. LabVIEW handles the hardware-coupled interaction with the haptic interface, while MATLAB is used to find adequate initial parameters for (un)stable locomotion conditions.

Figure 4: Basic Principles of Haptic Control Design (mod. after Ma, p. 7)



In haptic control design, there are two loops that interact with each other via a virtual environment. The interface to this environment is a hardware controller in the form of a joystick. When the hardware controller is operated, the contact force feedback is sensed by the human skin receptors (Fig. 4). The tactile feedback information delivered to the brain results in new motor commands executed by the neuromuscular system. This paradigm is called sensorimotor learning (Wolpert, 2011). The virtual environment can thus be regarded as an adaptive control interface with two directions, one for feedback and the other for feedforward.



3.1. Principles of a Haptic Interface

A haptic interface is a sensory-mechanical device that outputs feedback signals (3.2 Role of Feedback) based on user inputs. The human sense of touch plays a major role when it comes to manipulating simulated object parameters (e.g. the spring stiffness value of a SLIP model) in a virtual environment. According to Moussette (2012, p. 40), the haptic sense contributes to human perception capabilities along with other sensations, such as vision and audition. The haptic modality is thus a complex bidirectional interface, which encapsulates perception and action like no other sense. Furthermore, the sense of touch comprises a hierarchy of subsystems that include two sensation modules: tactile and proprioceptive sensation. The tactile module summarizes all sensations arising from an external stimulus to the skin, such as heat, pressure or vibration. Proprioception summarizes the internal functions of small perception organs located in muscles, joints and tendons. More information on the different feedback channels is given in section 3.2 (Role of Feedback).

The output produced by the haptic device allows the user to feel the (force) feedback coming from the instrumented setup. According to MacLean (2008, p. 150), haptic devices exchange power (e.g. forces, vibrations, heat) through contact with parts of the user's body, following a programmed interactive algorithm. Haptics surrounds us every day: from a short text message vibrating on a mobile phone to sports such as boxing, where each hit against the punching bag causes natural pain reactions, haptics is a key contributor to personal improvement within the scope of sensorimotor learning. For essentially every kind of learning task or self-improvement strategy, the human being relies on external feedback.
To implement and understand the basic functioning of haptic process control, the computer must collect data and transmit signals to the controller. In the following, the basic components of a haptic interface are described in detail:

  • Sensors / Actuators
  • Electronic components
  • Mathematical models

Sensors are a key technology in the field of haptics (2.6 Haptic Interactions), e.g. for signal measurement and force feedback. In its mechanical context, a sensor is a transducer that converts a physical stimulus into a measurable signal. Sensors can be classified into internal and external sensors. Internal sensors (e.g. accelerometers, potentiometers, …) are required for the basic functioning of the system, whereas external sensors are exposed to the interaction with the outer environment (vision, force, …). As part of the system description in section 3.3 (System Description), all relevant sensors of the haptic device environment are listed and specified. To make use of the information gained from the sensors, actuation devices are needed. These are hardware devices that convert a control signal into a measurable physical change (e.g. a change of position). Furthermore, a source of energy is required, which may be electric current, hydraulic fluid pressure, or pneumatic pressure. Finally, mathematical models need to be created to synchronize the user input, the output (i.e. feedback), and the haptic interface.

3.2. Role of Feedback

Haptic interaction is realized by a hardware controller, which feeds input to the system, while a visual output of the human-machine interaction is displayed on the screen. This simple setup is widely used for sensorimotor learning tasks. Feedback plays a major role in sensorimotor learning, and developing learning skills is all the more effective if every component of the two sensorimotor loops (Fig. 4) functions well. The use of haptic device interfaces only makes sense if there is a feedback chain going back to the user who is controlling the device. Within the scope of this work, sensory integration is a determining factor in haptic control design. According to Rossi (2015, p. 5), sensory integration is also very important for stable locomotion, because the latter relies on four sensory systems:

  • Visual system
  • Somatosensory system
  • Vestibular system
  • Proprioceptive system

The integration of the sensory information provided by these systems is necessary to achieve an adult gait pattern (Schmuckler, 1990). Sensory information is redirected via the central nervous system to the human brain. Based on these feedback signals, the human subject is able to improve and adapt its movement patterns.

Visual feedback is used to follow the virtual representation of the SLIP model: the trajectory of the CoM under unstable conditions can be monitored and compared intuitively to regular running scenarios. Furthermore, visual feedback may deliver a virtual-reality experience for rehabilitation purposes and experimental setups. The efficacy of visual and somatosensory feedback plays a major role in sensorimotor learning tasks. The somatosensory cortex encodes incoming sensory information from receptors all over the body; based on these signals, new movement programs may evolve. Tactile perception, e.g. excited by force feedback, is important for evaluating the parameter changes (adding an external force or changing the spring stiffness) that the user feeds in through the hardware.

Vestibular information is used efficiently in balancing tasks and other tasks concerned with maintaining equilibrium. Considering the scope of this work, vestibular signals are, compared to the other sensory system components, not very relevant. In contrast, the proprioceptive system provides knowledge of body positions, forces and motions via kinaesthetic end organs located in muscles, tendons and joints (Rossi, 2015, pp. 5f.). Within the scope of haptic interface manipulation, proprioception delivers the information needed for the position handling of the haptic assessment tool. Perceptive sensory organs such as the muscle spindle or the Golgi tendon organ allow a person with closed eyes to know where the haptic device robot is located in 3-dimensional space while someone moves the subject's arms around.

Figure 5: Full System Design (Sensorimotor loop)



According to MacLean (2008, p. 149), the ability to touch has evolved to work in a tight partnership with vision and hearing. As shown in Fig. 5, feedback is introduced to the user on two separate channels, i.e. haptically and visually. A force/haptic feedback mechanism is implemented that returns the handle unit to its initial position when there is no controller displacement; in that case, none of the interfacing parameters change during the simulation of the implemented SLIP model. Haptic feedback is generally divided into two classes: kinaesthetic and tactile feedback. The former addresses proprioceptive sensory receptor organs, such as the Golgi tendon organ or the muscle spindle; the latter describes the perceived feeling on the skin surface, which might be caused by external forces/torques applied to the haptic interface controller. Force feedback devices are designed to address proprioceptive sensors in the muscles, joints and tendons by providing forces that react to the wrist flexion/extension movement. These movements are measured, and in response, forces are computed according to a virtual representation of the displayed physical environment (MacLean, 2008, p. 161).
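The restoring behaviour of the handle can be pictured as a virtual spring-damper acting on the wrist angle. The following MATLAB fragment is a minimal sketch of this idea only; the gains and the function name are hypothetical and are not taken from the Hi5 implementation.

    function tau = restoringTorque(theta, omega)
    % Illustrative virtual spring-damper that pulls the handle back to its
    % initial (zero) position when the user releases it (gains are assumed).
    k_handle = 0.5;    % virtual stiffness [Nm/rad] (assumed value)
    b_handle = 0.01;   % virtual damping [Nm*s/rad] (assumed value)
    tau = -k_handle * theta - b_handle * omega;   % restoring torque [Nm]
    end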

The trajectory of the CoM (center of mass) and the leg compression of the SLIP model are displayed in real-time. The pivot point about which the leg rotates during each stance phase is called the center of pressure (CoP) and is identical to the foot coordinates of the SLIP model. Hence, the spring dynamics of motion are only computed between touchdown and take-off; after take-off, only gravitation acts on the point mass. The visual output of the SLIP model is perceived by the user, and since a human-in-the-loop approach is a major design criterion of the system setup (3.3 System Description), the subject is able either to add an external force acting on the point mass or to modify the leg stiffness, which is simply modelled by a mechanical spring. Once the differences in the height of the point mass grow larger, returning the SLIP model to a stable state becomes an increasingly challenging task.
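These two dynamic regimes can be written down compactly. The following MATLAB sketch states the stance and flight equations of motion of a planar SLIP model; the function signature, the fixed foot position and the purely linear spring are illustrative assumptions rather than the actual thesis implementation.

    function dz = slipDynamics(~, z, m, k, L0, xFoot, g)
    % z = [x; y; vx; vy] : planar state of the SLIP point mass (CoM).
    dx = z(1) - xFoot;             % horizontal distance CoM -> foot (CoP)
    L  = hypot(dx, z(2));          % current leg length
    if L < L0                      % stance: spring force acts along the leg
        F  = k * (L0 - L);         % linear spring, compressed by (L0 - L)
        ax = F * dx   / (L * m);
        ay = F * z(2) / (L * m) - g;
    else                           % flight: only gravitation acts on the mass
        ax = 0;
        ay = -g;
    end
    dz = [z(3); z(4); ax; ay];
    end

Integrated with a standard solver such as ode45, the stance phase ends automatically once the leg extends back to its rest length L0.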

Figure 6: Feedback of the SLIP model displayed on the screen



In Fig. 6, the haptic assessment control joystick is shown, which can be used both as an input and as an output device. Depending on the degrees of freedom of the robotic force feedback assessment tool (i.e. the haptic interface), model parameters of a computed virtual environment model (e.g. a SLIP runner) can be set physically and even modified in real-time within certain limits, such as the workspace and the actuator torque. Based on the principles of error-based learning (Wolpert, 2011, pp. 742f.; Krishnapillai, 2015, pp. 7f.), experimental trials are repeated 5 times. All 5 trials share the same initial parameter set of the experimental setup. Afterwards, another 5 trials are executed with a different start parameter set. The results of all trials, grouped by their initial parameter set, are listed and represented graphically in section 6 (Results).
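The resulting trial protocol can be summarized in a few lines of MATLAB; the parameter values and the runTrial function are placeholders for the actual experimental routine.

    % Illustrative protocol: 5 repetitions per initial parameter set.
    paramSets = {struct('k', 20e3, 'alpha', 68), ...
                 struct('k', 25e3, 'alpha', 70)};     % assumed parameter sets
    nTrials   = 5;
    results   = cell(numel(paramSets), nTrials);
    for p = 1:numel(paramSets)
        for trial = 1:nTrials
            results{p, trial} = runTrial(paramSets{p});   % placeholder call
        end
    end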

3.3. System Description

In the following, the haptic interface system, called Hi5, is described. Due to its design characteristics, the haptic device robot is easily transportable. With a workspace of 60° and overall dimensions of 320 x 120 x 250 mm, the system is usable in different spatial environments. According to Wilhelm et al. (2016), the fully grounded Hi5 interface can be considered a robot-based tactile assessment tool due to its force feedback functionality. The general characteristics of the haptic system are based on an initial CAD design by Melendez-Calderon et al. (2011). Further reading regarding the Hi5 robotic device can be found in the work of Wilhelm et al. (2016). The haptic device can be divided into three interconnected parts:

  • Haptic Robotic Device
  • Controller
  • Target PC (real-time)

Haptic input is fed into the system through the robotic device controller, which is a customized handle unit connected to a Maxon DC motor (model RE-65, 250 W) and a Broadcom encoder (model HEDR-55L2-BY09) with 3600 counts per turn. This DC motor is, among other things, responsible for the force feedback mechanism (3.2 Role of Feedback) implemented in this system. Encoders are needed for position and velocity sensing in a wide range of applications, such as feeding input parameters in real-time into the SLIP simulation environment.
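With 3600 counts per turn, one raw count corresponds to 0.1° of handle rotation; the 0.05° resolution quoted in section 3.3.2 suggests that the counts are decoded in a x2 quadrature mode, which is an assumption here. A minimal conversion sketch in MATLAB, with rawCounts and the sampling rate fs as assumed inputs:

    countsPerTurn = 3600;                     % encoder counts per revolution
    quadFactor    = 2;                        % assumed x2 quadrature decoding (0.05 deg/count)
    degPerCount   = 360 / (countsPerTurn * quadFactor);
    angleDeg      = rawCounts * degPerCount;  % handle angle in degrees
    velDegS       = diff(angleDeg) * fs;      % finite-difference velocity [deg/s]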

The following sections provide more details on the hardware and software levels. The actual interface between the software and the hardware level is the controller.



3.3.1. The Controller

The role of the controller is to convert signals coming from the computing machine into motor currents. In Fig. 7, solid lines represent power lines and dotted lines are low-current lines used for control (dl), sensing (q) and for the emergency button (EB). The pulse-width modulated (PWM) signal carries the symbol dl, which stands for drive line and comprises the Direction and Enable lines for the driver (Imperial College, 2017, p. 2).

Figure 7: General architecture of the controlling unit (mod. after Imperial College, 2017, p. 2)



As can be seen above (Fig. 7), the Hi5 controller embeds a connector block in order to couple the data acquisition card (DAQ) with the system. Data acquisition systems convert analogue waveforms into digital values for processing, i.e. signals that measure real-world physical conditions are sampled and converted into numeric values that can be passed to a computing machine. Furthermore, a power converter with a 48 V output provides the included driver with a current of up to 3 A in total. The controlling system uses the Maxon ESCON 50/5 plug-in module because of its good control properties and its fast digital current controller with a large bandwidth for optimal motor (current/torque) control. According to the system specification paper by Imperial College London (2017, p. 2), the driver is configured to respond to PWM (pulse-width modulation) and direction signals. The maximum current sent to the motor M is saturated to 3 A. Additionally, the fully grounded controller features several electrical safety measures, such as a Schneider Electric switch, which ensures the 220 V line current stays below 6 A at all times. A relay can cut the power line between the power converter and the drivers; it is triggered by the facade power switch of the controller, by the emergency stop button and by one of the 5 V lines of the DAQ (Imperial College, 2017, p. 2). Hence, the drivers can only be powered if the computing machine is turned on. There is a separate Enable line, which is activated by the DAQ system (Imperial College, 2017, p. 2).
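The mapping from a desired feedback torque to the driver command can be sketched as follows. The torque constant is merely inferred from the stated maxima (0.886 Nm at 3 A, see section 3.3.2), and the assumption that the current setpoint scales linearly with the PWM duty cycle is a simplification of the real driver configuration.

    function [duty, direction] = torqueToPwm(tauDes)
    % Illustrative torque -> current -> PWM mapping (not the actual Hi5 code).
    kt   = 0.886 / 3;                           % torque constant [Nm/A], inferred
    iMax = 3;                                   % current saturation [A]
    i    = max(-iMax, min(iMax, tauDes / kt));  % commanded current, clipped to +-3 A
    direction = (i >= 0);                       % separate direction line for the driver
    duty = abs(i) / iMax;                       % duty cycle in [0, 1], assumed linear
    end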

3.3.2. Haptic Robot

The system consists of a wrist haptic interface fixed to a table on which the subject can place the arm, hold a handle and interact through wrist flexion/extension movements. The haptic device controller is equipped with a Maxon RE-65 DC motor (250 W), which allows the experimenter to apply programmed external torques to the wrist joint. Two sensors are necessary in order to record individual torques: the torque sensor (3.1 Principles of a Haptic Interface) and an angular displacement encoder, which can measure a total angular displacement of 360°. This kind of displacement sensor measures the amount by which an object has moved. A torque sensor with a resolution of 1.554 mNm is mounted between the rotating shaft and the handle on each device (Melendez-Calderon, 2011, p. 2579). Considering actuation, a maximum output torque of 0.886 Nm can be reached. Fig. 8 shows the four main components of the system setup, including the haptic robot interface. The position of the handle is recorded in real-time, and the tracked handle position and its velocity can be saved in degrees. At the beginning of each experimental trial, the wrist handle returns to its initial state. However, initial states may vary according to the anatomical properties of the subjects. It is therefore necessary to ensure that there are no ergonomic differences in the way the handle unit is operated.

This handle unit is driven by rotational movements that are transmitted to the subject's wrist. According to Farkhatdinov et al. (2015a, p. 5), an important issue is to design a comfortable, adjustable and safe wrist handle: the user should not feel any discomfort during interaction with the device, and the orientation of the arm as well as the wrist flexion/extension should remain natural. Although the workspace for wrist movement is limited, mechanical safety stops are additionally used to restrict the angular range to 60°, with a resolution of 0.05°. At the software level, safety is guaranteed by limiting the maximum produced torque for the force feedback, as sketched below.
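A minimal sketch of such a software safety layer, assuming a symmetric +-30° workspace around the neutral handle position and a hard clamp at the maximum actuator torque (both assumptions, not the Hi5 source code):

    function tauSafe = limitTorque(tauCmd, thetaDeg)
    % Illustrative software safety layer: clamp the commanded torque and
    % cut actuation near the mechanical stops (workspace split is assumed).
    tauMax  = 0.886;                             % maximum actuator torque [Nm]
    tauSafe = max(-tauMax, min(tauMax, tauCmd));
    if abs(thetaDeg) > 30                        % assumed symmetric +-30 deg range
        tauSafe = 0;                             % never push against the stops
    end
    end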

Figure 8: Haptic System Setup Environment



The controller is specific to the robot. Its role is to convert the PWM (pulse-width modulated) signals coming from the computing machine into motor currents; PWM is a modulation technique used to encode information in the duty cycle of a pulsing signal (Imperial College, 2017).

3.3.3. Computing Machine

In the following, a distinction is made between the host PC and the target PC. The SLIP model is developed and programmed on the host PC, whereas the target PC is responsible for the real-time simulation. Both computing machines are linked via an Ethernet connection. As mentioned in section 3.3.1 (The Controller), the target PC features a data acquisition (DAQ) system (Imperial College, 2017, p. 2).

When implementing software on the host PC, it is important to always work with the same version of LabVIEW. Two main engineering software platforms have been used within the scope of this thesis:

  • National Instruments LabVIEW 2015, Service Pack 1, 15.0.1 (64-bit)
  • MathWorks MATLAB R2015b (64-bit)

For the development of the software programs coupled with the haptic interface, the LabVIEW environment has been used. In the context of software-hardware coupling, LabVIEW is more intuitive and user-friendly: input sources and output sinks can be implemented via drag-and-drop and modified in real-time. All action commands are executed through a graphical user interface (GUI) on a dedicated computing machine running in real-time. The target PC reads the sensor inputs, processes them, and sets the outputs through the data acquisition card within a 1 kHz loop.
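The structure of such a real-time cycle can be sketched as follows; readSensors, computeSlipStep and writeOutputs are placeholders for the actual LabVIEW blocks and DAQ calls, and the busy-wait timing is a simplification of the real-time scheduler.

    dt      = 1e-3;                  % 1 kHz loop period [s]
    running = true;                  % loop flag (placeholder stop condition)
    while running
        t0 = tic;
        q = readSensors();           % handle angle/torque from the DAQ (placeholder)
        u = computeSlipStep(q);      % update SLIP simulation and feedback command
        writeOutputs(u);             % PWM/direction command to the driver (placeholder)
        while toc(t0) < dt, end      % wait until the 1 ms period has elapsed
    end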

The SLIP model (2.1 Spring-loaded Inverted Pendulum model) has been implemented twice, once in LabVIEW and once in MATLAB. MATLAB is used, first, for retrieving stable parameter sets at the beginning of the simulation and, second, for post-processing tasks. Combining two software applications for different implementation purposes and exploiting the benefits of each is a key factor for an efficient solution within the scope of this work. For further explanation regarding the software implementation of the SLIP model, see section 5. As in the experimental setup of Melendez-Calderon (2011), a 22" output screen is used, which allows the presentation of visual feedback to the subject. Additionally, the subject can be provided with information indicating the current wrist position, the applied force or the movement performance during the task.
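How MATLAB can be used to screen for stable parameter sets can be illustrated with a simple apex-return test; simulateOneStep is a placeholder that would integrate one stance and flight cycle (e.g. using the slipDynamics sketch from section 3.2), and the stiffness range and tolerance are assumed values.

    % Illustrative stability screening over a grid of leg stiffness values.
    kGrid   = linspace(10e3, 30e3, 21);   % stiffness candidates [N/m] (assumed)
    y0      = 1.0;                        % initial apex height [m] (assumed)
    stableK = [];
    for k = kGrid
        y1 = simulateOneStep(k, y0);      % next apex height (placeholder)
        if abs(y1 - y0) < 0.01            % apex approximately reproduced
            stableK(end+1) = k;           %#ok<AGROW>
        end
    end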
