While current systems for controlling industrial robots are highly efficient, their programming interfaces remain complicated, time-consuming, and cumbersome. In this paper, we present Ahumari, a new human-robot interaction method for controlling and programming a robotic arm. With Ahumari, operators use a multi-modal programming technique that combines an optically tracked 3D stick, speech input, and an Augmented Reality (AR) visualization. We also implemented a baseline prototype simulating the status-quo Teach Pendant interfaces commonly used for programming industrial robots. To validate our interaction design, we conducted a user study gathering both quantitative and qualitative data. In summary, the Ahumari interaction technique was rated superior across all aspects: it was found to be easier to learn and to provide better performance in both task completion time and positioning accuracy.

INTRODUCTION

While numerous research projects in the area of Human-Robot Interaction (HRI) focus on enabling humans to communicate with robots at a higher level, HRI in the field of industrial robotics remains more conservative. The associated interaction methods are somewhat outdated compared to the state-of-the-art approaches known from Human-Computer Interaction (HCI).

In its Technical Manual, the U.S. Occupational Safety and Health Administration (OSHA) distinguishes three paradigms for programming industrial robots:

Lead-through programming: In this method, the process is taught either with a device known as a Teach Pendant (also called a Teach Panel) or with an external computer connected to the robot controller (a minimal code sketch of this paradigm follows the list).

Walk-through programming: Here, the operator is in physical contact with the robot and guides its tool to the target. According to Schraft and Meyer, this method is not yet common in industrial robotics, although some commercial products exist in this area.

Offline programming: Using this programming paradigm, the process is specified completely in a virtual environment without any physical robot. This allows for extensive virtual testing and simulation.
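To make the lead-through paradigm concrete, the following minimal Python sketch shows its core loop: the operator jogs the robot to a target, records the current tool pose, and later replays the stored program. The sketch is our own illustration, not code from this paper; the controller interface (RobotController, get_tcp_pose, move_to) is a hypothetical stand-in for a vendor-specific API.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Pose:
        """6 DOF pose of the robot's tool center point (TCP)."""
        x: float
        y: float
        z: float
        roll: float
        pitch: float
        yaw: float

    class RobotController:
        """Hypothetical stand-in for a vendor controller API."""
        def __init__(self) -> None:
            self._pose = Pose(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)

        def get_tcp_pose(self) -> Pose:
            # In a real system: query the current pose from the controller.
            return self._pose

        def move_to(self, pose: Pose) -> None:
            # In a real system: command a motion to the given pose.
            self._pose = pose

    def record_waypoint(controller: RobotController, program: List[Pose]) -> None:
        """Teach step: store the pose the operator jogged the robot to."""
        program.append(controller.get_tcp_pose())

    def replay(controller: RobotController, program: List[Pose]) -> None:
        """Execute the taught program by revisiting each stored waypoint."""
        for waypoint in program:
            controller.move_to(waypoint)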

All three of these programming methods are complicated, time-consuming, and hard to learn; they are therefore ill-suited for small and medium-sized enterprises (SMEs), which require high flexibility and frequent re-programming of robots. To overcome these issues, we propose a novel HRI methodology for controlling and programming a robotic arm. Our technique is based on the lead-through programming paradigm.

In this project, we present Ahumari, a multi-modal interaction (MMI) system that combines six-degree-of-freedom (6 DOF) tracking with speech and touchscreen input and uses Augmented Reality (AR) for visualization. Based on the requirements and feedback we gathered from three robotics companies, we implemented a prototype for controlling and programming a robotic arm. We also implemented a baseline prototype that simulates status-quo robot programming interfaces. This so-called Teach Pendant setup uses a joystick-like 3D Mouse device and a touchscreen.
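To illustrate how such a multi-modal combination can be structured, the sketch below shows one plausible fusion scheme: the tracker continuously updates the latest 6 DOF pose of the stick, and a recognized speech command samples that pose to append or remove a waypoint. This is a minimal sketch under our own assumptions; the class and method names as well as the command vocabulary ("set point", "delete point") are hypothetical and not taken from the Ahumari implementation.

    import threading
    from dataclasses import dataclass

    @dataclass
    class StickPose:
        """6 DOF pose of the tracked stick (hypothetical data format)."""
        position: tuple     # (x, y, z) in the robot's base frame
        orientation: tuple  # quaternion (qx, qy, qz, qw)

    class MultiModalTeacher:
        """Fuses continuous 6 DOF tracking with discrete speech commands."""

        def __init__(self) -> None:
            self._latest_pose = None
            self._lock = threading.Lock()
            self.program = []  # taught waypoints, in teaching order

        def on_tracker_update(self, pose: StickPose) -> None:
            # Called at tracker rate (e.g. 60 Hz) by the tracking driver.
            with self._lock:
                self._latest_pose = pose

        def on_speech_command(self, command: str) -> None:
            # Called whenever the speech recognizer emits a command;
            # the command samples the most recent stick pose.
            with self._lock:
                pose = self._latest_pose
            if command == "set point" and pose is not None:
                self.program.append(pose)
            elif command == "delete point" and self.program:
                self.program.pop()

    # Example: one tracker update followed by a spoken command.
    teacher = MultiModalTeacher()
    teacher.on_tracker_update(StickPose((0.10, 0.25, 0.30), (0.0, 0.0, 0.0, 1.0)))
    teacher.on_speech_command("set point")

Decoupling the continuous pose stream from the discrete speech events in this way keeps each modality independent, which is one common design choice for combining high-rate tracking with sporadic voice commands.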