But if you press harder, you may notice a second way to feel the touch: through your knuckles and other joints. That sensation—a sense of torque, to use robotics jargon—is exactly what the researchers recreated in their new system.
Their robotic arm contains six torque sensors, each of which can register even very slight pressure against any part of the device. By precisely measuring the strength and angle of that force, a series of algorithms can map where a person is touching the robot and determine what exactly they are trying to communicate. For example, a person could use a finger to draw letters or numbers anywhere on the surface of the robotic arm, and the robot could interpret directions from those movements. Any part of the robot could also serve as a virtual button.
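The underlying idea, inferring where and how hard a force is applied from joint-torque readings alone, can be sketched in a few lines of code. The toy Python below is an illustrative assumption, not the authors' implementation: the link lengths, joint angles and grid-search localizer are all made up for the example. It simulates a planar three-link arm in which a touch at an unknown point along the last link produces a joint-torque pattern tau = J(s)^T f, and scanning candidate contact locations for the best least-squares fit recovers both the touch point and the force.

```python
import numpy as np

# Hypothetical sketch (not the published system): infer where, and how
# hard, someone touched a planar three-link arm purely from its joint
# torques, using the identity tau = J_c(s)^T f.

L = np.array([0.4, 0.4, 0.3])        # link lengths in meters (assumed)
q = np.radians([30.0, 20.0, -15.0])  # current joint angles (assumed)

def contact_jacobian(s):
    """Jacobian of a contact point s meters along the last link."""
    a = np.cumsum(q)               # absolute orientation of each link
    r = np.array([L[0], L[1], s])  # segment lengths from base to contact
    J = np.zeros((2, 3))
    for j in range(3):
        # Vector from joint j to the contact point ...
        dx = sum(r[k] * np.cos(a[k]) for k in range(j, 3))
        dy = sum(r[k] * np.sin(a[k]) for k in range(j, 3))
        J[:, j] = [-dy, dx]        # ... rotated 90 deg: revolute-joint column
    return J

# Simulate a touch: a force f at arc length s_true along the last link
# shows up at the joints as tau = J^T f. These play the role of the
# torque-sensor measurements.
s_true, f_true = 0.22, np.array([1.5, -0.8])
tau = contact_jacobian(s_true).T @ f_true

def localize(tau, n_grid=300):
    """Scan candidate contact points; keep the best least-squares fit."""
    best = (np.inf, None, None)
    for s in np.linspace(0.0, L[2], n_grid):
        A = contact_jacobian(s).T  # maps a 2-D force to 3 joint torques
        f, *_ = np.linalg.lstsq(A, tau, rcond=None)
        err = np.linalg.norm(A @ f - tau)
        best = min(best, (err, s, f), key=lambda t: t[0])
    return best[1], best[2]

s_hat, f_hat = localize(tau)
print(f"touch at s = {s_hat:.3f} m (true: {s_true} m), force = {f_hat}")
```

The grid search makes one requirement explicit: there must be more torque readings than the contact force has components, or every candidate location fits the data equally well. The same overdetermination is what lets an arm with six joint sensors pin down a three-dimensional contact force anywhere on its surface.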
This means that every square centimeter of the robot essentially becomes a touchscreen, but without the associated cost, fragility and wiring, says Maged Iskandar, a researcher at the German Aerospace Center and lead author of the study.
“Human-robot interaction, where a human can closely interact with a robot and control it, is not yet optimal because the human needs an input device,” says Iskandar. “If you can use the robot itself as a device, the interactions are more fluid.”
Such a system could offer a cheaper and easier way to give robots a sense of touch, as well as a new way to communicate with them. This could be particularly important for larger robots, such as humanoids, which continue to receive billions of dollars in venture capital investment.