Using the AI techniques of object recognition and hand-gesture recognition, a camera system recognises simple manual activities. In the current showcase these are hammering, screwing and cutting. With sufficient training, however, further activities and entire work steps can also be detected.
The aim of the showcase is to use this automatic recognition for time management. More precisely, to answer the question "How long does a defined work step in my production line take on average?" Once the work steps and their subdivision into activities have been defined, MAMOC can help answer this question in a statistically meaningful way: each automatically recognised activity receives a precise time stamp and thus provides the statistical basis for computing average durations of these work steps.
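The averaging step described above can be sketched in a few lines. This is a minimal illustration, not MAMOC's actual implementation: it assumes the recogniser emits one (activity, start, end) record per detected activity, with ISO-8601 time stamps.

```python
from collections import defaultdict
from datetime import datetime

def average_durations(events):
    """Compute the mean duration (seconds) per activity from
    timestamped recognition events: (activity, start, end) tuples
    with ISO-8601 time stamps."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for activity, start, end in events:
        t0 = datetime.fromisoformat(start)
        t1 = datetime.fromisoformat(end)
        totals[activity] += (t1 - t0).total_seconds()
        counts[activity] += 1
    return {a: totals[a] / counts[a] for a in totals}

# Example: two hammering events of 12 s and 8 s, one screwing event of 30 s.
events = [
    ("hammering", "2023-05-04T09:00:00", "2023-05-04T09:00:12"),
    ("hammering", "2023-05-04T09:02:00", "2023-05-04T09:02:08"),
    ("screwing",  "2023-05-04T09:05:00", "2023-05-04T09:05:30"),
]
print(average_durations(events))  # → {'hammering': 10.0, 'screwing': 30.0}
```

With more recorded events per activity, the same aggregation yields statistically robust averages for whole work steps.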
That's what MAMOC is about
A simple robot arm, which originally offered only a serial control interface, was upgraded with an Intel RealSense depth camera and an NVIDIA Jetson companion computer.
It can now perceive its environment autonomously and react to it: it recognises pre-trained objects in real time and follows them within its degrees of freedom. This was implemented by combining the robot control with AI-based object recognition (based on TinyYOLO).
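The "follow within its degrees of freedom" behaviour can be illustrated with a simple proportional controller that turns the pixel offset of a detected bounding box into pan/tilt corrections. This is a hedged sketch, not the showcase's actual control code; the function name, gain and deadband values are illustrative:

```python
def track_offset(bbox_center, frame_size, gain=0.1, deadband=10):
    """Turn the pixel offset of a detected object's bounding-box
    centre into pan/tilt corrections for the arm (proportional
    control). bbox_center and frame_size are (x, y) in pixels;
    returns (pan, tilt) in degrees, positive pan = turn right,
    positive tilt = up. Offsets inside the deadband are ignored
    to avoid jitter."""
    dx = bbox_center[0] - frame_size[0] / 2
    dy = bbox_center[1] - frame_size[1] / 2
    pan = gain * dx if abs(dx) > deadband else 0.0
    tilt = -gain * dy if abs(dy) > deadband else 0.0
    return pan, tilt

# Object detected right of and below the centre of a 640x480 frame:
print(track_offset((420, 300), (640, 480)))  # → (10.0, -6.0)
```

In the real setup, the resulting corrections would be sent to the arm over its serial interface each frame, so the camera centre converges on the tracked object.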
In addition, a connected IoT node collects environmental data (temperature, air pressure, acceleration, ...). These readings are gathered together with the robot's data output (e.g. the number of detected objects) and displayed in a single dashboard.
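Merging the two data sources into one dashboard record could look like the following sketch. The field names and the flat JSON layout are assumptions for illustration, not the showcase's actual data schema:

```python
import json

def dashboard_payload(robot, env):
    """Merge robot output and IoT node readings into one flat
    JSON record for the dashboard, prefixing keys by source so
    the two datasets cannot collide."""
    record = {"source": "robot-arm-demo"}
    record.update({f"robot_{k}": v for k, v in robot.items()})
    record.update({f"env_{k}": v for k, v in env.items()})
    return json.dumps(record, sort_keys=True)

payload = dashboard_payload(
    {"detected_objects": 3},
    {"temperature_c": 21.4, "pressure_hpa": 1013.2},
)
print(payload)
```

A dashboard backend would typically receive such records over MQTT or HTTP and plot each prefixed field as its own time series.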
The robot arm acts autonomously, without any cloud technology. When the cloud connection is active, however, the dashboard with the collected machine and environment data can be shared, for example on mobile devices. The showcase thus demonstrates what is possible today with a simple robot arm, even when nothing more than a single serial port is available initially.