
SICK vision technology gives robots eyes

Enabling robotics to achieve reliable identification of parts and objects, often in more unstructured and flexible arrangements, has long been a challenge, but the latest vision technologies are now making this possible, as Neil Sandhu of SICK UK explains.

Glimpse into a not-too-distant future and it’s easy to imagine a world where robotics and automation are commonplace. In factory automation, advanced robots and automated vehicles are, of course, already essential to many mass production processes and we are now seeing exciting developments that enable robots and humans to cooperate safely on the factory floor.

To achieve all of these advances, robotic systems, whether mobile or stationary, have dramatically improved their ability to process, learn and communicate information in an increasingly networked world. However, more than anything, to operate safely a robot first needs to have ‘eyes’. In strength, speed, stamina, accuracy and repeatability, a robot already far outperforms a human, but when it comes to sight, robots have historically been held back by the limitations of their vision systems.

Advances in machine vision, such as CMOS imagers, smart cameras, time-of-flight laser sensing and LED lighting, are now unlocking new potential for stereo vision and 3D sensing. At the same time, industry has demanded vision technology that is quicker and easier to install and commission on the factory floor, without the need for the system integrator or OEM to undertake complex programming or even to use a separate PC.

In the past, a pick-and-place robot needed the parts or products it was targeting to be fixed in the same position and orientation, often on a conveyor. Otherwise it would not be able to identify a product or component reliably and grip it in the correct location without damage. Then, as sensing and processing technology advanced, it became possible for robots to recognise and grip objects positioned randomly on a conveyor, whatever their orientation, opening up many more processes as diverse as component assembly and food packaging.

However, a sticking point has been that robotic systems have struggled in more unstructured materials-handling applications. For a human, distinguishing between a pile of different objects in a container and picking out the uppermost one is easy. For a robot, this remained a challenge for vision technology to accomplish reliably. The limitations of 2D vision made it difficult for a robot system to avoid picking occluded objects, to distinguish between similar colours or backgrounds and, most importantly, to calculate accurately the depth and 3D profile of an object so that it could be gripped safely. Components with curved or complex profiles proved particularly difficult.

Now, 3D vision is beginning to be integrated with robots to solve these previously problematic applications reliably. Vision applications in robotic automation have often come about by combining a range of 1D, 2D and 3D image-capture techniques with processing power and software algorithms to solve automation problems.

Picking complex shapes

One significant advance in robotic pick-and-place technology supports the automated handling of raw materials, typically blanks, castings or forgings with more complex shapes, which need to be picked from random configurations in bins or stillages. The robot can then load them automatically to, for example, turning machines, fixtures or feeder systems. Such a requirement is commonplace in automotive part production and in other industrial settings such as machining shops, machine loading and engine assembly, where these components have to be handled at speed.

The SICK PLB500 robot guidance system solves the problem by recognising the correct part profile, calculating which part is uppermost and most accessible for selection, then finding the optimum gripping point and placing the part exactly where required without collisions. It then chooses the next part at another angle and repeats the task at high speed. The PLB robot guidance system incorporates SICK’s ScanningRuler 3D vision solution, which combines 2D and 3D image processing and a built-in light source in a single unit. Its fixed laser with a rotating mirror captures a sequence of laser line profiles across the scanned area, from which a 3D image is built up by triangulation.
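The triangulation step itself can be sketched in a few lines of Python. This is a generic illustration, not SICK’s implementation: it assumes a deliberately simplified geometry in which the laser points straight down at the scene, the camera views from above offset by a known baseline, and the camera follows a pinhole model with focal length in pixels. All function and parameter names are invented for the example.

```python
import numpy as np

def laser_line_depth(pixel_u, focal_px, baseline_m):
    """Depth by triangulation. Assumed geometry: the laser points
    straight down at the scene and the camera, offset sideways by
    `baseline_m`, views it from above. `pixel_u` is the horizontal
    pixel offset of the detected laser line from the principal point
    (non-zero for any finite depth)."""
    pixel_u = np.asarray(pixel_u, dtype=float)
    # Classic triangulation: depth = focal length * baseline / disparity.
    return focal_px * baseline_m / np.abs(pixel_u)

def profile_to_points(pixels_u, pixels_v, focal_px, baseline_m):
    """Back-project one detected laser-line profile (pixel coordinates
    of the line across the image) into 3D points in the camera frame."""
    z = laser_line_depth(pixels_u, focal_px, baseline_m)
    x = np.asarray(pixels_u, dtype=float) * z / focal_px
    y = np.asarray(pixels_v, dtype=float) * z / focal_px
    return np.column_stack([x, y, z])
```

Sweeping the mirror produces one such profile per laser angle; stacking the profiles across the scanned area yields the full 3D image of the bin contents.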

An accurate 2D overlay across the ranged image then enables part identification against known profiles. The software correlates the CAD data of the parts with the position and orientation information captured by the camera, as well as the CAD data of the gripper, to verify collision-free gripping of the parts. This enables visual guidance of the robot that is at once very precise and flexible.
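One common way to recover a part’s position and orientation from matched model and scan points is the Kabsch (orthogonal Procrustes) algorithm, sketched below. This is a generic technique offered for illustration, not a description of the PLB’s internal matching; it assumes point correspondences have already been established (in practice by feature matching or iterative closest point), and the names are invented for the example.

```python
import numpy as np

def rigid_transform(cad_pts, scan_pts):
    """Kabsch algorithm: least-squares best-fit rotation R and
    translation t mapping CAD-model points onto their matched scan
    points (both N x 3 arrays). The pose (R, t) tells the robot where
    the part sits in the sensor frame."""
    cad_c = cad_pts - cad_pts.mean(axis=0)
    scan_c = scan_pts - scan_pts.mean(axis=0)
    H = cad_c.T @ scan_c                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = scan_pts.mean(axis=0) - R @ cad_pts.mean(axis=0)
    return R, t
```

Applying the same (R, t) to the gripper’s CAD model is one way a system can check that the planned grasp clears the bin walls and neighbouring parts before committing to the pick.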

Another problem for vision-guided robot picking in automated assembly is where large parts, for example car body panels, are stored in racks. The parts may hang in slightly different positions and orientations, especially if the racks are bent or the parts are not precisely fixtured. The SICK PLR is a self-contained robot guidance system that combines state-of-the-art 2D and 3D machine vision techniques. The system works by taking a first picture of the part, looking for contrasting features such as drill holes. It then projects a laser cross onto a flat area of the part and takes a second image. From the resulting data, the system calculates the part’s distance and any pitch, roll or yaw, and communicates this information so that the robot can grip the part safely.
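The second-image step can be illustrated with a simple plane fit: 3D points sampled along the projected laser cross define the panel’s local plane, whose normal encodes pitch and roll and whose offset gives the standoff distance. The sketch below is an assumption-laden illustration (camera-frame points, arbitrarily chosen sign conventions, invented names), not the PLR’s actual computation; yaw would come from the 2D feature step.

```python
import numpy as np

def panel_pose(points):
    """Fit a plane to 3D points (N x 3 array) sampled along the
    projected laser cross. Returns the standoff distance and the
    pitch/roll of the panel relative to the sensor. Yaw is left to
    the 2D feature step (e.g. matched drill holes)."""
    centroid = points.mean(axis=0)
    # The singular vector with the smallest singular value of the
    # centred points is the best-fit plane normal.
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]
    if normal[2] < 0:                       # orient towards the sensor
        normal = -normal
    distance = abs(centroid @ normal)       # standoff along the normal
    pitch = np.degrees(np.arcsin(-normal[1]))  # tilt about the X axis
    roll = np.degrees(np.arcsin(normal[0]))    # tilt about the Y axis
    return distance, pitch, roll
```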

SICK has integrated the vision-related processes of calibration and co-ordinate transformation, so an integrator does not need specialist expertise and the system can be set up easily through a teach-in process. The device can be installed and connected to the robot controller, and an existing job configuration loaded or a new one created, in just a couple of minutes. The user interface is operated simply via an integrated web server and monitored from an HMI on the factory floor.

The future development of vision-based applications is likely to similarly combine 1D, 2D and 3D imaging technologies to suit specific robotic tasks. The continuing development of powerful processing tools and communications platforms will also be integral to integrating image-derived data in increasingly demanding applications. In addition, advances in algorithms will enable new capabilities to be retrofitted into existing pick-and-place solutions.

Common communications platforms such as SICK’s 4DPro, which incorporates standard protocols, facilitate easy integration of different devices and real-time communication with a factory network, transforming automation not only within the enterprise but throughout the distribution chain.
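As a purely illustrative example of handing a computed pick pose to a robot controller over a plant network: the JSON-over-TCP transport, message framing and field names below are invented for the sketch, and real deployments would use whatever protocol the controller and sensor actually speak, such as a fieldbus or the vendor’s documented telegrams.

```python
import json
import socket

def send_pick_pose(host, port, pose):
    """Illustrative only: push a computed pick pose to a hypothetical
    robot controller as one JSON line over TCP. `pose` is (x, y, z,
    rx, ry, rz) in millimetres and degrees."""
    msg = json.dumps({
        "x": pose[0], "y": pose[1], "z": pose[2],
        "rx": pose[3], "ry": pose[4], "rz": pose[5],
    }) + "\n"
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(msg.encode("utf-8"))
```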

The more widespread use of fully automated and robotic applications for manufacturing will also go hand in hand with advances in safe motion control of static and mobile machinery to enable human interaction with the minimum of stops. Advances in laser scanning technologies are already pointing the way.

Visit the SICK website for more information
