<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Robotics Update &#187; Scorpion Vision</title>
	<atom:link href="https://www.roboticsupdate.com/category/stories-by-company/scorpion-vision/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.roboticsupdate.com</link>
	<description>The Online Magazine for Industrial Robots &#38; Automation</description>
	<lastBuildDate>Tue, 28 Apr 2026 08:50:16 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Time-of-flight 3D depth camera range</title>
		<link>https://www.roboticsupdate.com/2024/08/time-of-flight-3d-depth-camera-range/</link>
		<comments>https://www.roboticsupdate.com/2024/08/time-of-flight-3d-depth-camera-range/#comments</comments>
		<pubDate>Thu, 29 Aug 2024 16:13:45 +0000</pubDate>
		<dc:creator><![CDATA[Editor]]></dc:creator>
				<category><![CDATA[All News]]></category>
		<category><![CDATA[Scorpion Vision]]></category>
		<category><![CDATA[Sensors]]></category>
		<category><![CDATA[Vision]]></category>
		<category><![CDATA[3D depth camera]]></category>
		<category><![CDATA[Cube Eye]]></category>
		<category><![CDATA[Scorpion vision]]></category>
		<category><![CDATA[time of flight]]></category>

		<guid isPermaLink="false">http://www.roboticsupdate.com/?p=9031</guid>
		<description><![CDATA[Scorpion Vision is partnering with Cube Eye in selling its Time-of-Flight 3D depth camera range. These products have been several years in the making with expert research and development behind them. The Cube Eye 3D depth camera is a high-performance camera that measures the distance to objects in its field of view using Time-of-Flight (ToF) [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.roboticsupdate.com/wp-content/uploads/2024/08/240831_Scorpion.jpg"><img class="alignright size-medium wp-image-9029" src="http://www.roboticsupdate.com/wp-content/uploads/2024/08/240831_Scorpion-300x225.jpg" alt="240831_Scorpion" width="300" height="225" /></a>Scorpion Vision is partnering with Cube Eye in selling its <a title="Cube Eye time of flight 3D depth camera" href="https://shop.scorpion.vision/collections/cube-eye" target="_blank">Time-of-Flight 3D depth camera</a> range. These products have been several years in the making with expert research and development behind them.</p>
<p>The Cube Eye 3D depth camera is a high-performance camera that measures the distance to objects in its field of view using Time-of-Flight (ToF) technology. It detects the motion and distance of objects over a greater depth range with high accuracy, and it comes in several customisations and variations to meet different customer and application requirements.</p>
<p>It&#8217;s well suited for AR, VR, robotics, industrial automation, machine vision, security, and more.</p>
<h4>What is Time-of-Flight technology?</h4>
<p>Time-of-Flight is a method for measuring the distance between a sensor and an object. The technology measures the time it takes for a light signal to travel from the camera to the object and back again.</p>
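<p>As a back-of-envelope illustration of the principle described above (a sketch, not code from Cube Eye), the measured round-trip time converts to distance via d = c &#215; t / 2, since the light covers the camera-to-object distance twice:</p>

```python
# Illustrative sketch of the Time-of-Flight principle (not Cube Eye firmware):
# distance = (speed of light x round-trip time) / 2

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the object from the measured round-trip time of a light pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after 10 nanoseconds corresponds to roughly 1.5 m:
print(round(tof_distance(10e-9), 3))  # prints 1.499
```

<p>The nanosecond scale of the example is why ToF sensors need very precise timing electronics to resolve centimetre-level depth differences.</p>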
<p>Precise timing allows the camera to calculate the distance to each point in the scene, which is then used to build a detailed 3D depth map. The ToF cameras from Cube Eye use highly advanced sensors to achieve high precision and resolution, which is what makes them so versatile and well suited to a range of applications.</p>
<p>The range of Cube Eye time-of-flight 3D depth cameras comes in several variations, allowing for a tailored solution to your application. This is what makes the cameras so useful across a variety of industries.</p>
<p><strong>Robotics:</strong> Having awareness and understanding of the surrounding environment is paramount in robotics and automation. Cube Eye&#8217;s Time-of-Flight cameras offer accurate 3D mapping so that robots can navigate safely and avoid colliding with obstacles in their surroundings.</p>
<p><strong>Augmented reality (AR) and Virtual reality (VR):</strong> The ToF cameras are excellent at creating realistic AR experiences. They accurately track the position of various objects in the real world and provide accurate 3D data to enhance spatial awareness and create interactive and immersive environments.</p>
<p><strong>Machine vision:</strong> The accurate data collection in the Cube Eye ToF cameras is ideal for machine vision because it helps machines identify and classify objects effectively. They inspect products with high precision and measure distances accurately. This ensures early detection and prevention of defects and maintains high standards of quality control.</p>
<p><strong>Mapping:</strong> The real-time data collection and accurate distance measurements combined with the wide field of view available in the Cube Eye 3D depth camera create accurate and detailed 3D maps of surrounding environments. This makes the cameras perfect for mapping and surveying.</p>
<p><strong>Automotive:</strong> Cube Eye cameras measure distances in real-time, giving more accuracy to car sensors and increasing safety. This technology can also be used in self-driving cars to avoid collisions and provide a full understanding of the surroundings.</p>
<p>Cube Eye ToF 3D Depth cameras are available in a number of variations:</p>
<p><strong>I200D: </strong>The I200D is a small camera that performs well in all lighting conditions, including harsh sunlight, which makes it perfect for outdoor environments. It collects accurate depth data to help robots and drones avoid collisions with obstacles, and it features long-range detection for industrial automation and security systems.</p>
<p><strong>S100D: </strong>The S100D camera delivers 3D depth data in real time at high speed. With a built-in companion chip, it handles the in-depth processing, freeing up the host system to concentrate on other tasks. The compact board design allows for easy system integration. The S100D also has a wide field of view and a resolution of 640&#215;480 pixels.</p>
<p><strong>S110D: </strong>The S110D ToF camera has a wide field of view of 100 degrees and a resolution of 640&#215;480 pixels. It is compact and lightweight, as well as dust- and water-resistant, which makes it suitable for harsh environments.</p>
<p>Cube Eye&#8217;s Time-of-Flight 3D Depth cameras signify rapid innovation in 3D imaging technology. With their high precision, high resolution, and real-time data collection, they&#8217;re the ideal tool for many industries.</p>
<p>If you&#8217;re looking for a solution that uses a camera with Time-of-Flight technology but aren&#8217;t sure which one is best suited to your application, browse the Cube Eye range or get in touch with Scorpion Vision, where a member of our team will be more than happy to chat about your project and advise on the right product for you.</p>
<p>Visit the Scorpion Vision website for more information</p>
<p>See all stories for Scorpion Vision</p>
]]></content:encoded>
			<wfw:commentRss>https://www.roboticsupdate.com/2024/08/time-of-flight-3d-depth-camera-range/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Smart vision cameras for robotics applications</title>
		<link>https://www.roboticsupdate.com/2024/08/smart-vision-cameras-for-robotics-applications/</link>
		<comments>https://www.roboticsupdate.com/2024/08/smart-vision-cameras-for-robotics-applications/#comments</comments>
		<pubDate>Mon, 05 Aug 2024 09:24:25 +0000</pubDate>
		<dc:creator><![CDATA[Editor]]></dc:creator>
				<category><![CDATA[All News]]></category>
		<category><![CDATA[Events]]></category>
		<category><![CDATA[Scorpion Vision]]></category>
		<category><![CDATA[Vision]]></category>
		<category><![CDATA[camera]]></category>
		<category><![CDATA[HIK Robot]]></category>
		<category><![CDATA[Scorpion vision]]></category>
		<category><![CDATA[smart]]></category>
		<category><![CDATA[vision]]></category>

		<guid isPermaLink="false">http://www.roboticsupdate.com/?p=8960</guid>
		<description><![CDATA[At MachineBuilding.live, the Scorpion Vision team will be demonstrating the latest innovations in machine vision. Specifically, the company will be demonstrating a new range of smart cameras from HIK Robot that offer unprecedented image processing capabilities utilising the best optical solutions that intelligently negate the effect of reflective surfaces. The HIK Robot Smart Camera range [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.roboticsupdate.com/wp-content/uploads/2024/08/240805_Scorpion.jpg"><img class="alignright size-medium wp-image-8961" src="http://www.roboticsupdate.com/wp-content/uploads/2024/08/240805_Scorpion-300x225.jpg" alt="240805_Scorpion" width="300" height="225" /></a>At MachineBuilding.live, the <a title="Scorpion Vision" href="https://www.scorpion.vision" target="_blank">Scorpion Vision</a> team will be demonstrating the latest innovations in machine vision. Specifically, the company will be demonstrating a new range of smart cameras from HIK Robot that offer unprecedented image processing capabilities utilising the best optical solutions that intelligently negate the effect of reflective surfaces.</p>
<p>The HIK Robot Smart Camera range is broad, from smart code readers and vision sensors to full-featured machine vision systems with integrated light sources, all in a small footprint.</p>
<p>Scorpion Vision will have image processing experts available at its booth to discuss your production problems. Find out how the company can help you solve niggly production issues using image processing.</p>
<p>Visit the <a title="Machine Building Live" href="https://machinebuilding.live" target="_blank">MachineBuilding.Live website</a></p>
]]></content:encoded>
			<wfw:commentRss>https://www.roboticsupdate.com/2024/08/smart-vision-cameras-for-robotics-applications/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The basics of 3D vision – and the impact of AI</title>
		<link>https://www.roboticsupdate.com/2023/08/the-basics-of-3d-vision-and-the-impact-of-ai/</link>
		<comments>https://www.roboticsupdate.com/2023/08/the-basics-of-3d-vision-and-the-impact-of-ai/#comments</comments>
		<pubDate>Tue, 22 Aug 2023 09:47:32 +0000</pubDate>
		<dc:creator><![CDATA[Editor]]></dc:creator>
				<category><![CDATA[All News]]></category>
		<category><![CDATA[Scorpion Vision]]></category>
		<category><![CDATA[Vision]]></category>
		<category><![CDATA[featured]]></category>

		<guid isPermaLink="false">http://www.roboticsupdate.com/?p=7973</guid>
		<description><![CDATA[Paul Wilson, managing director of Scorpion Vision, explores the fundamental concepts of 3D Machine Vision, including stereo vision, point clouds, pixel displacement, depth perception, and height measurement. 3D Machine Vision is a technology that enables machines to perceive and interpret three-dimensional data from the real world. It combines various imaging techniques and processing algorithms to [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.roboticsupdate.com/wp-content/uploads/2023/08/230822_Scorpion_1.jpg"><img class="alignright size-medium wp-image-7974" src="http://www.roboticsupdate.com/wp-content/uploads/2023/08/230822_Scorpion_1-300x202.jpg" alt="230822_Scorpion_1" width="300" height="202" /></a>Paul Wilson, managing director of Scorpion Vision, explores the fundamental concepts of 3D Machine Vision, including stereo vision, point clouds, pixel displacement, depth perception, and height measurement.</p>
<p>3D Machine Vision is a technology that enables machines to perceive and interpret three-dimensional data from the real world. It combines various imaging techniques and processing algorithms to create a comprehensive representation of an object&#8217;s shape, size, and position in space, allowing machines to perform complex tasks with increased accuracy and efficiency.</p>
<p>3D Machine Vision plays a crucial role in modern industrial applications, including automated inspection, robotics, and quality control. By providing detailed and accurate information about an object&#8217;s geometry, 3D vision systems can enhance production processes, reduce errors, and ensure the highest level of product quality. Moreover, these systems contribute to increased safety in manufacturing environments by enabling robots to collaborate with humans more effectively.</p>
<p><strong>Stereo vision</strong></p>
<p>Stereo vision, also known as stereoscopic vision, is a technique used in 3D Machine Vision to capture depth information from a scene. It works by simulating human binocular vision, where two cameras, placed at a certain distance apart, capture images from slightly different perspectives. By analysing the disparities between these images, a 3D model can be generated, providing valuable depth information.</p>
<p>The stereo baseline is the distance between the two cameras used in stereo vision systems. This distance plays a critical role in determining the system&#8217;s depth perception capabilities. A larger baseline results in greater disparities between the captured images, providing more accurate depth information. However, increasing the baseline also increases the complexity of matching corresponding points in the images, which may lead to inaccuracies. Therefore, selecting an appropriate stereo baseline is essential to achieve the desired balance between depth accuracy and computational complexity.</p>
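<p>The baseline trade-off above can be seen in the standard pinhole stereo relation Z = f &#215; B / d, where Z is depth, f is the focal length in pixels, B is the baseline and d is the disparity. The following is a minimal sketch of that relation (an illustration, not Scorpion's implementation):</p>

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from the pinhole stereo relation Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: camera separation in metres;
    disparity_px: horizontal pixel displacement of the same point between the
    left and right images.
    """
    return focal_px * baseline_m / disparity_px

# With a 700-pixel focal length and a 0.1 m baseline, a 35-pixel disparity
# places the point at 2 m from the cameras:
print(stereo_depth(700, 0.1, 35))  # prints 2.0
```

<p>The formula also shows why a larger baseline improves depth accuracy: at a given depth, doubling B doubles the disparity, so the same one-pixel matching error corresponds to a smaller depth error.</p>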
<p>Stereo vision has been widely adopted in various machine vision applications across different industries. Some common uses of stereo vision include:</p>
<ul>
<li><strong>Robotics:</strong> Stereo vision can help robots navigate their environment, avoid obstacles, and manipulate objects with precision</li>
<li><strong>Automated inspection:</strong> By providing accurate depth information, stereo vision can assist in identifying defects and verifying the dimensions of manufactured products</li>
<li><strong>Autonomous vehicles:</strong> Stereo vision systems can enhance the perception capabilities of self-driving cars, enabling them to detect objects and gauge distances with greater accuracy</li>
<li><strong>Virtual and augmented reality:</strong> Stereo vision can generate realistic 3D models of the environment, improving the immersion and interactivity of virtual and augmented reality experiences.</li>
</ul>
<p>As technology advances, the applications of stereo vision in machine vision systems will continue to expand, opening up new possibilities in various industries.</p>
<p><strong><a href="http://www.roboticsupdate.com/wp-content/uploads/2023/08/230822_Scorpion_2.jpg"><img class="alignright size-medium wp-image-7977" src="http://www.roboticsupdate.com/wp-content/uploads/2023/08/230822_Scorpion_2-300x187.jpg" alt="230822_Scorpion_2" width="300" height="187" /></a>Point clouds</strong></p>
<p>A point cloud is a collection of data points in a three-dimensional coordinate system, representing the external surface of an object or scene. In the context of 3D machine vision, point clouds are essential for extracting depth information and creating accurate 3D models of objects. By processing point clouds, machine vision systems can analyse complex geometries, detect defects, and perform measurements with high precision.</p>
<p>Point clouds can be generated using various techniques, such as stereo vision, structured light, and time-of-flight sensors. In stereo vision, point clouds are obtained by calculating disparities between images captured by two cameras positioned at a specific distance apart. Structured light systems project a pattern onto the object and capture the deformation of the pattern, using the captured data to generate a point cloud. Time-of-flight sensors, on the other hand, measure the time it takes for emitted light to travel to the object and back, calculating the distance to each point and generating a point cloud accordingly.</p>
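<p>Once depth has been obtained by any of these techniques, back-projecting each pixel through the pinhole camera model yields the point cloud. A minimal sketch, assuming known intrinsics (fx, fy, cx, cy are illustrative values, not from any specific camera):</p>

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (rows of metres) into 3D points using
    pinhole intrinsics: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # skip invalid (zero) depth readings
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# A 2x2 depth map with the principal point at (0.5, 0.5); the zero-depth
# pixel is treated as missing data and skipped:
cloud = depth_to_point_cloud([[1.0, 1.0], [0.0, 2.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(len(cloud))  # prints 3
```

<p>Real systems apply the same back-projection per pixel, typically vectorised, and then filter outliers before downstream measurement.</p>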
<p>Point clouds are widely used across multiple industries for various purposes, such as:</p>
<ul>
<li><strong>Quality control:</strong> Point clouds can help identify deviations from the original design, ensuring the manufactured products meet the desired specifications.</li>
<li><strong>Reverse engineering:</strong> By generating point clouds of existing objects, engineers can create accurate 3D models for redesign or replication purposes.</li>
<li><strong>Geospatial applications:</strong> Point clouds are used in surveying and mapping to create detailed representations of terrain, buildings, and infrastructure.</li>
<li><strong>Entertainment:</strong> In film and gaming, point clouds can be used to create realistic 3D models of characters, objects, and environments.</li>
</ul>
<p>As 3D machine vision technology continues to advance, the generation and processing of point clouds will become faster and more accurate, expanding their applications and benefits across various industries.</p>
<p><strong>Pixel displacement</strong></p>
<p>Pixel displacement refers to the difference in the position of a particular point in an object when viewed from two different perspectives. In 3D imaging, pixel displacement is used to calculate depth information by determining the disparity between corresponding points in a stereo image pair. Accurate measurement of pixel displacement is crucial for generating precise 3D models and extracting reliable depth data from images.</p>
<p>Several techniques can be employed to calculate pixel displacement in 3D imaging systems. Some common methods include:</p>
<ul>
<li><strong>Block matching:</strong> This technique involves searching for a small block of pixels in one image that best matches a corresponding block in the other image, calculating the displacement between the two blocks.</li>
<li><strong>Feature-based matching:</strong> In this method, distinctive features such as edges or corners are identified in both images, and the displacement is calculated by matching these features between the images.</li>
<li><strong>Optical flow:</strong> This approach estimates the displacement by analysing the apparent motion of pixels between consecutive frames in a video sequence, assuming that the motion is smooth and continuous.</li>
</ul>
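<p>The block matching approach listed above can be sketched as a sum-of-squared-differences search along one image row (a toy single-row illustration, not a production matcher):</p>

```python
def match_block(left_row, right_row, x, block=3, max_disp=10):
    """Find the disparity of the block starting at x in left_row by searching
    right_row for the offset with the smallest sum of squared differences."""
    ref = left_row[x:x + block]
    best_disp, best_cost = 0, float("inf")
    for d in range(min(max_disp, x) + 1):
        cand = right_row[x - d:x - d + block]
        cost = sum((a - b) ** 2 for a, b in zip(ref, cand))
        if cost < best_cost:
            best_disp, best_cost = d, cost
    return best_disp

# The pattern [9, 1, 9] sits 4 pixels further left in the right image,
# so the measured pixel displacement (disparity) is 4:
left  = [0, 0, 0, 0, 0, 0, 9, 1, 9, 0]
right = [0, 0, 9, 1, 9, 0, 0, 0, 0, 0]
print(match_block(left, right, 6))  # prints 4
```

<p>Running this search for every pixel produces a disparity map, from which depth follows directly.</p>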
<p>The choice of technique depends on factors like image quality, computational complexity, and the desired level of accuracy.</p>
<p>Pixel displacement can have a significant impact on the quality and accuracy of 3D imaging systems. If pixel displacement is not accurately measured, the resulting 3D models may contain errors, leading to incorrect depth information. Moreover, factors like noise, lighting conditions, and occlusions can affect pixel displacement calculations, further impacting the quality of 3D images. Therefore, it is essential to use robust techniques for calculating pixel displacement to ensure the reliability and accuracy of 3D machine vision systems.</p>
<p><strong><a href="http://www.roboticsupdate.com/wp-content/uploads/2023/08/230822_Scorpion_3.jpg"><img class="alignright size-medium wp-image-7976" src="http://www.roboticsupdate.com/wp-content/uploads/2023/08/230822_Scorpion_3-300x300.jpg" alt="230822_Scorpion_3" width="300" height="300" /></a>Depth perception</strong></p>
<p>Depth perception is a critical aspect of 3D machine vision, as it enables systems to determine the distance and position of objects within a scene. Accurate depth information is crucial for various applications, such as robotic manipulation, quality control, and obstacle detection. By capturing and processing depth data, 3D machine vision systems can perform tasks with higher precision and efficiency, leading to improved productivity and reduced errors in industrial processes.</p>
<p>Several factors can influence the depth perception capabilities of imaging systems, including:</p>
<ul>
<li><strong>Camera resolution:</strong> Higher resolution cameras capture more detailed images, which can lead to more accurate depth calculations.</li>
<li><strong>Camera baseline:</strong> As discussed earlier, the stereo baseline plays a significant role in determining the depth perception capabilities of stereo vision systems.</li>
<li><strong>Image quality:</strong> Factors like noise, lighting conditions, and occlusions can affect the accuracy of depth information extracted from images.</li>
<li><strong>Algorithms and processing techniques:</strong> The choice of algorithms and techniques for calculating depth information can impact the accuracy and reliability of the resulting data.</li>
</ul>
<p>To enhance depth perception in machine vision applications, various techniques can be employed, such as:</p>
<ul>
<li>Using higher resolution cameras to capture more detailed images.</li>
<li>Optimising the stereo baseline to balance depth accuracy and computational complexity.</li>
<li>Improving image quality through techniques like noise reduction, adaptive illumination, and HDR imaging.</li>
<li>Employing advanced algorithms and processing techniques for better depth calculation, such as machine learning and AI-based methods.</li>
</ul>
<p>By incorporating these techniques, professionals can develop 3D machine vision systems that deliver accurate and reliable depth information, enhancing the overall performance and effectiveness of their applications.</p>
<p><strong>Height measurement</strong></p>
<p>Height measurement is a crucial aspect of 3D imaging, as it provides valuable information about the size and shape of objects within a scene. Accurate height data is essential for various applications, including quality control, inspection, and robotic manipulation. By obtaining precise height measurements, 3D machine vision systems can ensure that manufactured products meet the desired specifications and perform tasks with increased accuracy and efficiency.</p>
<p>Several techniques can be employed to achieve accurate height measurement in machine vision systems, such as:</p>
<ul>
<li><strong>Stereo vision:</strong> As discussed earlier, stereo vision systems can generate depth information by analysing disparities between images captured by two cameras positioned at a specific distance apart. This depth data can be used to calculate height measurements.</li>
<li><strong>Laser triangulation:</strong> This method involves projecting a laser line onto the object and capturing the deformation of the line with a camera. By analysing the deformation, the system can calculate the height profile of the object.</li>
<li><strong>Structured light:</strong> Similar to laser triangulation, structured light systems project a pattern onto the object and analyse the deformation of the pattern to generate height measurements.</li>
</ul>
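<p>The laser triangulation geometry listed above can be reduced to a single relation under a simplified setup (an assumed idealised geometry, not a specific product): with the laser projected vertically and the camera viewing at an angle &#952; from the laser axis, an observed lateral shift s of the line corresponds to a height h = s / tan(&#952;):</p>

```python
import math

def height_from_shift(shift_m: float, camera_angle_deg: float) -> float:
    """Height of a surface point from the lateral shift of a vertically
    projected laser line, viewed by a camera at camera_angle_deg from the
    laser axis (simplified triangulation geometry)."""
    return shift_m / math.tan(math.radians(camera_angle_deg))

# With the camera at 45 degrees, tan is 1, so the observed shift equals
# the height: a 5 mm shift means a 5 mm tall feature.
print(round(height_from_shift(0.005, 45), 6))  # prints 0.005
```

<p>Sweeping the line across the object and repeating this calculation per column builds the full height profile.</p>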
<p>Height measurement in industrial applications can be challenging due to various factors, such as:</p>
<ul>
<li><strong>Complex object geometries:</strong> Objects with intricate shapes or varying surface properties can pose difficulties in obtaining accurate height measurements.</li>
<li><strong>Occlusions:</strong> Parts of the object may be hidden from the camera&#8217;s view, leading to incomplete data and inaccurate measurements.</li>
<li><strong>Environmental factors:</strong> Lighting conditions, vibrations, and temperature variations can affect the accuracy of height measurements.</li>
</ul>
<p>To overcome these challenges, professionals can employ techniques like adaptive illumination, advanced algorithms, and robust hardware designs to improve the accuracy and reliability of height measurement in machine vision systems. By addressing these challenges, 3D imaging systems can deliver precise height data, enhancing the overall performance of industrial applications.</p>
<p><strong><a href="http://www.roboticsupdate.com/wp-content/uploads/2023/08/230822_Scorpion_4.jpg"><img class="alignright size-medium wp-image-7975" src="http://www.roboticsupdate.com/wp-content/uploads/2023/08/230822_Scorpion_4-300x300.jpg" alt="230822_Scorpion_4" width="300" height="300" /></a>AI in 3D machine vision</strong></p>
<p>Artificial intelligence (AI) has emerged as a powerful tool for enhancing the capabilities of 3D machine vision systems. AI algorithms, particularly deep learning and neural networks, can process complex datasets and extract meaningful information, leading to improved accuracy and efficiency in 3D imaging tasks. By integrating AI into machine vision systems, professionals can develop more sophisticated solutions that can better handle challenging imaging scenarios and deliver reliable results.</p>
<p>AI can address various challenges in 3D machine vision, including:</p>
<ul>
<li><strong>Noise reduction:</strong> AI algorithms can effectively identify and remove noise from images, enhancing image quality and depth accuracy.</li>
<li><strong>Feature detection and matching:</strong> Deep learning techniques can improve the detection and matching of features in stereo images, leading to more accurate depth calculations.</li>
<li><strong>Robustness to occlusions and complex geometries:</strong> AI-powered vision systems can better handle occlusions and complex object shapes by learning to recognise and process these challenging scenarios.</li>
</ul>
<p>By incorporating AI-based solutions, machine vision professionals can overcome common obstacles and improve the performance of their 3D imaging systems.</p>
<p>As AI technology continues to advance, we can expect several developments in AI-powered 3D imaging systems, such as:</p>
<ul>
<li><strong>Improved depth estimation algorithms:</strong> AI models will become more efficient and accurate in estimating depth information, enhancing the overall performance of 3D imaging systems.</li>
<li><strong>Real-time processing:</strong> AI algorithms will enable faster processing of 3D data, paving the way for real-time applications in industries like robotics, autonomous vehicles, and augmented reality.</li>
<li><strong>Adaptive learning:</strong> AI-powered systems will be capable of adapting to new scenarios and environments, improving their performance and reliability in diverse applications.</li>
</ul>
<p>The integration of AI into 3D machine vision systems will continue to drive innovation and growth in the field, offering new opportunities and solutions for professionals in the industry.</p>
<p>By understanding the basic principles of 3D machine vision, professionals in the machine vision and imaging components industry can better comprehend and apply the technology to their specific applications. This knowledge will enable them to develop more efficient and accurate solutions that meet the demands of various industries and applications.</p>
<p>As the field of 3D machine vision continues to evolve, it is crucial for professionals to stay updated with industry news and advancements. Scorpion Vision is committed to providing relevant information, expert insights, and the latest updates in the field, helping professionals stay ahead in this rapidly changing industry.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.roboticsupdate.com/2023/08/the-basics-of-3d-vision-and-the-impact-of-ai/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Labour saving 3D vision solution for chocolates</title>
		<link>https://www.roboticsupdate.com/2023/05/labour-saving-3d-vision-solution-for-chocolates/</link>
		<comments>https://www.roboticsupdate.com/2023/05/labour-saving-3d-vision-solution-for-chocolates/#comments</comments>
		<pubDate>Tue, 30 May 2023 10:46:50 +0000</pubDate>
		<dc:creator><![CDATA[Editor]]></dc:creator>
				<category><![CDATA[All News]]></category>
		<category><![CDATA[Food & Drink]]></category>
		<category><![CDATA[Scorpion Vision]]></category>
		<category><![CDATA[Vision]]></category>

		<guid isPermaLink="false">http://www.roboticsupdate.com/?p=7814</guid>
		<description><![CDATA[Scorpion Vision has developed a sophisticated 3D stereo vision-based inspection solution that ensures perfect presentation for chocolate gifting trays. By performing with near 100% accuracy a task that is usually carried out by one or two humans, Scorpion’s system can help confectionery manufacturers and packers to combat unskilled labour shortages, whilst increasing line efficiency. This [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.roboticsupdate.com/wp-content/uploads/2023/05/230530_Scorpion_3.jpg"><img class="alignright size-medium wp-image-7815" src="http://www.roboticsupdate.com/wp-content/uploads/2023/05/230530_Scorpion_3-300x200.jpg" alt="230530_Scorpion_3" width="300" height="200" /></a>Scorpion Vision has developed a sophisticated 3D stereo vision-based inspection solution that ensures perfect presentation for chocolate gifting trays. By performing with near 100% accuracy a task that is usually carried out by one or two humans, Scorpion’s system can help confectionery manufacturers and packers to combat unskilled labour shortages, whilst increasing line efficiency.</p>
<p>This system deploys the Scorpion 3D Venom camera to inspect chocolates that have been deposited in compartmentalised trays by pick and place robots. Occasionally, after being placed, chocolates will bounce up and either turn upside down or jump out of the tray altogether.</p>
<p>Scorpion has engineered an advanced vision solution that images the chocolates in 3D and takes precision shape measurements to confirm that each chocolate is in the right compartment and position. At the heart of the solution is the global shutter Scorpion 3D Venom camera, which is ideal for applications in cutting-edge 3D stereo vision systems owing to its short baseline.</p>
<p><a href="http://www.roboticsupdate.com/wp-content/uploads/2023/05/230530_Scorpion_1.jpg"><img class="alignleft size-medium wp-image-7817" src="http://www.roboticsupdate.com/wp-content/uploads/2023/05/230530_Scorpion_1-300x264.jpg" alt="230530_Scorpion_1" width="300" height="264" /></a>Paul Wilson, managing director at Scorpion Vision, explains: “The shorter the baseline, the more accurate the stereo or ‘z’ depth – a camera trait that is essential for reliable decision making in this application.</p>
<p>“The system requires several data sets to determine, with high accuracy, whether the right chocolate is in the right position. As well as generating a 3D profile, it relies on precise dimensional measurements, 2D imaging and colour imaging to build a complete and detailed picture. With this data combination, it can even analyse the texture of each chocolate to determine whether they are correctly placed. If a chocolate is upside down, for example, the vision system recognises that the texture or pattern on that chocolate is different to the reference.”</p>
<p>Without this system, factories usually have to build an inline buffer system into their process and deploy manual labour to perform quality checks of chocolate trays. This arrangement generally requires at least two people who have to move quickly to reseat chocolates as they pass down the line. With an automated inspection system, any trays that fail the check are diverted off the line so they can be reworked offline. This eliminates bottlenecks and back-ups, and, most importantly, guarantees perfect tray presentation.</p>
<p><a href="http://www.roboticsupdate.com/wp-content/uploads/2023/05/230530_Scorpion_2.jpg"><img class="alignright size-medium wp-image-7816" src="http://www.roboticsupdate.com/wp-content/uploads/2023/05/230530_Scorpion_2-300x175.jpg" alt="230530_Scorpion_2" width="300" height="175" /></a>This solution was originally engineered with chocolate manufacturing in mind but could just as easily be applied on any production line where compartmentalised presentation trays are used – from cup cakes, patisserie items and pastries to biscuit selections. It is also possible to overlay this solution with AI for enhanced performance and reliability.</p>
<p>“We create a profile of the product in 3D and analyse it for certain reference features. We then use AI to enhance extraction of these features – essentially training the vision system to identify and locate anomalies. The application of AI makes texture and pattern verification easier to do and even more reliable than with the use of 3D vision alone,” says Paul.</p>
<p>This application is a great example of Scorpion’s ability to harness its vision knowledge to solve production problems. Paul adds: “As well as drawing on the wealth of technology that exists within Scorpion, we have curated an extensive toolbox of machine vision components and software products from selected manufacturers whose technology we admire. We know what each and every one of these products can do and our strength lies in choosing which ‘building blocks’ to use for a specific application, and integrating them into seamless, line-ready solutions.”</p>
<p>Visit the Scorpion Vision website for more information</p>
<p>See all stories for Scorpion Vision</p>
]]></content:encoded>
			<wfw:commentRss>https://www.roboticsupdate.com/2023/05/labour-saving-3d-vision-solution-for-chocolates/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>AI+3D vision platform is a robot integrator’s dream</title>
		<link>https://www.roboticsupdate.com/2023/04/ai3d-vision-platform-is-a-robot-integrators-dream/</link>
		<comments>https://www.roboticsupdate.com/2023/04/ai3d-vision-platform-is-a-robot-integrators-dream/#comments</comments>
		<pubDate>Mon, 24 Apr 2023 10:00:54 +0000</pubDate>
		<dc:creator><![CDATA[Editor]]></dc:creator>
				<category><![CDATA[All News]]></category>
		<category><![CDATA[Events]]></category>
		<category><![CDATA[Scorpion Vision]]></category>

		<guid isPermaLink="false">http://www.roboticsupdate.com/?p=7641</guid>
		<description><![CDATA[At the Machine Vision Conference (MVC), co-located with the brand-new BARA exhibition Automation UK, Scorpion Vision (Stand 21) will show how Mech-Mind’s new generation AI+3D robot vision tech can support rapid development of 3D vision for robot picking challenges. The machine vision automation expert will use a random bin picking application to illustrate how, [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.roboticsupdate.com/wp-content/uploads/2023/04/230424_Scorpion.jpg"><img class="alignright size-medium wp-image-7642" src="http://www.roboticsupdate.com/wp-content/uploads/2023/04/230424_Scorpion-300x199.jpg" alt="230424_Scorpion" width="300" height="199" /></a>At the Machine Vision Conference (MVC), co-located with the brand-new BARA exhibition Automation UK, Scorpion Vision (Stand 21) will show how Mech-Mind’s new generation AI+3D robot vision tech can support rapid development of 3D vision for robot picking challenges.</p>
<p>The machine vision automation expert will use a random bin picking application to illustrate how, by automating much of the application building process, Mech-Mind’s intelligent robot solutions can accelerate the design of challenging 3D robot vision applications.</p>
<p>Paul Wilson, managing director at Scorpion Vision, says: “Mech-Mind’s AI+3D robot solutions are both ingenious and simple. They essentially provide the key building blocks that system integrators and machine builders need to design 3D vision driven robots. Instead of having to develop the machine vision software or AI element from scratch, users can simply incorporate these ready-to-use applications into the system architecture.</p>
<p>“This dramatically reduces the time, complexity and cost involved in developing vision-guided robots, making it very easy to create what have previously been considered challenging 3D robot vision applications.”</p>
<p>Mech-Mind’s suite of AI+3D vision solutions for robots combines high performance industrial cameras with intelligent platform software. The core hardware module is the Mech-Eye industrial camera, which has integrated light sources, sensors and processing. Three complementary software applications complete the package: Mech-Vision is the machine vision software element; Mech-Viz is the bundled robot programming software; and Mech-DLK enables creation of deep learning applications. When these building blocks are used in combination, users can quickly design automation applications.</p>
<p>Visit the Scorpion Vision website for more information</p>
<p>See all stories for Scorpion Vision</p>
]]></content:encoded>
			<wfw:commentRss>https://www.roboticsupdate.com/2023/04/ai3d-vision-platform-is-a-robot-integrators-dream/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>3D vision enables robots to peel and core apples</title>
		<link>https://www.roboticsupdate.com/2023/03/3d-vision-enables-robots-to-peel-and-core-apples/</link>
		<comments>https://www.roboticsupdate.com/2023/03/3d-vision-enables-robots-to-peel-and-core-apples/#comments</comments>
		<pubDate>Tue, 14 Mar 2023 12:25:10 +0000</pubDate>
		<dc:creator><![CDATA[Editor]]></dc:creator>
				<category><![CDATA[All News]]></category>
		<category><![CDATA[Case studies]]></category>
		<category><![CDATA[Food & Drink]]></category>
		<category><![CDATA[Scorpion Vision]]></category>
		<category><![CDATA[Vision]]></category>

		<guid isPermaLink="false">http://www.roboticsupdate.com/?p=7532</guid>
		<description><![CDATA[Scorpion Vision is presenting a new application for its AI-powered 3D vision platform. The new 3D+AI solution will enable fresh produce processors to automate the peeling, coring and chopping of apples without impacting yield for the first time ever. Apple processing is an extremely labour-intensive task, but, until now, attempts at automation in this area [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.roboticsupdate.com/wp-content/uploads/2023/03/230314_Scorpion_2.jpg"><img class="alignright size-medium wp-image-7533" src="http://www.roboticsupdate.com/wp-content/uploads/2023/03/230314_Scorpion_2-300x231.jpg" alt="230314_Scorpion_2" width="300" height="231" /></a>Scorpion Vision is presenting a new application for its AI-powered 3D vision platform. The new 3D+AI solution will enable fresh produce processors to automate the peeling, coring and chopping of apples without impacting yield for the first time ever.</p>
<p>Apple processing is an extremely labour-intensive task, but, until now, attempts at automation in this area have been unsuccessful. Both mechanical and 2D vision systems have struggled to contend with the inherent product variability, resulting in unreliable performance and unacceptably high waste.</p>
<p>Scorpion Vision has overcome this challenge by deploying advanced 3D cameras with colour and shape measurement capabilities, and training them to identify and locate certain features (the pips, for example) using AI. This intelligence is then passed to a servo in the form of coordinates, and guides the robotic peeler, knife or corer to perform its action with millimetric precision.</p>
<p><a href="http://www.roboticsupdate.com/wp-content/uploads/2023/03/230314_Scorpion_1.jpg"><img class="alignright size-medium wp-image-7534" src="http://www.roboticsupdate.com/wp-content/uploads/2023/03/230314_Scorpion_1-300x225.jpg" alt="230314_Scorpion_1" width="300" height="225" /></a>The beauty of this approach is that the vision system can be taught to work with variability without impacting processing performance or accuracy. By applying AI to machine vision in its 3D Agritech camera, Scorpion has succeeded in achieving a very high level of repeatability, which is the key to high yield and minimal wastage. Yields of over 99% can be achieved with this bespoke combination of 3D vision and AI, compared with the yields of 70% that are typical for mechanical or 2D vision systems.</p>
<p>Paul Wilson, managing director at Scorpion Vision, says: “We are excited to be bringing our proven AI-optimised vision technology to a new area of the fresh produce industry. It is the first time that a commercially viable solution for automating apple processing has been available. In this respect, AI has been a game-changer, providing the missing link that was needed to build vision-guided robotics that can guarantee high yields in spite of variable subject matter.”</p>
<p>Scorpion’s 3D+AI solution has been tried and tested in a range of produce applications, from topping and tailing corn on the cob, swedes and leeks, to de-coring lettuce, removing the outer leaves from sprouts and de-shelling seafood.</p>
<p>Visit the Scorpion Vision website for more information</p>
<p>See all stories for Scorpion Vision</p>
]]></content:encoded>
			<wfw:commentRss>https://www.roboticsupdate.com/2023/03/3d-vision-enables-robots-to-peel-and-core-apples/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Fast scanning for detection of faults and defects</title>
		<link>https://www.roboticsupdate.com/2023/03/fast-scanning-for-detection-of-faults-and-defects/</link>
		<comments>https://www.roboticsupdate.com/2023/03/fast-scanning-for-detection-of-faults-and-defects/#comments</comments>
		<pubDate>Fri, 03 Mar 2023 10:42:58 +0000</pubDate>
		<dc:creator><![CDATA[Editor]]></dc:creator>
				<category><![CDATA[All News]]></category>
		<category><![CDATA[Comment]]></category>
		<category><![CDATA[Scorpion Vision]]></category>
		<category><![CDATA[Vision]]></category>
		<category><![CDATA[featured]]></category>

		<guid isPermaLink="false">http://www.roboticsupdate.com/?p=7510</guid>
		<description><![CDATA[From batteries to burgers, fast scanning is finding new inspection applications, as Paul Wilson, managing director of Scorpion Vision, explains. Tech advancements such as sophisticated sensor technology, Artificial Intelligence (AI) and faster microprocessors are conspiring to make line-scan camera technology a compelling option in a host of new areas. Inline quality control of web printing [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.roboticsupdate.com/wp-content/uploads/2023/03/230303_Scorpion.jpg"><img class="alignright size-medium wp-image-7511" src="http://www.roboticsupdate.com/wp-content/uploads/2023/03/230303_Scorpion-300x300.jpg" alt="230303_Scorpion" width="300" height="300" /></a>From batteries to burgers, fast scanning is finding new inspection applications, as Paul Wilson, managing director of Scorpion Vision, explains.</p>
<p>Tech advancements such as sophisticated sensor technology, Artificial Intelligence (AI) and faster microprocessors are conspiring to make line-scan camera technology a compelling option in a host of new areas.</p>
<p>Inline quality control of web printing processes is one of the most established fields of application for line-scan imaging – these fast cameras can often be found inspecting printed sheets or textiles for defects such as ink spot marks, embossing defects and mis-registered colours.</p>
<p>Beyond the printing industry, however, their use has been limited to niche pockets of application. In the electronics industry, for example, line-scan cameras are deployed for inspecting printed circuit boards, and in the food industry, they can be found on nut sorting lines, scanning product flows as they cascade in a curtain past the camera.</p>
<p>Essentially, in scenarios where a moving, continuous material needs to be analysed for faults or defects, line-scan cameras will generally provide a better solution than conventional area-scan cameras, owing to fundamental differences between the two camera types.</p>
<h4>Area-scan versus line-scan cameras</h4>
<p>Area-scan cameras capture the data for an entire image in one go and the dimensions of the resulting image correspond to the number of pixels on the sensor. Line-scan cameras, on the other hand, use a single row of light-sensitive pixels to constantly scan moving objects at a high frequency, capturing lots of ‘slices’ of an image which are then combined to construct the final image.</p>
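<p>The line-by-line assembly described above can be sketched in a few lines of Python. This is an illustrative mock-up only, not vendor code: the sensor width, line count and acquire_line helper are assumptions standing in for real frame-grabber calls.</p>

```python
# Illustrative mock-up of line-scan image assembly (not vendor code).
# Each trigger captures one row of pixels; rows accumulate as the
# object moves past, so image height depends on travel, not the sensor.
LINE_WIDTH = 2048   # pixels in the sensor's single row (assumed value)
NUM_LINES = 1200    # line captures taken while one object passes (assumed)

def acquire_line(i: int) -> list[int]:
    """Stand-in for a hardware line capture; returns synthetic data."""
    return [(i + x) % 256 for x in range(LINE_WIDTH)]

# The frame grabber stacks the 'slices' into the final 2D image.
frame = [acquire_line(i) for i in range(NUM_LINES)]
print(len(frame), len(frame[0]))  # 1200 2048
```

<p>The point of the sketch is that, unlike an area-scan frame, the image height is open-ended: the camera keeps appending rows for as long as material passes by.</p>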
<p>Because of this, area-scan cameras are not as suited to very fast web-based applications but are easier to install and use than line-scan cameras, making them ideal for straightforward machine vision tasks.</p>
<p>That said, the advent of bigger, faster and more sensitive area-scan sensors means that it is not unusual to see a CMOS area-scan sensor being used where a line-scan camera would have been the default option. The ability to adjust the active area on the sensor means that a line-scan camera can be emulated in some cases.</p>
<p>The complexity of implementing line-scan cameras can be off-putting. Although capable of higher speed processing, line-scan cameras are more complicated and costly to install, mainly because the line rate of the camera must be synchronised to the speed of the object being detected.</p>
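<p>The synchronisation requirement can be made concrete with some back-of-envelope arithmetic. All figures below are illustrative assumptions, not values for any specific camera or line:</p>

```python
# Why line rate must track object speed: one line must be captured each
# time the object advances by one pixel's worth of travel, or the image
# is stretched or compressed. Figures are illustrative assumptions.
belt_speed_mm_s = 500.0   # conveyor speed
pixel_size_mm = 0.1       # object travel covered by one scanned line

required_line_rate_hz = belt_speed_mm_s / pixel_size_mm
print(required_line_rate_hz)  # 5000.0 lines per second

# In practice the trigger usually comes from a shaft encoder on the
# conveyor, so the line rate follows speed changes automatically.
encoder_pulses_per_mm = 100
pulses_per_line = encoder_pulses_per_mm * pixel_size_mm
print(pulses_per_line)  # trigger one line every 10.0 pulses
```

<p>Encoder-driven triggering of this kind is one reason line-scan installation costs more: it couples the camera configuration to the mechanics of the line.</p>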
<h4>Tech progress drives line-scan camera uptake</h4>
<p>However, tech advancements in the last few years have meant that industries such as food, pharmaceuticals, e-commerce and logistics can no longer afford to ignore the performance advantages of line-scan cameras.</p>
<p>Advancements in Image Signal Processors (ISPs) are facilitating higher quality and faster processing of 3D images in more demanding environmental and lighting conditions. This enables the cameras to detect more critical detail and capture higher resolution images.</p>
<p>For example, HIKROBOT, represented by Scorpion Vision in the UK and Ireland, earlier this year launched a 16K line-scan camera that is capable of detecting minute defects in PCB, EV battery, semiconductor, print and film inspection applications.</p>
<p>At the same time, system designers are harnessing GPUs (Graphics Processing Units) from the gaming industry for image processing. This PC graphics hardware can reduce algorithm and data processing time and enables the use of AI-powered analysis.</p>
<p>Using AI to enhance pattern matching capabilities improves and accelerates inspection performance. The combination of advanced image sensor technology and AI is enabling line-scan cameras to infer increasingly complex insights from the vast amounts of vision data they capture.</p>
<p>Sophisticated sensor technology has also provided solutions to the problem of adjusting the line rate to match the speed of the material under inspection, enabling accurate, meaningful 3D analysis of the image at high frequencies using software algorithms.</p>
<h4>Making mincemeat of burger inspection</h4>
<p>Scorpion Vision has recently designed an AI-powered line scanning system to inspect IQF (individually quick frozen) burgers for visual abnormalities and defects. The bespoke system, which incorporates two Scorpion 3D Stinger cameras above and below the conveyor belt, ensures that every single burger that passes through the line is visually perfect – to meet the increasingly stringent presentation demands of both consumers and retailers.</p>
<p>The camera system will check each frozen burger is exactly the correct shape and size, shows no signs of discolouration, freezer burn or ice crystal formation and is free from unsightly visual abnormalities such as large lumps of fat.</p>
<p>There are two reasons why a line-scan camera is the best solution for this application. Firstly, the inspection needs to take place while the frozen burgers are being transported to a robotic pick and place packing system on a very fast moving conveyor belt. An area-scan camera would not be able to perform the required imaging at this high speed. The line-scanning system designed by Scorpion is able to locate, inspect and measure the burgers in real time.</p>
<p>Secondly, an area-scan camera would only be able to image the surface that faces upwards, not the underside of the burger. With the line-scan system, the burgers are passed over a very narrow gap between two conveyors and two cameras (one above and one below the conveyor) build up a 3D image of the complete burger as it passes over the gap.</p>
<p>The scanning unit incorporates two 3D Stinger cameras built into enclosures with internal polarised light sources, an arrangement that enables robust acquisition of images on reflective surfaces. Scorpion Vision’s proprietary AI-optimised software analyses these images in real-time for reference features that have been established through deep learning, and any burgers that exhibit abnormalities are immediately rejected from the line.</p>
<p>This is only the start of what is possible when AI is combined with fast scanning technology, but it demonstrates how AI-enabled line scanning systems are breaking new ground in inspection speed, accuracy and repeatability.</p>
<p>Visit the Scorpion Vision website for more information</p>
<p>See all stories for Scorpion Vision</p>
]]></content:encoded>
			<wfw:commentRss>https://www.roboticsupdate.com/2023/03/fast-scanning-for-detection-of-faults-and-defects/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Top trends in industrial cameras and optics</title>
		<link>https://www.roboticsupdate.com/2023/01/top-trends-in-industrial-cameras-and-optics/</link>
		<comments>https://www.roboticsupdate.com/2023/01/top-trends-in-industrial-cameras-and-optics/#comments</comments>
		<pubDate>Fri, 27 Jan 2023 12:49:06 +0000</pubDate>
		<dc:creator><![CDATA[Editor]]></dc:creator>
				<category><![CDATA[All News]]></category>
		<category><![CDATA[Scorpion Vision]]></category>
		<category><![CDATA[Vision]]></category>

		<guid isPermaLink="false">http://www.roboticsupdate.com/?p=7425</guid>
		<description><![CDATA[Faster and higher-resolution image sensors, interfaces that support ever increasing bandwidths, simpler integration, miniaturisation and high on-board processing power are driving innovation in machine and embedded vision applications. Paul Wilson, managing director of Scorpion Vision, charts the current trends in cameras and components. Ultra high resolution image sensors: image sensor megapixel counts are continuing to climb, [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.roboticsupdate.com/wp-content/uploads/2023/01/230127_Scorpion.jpg"><img class="alignright size-medium wp-image-7427" src="http://www.roboticsupdate.com/wp-content/uploads/2023/01/230127_Scorpion-300x225.jpg" alt="230127_Scorpion" width="300" height="225" /></a>Faster and higher-resolution image sensors, interfaces that support ever increasing bandwidths, simpler integration, miniaturisation and high on-board processing power are driving innovation in machine and embedded vision applications. Paul Wilson, managing director of Scorpion Vision, charts the current trends in cameras and components.</p>
<p><strong>Ultra high resolution image sensors:</strong> image sensor megapixel counts are continuing to climb, enabling industrial cameras to capture more picture detail. This not only means that microscopic defects can be detected, but also that a single camera can cover a larger area. Applications that have historically required multiple cameras may be carried out using just one high resolution camera, introducing huge potential for cost savings through reduced complexity, processing, management and capital outlay. Using a single camera eliminates the need to stitch together multiple images – driving performance improvements by shortening image processing time.</p>
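<p>The coverage argument can be illustrated with simple arithmetic. The defect size, pixel budget and sensor resolutions below are assumed values chosen purely for the sake of the sketch:</p>

```python
# Back-of-envelope sketch: at a fixed minimum defect size, the field of
# view one camera can cover scales with its pixel count. All values here
# are illustrative assumptions.
defect_size_mm = 0.2
pixels_per_defect = 3   # minimum pixels across a defect to detect it
resolution_mm_per_px = defect_size_mm / pixels_per_defect

def coverage_mm(width_px: int, height_px: int) -> tuple[float, float]:
    """Largest field of view achievable at the required resolution."""
    return (width_px * resolution_mm_per_px, height_px * resolution_mm_per_px)

low_res = coverage_mm(2048, 1536)    # ~3 MP camera
high_res = coverage_mm(8192, 6144)   # ~50 MP camera
print(low_res, high_res)  # the 50 MP sensor covers 4x the width and height
```

<p>Quadrupling linear pixel count quadruples the field of view at the same detection resolution, which is why one high resolution camera can replace a grid of lower resolution ones.</p>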
<p><strong>Larger lenses:</strong> the bigger the sensor, the larger the pixel area and the better the image quality, so higher resolution sensors go hand in glove with larger lenses if a ‘tunnel effect’ is to be avoided. This is a major consideration when designing machine vision systems, as c-mount lenses are designed for sensors of up to a little more than one inch – anything bigger and a large diameter aperture lens is needed.</p>
<p><strong>Micro lenses:</strong> at the opposite end of the lens size spectrum there is a surge in demand for M12 miniature lenses for use in embedded cameras, drones, robotics and autonomous vehicles. One of the reasons for this rise in popularity is that the quality of these lenses has improved dramatically in recent years, enabling them to be deployed in applications that were previously the preserve of c-mount lenses. Miniature lenses can now be incorporated into machine vision cameras, resulting in highly compact systems. This is particularly advantageous where space is at a premium, and it brings a cost benefit too.</p>
<p><strong>Filter application technologies:</strong> the use of filters to block out certain bandwidths of light is nothing new in itself, but the technologies for applying filters to industrial cameras are evolving. For example, adapters for M12 lenses are now available that allow ambient light to be filtered out in the same way as with c-mount lenses. This means M12 lenses can be deployed in light sensitive applications, such as in robotics systems for factories where ambient light is constantly changing. The use of filters that only allow light in the near infrared bandwidth is commonly used to mitigate against varying ambient light.</p>
<p><strong>10 GigE protocol:</strong> as the successor to GigE, 10GigE provides the same benefits but with a ten-fold increase in data rate and frame rate. To take a step back, historically, machine vision system designers have had a choice of GigE or USB3 as the protocols for transmitting high-speed video and related data over ethernet networks. This decision tended to hinge on the length of cable required, which in turn related to the number of cameras. USB is rated for five metres or less, whereas a GigE interface can function with a cable length up to 100 metres. The trade-off with GigE is speed – a USB3 connection can transmit data five times faster than a GigE system. The advent of 10GigE will enable the capabilities of high performing image sensors – until now, limited by the bandwidth that could be achieved with the available interfaces – to be realised and exploited.</p>
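<p>A rough calculation shows what the bandwidth difference means in frame-rate terms. The nominal data rates and the 12 MP sensor below are approximate assumptions; real-world throughput is lower once protocol overhead is accounted for:</p>

```python
# Approximate, assumption-laden comparison of camera interface speeds.
# Nominal data rates: GigE ~1 Gbit/s, USB3 ~5 Gbit/s, 10GigE ~10 Gbit/s.
interfaces_gbit_s = {"GigE": 1.0, "USB3": 5.0, "10GigE": 10.0}

# Example sensor: 12 MP at 8 bits per pixel (illustrative values).
frame_bits = 12 * 1_000_000 * 8

# Achievable frame rate is roughly available bandwidth over frame size.
for name, gbit_s in interfaces_gbit_s.items():
    fps = gbit_s * 1e9 / frame_bits
    print(f"{name}: ~{fps:.0f} fps")
# GigE: ~10 fps, USB3: ~52 fps, 10GigE: ~104 fps
```

<p>On these rough numbers, a sensor that GigE throttles to around 10 fps can stream at triple-digit frame rates over 10GigE, which is the bottleneck-removal the paragraph above describes.</p>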
<p><strong>Embedded vision:</strong> these low cost cameras are increasingly infiltrating the industrial world. The main difference between embedded vision and machine vision is that embedded camera technology is far simpler owing to limited data processing capacity. This means it is more suited to gathering data, which is then analysed on a cloud-based platform, than to processing data and making decisions in real time. There are several examples of embedded cameras being used in a commercial context: in vertical farms they are monitoring and adjusting ambient conditions for optimum plant growth; and in a retail context they are utilised in stock-taking robots.</p>
<p><strong>Stereo vision for the masses:</strong> in the past, stereo vision was seen as the domain of experts and required investment in costly software. Now, thanks to the development of low cost stereo vision sensors by companies such as Arducam, cameras can be paired with open source stereo vision AI algorithms and 3D capabilities to create systems that are very proficient at depth sensing – an example might be a robot that navigates its way around a warehouse. The limitations of this technology should be respected though – the hardware isn’t capable of tasks that require high accuracy and repeatability, which remain the domain of traditional, dedicated machine vision cameras.</p>
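<p>Depth sensing of this kind rests on the classic pinhole stereo relation, where depth is recovered from the disparity between matched pixels in the two views. The focal length, baseline and disparity values below are illustrative assumptions, not figures for any particular sensor:</p>

```python
# Hedged sketch of depth from disparity in a calibrated stereo pair.
# Z = f * B / d, with f in pixels, B in metres and d in pixels.
# Values are illustrative assumptions only.
focal_length_px = 1000.0  # focal length expressed in pixel units
baseline_m = 0.25         # distance between the two camera centres
disparity_px = 25.0       # pixel shift of a matched feature between views

depth_m = focal_length_px * baseline_m / disparity_px
print(depth_m)  # 10.0 metres to the feature
```

<p>Because disparity shrinks as distance grows, small matching errors translate into large depth errors for far-away objects – one reason low cost stereo hardware suits navigation better than precision measurement.</p>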
<p>Scorpion Vision offers a complete range of industrial cameras and optics, from low cost, low resolution embedded cameras and boards to high resolution cameras with large sensors for more demanding machine vision applications. Scorpion’s expansive catalogue of hardware and software products provides state-of-the-art building blocks for OEMs and system integrators. Scorpion represents Hikrobot, The Imaging Source, Arducam and Hypersen in the UK and Ireland for industrial cameras and other accessories.</p>
<p>Visit the Scorpion Vision website for more information</p>
<p>See all stories for Scorpion Vision</p>
]]></content:encoded>
			<wfw:commentRss>https://www.roboticsupdate.com/2023/01/top-trends-in-industrial-cameras-and-optics/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Machine vision web shop opens for business</title>
		<link>https://www.roboticsupdate.com/2022/12/machine-vision-web-shop-opens-for-business/</link>
		<comments>https://www.roboticsupdate.com/2022/12/machine-vision-web-shop-opens-for-business/#comments</comments>
		<pubDate>Thu, 22 Dec 2022 09:06:42 +0000</pubDate>
		<dc:creator><![CDATA[Editor]]></dc:creator>
				<category><![CDATA[All News]]></category>
		<category><![CDATA[Scorpion Vision]]></category>
		<category><![CDATA[Vision]]></category>

		<guid isPermaLink="false">http://www.roboticsupdate.com/?p=7371</guid>
		<description><![CDATA[Scorpion Vision has launched what it claims is the industry’s biggest online marketplace for machine vision and imaging at shop.scorpion.vision. The e-commerce site catalogues over 1,000 cameras and components, including lenses, machine vision light sources, specialist cables, industrial PCs, interface boards and software packages, making it a truly comprehensive B2B web shop for cameras and [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.roboticsupdate.com/wp-content/uploads/2022/12/221221_Scorpion.jpg"><img class="alignright size-medium wp-image-7372" src="http://www.roboticsupdate.com/wp-content/uploads/2022/12/221221_Scorpion-300x226.jpg" alt="221221_Scorpion" width="300" height="226" /></a>Scorpion Vision has launched what it claims is the industry’s biggest online marketplace for machine vision and imaging at <a title="Scorpion Vision shop" href="https://shop.scorpion.vision" target="_blank">shop.scorpion.vision</a>.</p>
<p>The e-commerce site catalogues over 1,000 cameras and components, including lenses, machine vision light sources, specialist cables, industrial PCs, interface boards and software packages, making it a truly comprehensive B2B web shop for cameras and accessories. Delivery is free on orders within the UK, and many items – such as embedded cameras and M12 lenses – are available for next day delivery.</p>
<p>The site fills a gap in the market for a trusted platform where online transactions are accompanied by specialist vision systems knowledge and advice. The site’s live chat facility means buyers can discuss any questions they might have with a Scorpion Vision imaging expert &#8211; allaying any fears about making costly purchasing mistakes. The site is also full of useful tips and recommendations for guiding users on which components to use. For example, buyers perusing C-Mount lenses can click through to a ‘Knowledge Base’ article on how to select a lens. In addition, the site features ‘lens selector’ and ‘camera selector’ facilities, which will match users with their perfect lens or camera by filtering search criteria.</p>
<p>Paul Wilson, managing director at Scorpion Vision, says: “E-commerce has been slower to take off in B2B industries than in B2C markets. This is partly because purchases tend to be higher value and therefore require greater trust between buyer and seller, and also because buyers of industrial systems or components want to ask questions before they buy.</p>
<p>“We have been active in the imaging market for over 25 years, and are already known as the leading offline supplier of vision cameras and components. This is about using the trust and experience we have built up to create a first-of-its-kind e-commerce site that gives customers specialist knowledge at their fingertips.”</p>
<p>Visit the Scorpion Vision website for more information</p>
<p>See all stories for Scorpion Vision</p>
]]></content:encoded>
			<wfw:commentRss>https://www.roboticsupdate.com/2022/12/machine-vision-web-shop-opens-for-business/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Tackling the robotics and vision engineering skills gap</title>
		<link>https://www.roboticsupdate.com/2022/12/tackling-the-robotics-and-vision-engineering-skills-gap/</link>
		<comments>https://www.roboticsupdate.com/2022/12/tackling-the-robotics-and-vision-engineering-skills-gap/#comments</comments>
		<pubDate>Tue, 20 Dec 2022 10:29:43 +0000</pubDate>
		<dc:creator><![CDATA[Editor]]></dc:creator>
				<category><![CDATA[All News]]></category>
		<category><![CDATA[Comment]]></category>
		<category><![CDATA[Scorpion Vision]]></category>
		<category><![CDATA[Vision]]></category>

		<guid isPermaLink="false">http://www.roboticsupdate.com/?p=7362</guid>
		<description><![CDATA[Parents who despair about the number of hours their children spend glued to video games should take heart; today’s gamers are likely to be the robotic and vision engineers of tomorrow, says Scorpion Vision. It’s no secret that the UK manufacturing industry is experiencing a massive skills shortage. Indeed, the latest ONS figures show that [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.roboticsupdate.com/wp-content/uploads/2022/12/221220_Scorpion.jpg"><img class="alignright size-medium wp-image-7364" src="http://www.roboticsupdate.com/wp-content/uploads/2022/12/221220_Scorpion-300x199.jpg" alt="221220_Scorpion" width="300" height="199" /></a>Parents who despair about the number of hours their children spend glued to video games should take heart; today’s gamers are likely to be the robotic and vision engineers of tomorrow, says Scorpion Vision.</p>
<p>It’s no secret that the UK manufacturing industry is experiencing a massive skills shortage. Indeed, the latest ONS figures show that between August and September this year, the number of vacancies was over 67% higher than pre-pandemic levels.</p>
<p>All areas of manufacturing are struggling to attract new talent but in robotics and vision engineering, the situation is compounded by the post-pandemic boom the sector is experiencing. Demand for 3D machine vision systems has accelerated with the growing adoption of automation in a range of industries – a trend that, ironically, is being driven in part by the labour crisis.</p>
<p>If robotics and vision engineering is to keep pace with this demand and advance in terms of technology, the sector needs people who are capable of designing, building and programming robotic vision systems. Skills and innovation go hand in hand. But where is this next generation of talent going to come from?</p>
<p>Many parents today despair about the number of hours that their children spend in darkened rooms playing video games. What they don’t realise is that these ‘digital natives’ possess skills that will be extremely valuable in a workplace that is increasingly reliant on digital and virtual reality technology.</p>
<p>Digitalisation is redefining engineering as a practice. As digital technologies become more and more commonplace, engineers will need new skills to take full advantage of them. In other words, tomorrow’s engineers will need both traditional engineering skills and software engineering know-how, including knowledge of 3D modelling, AI and data science.</p>
<p>Apprenticeships and more flexible, non-traditional learning mechanisms will be crucial for fostering new talent and developing hobby gamers into highly skilled robotic and vision engineers.</p>
<h4>Earn while you learn: case study</h4>
<p>Alex Charles, a machine vision engineer at Scorpion Vision, is a great example of how the apprenticeship pathway can be a win-win for businesses as well as employees. A keen gamer with ‘A’ levels in Maths and Computer Science, Alex knew he wanted to go into a computer science related career, but wasn’t convinced university was the right route for him.</p>
<p>“I think that if you are academic at school, you are often pushed down the university route, but graduates build up a lot of debt and it is not necessarily the best way to gain experience,” says Alex.</p>
<p>Scorpion Vision’s apprentice scheme, which allows budding vision engineers to learn whilst in employment, was an attractive alternative. Apprentices spend three days at work and two days studying at college and are paid a full-time wage. “This programme gave me the best of both worlds – I was able to gain hands-on experience by working on projects at Scorpion whilst studying for an HNC in Computing,” he says.</p>
<p>Alex firmly believes that he has learned far more by working on real-life build and design projects and visiting customers than he ever would have done sitting in a lecture theatre or library.</p>
<p>“By working in industry I have gained skills that simply aren’t taught on any course curriculum. I’ve learned how to program robotic vision systems to trim vegetables accurately and pick and place products into packaging; I’ve learned how AI can be harnessed to enhance image processing; and I’ve accumulated lots of knowledge about optics, lenses, cameras and lighting – hardware as well as software.”</p>
<p>Alex has now been at Scorpion Vision for almost three years, and his next ambition is to expand his knowledge of AI-based systems by taking an Open University course. Paul Wilson, managing director at Scorpion Vision, says: “Alex has gone from strength to strength in his time with us to become one of our most valued and sought-after engineers. He is a shining example of how apprenticeships can benefit both the individual and the business.</p>
<p>“As a company, we have always had very positive experiences with apprenticeships and believe they are one of the most important approaches available to the industry today for bridging the skills gap in engineering.”</p>
<p>Scorpion Vision is now looking for its next apprentice.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.roboticsupdate.com/2022/12/tackling-the-robotics-and-vision-engineering-skills-gap/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
