Overview of AI-Based Vision Systems on the Market

A brief overview of ready-to-use vision solutions available on the market that implement AI-based image processing tools. Written by Y. Belgnaoui.

The AX Smart Cameras integrate Nvidia Jetson AI modules and Sony CMOS sensors with a freely programmable image processing platform.

According to their manufacturers, implementing AI-based vision systems does not require specific expertise: it is enough to present the camera with a set of reference images, larger or smaller depending on the task. AI image processing techniques aim to detect defects and irregularities in products during production that were previously difficult or even impossible to identify using traditional image processing methods.

AX Smart Cameras from Baumer
The AX Smart Cameras from Baumer, designed for industrial use, are built on the NVIDIA Jetson platform, which provides not only a GPU but also dedicated AI ASICs in the form of DLA (Deep Learning Accelerator) cores. They capture up to 300 images per second for AI-assisted object classification and can also be used for traditional image processing. Images can be compressed as JPEG and sent directly to the cloud to continue training and improve the neural network. Thanks to the Linux-based approach, the programming language can be freely chosen to suit the application, and third-party image processing libraries or APIs (Application Programming Interfaces) can be used. The computing power of the NVIDIA Jetson AI modules, the openness of the system, and the use of established standards allow programmers to leverage existing AI libraries, tools, and models, all of which is intended to simplify the deployment of vision-based inspection applications.
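To illustrate what this openness means in practice, the following sketch shows a generic Python inference loop such as a developer might run on a Linux-based smart camera using third-party libraries (OpenCV and ONNX Runtime). It is not Baumer's API: the model file, device index, input size, and class labels are placeholders.

```python
# Generic sketch of on-camera AI classification on a Linux-based smart camera.
# Assumptions: the sensor is reachable as an OpenCV capture device and a trained
# classifier has been exported to ONNX ("model.onnx" is a placeholder).
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")       # hypothetical exported model
input_name = session.get_inputs()[0].name
class_names = ["ok", "defect"]                     # placeholder labels

cap = cv2.VideoCapture(0)                          # device index is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and normalise the frame to the network's assumed 224x224 input.
    blob = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    blob = np.transpose(blob, (2, 0, 1))[np.newaxis, ...]   # NCHW layout
    scores = session.run(None, {input_name: blob})[0][0]
    print("classification:", class_names[int(np.argmax(scores))])
cap.release()
```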

When data processing takes place directly in the camera, as is the case with AX Smart Cameras, this is referred to as Vision at the Edge. Deploying edge processing reduces bandwidth requirements compared with a traditional camera-to-PC setup and cuts the amount of hardware required, such as industrial PCs, cables, or interface cards. The Smart Cameras feature various standards-compliant interfaces such as Ethernet, RS232, USB 3.0, and HDMI, giving users greater flexibility in system design, communication within the system and with other systems, and data exchange.
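The bandwidth benefit can be made concrete with a small sketch: instead of streaming raw frames to a PC, an edge device transmits only a compact result message per inspection. The host address, port, and message fields below are illustrative assumptions, not a Baumer protocol.

```python
# Illustrative sketch: an edge device sends only per-inspection results over
# Ethernet instead of raw images. Host, port, and message format are assumptions.
import json
import socket
import time

HOST, PORT = "192.168.0.10", 5000        # placeholder address of the receiving PC

with socket.create_connection((HOST, PORT)) as conn:
    for frame_id in range(3):            # stand-in for a live acquisition loop
        result = {
            "frame": frame_id,
            "timestamp": time.time(),
            "label": "ok",               # would come from the on-camera classifier
            "score": 0.98,
        }
        conn.sendall((json.dumps(result) + "\n").encode("utf-8"))
        # a few hundred bytes per part instead of megabytes per frame
```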

Cognex In-Sight 3800 Vision System

The In-Sight 3800 system comes with vision tools based on AI-powered Edge Learning technology combined with traditional rule-based algorithms.

The Cognex In-Sight 3800 vision system is designed for high-throughput production lines. It combines a set of vision tools, image processing capabilities, and software into a fully integrated solution suitable for a wide range of inspection applications. “The In-Sight 3800 offers processing speeds twice as fast as previous systems,” said Lavanya Manohar, Cognex Vice President of Vision Products. The system includes vision tools that leverage AI-based Edge Learning technology alongside traditional rule-based algorithms. Edge Learning tools solve tasks of varying complexity and can be configured in just a few minutes using a handful of sample images for training, while rule-based tools handle deterministic tasks with well-defined parameters.
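Cognex does not disclose the internals of Edge Learning, but the general idea of configuring a classifier from a handful of labelled sample images can be pictured with a toy nearest-centroid sketch. The histogram features and file names below are purely illustrative and are not Cognex's method.

```python
# Toy sketch of "learn from a handful of sample images" classification.
# This is NOT Cognex Edge Learning; it only illustrates the general principle
# of building per-class references from a few labelled samples.
import numpy as np
from PIL import Image

def features(path: str) -> np.ndarray:
    """Very simple feature vector: a normalised RGB histogram."""
    img = np.asarray(Image.open(path).convert("RGB").resize((128, 128)))
    hist = [np.histogram(img[..., c], bins=16, range=(0, 256))[0] for c in range(3)]
    vec = np.concatenate(hist).astype(np.float64)
    return vec / np.linalg.norm(vec)

# A handful of labelled sample images per class (file names are placeholders).
samples = {
    "good":   ["good_1.png", "good_2.png", "good_3.png"],
    "defect": ["defect_1.png", "defect_2.png"],
}
centroids = {label: np.mean([features(p) for p in paths], axis=0)
             for label, paths in samples.items()}

def classify(path: str) -> str:
    """Assign a new image to the class whose reference centroid is closest."""
    f = features(path)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))

# print(classify("new_part.png"))   # placeholder test image
```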

“When we chose Cognex, we expected the processing time of the In-Sight 3800 to be 30% lower than that of the In-Sight 7900 vision system we currently use,” said Nicolas Chomel, Director of Technology Development at Sidel, a packaging machine supplier. “However, during qualification tests, the In-Sight 3800 proved to be 50% faster in our application!”

The In-Sight 3800 is powered by the In-Sight Vision Suite software, a platform common to all In-Sight products, which includes two programming environments: EasyBuilder and Spreadsheet. With its intuitive point-and-click interface, the EasyBuilder environment guides users step-by-step through the development process, while the Spreadsheet environment allows fine-tuning of parameters for advanced or highly customised applications.

Datalogic Smart-VS Sensor

Users can train the Smart-VS sensor with a few sample images, without any specific expertise.

The Datalogic Smart-VS is particularly suited to detecting the presence and orientation of labels and caps on bottles. Thanks to its built-in artificial intelligence, the sensor learns the distinctive features of an object from just a few sample images, and users can train it in a few simple steps without specialised knowledge. According to its manufacturer, the sensor can also be installed in just a few steps and requires no technical expertise. Even process and product variations, such as different products within the same batch, strong reflections, or moving and flexible parts, can be learned with just a few mouse clicks. The sensor offers an operating distance of 50 to 150 mm, a push-button for configuration, and an LED display indicating Good/Bad for detected parts. It features an Ethernet interface for point-to-point communication, a push-pull switching output (NPN/PNP), and a supply voltage of 10–30 VDC.

IDS NXT Malibu Camera

The NXT Malibu camera integrates Ambarella's vision system-on-chip (SoC) to provide AI-based image processing capabilities.

The IDS NXT Malibu cameras are industrial devices that enable real-time AI overlays in video streams. They are the result of a collaboration between IDS and the U.S. semiconductor manufacturer Ambarella, and feature AI-powered image processing embedded directly in the camera through Ambarella's vision system-on-chip (SoC). Image analyses can be performed at over 25 frames per second and transmitted to peripheral devices as overlays in compressed video streams via the RTSP protocol. Thanks to the integrated image processing pipeline, data from the onsemi AR0521 image sensor is processed directly within the camera, with automatic functions adjusting parameters such as brightness, noise reduction, and colour.
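Because the annotated video is published as a standard RTSP stream, any RTSP-capable client can display it. The short sketch below uses OpenCV on a client PC; the stream URL is a placeholder, since the actual address and path depend on the camera's configuration.

```python
# Minimal sketch of viewing an RTSP stream with AI overlays on a client PC.
# The stream URL is a placeholder; the real address depends on the camera setup.
import cv2

cap = cv2.VideoCapture("rtsp://192.168.0.42/stream")   # hypothetical camera URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("NXT Malibu stream (overlays rendered by the camera)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```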

“With IDS NXT Malibu, we have developed an industrial camera capable of analysing images in real time and embedding results directly into video streams,” explained Kai Hartmann, Head of Product Innovations at IDS. “The combination of on-camera AI, compression, and streaming opens new application scenarios for intelligent image processing.”

This was made possible through close collaboration between IDS and Ambarella, combining their expertise in industrial and consumer technologies. “We are proud to collaborate with IDS, a specialist in industrial image processing,” said Jérôme Gigot, General Manager of Marketing at Ambarella. “IDS NXT Malibu represents a new class of AI-compatible industrial cameras, delivering fast inference times and high image quality thanks to our CVflow AI vision SoC.” IDS provides all the components — from cameras to its AI Vision Studio — required across the entire workflow: from image capture and dataset labelling to neural network training and on-camera execution.

Keyence IV3 Series Vision Sensor

With its compact head (24 × 31 × 44.3 mm) and 330° rotating connector, the IV3 Series vision sensor can be installed in equipment where space is limited.

Unlike conventional models, the IV3 Series vision sensor automatically determines optimal image capture conditions thanks to its integrated AI, which is specifically designed for presence and difference checks. Installation is simple: the operator only needs to define the target part and register at least one conforming (OK) and one non-conforming (NG) image; no specific knowledge is required for configuration. The all-in-one system includes both a lens and lighting, eliminating the need to select these components separately, and provides immediate results. No high-performance PC is needed: adjustments are made using a small amplifier equipped with a CPU. The compact head (24 × 31 × 44.3 mm) offers a wide installation distance range of 50 to 3000 mm, while the maximum field of view of 2730 × 2044 mm enables wide-angle detection for diverse applications.

The IV3 Series vision sensor is particularly suitable when the detection zone is small but the system needs to be installed at a distance to avoid obstructing operators or robots. Long-range detection also allows the camera to be positioned away from splashes during painting or welding processes. With its 330° rotating connector, the sensor adapts to any type of space and setup, ensuring maximum installation flexibility.

Mitsubishi Electric Melsoft Vixio Software

The Melsoft Vixio software enables visual inspection in coordination with external cameras. All vision system functions can be implemented without programming.

Mitsubishi Electric recently launched Melsoft Vixio, an AI-based visual inspection software designed to automate camera-based inspection processes, aiming both to improve manufacturing quality and to address labour shortages. “As part of our industrial automation systems strategy, we tackle the key challenges facing modern society,” said Toshie Takeuchi, President of Mitsubishi Electric’s Factory Automation Systems Group. “Production line automation is accelerating due to labour shortages, yet many visual inspections are still performed manually. Through Melsoft Vixio, we aim to better meet customer needs for AI-based visual inspection.”

The Melsoft Vixio suite is designed to reduce the workload associated with manual visual inspections, improve defect detection, and enhance product quality. It also seeks to bridge the gap between novice and experienced inspectors. By supporting new operators in their judgement and decision-making, AI aims to address skill gaps in identifying irregularities and defects in manufactured products. The software conducts visual inspections in coordination with external devices such as cameras. All the functions required for deploying a visual inspection system can be set up without programming. Inspection date and time, PLC data, captured images, and inspection results are automatically linked and stored to facilitate traceability.
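The linkage described above can be pictured as one record per inspection. The structure below is a generic illustration of such a traceability entry; the field names and values are examples, not Mitsubishi Electric's actual data schema.

```python
# Illustrative record linking inspection time, PLC data, the captured image, and
# the result for traceability. Fields are generic examples, not Melsoft Vixio's schema.
from dataclasses import dataclass, asdict
from datetime import datetime
import json

@dataclass
class InspectionRecord:
    timestamp: str      # inspection date and time
    plc_data: dict      # e.g. lot number and line speed read from the PLC
    image_path: str     # where the captured image was stored
    result: str         # e.g. "OK" or "NG"

record = InspectionRecord(
    timestamp=datetime.now().isoformat(),
    plc_data={"lot": "A123", "line_speed_ppm": 240},    # placeholder values
    image_path="images/2024-05-01_000123.png",          # placeholder path
    result="OK",
)
print(json.dumps(asdict(record), indent=2))             # one traceable entry
```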

Sick Inspector83x 2D Vision Sensor

The Inspector83x vision sensor can perform up to 15 inspections per second for defect and anomaly detection in production.

With its integrated lighting unit, the Sick Inspector83x 2D vision sensor is an all-in-one solution. Offering resolutions of up to 5 MP and powered by a quad-core processor, it is designed to carry out AI-based inspections directly on the device. No external machine control is required. Typically, up to 15 inspections per second can be performed for industrial vision applications such as defect and anomaly detection or classification.

This sensor can perform industrial vision inspections across multiple industries, including consumer goods manufacturing, food and beverage production, automotive, and packaging. Colour imager models will be launched during 2024 to expand the range, enabling inspection of colour-specific features for sorting, defect detection, and colour-based quality control.

With its pre-installed software, the vision sensor is ready to use immediately. According to Sick, no vision experts or laborious preparation are required to set up an application. Using a standard computer connected via the camera’s USB-C or network interface, users are guided by the intuitive interface to present sample parts under real production conditions, then perform training and inspection. Just five sample images are enough. By combining AI capabilities with traditional rule-based tools — for example, to add a simple measurement — inspections can be pragmatically configured.

According to Sick, this sensor removes much of the complexity of conventional machine vision, especially when product or packaging design changes are needed. Instead of contacting a vision specialist or external consultant to configure new rule-based inspections, non-specialist operators can simply add a new product example, and the camera autonomously learns the task.

For more complex scenarios involving large datasets and numerous samples, users can access the dStudio Cloud Service to configure their own neural networks, which can then be exported in lightweight format to the Inspector83x. With Sick Nova, advanced users can also refine their solutions through custom developments using Lua and Halcon programming. The cloud-based dStudio service also facilitates collaboration and data management among colleagues.

Once installed, image inference is performed directly on the vision sensor, and results are transmitted to the machine controller as “OK/NOK” outputs or sensor values. The sensor is optimised for data transfer to industrial networks with dual ports for EtherNet/IP™ or Profinet integration. A dedicated high-speed Gigabit Ethernet port ensures the transfer of high-resolution image data, data logging, or TCP/IP integration. An integrated export function also generates customised configurations for the most common API types at the press of a button.
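As a generic illustration of the controller side of such an integration, the sketch below shows a small TCP/IP listener that parses line-delimited OK/NOK messages from the sensor. The port number and message format are assumptions for illustration, not Sick's documented protocol.

```python
# Generic sketch: receive line-delimited OK/NOK results from a vision sensor over
# TCP/IP. The port number and message format are assumptions, not Sick's protocol.
import socket

HOST, PORT = "0.0.0.0", 2112            # placeholder listening port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    with conn, conn.makefile("r", encoding="ascii") as stream:
        for line in stream:              # e.g. "OK" or "NOK;score=0.43"
            verdict = line.strip().split(";")[0]
            if verdict == "NOK":
                print("reject part")     # a real controller would act here, e.g. eject
            else:
                print("pass part")
```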

The sensor includes seven inputs and five outputs. Its built-in pulse delay and queue functions synchronise the camera's outputs with downstream machine actions according to time or encoder pulses, activating connected machine controls such as triggering an ejector.
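The pulse-delay principle can be illustrated with a small calculation: if the ejector sits a known number of encoder pulses downstream of the camera, a reject decision is queued and released once that many pulses have elapsed. The pulse counts in the sketch below are arbitrary example values, not figures from Sick.

```python
# Illustrative sketch of an encoder-based pulse delay: a reject decision made at
# the camera is queued and the ejector fires when the part reaches it.
# Pulse counts are arbitrary example values.
from collections import deque

PULSES_CAMERA_TO_EJECTOR = 500     # encoder pulses between camera and ejector (example)

pending_rejects = deque()          # encoder counts at which the ejector must fire

def on_inspection(encoder_count: int, verdict: str) -> None:
    """Called when the sensor reports a result for the part under the camera."""
    if verdict == "NOK":
        pending_rejects.append(encoder_count + PULSES_CAMERA_TO_EJECTOR)

def on_encoder_pulse(encoder_count: int) -> None:
    """Called on every encoder pulse; fires the ejector at the queued positions."""
    while pending_rejects and encoder_count >= pending_rejects[0]:
        pending_rejects.popleft()
        print(f"fire ejector at pulse {encoder_count}")

# Tiny simulation: a bad part detected at pulse 100 is ejected ~500 pulses later.
on_inspection(100, "NOK")
for count in range(100, 700):
    on_encoder_pulse(count)
```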

Although the sensor is designed to operate fully autonomously, a range of accessories is available if the application requires them. A dedicated lighting connector can be used with external light sources such as backlights or light bars. A near-infrared (NIR) version will also be available in 2024. Using a standard C-mount thread, users can choose from Sick's range of lenses as well as custom optics to meet the needs of more complex inspections.
