HOW WILL THE LATEST IMAGE SENSORS IMPACT THE FUTURE OF MACHINE VISION?

WHAT IS AN IMAGE SENSOR?

An image sensor is the device that allows a camera to convert photons (light) into electrical signals. It is composed of millions of pixels on a single chip. The image sensor primarily measures light intensity, but light’s angle, spectrum, and other characteristics can also be extracted.

For simple applications like photography, the intensity information of three color bands (RGB) is adequate. However, for advanced sensing applications, such as autonomous vehicles, biomedical imaging, and robotics, extracting more information from the incident light can help machines make better decisions.
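
To make “intensity information of three color bands” concrete, here is a minimal Python sketch (OpenCV/NumPy) that reads an image and reports the mean intensity of each RGB channel; the file name is a placeholder.

```python
# Minimal sketch: per-channel (RGB) intensity from an image.
# "sample.png" is a placeholder file name.
import cv2

img = cv2.imread("sample.png")        # OpenCV loads color images as BGR
blue, green, red = cv2.split(img)     # one intensity plane per color band

for name, band in (("R", red), ("G", green), ("B", blue)):
    print(f"{name}: mean intensity {band.mean():.1f} (scale 0-255)")
```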

SIZE MATTERS

When it comes to image sensors, bigger is generally better. Because the sensor is the part of the camera that captures the image, it is critical to the quality of the resulting image. Sensor size and megapixel count are closely connected: a larger sensor gathers more light and can accommodate more, or larger, pixels than a smaller sensor, which translates into better image quality.
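
As a rough back-of-the-envelope check of that tradeoff, the sketch below computes pixel pitch for the same pixel count on two different sensor widths; the dimensions are illustrative assumptions, not specifications of any particular sensor.

```python
# Pixel pitch = sensor width / horizontal pixel count.
# Larger pixels collect more photons, improving signal-to-noise ratio.

def pixel_pitch_um(sensor_width_mm: float, horizontal_pixels: int) -> float:
    """Size of one pixel in micrometres."""
    return sensor_width_mm * 1000.0 / horizontal_pixels

# The same 5472-pixel-wide resolution on two different sensor sizes:
small = pixel_pitch_um(sensor_width_mm=6.4, horizontal_pixels=5472)
large = pixel_pitch_um(sensor_width_mm=17.6, horizontal_pixels=5472)

print(f"small sensor: {small:.2f} um/pixel")   # ~1.17 um
print(f"large sensor: {large:.2f} um/pixel")   # ~3.22 um
```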

IMPROVING IMAGE SENSORS FOR MACHINE VISION

It is predicted that, in the future, more cameras will be built for machines than for people. This trend will be further accelerated by rapid progress in machine learning and artificial intelligence. It is also predicted that machine vision applications will benefit substantially from the multimodal measurement of light fields by advanced imaging sensors.

Some of the latest advances in image sensing have significantly impacted 3D imaging, event-based sensing, and nonvisible image sensing. According to innovations-report.com, the latest developments could enable autonomous vehicles to see around corners instead of just in a straight line, biomedical imaging to detect abnormalities at different tissue depths, and telescopes to see through interstellar dust.

Optics play a major role in the performance of any imaging system. For optimal performance, it is critical to choose a lens that can accommodate the latest sensor technology. Computar’s MPT 1.4″ 45 Megapixel Series is engineered to optimize the capabilities of the latest industrial CMOS sensors.

TO KNOW MORE ABOUT MACHINE VISION DEALERS IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM


WHAT IS MACHINE VISION?

Machine vision is the technology used to provide imaging-based automatic inspection and analysis for such applications as process control, robot guidance, factory automation, and mechanical inspection, usually in industry.

According to Forbes, a machine vision system is a combination of software and hardware that usually incorporates:

  • Sensors
  • Frame-grabber
  • Cameras (digital or analog)
  • Sufficient lighting for cameras to capture quality images
  • Software and a computer capable of analyzing images
  • Algorithms that can identify patterns necessary in some use cases
  • Output such as a screen or mechanical components

HOW IS MACHINE VISION USED?

Machine vision is primarily used in industry for quality control: identifying production line mistakes, inspection, guidance, and more. Machine vision is valuable in factory automation for finding and correcting production line errors where they start, before they affect more products.

Machine vision is also helpful in manufacturing and warehouses, where it can expedite inventory control by reading barcodes and labels on various products and parts. Machine vision lenses are also used for finding a specific part and ensuring proper placement or positioning, so the production process runs as smoothly as possible. Machine vision gauging is another use, where a fixed-mount camera distinguishes two or more points on an object as it moves through the production line, measures the distances between them, and flags discrepancies that indicate production mistakes.
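
As a hedged illustration of that gauging idea, the sketch below locates two bright features, measures the pixel distance between them, and applies a calibrated tolerance. The calibration factor, file name, and thresholds are all assumptions, and a production system would use pattern matching or edge tools rather than blob centroids.

```python
# Sketch of a simple vision gauging check: measure the distance between
# two located features and compare it to a calibrated tolerance.
import cv2
import numpy as np

MM_PER_PIXEL = 0.05            # from a one-time calibration (assumed value)
NOMINAL_MM, TOL_MM = 42.0, 0.2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
_, mask = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Take the two largest bright features as the gauge points.
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
centers = []
for c in contours:
    m = cv2.moments(c)
    centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

dist_mm = np.hypot(centers[0][0] - centers[1][0],
                   centers[0][1] - centers[1][1]) * MM_PER_PIXEL
print("PASS" if abs(dist_mm - NOMINAL_MM) <= TOL_MM else "FAIL", f"{dist_mm:.2f} mm")
```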

In agriculture, farms find machine vision beneficial when installed in farming equipment to monitor crops and detect crop diseases. SWIR imaging is one machine vision application used in agriculture and farming for produce inspection because of its ability to see beyond what the human eye can.

The printing industry finds machine vision useful for catching printing defects in labels, packaging, and other printed materials.

In healthcare and life sciences, machine vision lenses—such as these SWIR lenses—are used for microscopes, robotics, and medical machines, such as the well-known CT scanner.

WHY IS THE LENS A CRITICAL COMPONENT IN MACHINE VISION?

The data flows from the lens first. That makes the lens choice one of the most impactful decisions in determining how a machine vision system will perform. The floating design of Computar’s award-winning 45-megapixel MPT machine vision lens series delivers high performance and high-level aberration correction at any working distance. In addition, its centering/alignment technology maintains performance from the image center to the corner, delivering the precise detail required for optimal machine vision performance.

Machine vision systems and their applications are constantly evolving. With continuous advancements in technology, robotics, and AI, machine vision will become a standard for improving quality, efficiency, and operations.

TO KNOW MORE ABOUT INDUSTRIAL MACHINE VISION CAMERA DEALERS IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

WHAT’S THE DIFFERENCE BETWEEN VISIBLE AND SWIR LENSES?

Short-wave infrared (SWIR) lenses are designed to operate in the 0.9-1.7 µm wavelength region. SWIR is close to visible light in that photons are reflected or absorbed by an object, providing the strong contrast needed for high-resolution imaging. SWIR is well suited to the machine vision and health and sciences industries because water vapor, fog, and certain materials such as silicon are transparent at these wavelengths. SWIR imaging is also helpful because colors that look similar to the human eye are easily differentiated using SWIR lenses.

HOW DOES IT WORK?

SWIR cameras detect reflected light much as visible cameras do. Photons in the SWIR band are reflected or absorbed by objects, allowing for high-resolution imaging with strong contrast. SWIR is also notable for its ability to pierce through cloud cover and still capture a well-defined image.

For our ViSWIR series, Mr. Katsuya Hirano, Chief Optical Designer, CBC Group, explains how focus shift is fully corrected across the visible and SWIR range (400 nm-1,700 nm): “By using ultra-low dispersion glass and low partial dispersion glass paired with superior design technology developed from Computar’s extensive optics experience, the focus shift is minimized to within a few microns across a super wide range of wavelengths. With this, spectral imaging is achievable with a single-sensor camera by simply syncing the lighting.”

With Computar’s ViSWIR HYPER-APO lens series, it is unnecessary to adjust focus for differences in wavelength or working distance. By adopting an APO floating design, focus shift is reduced at any wavelength and any working distance. This makes these SWIR lenses ideal for multiple applications, including machine vision, UAVs, and remote sensing.

WHICH LENS IS THE BEST FOR MY INDUSTRY?

For the machine vision industry as well as the life sciences industry, we recommend our ViSWIR Series. These lenses achieve a clear, precise image from the visible through the SWIR range by applying a multilayer coating matched to the specific wavelengths. A higher-resolution lens gives you greater specificity in designing and implementing the most efficient vision solutions, which makes these lenses well suited to detail work and other short-range imaging in medical devices and robotics.

For the Intelligent Transport Systems Industry and Government and Defense, a blend of visible and SWIR would be most helpful—visible imaging for distance and SWIR for detailed imaging.

Some lenses, such as ours, are designed to perform well for both visible and SWIR light, enabling cost-effective, high-performance imaging systems for a range of applications.

TO KNOW MORE ABOUT MACHINE VISION LENS DISTRIBUTORS IN THE MIDDLE EAST AND SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

SMART VS. STANDARD MACHINE VISION LENSES

Machine vision lenses have exploded in popularity recently with the increasing demand for automation and robotics across various industries. With this explosion comes new imaging technology and a variety of machine vision lenses. Here we explore the differences between smart and standard machine vision lenses.

Smart lenses feature a P-Iris that allows remote adjustment to improve contrast, clarity, resolution, and depth of field. With software configured to optimize performance, the P-Iris automatically provides the best iris position for optimal image quality in all lighting conditions.

The auto-iris, by contrast, has its limitations. Selecting a precise iris value is not repeatably attainable with an auto-iris lens, and problems may occur as the iris strays from its selected aperture value over time. Auto-iris lenses mainly adjust the level of light reaching the sensor and are only reliable when the iris is fully opened or set to its smallest aperture. It is therefore challenging for auto-iris lenses to attain accurate mid-range values, which can result in image diffraction and aberrations.

Lens focusing capabilities vary as well. For example, the floating focus design of an intelligent lens delivers ultra-high resolution from near to far, and stepper motors enable precise focus control and high repeatability. A standard manual or autofocus lens can produce great results, but neither can be adjusted on the fly; they can lose focus in applications requiring the inspection of objects at various heights, and anything outside the depth of field becomes blurred and limits the vision application.
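
Depth of field is what determines how much height variation a fixed-focus lens can tolerate. As a quick estimate, the sketch below applies the standard hyperfocal-distance approximation; all the numbers are illustrative assumptions.

```python
# Back-of-the-envelope depth-of-field estimate (thin-lens approximation).

def depth_of_field_mm(f_mm: float, f_number: float, focus_mm: float,
                      coc_mm: float = 0.005) -> tuple[float, float]:
    """Return (near, far) limits of acceptable focus in mm.

    f_mm     : focal length
    f_number : working aperture (N)
    focus_mm : focused distance from the lens
    coc_mm   : circle of confusion; ~5 um is a common machine vision choice
    """
    H = f_mm ** 2 / (f_number * coc_mm) + f_mm           # hyperfocal distance
    near = focus_mm * (H - f_mm) / (H + focus_mm - 2 * f_mm)
    far = focus_mm * (H - f_mm) / (H - focus_mm) if focus_mm < H else float("inf")
    return near, far

near, far = depth_of_field_mm(f_mm=25, f_number=4, focus_mm=300)
print(f"In focus from {near:.1f} mm to {far:.1f} mm")
# Objects at heights outside this band will blur unless the lens refocuses.
```
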
The convenience and time savings of making remote adjustments can be the deciding factor in the type of lens chosen. Smart plug-and-play machine vision lenses are easy to install, control, and adjust remotely. Once plugged in via USB and installed, smart lenses can be fine-tuned using software on a Windows or Linux system.
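
In practice, that remote fine-tuning reduces to a few software calls. The sketch below is purely illustrative: “lensctl” is a hypothetical package standing in for a vendor SDK, and the method names are assumptions.

```python
# Hypothetical remote lens control session. "lensctl" is NOT a real
# library; actual vendor SDKs and method names will differ.
import lensctl  # hypothetical package

lens = lensctl.connect("usb0")        # attach to the lens over USB
lens.set_iris(f_number=5.6)           # repeatable P-Iris position
lens.set_focus(distance_mm=450)       # stepper-driven focus move
print(lens.status())                  # confirm the lens reached its targets
```
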
More and more industries and applications are using machine vision each year. Smart lenses are advantageous for multiple machine vision applications, including automation, robotics, inspection, medical labs, manufacturing, warehouses, and just about any environment where image clarity is required. With this growth in popularity comes the demand for a more precise, efficient, and intelligent lens.

TO KNOW MORE ABOUT MACHINE VISION LENS DEALERS IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

HOW DEEP LEARNING AUTOMATES PACKAGING SOLUTION INSPECTIONS

Increasingly, packaging products require their own custom inspection systems to perfect quality, eliminate false rejects, improve throughput, and eliminate the risk of a recall. Some of the foundational machine vision applications along a packaging line include verifying that a label on a package is present, correct, straight, and readable. Other simple packaging inspections cover a label’s presence, position, quality (no flags, tears, or bubbles), and readability (barcode and date/lot codes present and scannable).


But packaging like bottles, cans, cases, and boxes—present in many industries, including food and beverage, consumer products, and logistics—can’t always be accurately inspected by traditional machine vision. For applications which present variable, unpredictable defects on confusing surfaces such as those that are highly patterned or suffer from specular glare, manufacturers have typically relied on the flexibility and judgment-based decision-making of human inspectors. Yet human inspectors have some very large tradeoffs for the modern consumer packaged goods industry: they aren’t necessarily scalable.

For applications which resist automation yet demand high quality and throughput, deep learning technology is a flexible tool that application engineers can have confidence in as their packaging needs grow and change. Deep learning technology can handle all different types of packaging surfaces, including paper, glass, plastics, and ceramics, as well as their labels. Be it a specific defect on a printed label or the cutting zone for a piece of packaging, Cognex Deep Learning can identify all of these regions of interest simply by learning the varying appearance of the targeted zone. Using an array of tools, Cognex Deep Learning can then locate and count complex objects or features, detect anomalies, and classify said objects or even entire scenes. And last but not least, it can recognize and verify alphanumeric characters using a pre-trained font library.

Here, we’ll explore how Cognex Deep Learning does all of the above for packagers and manufacturers.

PACKAGING DEFECT DETECTION

Machine vision is invaluable to packaging inspections on bottles and cans. In fact, in most factories, it is machine vision which not only inspects the placement of labels and wrapping but also places and aligns them during manufacturing.

Labeling defects are well-handled by traditional machine vision, which can capably detect wrinkles, rips, tears, warpage, bubbles, and printing errors. High-contrast imaging and surface extraction technology can capture defects, even when they occur on curved surfaces and under poor lighting conditions. Yet the metal surface of a typical aluminum can might confuse traditional machine vision with its glare as well as the unpredictable, variable nature of its defects, not all of which need to be rejected. Add to those challenging surface inspections countless forms and types of defects—for example, long scratches and shallow dents—and it quickly becomes untenable to explicitly search for all types of potential defects.

Using a novel deep learning-based approach, it’s possible to precisely and repetitively inspect all sorts of challenging metal packaging surfaces. With Cognex Deep Learning, rather than explicitly program an inspection, the deep learning algorithm trains itself on a set of known “good” samples to create its reference models. Once this training phase is complete, the inspection is ready to start. Cognex Deep Learning can identify and report all defective areas on the can’s surface which deviate outside the range of a normal acceptable appearance.
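
Cognex Deep Learning’s internals are proprietary, but the train-on-good-samples idea can be illustrated with a minimal convolutional autoencoder: trained only on acceptable images, it reconstructs “normal” appearance well, so high reconstruction error flags candidate defects. The sketch below (PyTorch) uses random tensors as stand-ins for real images; sizes, epochs, and thresholds are assumptions.

```python
# Sketch: anomaly detection by training an autoencoder on "good" samples.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
good = torch.rand(64, 1, 64, 64)        # stand-in for known-good images

for _ in range(10):                     # training: reconstruct good samples
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(good), good)
    loss.backward()
    opt.step()

# Inference: regions the model cannot reconstruct deviate from "normal".
with torch.no_grad():
    test = torch.rand(1, 1, 64, 64)
    error_map = (model(test) - test).abs()   # high values flag defects
print("max reconstruction error:", error_map.max().item())
```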

PACKAGING OPTICAL CHARACTER RECOGNITION

Hiding somewhere on almost all consumable packages, regardless of material or type, lies a date/lot code. Having these codes printed cleanly and legibly is important not only for end-users and consumers doing their shopping but also for manufacturers during the verification stage. A misprinted, smeared, or deformed date/lot code printed onto a label on a bottle or package of cookies, for example, causes problems for both.

Traditional machine vision can typically recognize and/or verify that codes are readable and correct before they leave the facility, but certain challenging surfaces make this too difficult. A smeared or slanted code printed on a specular material like a metal soda case could be read with some effort by a human inspector, but not with much reliability by a machine vision inspection system. In these cases, packagers need an inspection system that can judge readability by human standards but, critically, with the speed and robustness of a computerized system. Enter deep learning.

Cognex’s deep learning OCR tool is able to detect and read the plain text in date/lot codes, verifying that their chains of numbers and letters are correct even when they are badly deformed, skewed, or—in the case of metal surfaces—poorly etched. The tool minimizes training because it leverages a pre-trained font library. This means that Cognex Deep Learning can read most alphanumeric text out-of-the-box, without programming. Training is limited to specific application requirements to recognize surface details or retrain on missed characters. All of these advantages help ease and speed implementation and contribute to successful OCR and OCV application results without the involvement of a vision expert.
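
As a generic stand-in for such a tool (Cognex’s own API is not shown here), the sketch below runs the open-source Tesseract engine via pytesseract on a pre-thresholded code image and then verifies the string; the file name, character whitelist, and expected value are assumptions.

```python
# Generic OCR/OCV sketch with open-source Tesseract (pip install pytesseract).
import cv2
import pytesseract

img = cv2.imread("date_lot_code.png", cv2.IMREAD_GRAYSCALE)  # placeholder
_, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Treat the image as one text line, limited to code characters.
config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789/"
text = pytesseract.image_to_string(img, config=config).strip()

EXPECTED = "LOT2024/117A"    # verification target (assumed value)
print("OCR:", text, "| OCV:", "PASS" if text == EXPECTED else "FAIL")
```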

PACKAGING ASSEMBLY VERIFICATION

Visually dependent assembly verification can be challenging for multi-pack goods which may have purposeful variation, as in the case of holiday-themed or seasonal offerings. These packs showcase different items and configurations in the same case or box.

For these sorts of inspections, manufacturers need highly flexible inspection systems which can locate and verify that individual items are present and correct, arranged in the proper configuration, and match their external packaging. To do this, the inspection system needs to be able to locate and segment several regions of interest within a single image, possibly in multiple configurations that can be inspected line-by-line to account for variations in packaging.

To locate individual items by their unique and varying identifiable characteristics, a deep learning-based system is ideal because it generalizes each item’s distinguishable characteristics based on size, shape, color, and surface features. The Cognex Deep Learning software can be trained quickly to build an entire database of items. Then, the inspection can proceed by region, whether by quadrant or line-by-line, to verify that the package has been assembled correctly.
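
To show the shape of a region-by-region check without a trained model, here is a classical stand-in using OpenCV template matching per quadrant. The file names, quadrant layout, and 0.8 score threshold are assumptions, and a deep learning system would replace the matching step with learned item recognition.

```python
# Simplified assembly verification: check each quadrant of a multi-pack
# image against the item expected there, via template matching.
import cv2

pack = cv2.imread("multipack.png", cv2.IMREAD_GRAYSCALE)     # placeholder
h, w = pack.shape
quadrants = {"top-left": pack[:h // 2, :w // 2],
             "top-right": pack[:h // 2, w // 2:],
             "bottom-left": pack[h // 2:, :w // 2],
             "bottom-right": pack[h // 2:, w // 2:]}

expected = {"top-left": "item_a.png", "top-right": "item_b.png",
            "bottom-left": "item_a.png", "bottom-right": "item_c.png"}

for region, template_file in expected.items():
    template = cv2.imread(template_file, cv2.IMREAD_GRAYSCALE)
    score = cv2.matchTemplate(quadrants[region], template,
                              cv2.TM_CCOEFF_NORMED).max()
    print(region, "OK" if score > 0.8 else "WRONG/MISSING", f"({score:.2f})")
```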

PACKAGING CLASSIFICATION

Kitting inspections require multiple capabilities of an automated inspection system. Consumer product multi-packs need to be inspected for the right number and type of inclusions before being shipped. Counting and identification are well-established strengths of traditional machine vision. But ensuring that the right items are included in a multi-part unit requires classifying included products by category—for example, does a sunblock multi-pack contain two types of sunblock, or does it contain an extra sunblock lip balm?

This categorization is important yet remains out of reach for traditional machine vision. Luckily, Cognex’s deep learning classification tool can easily be combined with traditional location and counting machine vision tools, or with deep learning-based location and counting tools if the kitting inspection deals with variable product types and requires artificial intelligence to distinguish the generalizing features of these types.

Deep learning-based classification works by separating different classes based on a collection of labelled images, identifying products by these packaging differences. If any of the classes are trained as containing anomalies, the system can learn to classify them as acceptable or unacceptable.
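
Here is a minimal sketch of classification from labelled images, using PyTorch transfer learning with random tensors standing in for a real labelled dataset; the class names, batch size, and epoch count are assumptions, not Cognex’s tool.

```python
# Sketch: train a classifier head on labelled images (transfer learning).
import torch
import torch.nn as nn
from torchvision import models

classes = ["sunblock_spf30", "sunblock_spf50", "lip_balm"]  # assumed labels

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(classes))    # new head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)      # train head only
images = torch.rand(8, 3, 224, 224)                         # stand-in batch
labels = torch.randint(0, len(classes), (8,))

model.train()
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    opt.step()

model.eval()
with torch.no_grad():
    pred = model(torch.rand(1, 3, 224, 224)).argmax(1).item()
print("predicted class:", classes[pred])
```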

New deep learning-enabled vision systems differ from traditional machine vision because they are essentially self-learning and are trained on labeled sample images without explicit application development. These systems can also be trained on new images for new inspections at any time, which makes them a valuable long-term asset for growing businesses.

Deep learning-based software is also quick to deploy and uses human-like intelligence that can appreciate nuances like deviation and variation, outperforming even the best quality inspectors at making reliably correct judgments. Most importantly, it can solve more complex, previously un-programmable automation challenges.

Manufacturers in the packaging industry are increasingly demanding faster, more powerful machine vision systems, and for good reason: they are expected to make a great number of products at a higher quality threshold and for less cost. Cognex is meeting customers’ rigorous requirements head-on by offering automated inspection systems that marry the power of machine vision with deep learning in order to manufacture packaging more cost effectively and robustly.

TO KNOW MORE ABOUT MACHINE VISION DEALERS IN SINGAPORE FOR PACKAGING SOLUTIONS, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

HOW TO SELECT THE CORRECT MACHINE VISION LENS FOR YOUR APPLICATION

When setting up your automated vision system, the lens may be one of the last components you choose. However, once your system is up and running, your data flows from the lens first. That makes your lens choice one of the most impactful decisions that affect how well your vision system works for you.

Resolution is a priority. A higher resolution lens gives you greater specificity in designing and implementing the most efficient vision solutions.

Don’t let the lens be the weak link in your Machine Vision (MV) system. Choosing a great lens tailored to your system’s needs can be daunting. To select the ideal lens, one should consider several factors. So, what is the best way to choose the right lens for a machine vision application?

SELECTING A MACHINE VISION LENS CHECKLIST

1. What is the distance between the object to be inspected and the camera, i.e., the Working Distance (WD)? Does the distance affect the focus and focal length of the lens?

2. What is the size of the object? Object size determines the Field of View (FOV).

3. What resolution is needed? The image sensor and the pixel size are determined here.

4. Is camera motion or special fixturing required?

5. What are the lighting conditions? Can the lighting be controlled, or is the object luminous or in a bright environment?

6. Is the object or camera moving or stationary? If it is moving, how fast? Motion between the object and camera has shutter speed implications, affecting the light entering the lens and the f-Number.

These variables and more make selecting the proper lens a challenge, but an excellent place to start is with three significant features: type of focusing, iris, and focal length.
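
Items 1-3 of the checklist already pin down the focal length via the thin-lens approximation f ≈ sensor width × working distance / field-of-view width. A quick sketch, with illustrative numbers only:

```python
# Estimate focal length from working distance, FOV, and sensor size.

def focal_length_mm(sensor_width_mm, working_distance_mm, fov_width_mm):
    return sensor_width_mm * working_distance_mm / fov_width_mm

f = focal_length_mm(sensor_width_mm=8.8,      # 2/3" sensor, horizontal width
                    working_distance_mm=400,  # checklist item 1: WD
                    fov_width_mm=150)         # checklist item 2: object/FOV
print(f"choose a stock lens near f = {f:.1f} mm")   # ~23.5 mm -> 25 mm lens
```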

Choosing a great lens tailored to your system’s needs can be daunting, but we are here to help. Talk to a lens specialist at Computar today and find out how we can assist in selecting the correct lens for you.

TO KNOW MORE ABOUT COGNEX CAMERA DEALERS SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

WHAT IS THE DIFFERENCE BETWEEN VISION SENSORS AND VISION SYSTEMS?


The difference between vision sensors and vision systems is fairly basic:

A vision sensor does simple inspections like answering a simple yes-no question on a production line. A vision system does something complex like helping a robot arm weld parts together in an automated factory.

Machine vision sensors capture light waves from a camera’s lens and work together with digital signal processors (DSPs) to translate light data into pixels that generate digital images. Software analyzes pixel patterns to reveal critical facts about the object being photographed.

Automated production doesn’t have to mean robots building pickup trucks and smartphones. Many automated factory tasks require simple, straightforward kinds of vision sensor data:


  • Presence or absence. Is there a part within the sensor’s field of view? If the sensor answers yes, then machine vision software gives the OK to move the part to its correct place in the production process (see the sketch after this list).
  • Inspection. Is the part damaged or flawed? If the sensor sees defects, then the part gets routed out of production.
  • Optical character recognition (OCR). Does the part contain specific words or text? Answering this question can help automated systems sort products by brand name or product description.
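
To make the presence-or-absence check concrete, here is a minimal sketch that counts bright pixels in a fixed region of interest and returns a yes/no answer; the file name, ROI coordinates, and thresholds are assumptions.

```python
# Minimal presence/absence check: how much of the expected region is filled?
import cv2

frame = cv2.imread("station_view.png", cv2.IMREAD_GRAYSCALE)  # placeholder
roi = frame[100:300, 200:500]                 # where the part should appear

_, mask = cv2.threshold(roi, 128, 255, cv2.THRESH_BINARY)
coverage = cv2.countNonZero(mask) / mask.size

part_present = coverage > 0.15                # tuned per application
print("part present" if part_present else "part missing", f"({coverage:.0%})")
```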

Cognex machine vision systems use multiple sensors to perform all of these basic tasks plus many more complicated challenges:

  1. Guides/alignment: When parts require an exact position or alignment, vision systems use sensors to identify the correct parts and place them exactly where they need to go.
  2. Code reading: Codes on packages and individual components contain vital data that vision systems acquire in real time to sort finished goods and differentiate between parts within a production process.
  3. Gauges/measurement: Sensors can ensure that machined parts are cut to the proper dimensions.
  4. 3D imaging: Sensors create three-dimensional representations of parts and products. These images can help automate inspections and tell robotic arms where to pick up and place parts.

Every company has to decide whether they need simple vision sensors or more advanced vision systems. Vision sensors are designed to be easy to install and implement, so factory personnel typically can set them up and configure them without a lot of outside assistance. When the imaging job requires a simple go/no-go decision, vision sensors may be all the company needs.

Vision systems, by contrast, require more expertise and a significant investment of time and money for configuration, installation, and training. Often, companies turn to third-party integrators who have deep expertise in vision system installations.

Every company in the machine vision sector has its own way of defining the difference between machine vision sensors and systems. Cognex, for instance, builds vision sensors that perform specific kinds of tasks, like quality control in food processing. Our vision systems combine advanced software with industrial-strength cameras to enable a broad spectrum of factory automation applications.

One way to distinguish between vision systems and sensors is to imagine hundreds of beer bottles on a conveyor belt in a bottling plant. A vision sensor can make sure every bottle has a cap. If the cap is there, then the bottle gets approved and sent to packaging, where another sensor makes sure every six-pack has six bottles.

But the bottling company may want to identify when a bottle cap is skewed past a certain angle. Or, perhaps they want to ensure that the six-pack doesn’t accidentally mix multiple beer varieties. That’s more likely to require a vision system.

 

Also Read: HOW MACHINE VISION AND DEEP LEARNING ENABLE FACTORY AUTOMATION

TO KNOW MORE ABOUT COGNEX CAMERA DEALERS SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

HOW MACHINE VISION AND DEEP LEARNING ENABLE FACTORY AUTOMATION


MARCH, 2020


The pace of technology’s change over the last decade has been nearly unprecedented in human history, and it’s only poised to become even more breathtaking in the years ahead: blockchain, robotics, edge computing, artificial intelligence (AI), big data, 3D printing, sensors, machine vision, and the internet of things are just some of the massive technological shifts on the cusp for industries.

Strategically planning for the adoption and leveraging of some or all these technologies will be crucial in the manufacturing industry. In the United States, manufacturing accounts for $2.17 trillion in annual economic activity, but by 2025 – just half a decade away – McKinsey forecasts that “smart factories” could generate as much as $3.7 trillion in value. In other words, the companies that can quickly turn their factories into intelligent automation hubs will be the ones that win long term from those investments.

“If you’re stuck to the old way and don’t have the capacity to digitalize manufacturing processes, your costs are probably going to rise, your products are going to be late to market, and your ability to provide distinctive value-add to customers will decline,” Stephen Ezell, an expert in global innovation policy at the Information Technology and Innovation Foundation, says in a report from Intel on the future of AI in manufacturing.

These technologies as applied in a factory or manufacturing setting are no longer nice to have; they are business critical. According to a recent research report from Forbes Insights, 93% of respondents from the automotive and manufacturing sectors classified AI as ‘highly important’ or ‘absolutely critical to success’. And yet, 56% of these respondents plan to increase spending on artificial intelligence by less than 10%.

The disconnect between recognizing the importance of new technologies that allow for more factory automation and the willingness to spend on them will be the difference between those companies that win and those that lose. Perhaps this reticence to invest in something like AI could be attributed to the lack of understanding of its ROI, capabilities, or real-world use cases. Industry analyst Gartner, Inc. still slots many of AI’s applications into the “peak of inflated expectations” after all.

But AI, specifically deep learning or examples-based machine vision, combined with traditional rules-based machine vision can give a manufacturing factory and its teams superpowers. Take a process such as the complex assembly of a modern smartphone or other consumer electronic device. The combination of rules-based machine vision and deep learning can help robotic assemblers identify the correct parts, spot differences like missing screws or misaligned casings, detect whether a part is present, missing, or assembled in the wrong place on the product, and more quickly determine whether those are problems. And they can do this at an unfathomable scale.

The combination of machine vision and deep learning is the on-ramp for companies to adopt smarter technologies that will give them the scale, precision, efficiency, and financial growth for the next generation. But understanding the nuanced differences between traditional machine vision and deep learning, and how they complement rather than replace each other, is essential to maximizing those investments.

Also Read: THREE TRENDS DRIVING INDUSTRIAL AUTOMATION

TO KNOW MORE ABOUT HIGH RESOLUTION STANDALONE SMART CAMERAS DEALER SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

THREE TRENDS DRIVING INDUSTRIAL AUTOMATION

FEBRUARY, 2020


Since its inception in the 1980s, machine vision has concerned itself with two things: improving the technology’s power and capability and making it easier to use. Today, machine vision is turning to higher-resolution cameras with greater intelligence to empower new automated solutions both on and off the plant floor — all with a simplicity of operation approaching that of the smartphone, which significantly reduces engineering requirements and associated costs.

And, just like in other industries which are benefiting from rapid advancements in technology like big data, the cloud, artificial intelligence (AI), and mobile, so too will manufacturers, logistics operations, and other enterprises benefit from three key advances in machine vision for automation.

RAPIDLY IMPROVING SENSOR TECHNOLOGY

While 1-, 2-, and 5-megapixel (MP) cameras continue to make up the bulk of machine vision camera shipments, we’re seeing considerable interest in even higher-resolution smart cameras, up to 12 MP. High-resolution sensors mean that a single smart camera inspecting an automobile engine can do the work of several lower resolution smart cameras while maintaining high-accuracy inspections.

Cognex’s patent-pending High Dynamic Range Plus (HDR+) image processing technology provides even better image fidelity than your typical HDR. It will help smart cameras inspect multiple areas across large objects where lighting uniformity is less than ideal. In the past, lighting variations could be mistaken for defects or the feature was not even visible. Today, HDR+ helps reduce the effects of lighting variations, enabling applications in challenging environments that were beyond the capability of machine vision technology just a few years ago.
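
Cognex’s HDR+ is proprietary and runs in-camera, but the general idea of combining exposures to tame lighting variation can be sketched with OpenCV’s Mertens exposure fusion; the file names are placeholders for bracketed captures of one scene.

```python
# Generic exposure fusion (not Cognex HDR+): merge bracketed exposures
# into one frame with more uniform detail across the scene.
import cv2

exposures = [cv2.imread(f) for f in ("dark.png", "mid.png", "bright.png")]

fused = cv2.createMergeMertens().process(exposures)   # float image in [0, 1]
out = (fused * 255).clip(0, 255).astype("uint8")
cv2.imwrite("fused.png", out)
```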

While advanced smart cameras run HDR+ technology on field-programmable gate arrays (FPGAs) to improve the quality of the acquired image at frame-rate speeds, complementary sensor technology, such as time-of-flight (ToF) sensors, is being incorporated to enable “distance-based dynamic focus”.

The new high-powered integrated torch (HPIT) image formation system, using ToF distance measurement and high-speed liquid lens technology, is also making an impact by enabling dynamic autofocus at frame rate. The newest barcode readers incorporate HPIT capability for applications such as high-speed tunnel sortation and warehouse management, where package and product sizes can vary significantly, requiring the camera to adapt quickly to different focal ranges.
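
The coupling between a ToF reading and a liquid lens reduces to the thin-lens relation that optical power in diopters is the reciprocal of distance in meters. A hedged sketch follows; the loop values are illustrative and the commented-out driver call is hypothetical, not an HPIT API.

```python
# Illustrative distance-based dynamic focus: a ToF range reading sets the
# liquid lens's optical power (1 diopter = 1/meter).

def focus_power_diopters(distance_mm: float, offset_diopters: float = 0.0) -> float:
    """Optical power needed to focus at `distance_mm`."""
    return 1000.0 / distance_mm + offset_diopters

for package_distance in (250.0, 600.0, 1200.0):   # mm, varying box heights
    dpt = focus_power_diopters(package_distance)
    # lens.set_power(dpt)  # hypothetical liquid-lens driver call
    print(f"{package_distance:.0f} mm -> {dpt:.2f} dpt")
```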

INTEGRATION WITH DEEP LEARNING

Just like AI’s impact in other industries, deep learning vision software for factory automation is allowing enterprises to automate inspections that previously could only be done manually, and to more efficiently solve complex inspection challenges that are cumbersome or time-consuming with traditional rule-based machine vision.

The biggest use driving investment in deep learning is the potential to re-allocate, in many cases, hundreds of human inspectors by deploying deep learning-based inspection systems. For the first time, manufacturers have a technology that offers an inspection solution with performance comparable to that of a human.

One example of how deep learning will benefit organizations is in defect detection inspection. Every manufacturer wants to eliminate industrial defects as much as possible and as early as possible in the manufacturing process to reduce downstream impacts that cost time and money.

Defect detection is challenging because it is nearly impossible to account for the sheer amount of variation in what constitutes a defect or what anomalies might fall within the range of acceptable variation.

As a result, many manufacturers utilize human inspectors at the end of the process to perform a final check for unacceptable product defects. With deep learning, quality engineers can train a machine vision system to learn what is an acceptable or unacceptable defect from a data set of reference pictures rather than program the vision system to account for the thousands of defect possibilities.

THE INTERNET OF THINGS

An important development for smart camera vision systems enabling Industry 4.0 initiatives is Open Platform Communications Unified Architecture (OPC UA). With contributions from all major machine vision trade associations around the world, OPC UA is an industrial interoperability standard developed to help machine-to-machine communication.

Combined with advanced sensor technology and trends such as deep learning, OPC UA will help transition machine vision technology from a point solution to bridge the industrial world inside the plant and the physical world outside it. Today, vision systems and barcode readers are key sources of data for modern enterprises.
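
As an illustration of what OPC UA interoperability looks like from a vision device, the sketch below exposes inspection results with the open-source python-opcua package (pip install opcua); the endpoint, namespace URI, and node names are assumptions.

```python
# Sketch: publish vision inspection results over OPC UA (python-opcua).
from opcua import Server

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/machinevision/")  # assumed endpoint

idx = server.register_namespace("http://example.com/vision")  # placeholder URI
station = server.get_objects_node().add_object(idx, "InspectionStation")
result = station.add_variable(idx, "LastResult", "PASS")
count = station.add_variable(idx, "InspectedCount", 0)

server.start()
try:
    # In a real system the vision application would update these each cycle.
    count.set_value(count.get_value() + 1)
    result.set_value("PASS")
finally:
    server.stop()
```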

TO KNOW MORE ABOUT HIGH RESOLUTION STANDALONE SMART CAMERAS DEALER SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

THERMAL IMAGING FOR SAFER AUTONOMOUS VEHICLES

FEBRUARY, 2020


For the automotive industry, pedestrian safety has been a serious concern since the horseless carriage. Londoner Arthur Edsall was the first driver to strike and kill a pedestrian in 1896 at a speed of four miles per hour. It took the U.S. Congress almost seventy years to impose automotive safety standards and mandate the installation of safety equipment and another thirty years before airbags became a required safety feature. Automotive safety standards in the United States are promulgated by a process of reviewing accidents after they have occurred.

In 2019, the National Transportation Safety Board (“NTSB”) finally addressed this standards-promulgation process in its Most Wanted List of transportation safety improvements, calling for an increase in the implementation of collision-avoidance systems in all new highway vehicles. This change in policy derived from the 2015 study (SIR-15/01) that described the benefits of forward-collision-avoidance systems and their ability to prevent thousands of accidents.

After that report was published, an agreement was reached with the National Highway Traffic Safety Administration (“NHTSA”) and the Insurance Institute for Highway Safety that would require compliance with the Automatic Emergency Braking standard (“AEB”) on all manufactured vehicles by 2022. However, the agreement did not identify the specific technology that would enable AEB, and the question remains whether such technology is readily available and economically viable for industry-wide adoption.

RAPIDLY IMPROVING SENSOR TECHNOLOGY

The pace of technology over the last thirty years has been astronomical, yet technology to make driving safer has not kept pace. A computer that not too long ago was the size of a garage now fits into the palm of your hand. Driving today should be safer than ever, but the reality is that without the implementation of available modern technologies, the uncertainties of the road will always be with us. According to the NHTSA, there were 37,461 traffic fatalities in the United States in 2016.

In 2015, there were a total of 6,243,000 passenger car accidents. Globally, there is a fatality every twenty-five seconds and an injury every 1.25 seconds. In the United States, there is a fatality every thirteen minutes and an injury every thirteen seconds. These statistics are mind-blowing. Compare this to recent events affecting the aviation industry: two Boeing 737 MAX 8 airplanes crashed, killing 346 people (the same number of people who die as a result of automobile accidents every 144 minutes), and the entire Boeing 737 MAX 8 fleet was grounded.

The cost of automotive accidents is high. According to the National Safety Council, in the United States the annual cost of health care resulting from cigarette smoking is approximately $300 billion, whereas the annual cost of health care for injuries arising from automobile accidents is roughly $415 billion.

Technology to protect automobile occupants has reduced the number of driver and passenger fatalities. However, the number of people who die as a result of an accident outside the automobile continues to climb at an alarming rate. Pedestrians are at the greatest risk, especially after dark.

The NHTSA reports that in 2018, 6,227 pedestrians were killed in United States traffic accidents, with seventy-eight percent of pedestrian deaths occurring at dusk, dawn, or night. In the United States, pedestrian fatalities have increased forty-one percent since 2008. Solutions to address pedestrian fatalities are needed to meet the standards by 2022.

TECHNOLOGY IN THE DRIVER’S SEAT

Ultimately, it is safer cars and safer drivers that make driving safer, and automotive designers need to deploy every possible technological tool to improve driver awareness and make cars more automatically responsive to impending risks. Today’s safest cars can be equipped with a multitude of cameras and sensors that make them hyper-sensitive to the world around them and intelligent enough to take safe evasive action as needed. Microprocessors can process images and identify subject matter a million times faster than a human being can.

Advanced Driver Assist Systems (“ADAS”) are becoming the norm, spotting potential problems ahead of the automobile and making auto travel safer for drivers, passengers, and pedestrians, not to mention the more than one million ‘reported’ animals struck by automobiles in the United States annually, resulting in $4.2 billion in insurance claims each year. The advances we have seen so far are the first steps toward a future of truly autonomous vehicles that will revolutionize both personal and commercial transportation.

Drivers need no longer rely on eyes alone to maintain situational awareness. Early generations of vision-assisting cameras were innovative, but they were not particularly intelligent and could do little to perceive the environment around the car and communicate information that could be used for driver decision-making.

Today, with tools such as radar, light detection and ranging (“LIDAR”), cameras, and ultrasound installed, a car knows much more about the environment than the driver does and can control the vehicle faster and more safely than the human driver. Risky driving conditions such as rain, fog, snow, and glare are less hazardous when a driver is assisted by additional onboard sensors and data processors.

One of the most advanced automotive sensors is the thermal sensor, which allows a driver and the automobile to perceive the heat signature of anything ahead. Previously used mainly for military and commercial applications, early forms of night vision first came to the mainstream automotive market in the 2000 Cadillac DeVille, albeit as a cost-prohibitive accessory priced at almost $3,000.
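
The underlying signal is simple: warm objects stand out against a cooler background. As a rough illustration (not a production ADAS detector), the sketch below thresholds a grayscale thermal frame and flags large, upright warm blobs; the file name and all thresholds are assumptions.

```python
# Illustrative thermal cueing: find warm, person-shaped blobs in a frame.
import cv2

thermal = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder

_, hot = cv2.threshold(thermal, 180, 255, cv2.THRESH_BINARY)   # warm pixels
contours, _ = cv2.findContours(hot, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if cv2.contourArea(c) > 500 and h > 1.5 * w:   # roughly upright and big
        print(f"possible pedestrian at x={x}, y={y} ({w}x{h})")
```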

Since then, thermal cameras and sensors have become smaller, lighter, faster, and cheaper. After years of exclusive availability in luxury models, thermal sensors are now ready to take their place among other automotive sensors, providing a first line of driving defense that extends far beyond the reach of headlights in all vehicles, regardless of cost.

TO KNOW MORE ABOUT HIGH RESOLUTION STANDALONE SMART CAMERAS DEALER SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM