FUNDAMENTALS OF IMAGE PROCESSING SYSTEMS


WHAT DO IMAGE PROCESSING SYSTEMS HAVE TO DO WITH KEEPING FOODSTUFFS IN GOOD SHAPE?

Everyone prefers foodstuffs that are fresh and outwardly attractive. Image processing systems are frequently used during the quality assurance process for these products to ensure that this is the case. The image data helps producers make informed decisions that would otherwise be impossible.

But how are systems of this kind designed? What steps are necessary, what must be taken into account, and what options are available?

Selecting the camera, selecting the lens and lighting source, evaluating image quality, choosing PC hardware and software, and configuring all components: all of these are important steps toward an effective image processing system.

Imagine an apple grower asks you to design a machine vision system for inspecting his apples. He wants to deliver uniform quality, which means sorting out bad apples while still working fast. He is faced with the following questions:

  • What are the precisely defined requirements for the system?
  • Which resolution and sensors do I need?
  • Do I want to use a color or monochrome camera?
  • What camera functions do I need, and what level of image quality is sufficient?
  • Which lens do I need (image scale and lens performance)?
  • Which lighting should I use?
  • What PC hardware is required?
  • What software is required?


UNDERSTANDING WHAT'S REQUIRED: REQUIREMENTS DEFINITION

WHAT EXACTLY SHOULD THE SYSTEM DELIVER AND UNDER WHICH CONDITIONS?

This question sounds so obvious that it’s frequently overlooked and not answered in the proper detail. But the fact remains: If you are clear up front about precisely what you want, you’ll save time and money later.

SHOULD YOUR SYSTEM

  • Only show images of the object being inspected, with tools like magnification or special lighting used to reveal product characteristics that cannot be detected with the human eye?
  • Calculate objective product features such as size and dimensional stability?
  • Check correct positioning — such as on a pick-and-place system?
  • Determine properties that are then used to assign the product into a specific product class?

RESOLUTION AND SENSOR

Which camera is right for a given application? The requirements definition is used to derive target specifications for the resolution and sensor size of the camera.

But first: What exactly is resolution? In classic photography, resolution refers to the minimum distance between two real points or lines in an image such that they can be perceived as distinct.

In the realm of digital cameras, terms like "2 megapixel resolution" are often used. This refers to something entirely different, namely the total count of pixels on the sensor, not strictly speaking its resolution. The true resolution can only be determined once the overall package of camera, lens and geometry, i.e. the distances required by the setup, is in place. That is not to say the pixel count is irrelevant: a high number of pixels is needed to achieve high resolutions. In essence, the pixel count indicates the maximum resolution under optimal conditions.

Fine resolution or a large inspection area: either requirement calls for the greatest possible number of pixels on the camera. Multiple cameras may actually be required to inspect a large area at a high level of resolution. In fact, using multiple cameras with standard lenses is often cheaper than using one single camera with a pricey special lens capable of covering the entire area.

The sensor size and field of view determine the image scale, which will later be crucial for the selection of the lens.
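
To make this concrete, here is a minimal sketch with hypothetical numbers for the apple example: it derives the pixel count needed to resolve the smallest defect across a given field of view, and the resulting image scale for an assumed sensor width.

    # Minimal sketch; all numbers are assumptions, not vendor data.
    field_of_view_mm = 100.0     # width of the area one camera must cover
    smallest_defect_mm = 0.5     # smallest blemish that must be detected
    pixels_per_defect = 3        # rule of thumb: cover a feature with >= 3 pixels

    # Pixel resolution: object-side edge length one sensor pixel must cover
    pixel_resolution_mm = smallest_defect_mm / pixels_per_defect

    # Required number of pixels across the field-of-view width
    pixels_needed = field_of_view_mm / pixel_resolution_mm
    print(f"Pixels needed across the field of view: {pixels_needed:.0f}")  # 600

    # Image scale = sensor size / field of view (assuming a 2/3" sensor, ~8.8 mm wide)
    sensor_width_mm = 8.8
    image_scale = sensor_width_mm / field_of_view_mm
    print(f"Image scale: {image_scale:.3f}")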

COLOR OR MONOCHROME?

Generally speaking, most applications do not really need a color camera; color images are often just easier on the eyes for many observers. Realistic color reproduction with a color camera also necessitates white lighting. If the relevant characteristics can be detected via their color (such as red blemishes on an apple), then color is often, but not always, needed. In many cases these characteristics can also be picked up in black-and-white images from a monochrome camera if colored lighting is used. Experiments on perfect samples can help here (see the sketch below). If color isn't relevant, then monochrome cameras are preferable, since color cameras are inherently less sensitive than monochrome ones.
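
As a quick experiment of the kind just described, the following sketch (assuming OpenCV is installed; the file names are hypothetical) splits a color test image into its channels. If red blemishes show strong contrast in a single channel, a monochrome camera with correspondingly colored lighting may suffice.

    # Channel-split experiment; file names are hypothetical.
    import cv2

    img = cv2.imread("apple_sample.png")        # color test image (BGR order)
    blue, green, red = cv2.split(img)

    # Inspect the channels separately: red blemishes on a green apple tend to
    # appear dark in the green channel, mimicking green illumination.
    cv2.imwrite("apple_green_channel.png", green)
    cv2.imwrite("apple_red_channel.png", red)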

Are you working with a highly complex inspection task? If so, you may want to consider using multiple cameras, especially if a range of different characteristics need to be recorded, each requiring a different lighting or optics configuration.

WHAT A CAMERA SHOULD ALSO PROVIDE: CAMERA FUNCTIONS AND IMAGE QUALITY

There’s more to a good camera than just the number of pixels. You should also take image quality and camera functions into account.

When evaluating the image quality of a digital camera, resolution is one important factor, alongside:

  • Light sensitivity
  • Dynamic range
  • Signal-to-noise ratio

In terms of camera functions, one of the most important is the speed, typically stated in frames per second (fps). It defines the maximum number of frames that can be recorded per second.
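
As a rough sketch of what bounds that number (the timing values below are assumptions), the frame rate cannot exceed the reciprocal of exposure time plus readout time, although many sensors can overlap the two:

    # Back-of-the-envelope frame rate estimate; values are assumptions.
    exposure_s = 0.002   # 2 ms exposure
    readout_s = 0.008    # 8 ms sensor readout
    # Non-overlapped worst case; overlapped exposure/readout modes do better.
    max_fps = 1.0 / (exposure_s + readout_s)
    print(f"Upper bound on frame rate: {max_fps:.0f} fps")  # -> 100 fps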

THE EYE OF THE CAMERA: SCALE AND LENS PERFORMANCE

Good optical systems are expensive. In many cases, a standard lens is powerful enough to handle the task. To decide what’s needed, we need information about parameters such as

  • Lens interface
  • Pixel size
  • Sensor size
  • Image scale, meaning the ratio between image size and object size. This corresponds to the size of the individual pixels divided by the pixel resolution (the pixel resolution being the edge length of a square on the object being inspected that should fill precisely one pixel of the camera sensor); a worked example follows below.
  • Focal length of the lens, which determines the image scale and the distance between camera and object
  • Lighting intensity

Once this information is available, it becomes much easier to examine the spec sheets from lens makers and determine whether an affordable standard lens is sufficient or whether a foray into higher-end lenses is needed.
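
As a worked example under the thin-lens approximation (all values below are assumptions, not vendor data): the image scale m follows from pixel size and pixel resolution, and a candidate focal length f follows from the working distance d via f = m * d / (1 + m).

    # Thin-lens sketch; all values are assumptions.
    pixel_size_um = 3.45          # assumed sensor pixel size
    pixel_resolution_mm = 0.167   # object-side size one pixel must cover (assumed)

    m = (pixel_size_um / 1000.0) / pixel_resolution_mm   # image scale
    d_mm = 300.0                                         # assumed working distance
    f_mm = m * d_mm / (1 + m)                            # thin-lens focal length
    print(f"Image scale: {m:.4f}, focal length needed: ~{f_mm:.1f} mm")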

Lens properties like distortion, resolution (described using the MTF curve), chromatic aberration and the spectral range for which a lens has been optimized, serve as additional selection criteria.

There are, for example, special lenses for near infrared, extreme wide-angle lenses ("fisheye") and telecentric lenses that are specially suited for length measurements. These lenses typically come at a high price, though.

Here too the rule is: Tests and sample shots are the best way to clear up open questions.

LIGHTING

It’s hard to see anything in poor light: It may seem obvious, but it holds true for image processing systems as well.

High inspection speeds typically require sensitive cameras and powerful lenses. In many cases, however, the easier option is to modify or improve the lighting situation to boost the image brightness. There are a variety of options for attaining greater image brightness: increasing the ambient light and sculpting the light using lenses or flashes to create a suitable light source are two examples. But it's not just the lighting strength that's important. The path that the light takes through the lens to the camera matters too.

One common example from photography is the use of a flash: if the ambient lighting is too diffuse, then a flash is used to aim the light in a targeted manner, although you then need to deal with unwanted reflections off smooth surfaces in the image area that can overwhelm the desired details. During image processing, these kinds of effects may actually be desired to deliver high light intensities on flat, low-reflecting surfaces. For objects with many surfaces reflecting in various directions, diffuse light is better.

We look at photos by reflecting light on them, while a stained glass window only reveals its beauty when the light shines through it.

PC HARDWARE

Which hardware is required depends on the task and the necessary processing speed.

While simple tasks can be handled using standard PCs and image processing packages, complex and rapid image processing tasks may require specialized hardware.

SOFTWARE

Software is required to assess the images. Most cameras come with software to display images and configure the camera; that is enough to get the camera up and running. Special applications and image processing tasks require special software, either purchased or custom developed.

 

TO KNOW MORE ABOUT MACHINE VISION SYSTEM IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM


USB 3.0 CAMERAS


USB 3.0 is the newest interface on the image processing market. Read here about when USB 3.0 is the ideal choice for your applications, which factors to remember during installation, and which camera models Basler offers.

USB 3.0 VISION CAMERAS

USB3 Vision cameras are an excellent tool for a variety of applications. Their bandwidth, which effectively closes the speed gap between the GigE and Camera Link interfaces, their simple plug-and-play functionality, and their compliance with the Vision Standard make them particularly suitable for industrial applications.

In addition, USB 3.0 is perfectly tailored to the latest generation of CMOS sensors, with the architecture and bandwidth to take advantage of all that the new technology has to offer.

Thanks to a decision by the USB Implementers Forum, the USB 3.0 interface may also henceforth be referred to as USB 3.1 Gen 1. Even with the new name, there are no technical differences from USB 3.0, and so the terms can be used synonymously. For simplicity’s sake and to avoid confusion, we will continue to refer to it as USB 3.0.

It's important to distinguish it from USB 3.1 Gen 2 (a.k.a. USB SuperSpeed+), as this new generation of the interface offers a higher bandwidth than its predecessor.
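
A quick way to sanity-check whether this interface fits a given camera configuration is to compare the required data rate against the roughly 350 MB/s cited below. A minimal sketch with assumed parameters:

    # Bandwidth check; camera parameters below are assumptions.
    width, height = 1920, 1200    # frame size in pixels
    bytes_per_pixel = 1           # 8-bit monochrome
    fps = 150                     # target frame rate

    required_mb_s = width * height * bytes_per_pixel * fps / 1e6
    print(f"Required bandwidth: {required_mb_s:.0f} MB/s "
          f"({'fits' if required_mb_s <= 350 else 'exceeds'} USB 3.0)")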


Selected advantages of the USB 3.0 interface:

  •  Fast: high data throughput rates of up to 350 MB/s

  •  Outstanding real-time compatibility
  •  High stability
  •  Simple integration into all image processing applications (libraries)
  •  Reliable industrial USB3 Vision Standard
  •  Low CPU load
  •  Low latency and jitter
  •  Screw-down plug connectors
  •  Integrated memory and buffers for top stability in industrial applications.
  •  Plug and play functionality

BASLER USB 3.0 CAMERAS

The following camera series are available with the USB 3.0 interface:

Basler Ace

  •  Broad sensor portfolio: CCD, CMOS and NIR variants
  •  Extensive firmware features
  •  VGA to 14 MP resolution, up to 750 fps

Basler Dart

  •  Outstanding price/performance ratio
  •  Board level, small and flexible
  •  1.2 MP to 5 MP and up to 60 fps

Basler Pulse

  •  Compact and lightweight, with elegant design
  •  Global shutter and rolling shutter options
  •  1.2 MP to 5 MP and up to 60 fps

 

TO KNOW MORE ABOUT BASLER USB 3.0 CAMERAS DEALER IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

 

Source – BASLERWEB.COM

THE BASICS OF MACHINE VISION: IMAGINE YOUR NEW AUTOMATION PROCESS IN THE MODERN AGE


What is MACHINE VISION? Who is using machine vision? How can I get started using machine vision? These are all great questions when it comes to the exciting world of machine vision, its capabilities, and its impact on daily and yearly production outputs. In this blog, we’ll answer these questions and more, as we introduce you to the future.

WHAT IS MACHINE VISION?

Machine vision is the automatic extraction of information from digital images. A typical machine vision environment would be a manufacturing production line where hundreds of products are flowing down the line in front of a smart camera. Manufacturers use machine vision systems instead of human inspectors because they are faster, more consistent, and don't get tired. The camera captures the digital image and analyzes it against a pre-defined set of criteria. If the criteria are met, the object can proceed. If not, the object will be re-routed off the production line for further inspection.

Machine vision can be difficult to understand, so here is a very basic example: Say you are a beverage manufacturer. Traditionally, you would have human inspectors watching thousands of bottles move down a production line. The workers would need to ensure every bottle cap was secured correctly, every label was on straight and contained the correct information, and every bottle was filled to the appropriate level. With machine vision, this entire repetitive process can be automated to be faster, more efficient, and more productive.
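
A minimal pass/fail sketch of the fill-level check just described, assuming OpenCV, a backlit grayscale image in which liquid shows up dark, and hypothetical ROI coordinates and thresholds:

    # Fill-level pass/fail sketch; all values are hypothetical.
    import cv2

    img = cv2.imread("bottle.png", cv2.IMREAD_GRAYSCALE)
    roi = img[100:400, 200:260]                  # neck region of the bottle

    # Liquid pixels are dark against the backlight
    _, liquid = cv2.threshold(roi, 80, 255, cv2.THRESH_BINARY_INV)
    fill_ratio = cv2.countNonZero(liquid) / liquid.size

    print("PASS" if fill_ratio > 0.55 else "FAIL: re-route for inspection")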

WHAT ARE THE COMPONENTS OF A MACHINE VISION SYSTEM?

Machine vision is used heavily in conjunction with robots to increase their effectiveness and overall value for the business. These types of robots resemble a human arm with a camera mounted at the “hand” position. The camera acts as the robot’s “eyes”, guiding it to complete the assigned task. (Visit our previous blog about integrating machine vision cameras with robots for more information.)

A machine vision system has five key components that can be configured either as separate components or integrated into a single smart camera. The correct configuration depends on the application and its complexity. The five key components are:

Lighting – This critical aspect of a machine vision system illuminates the part to be inspected, allowing its features to stand out so that the vision system can see them as clearly as possible.

Lens – Captures the image and presents it to the sensor in the form of light.

Sensor – Converts light into a digital image for the processor to analyze.

Vision Processing – Consists of algorithms that review the image and extract required information.

Communication – The resulting data is communicated out to the world in a useful manner.

Our MicroHAWK MV Smart Camera is a fully-integrated machine vision system. This means that the lighting, lens, sensor, and vision processing are all handled on the camera. That information can then be sent to a PC or tablet via Ethernet or USB. MicroHAWK is available with an array of hardware options to take on any inspection task in a wide variety of applications.

WHY YOU SHOULD USE MACHINE VISION

The machine vision market is growing rapidly. According to Statistics MRC, “the global machine vision market was estimated at $8.81 billion in 2015 and is expected to reach $14.72 billion by 2022, growing at a CAGR of 8.9% from 2015 to 2022”. Many retail giants use a vision system to track products in their warehouse from arrival to dispatch, aiding workers by eliminating the possibility of human error and automating repetitive tasks. “Items retrieved from storage shelves are automatically identified and sorted into batches destined for a single customer. The system knows the dimensions of each product and will automatically allocate the right box, and even the right amount of packing tape.” (MIT Technology Review). A worker will then pack the box and send it on its way.

Machine vision is better-suited to repetitive inspection tasks in industrial processes than human inspectors. Machine vision systems are faster, more consistent, and work for a longer period of time than human inspectors, reducing defects, increasing yield, tracking parts and products, and facilitating compliance with government regulations to help companies save money and increase profitability.

Microscan holds one of the world’s most extensive patent portfolios for machine vision technology, including hardware designs and software solutions to accommodate all user levels and application variables. Automatix, now part of Microscan, was the first company to market industrial robots with built-in machine vision. Our fully-integrated MicroHAWK MV Smart Camera, coupled with powerful Visionscape software, is one incredible platform created to solve your machine vision needs.

FOUR MAIN BENEFITS OF MACHINE VISION

Reduce Defects

  •  Ensure fewer bad parts enter the market, which cause costly recalls and tarnish a company's reputation.
  •  Prevent mislabeled products whose label doesn't match the content. These defects create unhappy customers, have a negative impact on your brand reputation, and pose a serious safety risk, especially with pharmaceutical products and food items for customers with allergies.

Increase Yield

  •  Turn additional available material into saleable product.
  •  Avoid scrapping expensive materials and rebuilding parts.
  •  Reduce downtime by detecting product routing errors that can cause system disruptions.

Tracking Parts and Products

  •  Uniquely identify products so they can be tracked and traced throughout the manufacturing process.
  •  Identify all pieces in the process, reducing stock and ensuring product will be more readily available for just-in-time (JIT) processes.
  • Avoid component shortages, reduce inventory, and shorten delivery time.

Comply with Regulations

  •  To compete in some markets, manufacturers must comply with various regulations.
  •  In pharmaceuticals, a highly regulated industry, machine vision is used to ensure product integrity and safety by complying with government regulations such as 21CFR Part 11 and GS1 data standards.

FOUR COMMON MACHINE VISION APPLICATIONS

Measurement

One of the most common machine vision applications is measurement: capturing an image of a part, calculating dimensions such as lengths, diameters, and angles, and comparing them against specified tolerances. Because the inspection is non-contact and runs at production speed, every part can be checked rather than just a sample.

Counting

Another common machine vision application is counting – looking for a specific number of parts or features on a part to verify that it was manufactured correctly. In the electronics manufacturing industry, for example, machine vision is used to count various features of printed circuit boards (PCBs) to ensure that no component or step was missed in production.
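
A minimal counting sketch (OpenCV assumed; the threshold and expected count are hypothetical): label bright blobs in a binary image and compare the count against the expected value.

    # Blob-counting sketch; threshold and expected count are hypothetical.
    import cv2

    img = cv2.imread("pcb.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)

    # Label connected blobs; label 0 is the background
    num_labels, _ = cv2.connectedComponents(binary)
    found = num_labels - 1

    expected = 64
    print("OK" if found == expected else f"NG: found {found}, expected {expected}")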

Location

Machine vision can be used to locate the position and orientation of a part and to verify proper assembly within specific tolerances. Location can identify a part for inspection with other machine vision tools, and it can also be trained to search for a unique pattern to identify a specific part. In the life sciences and medical industries, machine vision can locate test tube caps for further evaluation, such as cap presence, cap color, and measurement to ensure correct cap position.
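
A minimal location sketch using normalized cross-correlation template matching in OpenCV (file names and the acceptance threshold are hypothetical):

    # Template-matching location sketch; file names are hypothetical.
    import cv2

    scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    pattern = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)

    result = cv2.matchTemplate(scene, pattern, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)   # best match and its position

    if score > 0.8:
        print(f"Part located at {top_left} (score {score:.2f})")
    else:
        print("Part not found")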

Decoding

Machine vision can be used to decode linear, stacked, and 2D symbologies. It can also be used for optical character recognition (OCR), which is simultaneously human- and machine-readable. In factory automation, machine vision is used to sort products on a production line by decoding the symbol on the product. The symbols themselves can also be verified by machine vision-based verification systems to ensure that they comply with the requirements of various symbology standards organizations.
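
For 2D symbologies, OpenCV's built-in QR-code detector can serve as a minimal decoding sketch (the file name is hypothetical; linear and stacked barcodes would need a different decoder):

    # QR decoding sketch; file name is hypothetical.
    import cv2

    img = cv2.imread("product.png")
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(img)

    print(f"Decoded: {data}" if data else "No code found: re-route product")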

Machine vision is a powerful tool that saves money and increases efficiency in virtually any industrial process. The MicroHAWK MV Smart Camera can be scaled from basic decoding to advanced inspection and integration with robotic applications.

Microscan will soon be announcing a new machine vision system that will make you re-evaluate your definition of fast.

 

TO KNOW MORE ABOUT MACHINE VISION IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

 

Source – WWW.MICROSCAN.COM


WHAT ARE VISION INSPECTION SYSTEMS?


VISION INSPECTION SYSTEMS (sometimes referred to as machine vision systems) provide automated, image-based inspection for a variety of industrial and manufacturing applications. Though not a new technology, 2D and 3D machine vision systems are now commonly used for automated inspection, robot guidance, quality control, sorting, and much more.

WHAT VISION INSPECTION SYSTEMS CAN DO

These intelligent inspection systems come equipped with one or more cameras, and sometimes video and lighting as well. Vision systems are capable of measuring parts, verifying that parts are in the correct position, and recognizing the shape of parts. Vision systems can also measure and sort parts at high speeds. Computer software processes the captured images to extract data about the process being assessed. The vision system can be intelligent enough to make decisions that affect that process, often in a pass/fail capacity that triggers an operator to act. These systems can be embedded into your lines to provide a constant stream of information.

APPLICATIONS FOR VISION INSPECTION SYSTEMS

VISION INSPECTION SYSTEMS can be used in any number of industries in which quality control is necessary. For example, vision systems can assist robotic systems to obtain the positioning of parts to further automate and streamline the manufacturing process. Data collected by a vision system can help improve efficiency in manufacturing lines, sorting, packing and other applications. In addition, the information captured by the vision system can identify problems with the manufacturing line or other function you are examining in an effort to improve efficiency, stop inefficient or ineffective processes, and identify unacceptable products.

INDUSTRIES USING VISION SYSTEMS FOR INSPECTION

Because vision inspection systems combine various technologies, the design of these systems can be customized to meet the needs of many industries. Thus, many companies enjoy the use of this technology for quality control purposes, and even security purposes. Industries using vision inspection systems include automation, robotics, pharmaceuticals, packaging, automotive, food and beverage, semiconductors, life sciences, medical imaging, electronics, and consumer goods, among other kinds of manufacturing and non-manufacturing companies.

BENEFITS OF VISION INSPECTION SYSTEMS

Overall, the benefits of VISION INSPECTION SYSTEMS include, but are not limited to, production improvements, increased uptime, and reduced expenses. Vision systems allow companies to conduct 100% inspection of parts for quality control purposes. This ensures that all products will meet the customers' specifications. If you want to improve the quality and efficiency of your industry, a vision inspection system could be the answer for you.

 

TO KNOW MORE ABOUT VISION INSPECTION SYSTEM IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

 

Source – VISIONONLINE.ORG

MV ASIA INFOMATRIX PTE LTD

3 Raffles Place, #07-01 Bharat Building,
Orchard Road
Singapore – 048617
Tel: +65 63296431
Fax: +65 63296432

E-mail: info@mvasiaonline.com / menzinfo@starhub.net.sg

MACHINE VISION KEEPS AN EYE ON FACIAL RECOGNITION


While privacy concerns have been a factor for years, it turns out that if you put a useful application in front of a machine vision algorithm, i.e., you make it fun, everyone's happy. For example, a Russian music festival used a facial recognition algorithm to supply attendees with photos of themselves from the event, while a firm in Singapore is developing a transport ticketing system that uses voluntary facial recognition to charge commuters as they pass through fare gates.

It helps that consumers have face detection technology in the palm of their hands. Mobile applications such as FaceLock scan a user’s face in order to unlock apps on their smartphone or tablet. Furthermore, a recent patent filed by Apple suggests that the next generation iPhone will have “enhanced face detection using depth information.” Users also are relying on facial recognition for critical tasks such as mobile banking and commerce.
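
Face detection, the step that precedes recognition, is easy to prototype with OpenCV's bundled Haar cascade; a minimal sketch with a hypothetical input image:

    # Face detection sketch (detection only, not recognition).
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("crowd.png")                 # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    print(f"Detected {len(faces)} face(s)")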

The projected growth of facial recognition and other biometrics usage reflects these trends. Facial recognition market size is estimated to rise from $3.3 billion in 2016 to $6.84 billion in 2021. Analysts attribute the growth to an expanding surveillance market, increasing government deployment, and other applications in identity management. The machine vision industry is starting to find ways to capitalize on the growth opportunities in facial recognition, whether it’s a camera calibrated to work in low light or a mobile app that helps police officers catch suspects. But the technology needs to overcome a few hiccups first.

TO REDACT AND SERVE

Suspect Technologies, a startup in Cambridge, Massachusetts, has developed advanced facial recognition algorithms, but for two very different purposes within law enforcement. One use addresses the privacy considerations around body cameras worn by police officers. The most frequently cited goal of body worn video (BWV) is to improve law enforcement accountability and transparency. When someone files a Freedom of Information Act request to acquire one of these videos, law enforcement agencies must promptly comply.

But they can’t do that without first blurring the identities of victims, minors, and innocent bystanders, which typically has been a slow, tedious process restricted to video specialists. Suspect Technologies’ automated video redaction (AVR) software, available on cameras manufactured by VIEVU, is optimized for the real-world conditions of BWV — most notably high movement and low lighting. The technology, which can track multiple objects simultaneously, features a simple interface that allows users to add or adjust redacted objects. AVR reduces the time it takes to redact video footage by tenfold over existing methods.

Unlike AVR which covers up identities, Suspect Technologies is rolling out a mobile facial recognition app to identify suspects. “As it stands now, there’s no simple way for law enforcement to tell if someone is a wanted criminal,” says Jacob Sniff, CEO and CTO of Suspect Technologies.

Compatible with iPhone and Android devices, the company's cloud-based watchlist recognition software has been tested on 10 million faces. The algorithm benefits from steady gains in facial recognition accuracy, which increases roughly tenfold every four years. "Our goal is to be 100% accurate on the order of 10,000 identities," Sniff says.

Suspect Technologies will start by customizing the product for regional law enforcement agencies in midsized towns, which typically have about 100 wanted felons. The company also plans to introduce its software to schools and businesses for attendance-oriented applications.


CAMERAS THAT RECOGNIZE

On the hardware side, the specifications of a facial recognition application are driving machine vision camera selection. "Monochrome cameras offer better sensitivity to light, so they are ideal in low-light conditions indoors and outdoors," says Mike Fussell, product marketing manager of the integrated imaging division at FLIR SYSTEMS, Inc. (Wilsonville, Oregon). "If someone is strongly backlit or shadowed, cameras with the latest generation of high-performance CMOS sensors really shine in those difficult situations."

For customers seeking better performance in low light, FLIR offers higher-end sensors that have high frame rates and global shutter. All of the pixels are read out at the same instant, eliminating the distortion caused by the rolling-shutter readout found on less expensive sensors, Fussell says. Rolling-shutter cameras show distortion caused by the movement of the subject relative to the shutter movement, but they present a lower-cost alternative in low-light conditions.
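
The skew Fussell describes can be estimated with simple arithmetic; a sketch with assumed readout time and subject speed:

    # Rolling-shutter skew estimate; values are assumptions.
    readout_time_s = 0.010      # 10 ms to read the full frame, top to bottom
    object_speed_px_s = 2000    # subject motion in pixels per second

    skew_px = object_speed_px_s * readout_time_s
    print(f"Worst-case skew between first and last row: {skew_px:.0f} pixels")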

Most cameras used in facial recognition are in the 3–5 MP range, according to Fussell. But in an application like a passport kiosk, where all of the variables are controlled, a lower-resolution camera is suitable. FLIR also offers stereo vision products that customers calibrate for optical tracking, which measures eye movement relative to the head. Some companies are taking the concept of facial recognition to the next level with gait analysis, the study of human motion. “In a building automation application, where you want to learn people’s habits, you could track their gait to turn lights on and off or have elevators waiting in advance for them,” Fussell says.

FACING OBSTACLES HEAD-ON

For all its potential, facial recognition technology must address fundamental challenges before an algorithm reaches a camera or mobile device. According to one study, face recognition systems are 5–10 percent less accurate when trying to identify African Americans compared to white subjects. What's more, female subjects were more difficult to recognize than males, and younger subjects were more difficult to identify than adults.

As such, algorithm developers must focus more on the content and quality of the training data so that data sets are evenly distributed across demographics. Testing the face recognition system, a service currently offered by the National Institute of Standards and Technology (NIST), can improve accuracy.

Once the algorithm reaches the camera, facial recognition's accuracy is dependent upon the number and quality of photos in the comparison database. And even though most facial recognition technology is automated, most systems require human examination to make the final match. Without specialized training, human reviewers make the wrong decision about a match half the time.

The machine vision industry, however, is no stranger to waiting for a technology to mature. Once facial recognition does that, camera makers and software vendors will be ready to supply the equipment and services for secure, accurate identity verification.

 

TO KNOW MORE ABOUT MACHINE VISION SYSTEM, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

 

MV ASIA INFOMATRIX PTE LTD

3 Raffles Place, #07-01 Bharat Building,
Orchard Road
Singapore – 048617
Tel: +65 63296431
Fax: +65 63296432

E-mail: info@mvasiaonline.com / menzinfo@starhub.net.sg

WHAT IS EMBEDDED VISION?


In recent years, a miniaturization trend has been established in many areas of electronics. For example, ICs have become more and more integrated and circuit boards in the electrical industry have become smaller and more powerful. This has also made PCs, mobile phones and cameras more and more compact and powerful. This trend can also be observed in the world of vision technology.

A classic machine vision system consists of an industrial camera and a PC: both were significantly larger a few years ago. But within a short time, smaller and smaller PCs became possible, and the industry saw the introduction of single-board computers, i.e. computers built on a single board. At the same time, camera electronics became more compact and cameras successively smaller. On the way to even higher integration, small cameras without housings are now offered, which can be easily integrated into compact systems.

Due to these two developments, the reduction in size of both the PC and the camera, it is now possible to design highly compact vision systems for new applications. Such systems are called embedded (vision) systems.

DESIGN AND USE OF AN EMBEDDED VISION SYSTEM

An embedded vision system consists, for example, of a camera, a so-called board level camera, which is connected to a processing board. Processing boards take over the tasks of the PC from the classic machine vision setup. As processing boards are much cheaper than classic industrial PCs, vision systems can become smaller and also more cost-effective. The interfaces for embedded vision systems are primarily USB or BASLER BCON for LVDS.


Embedded vision systems are used in a wide range of applications and devices, such as in medical technology, in vehicles, in industry and in consumer electronics. Embedded systems enable new products to be created and thereby create innovative possibilities in several areas.

WHICH EMBEDDED SYSTEMS ARE AVAILABLE?

As embedded systems, there are popular single-board computers (SBCs), such as the Raspberry Pi®. The Raspberry Pi® is a mini-computer with established interfaces and offers a similar range of features as a classic PC or laptop.

Embedded vision solutions can also be implemented with so-called system on modules (SoM) or computer on modules (CoM). These modules represent a computing unit. For the adaptation of the desired interfaces to the respective application, a so-called individual carrier board is needed. This is connected to the SoM via specific connectors and can be designed and manufactured relatively simply. The SoMs or CoMs (or the entire system) are cost-effective on the one hand since they are available off-the-shelf, while on the other hand they can also be individually customized through the carrier board.

For large manufactured quantities, individual processing boards are a good idea.

All modules, single-board computers, and SoMs, are based on a system on chip (SoC). This is a component on which the processor(s), controllers, memory modules, power management and other components are integrated on a single chip.

It is only thanks to these efficient components, the SoCs, that embedded vision systems have become available at today's small size and low cost.

CHARACTERISTICS OF EMBEDDED VISION SYSTEMS VERSUS STANDARD VISION SYSTEMS

Most of the above-mentioned single-board computers and SoMs do not include the x86 family processors common in standard PCs. Rather, the CPUs are often based on the ARM architecture.

The open-source Linux operating system is widely used as an operating system in the world of ARM processors. For Linux, there is a large number of open-source application programs, as well as numerous freely-available program libraries.

Increasingly, however, x86-based single-board computers are also spreading.

A consistently important criterion for the computer is the space available for the embedded system.

For the software developer, program development for an embedded system differs from that for a standard PC. As a rule, the target system does not provide a user interface that can also be used for programming. The developer must connect to the embedded system via an appropriate interface if one is available (e.g. a network interface), or develop the software on a standard PC and then transfer it to the target system.

When developing the SW, it should be noted that the HW concept of the embedded system is oriented to a specific application and thus differs significantly from the universally usable PC.

However, the boundary between embedded and desktop computer systems is sometimes difficult to define. Just think of the mobile phone, which on the one hand has many features of an embedded system (ARM-based, single-board construction), but on the other hand can cope with very different tasks and is therefore a universal computer.

WHAT ARE THE BENEFITS OF EMBEDDED VISION SYSTEMS?

In some cases, much depends on how the embedded vision system is designed. A single-board computer is often a good choice as this is a standard product. It is a small compact computer that is easy to use. This solution is also useful for developers who have had little to do with embedded vision.

On the other hand, however, the single-board computer is a system which contains unused components and thus generally does not allow the leanest system configuration. This solution is suitable for small to medium quantities. The leanest setup is obtained through a customized system. Here, however, higher integration effort is a factor. This solution is therefore suitable for large unit numbers.

The benefits of embedded vision systems at a glance:

  •  Lean system design
  •  Light weight
  •  Cost-effective, because there is no unnecessary hardware
  •  Lower manufacturing costs
  •  Lower energy consumption
  •  Small footprint

WHICH INTERFACES ARE SUITABLE FOR AN EMBEDDED VISION APPLICATION?

Embedded vision is the technology of choice for many applications. Accordingly, the design requirements are widely diversified. Depending on the specification, BASLER offers a variety of cameras with different sensors, resolutions and interfaces.

The two interface technologies that Basler offers for embedded vision systems in the portfolio are:

  •  USB3 Vision for easy integration and
  •  Basler BCON for LVDS for a lean system design

Both technologies work with the same Basler pylon SDK, making it easier to switch from one interface technology to the other.

USB3 VISION

USB 3.0 is the right interface for a simple plug-and-play camera connection and is ideal for camera connections to single-board computers. The Basler pylon SDK gives you easy access to the camera (to images and settings, for example) within seconds, since USB 3.0 cameras are standard-compliant and GenICam compatible.
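
A minimal grab sketch using pypylon, Basler's Python wrapper for the pylon SDK (assumes the package is installed and a camera is connected; the exposure node name varies by camera model and is an assumption here):

    # Single-frame grab sketch with pypylon.
    from pypylon import pylon

    camera = pylon.InstantCamera(
        pylon.TlFactory.GetInstance().CreateFirstDevice())
    camera.Open()
    camera.ExposureTime.SetValue(2000.0)   # exposure in microseconds (node name varies)

    result = camera.GrabOne(1000)          # grab one frame with a 1 s timeout
    if result.GrabSucceeded():
        frame = result.Array               # image as a numpy array
        print(f"Grabbed frame with shape {frame.shape}")
    camera.Close()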

Benefits

  •  Easy connection to single-board computers with USB 2.0 or USB 3.0 connection
  •  Field-tested solutions with Raspberry Pi®, NVIDIA Jetson TK1 and many other systems
  •  Profitable solutions for SoMs with associated base boards
  •  Stable data transfer with a bandwidth of up to 350 MB/s

BCON FOR LVDS

BCON, Basler's proprietary LVDS-based interface, allows a direct camera connection to processing boards and thus also to on-board logic modules such as FPGAs (field-programmable gate arrays) or comparable components. This allows a lean system design to be achieved, and you benefit from a direct board-to-board connection and data transfer.

The interface is therefore ideal for connecting to a SoM on a carrier / adapter board or with an individually-developed processor unit.

If your system is FPGA-based, you can fully use its advantages with the BCON interface.

BCON is designed with a 28-pin ZIF connector for flat flex cables. It carries the 5 V power supply together with the LVDS lanes for image data transfer and image triggering. You can configure the camera via lanes that work with the I²C standard.

BASLER'S PYLON SDK is tailored to work with the BCON for LVDS interface. Therefore, it is easy to change settings such as exposure control, gain, and image properties using your software code and the pylon API. The image acquisition part of the application must be implemented individually, as it depends on the hardware used.

Benefits

  •  Image processing directly on the camera. This results in the highest image quality, without compromising the very limited resources of the downstream processing board.
  •  Direct connection via LVDS-based image data exchange to FPGA
  •  With the pylon SDK, camera configuration is possible via the standard I²C bus without further programming. Compatibility with the GenICam standard is ensured.
  •  The image data software protocol is openly and comprehensively documented
  •  Development kit with reference implementation available
  •  Flexible flat flex cable and small connector for applications with maximum space limitations
  •  Stable, reliable data transfer with a bandwidth of up to 252 MB/s

HOW CAN AN EMBEDDED VISION SYSTEM BE DEVELOPED AND HOW CAN THE CAMERA BE INTEGRATED?

Even for developers who have not had much to do with embedded vision, there are many ways to develop an embedded vision system. In particular, the switch from a standard machine vision system to an embedded vision system can be made easy. In addition to its embedded product portfolio, Basler offers many tools that simplify integration.

Find out how you can develop an embedded vision system and how easy it is to integrate a camera in our simpleshow video.

MACHINE LEARNING IN EMBEDDED VISION APPLICATIONS

Embedded vision systems often have the task of classifying objects in the images captured by the camera: biscuits on a conveyor belt, for example, into round and square ones. In the past, software developers have spent a lot of time and energy developing intelligent algorithms designed to classify a biscuit, based on its characteristics (features), as type A (round) or B (square). In this example this may sound relatively simple, but the more complex the features of an object, the more difficult it becomes.

Algorithms of machine learning (e.g., Convolutional Neural Networks, CNNs), however, do not require any features as input. If the algorithm is presented with large numbers of images of round and square biscuits, together with the information about which image represents which variety, the algorithm automatically learns how to distinguish the two types of biscuits. If the algorithm is shown a new, unknown image, it decides on one of the two varieties based on its "experience" of the images already seen. The algorithms are particularly fast on graphics processing units (GPUs) and FPGAs.
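
An illustrative sketch of such a classifier in tf.keras (an assumed framework choice; the biscuit training data and image size are hypothetical). Note that no hand-crafted features appear anywhere; the network learns them from labeled images.

    # Small CNN for two classes (round vs. square); data pipeline not shown.
    import tensorflow as tf
    from tensorflow.keras import layers

    model = tf.keras.Sequential([
        layers.Input(shape=(64, 64, 1)),          # grayscale 64x64 crops (assumed)
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),    # P(square)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(images, labels, epochs=10)  # the network learns the features itself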

 

TO KNOW MORE ABOUT BASLER CAMERA DISTRIBUTOR IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

 

MV ASIA INFOMATRIX PTE LTD

3 Raffles Place, #07-01 Bharat Building,
Orchard Road
Singapore – 048617
Tel: +65 63296431
Fax: +65 63296432

E-mail: info@mvasiaonline.com / menzinfo@starhub.net.sg

HIGH-SPEED CAMERA TECHNOLOGY


Bayer Filter

  •  Nearly all color sensors follow the same principle (named after its inventor, Dr. Bryce E. Bayer).
  •  The light-sensitive cells or pixels on the sensor are only capable of distinguishing different levels of light. For this reason, tiny color filters (red, green and blue) are placed in front of the pixels as part of the production process.
  •  In a subsequent image processing step, the filtered output values are combined into a "color pixel" again.
  •  To come closer to the perception of the human eye (which is much more sensitive to green than to other colors), twice as many green filters are used.
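
The step in which the filtered values are combined into color pixels again is called demosaicing; a minimal sketch with OpenCV (the Bayer pattern constant depends on the sensor, and the file names are hypothetical):

    # Demosaicing sketch; pattern constant and file names are hypothetical.
    import cv2

    raw = cv2.imread("raw_bayer.png", cv2.IMREAD_GRAYSCALE)  # single-channel raw frame
    color = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)         # demosaic to 3 channels
    cv2.imwrite("demosaiced.png", color)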

Burst Trigger Mode

  •  Generally, a trigger event tells the camera when to start recording; after a predefined amount of time (or when the memory is full) the recording stops.
  •  Depending on the application, yet another trigger event tells the camera when to terminate the recording.
  •  In Burst Trigger Mode, however, the camera records as long and as often as the trigger is active (comparable to the triggering mechanism of a machine gun).


CCD / CMOS comparison

  •  Abbreviations for the two main sensor technologies, describing the inner structure of the chip:
  •  "CMOS": complementary metal-oxide semiconductor
  •  "CCD": charge-coupled device

CCD:

A CCD sensor provides a defined electrical charge per pixel, i.e. a certain number of electrons according to the previous exposure.

These have to be captured pixel by pixel by a subsequent electronic circuit, converted into a voltage and recalculated into a binary value.

This operation is rather time-consuming. In addition, the whole frame has to be grabbed, which requires comprehensive postprocessing.

CMOS:

CMOS sensors can be produced more cheaply and offer the possibility of on-board preprocessing; the information of every pixel can be provided in digitized form.

  •  Thus the camera can be designed smaller, and random access to particular parts of the image ("ROI", region of interest) is possible.
  •  Needing fewer external circuits reduces the camera's power consumption, and the stored frames can be read out much faster.

Dynamic Range Adjustment

  •  The human eye has a very wide dynamic range, i.e. it can evaluate very low lighting conditions (like candle- or starlight) as well as extreme light impressions (reflected sunlight on a water surface).
  •  This corresponds to a (logarithmic) dynamic range of 90 dB. That means two objects whose quantities of light differ by a factor of 1,000,000,000 can both be seen clearly.
  •  Unlike this, a CMOS camera has a linear dynamic range of about 60 dB, which equals a ratio of 1:1000.
  •  If, for instance, a recording setup requires identifying dim component labels next to large welding reflections, image details within the reflection area cannot be seen.
  •  Cameras with Dynamic Range Adjustment enable the user to adjust the linear response in certain areas: overexposed objects become darker without losing intensity on the dark ones.
  •  Thus minimal variations of luminosity can be detected, even in areas of intense reflected light.

Fixed Pattern Noise (FPN)

  •  Every single pixel or photodiode in a CMOS camera has a construction-related tolerance.
  •  Even without any exposure to light, the diodes generate slightly varying output values.
  •  To avoid a corruption of the image, a process similar to the white balance in digital photography compares a reference picture with a dark frame.
  •  This frame contains only the detected differences and is used to correct the subsequent images from the sensor.
  •  Only after this kind of postprocessing is a plain white area, for example, displayed as homogeneously white.
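
A minimal NumPy sketch of this dark-frame correction (file names are hypothetical): average several unexposed captures into a per-pixel offset map and subtract it from each image.

    # Fixed-pattern-noise correction sketch; file names are hypothetical.
    import numpy as np

    dark_frames = np.load("dark_frames.npy")      # stack of unexposed captures
    dark = dark_frames.mean(axis=0)               # per-pixel offset map

    raw = np.load("raw_frame.npy").astype(np.float32)
    # Assumes 8-bit data; clip back into the valid range after subtraction
    corrected = np.clip(raw - dark, 0, 255).astype(np.uint8)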

Gigabit Ethernet (GigE)

  •  This data transfer technology allows transmission among various devices (servers, printers, mass storage, cameras) within a network.
  •  While standard Ethernet is too slow for the transfer of comprehensive image data, Gigabit Ethernet (GigE), with a maximum transfer rate of 1000 Mbit/s (1 Gigabit per second), ensures dependable image transfer for machine vision cameras.

GigE Vision

  •  GigE Vision is an industrial standard developed by the AIA (Automated Imaging Association) for high-performance machine vision cameras, optimised for the transfer of large amounts of image data.
  •  GigE Vision is based on the network structure of Gigabit Ethernet and includes a hardware interface standard (Gigabit Ethernet) and communication protocols, as well as standardised communication and control modes for cameras.
  •  The GigE Vision camera control is based on a command structure named GenICam.
  •  This establishes a common camera interface that enables communication with third-party vision cameras without any customisation.

ImageBLITZ automatic trigger

  •  To capture an unpredictable or unmeasurable event for "in-frame" triggering purposes, Mikrotron invented the ImageBLITZ operation mode.
  •  In most cases no further equipment or elaborate trigger-sensing devices for camera control are needed; the picture itself is the trigger.
  •  Within certain limits, ImageBLITZ is adjusted to react only to the expected changes in a predefined area of the picture.

Multi Sequence Mode

  •  In this mode the available memory of the camera is divided into many individual sequences. Following each trigger event (e.g. a keystroke, or a light barrier being tripped), a predefined number of frames is saved.
  •  For repeatedly occurring events, the different variations can be compared and provide a valuable basis for the analysis of malfunctions or technical processes.
  •  Even a previously determined number of frames before and after the trigger event can be saved within every recorded sequence.

Sobel Filter

  •  In several machine vision applications, such as motion analysis, positioning or pattern matching, it is essential to determine certain edges, outlines or coordinates.
  •  The Sobel filter uses an edge-detection algorithm to detect just those edges and produces a chain of pixels (just on/off) that resembles the edges.
  •  This process allows the data stream to be cut down by more than 80% already in the camera's FPGA chip. Less data has to be transferred and processed, so the transfer rate rises considerably.
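
The same edge extraction can be sketched in OpenCV (in the camera it runs in the FPGA, but the operation is identical; thresholds and file names are hypothetical):

    # Sobel edge-detection sketch; threshold and file name are hypothetical.
    import cv2

    img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)   # horizontal gradient
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)   # vertical gradient

    magnitude = cv2.magnitude(gx, gy)
    # Keep only strong edges as an on/off pixel chain
    _, edges = cv2.threshold(magnitude, 100, 255, cv2.THRESH_BINARY)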

 

Suspend to Memory Mode

    •  In this mode, camera operation is reduced to preserving the recorded images.

 

    •  Due to the resulting low power consumption, the charge of the storage battery lasts significantly longer.

 

    •  This mode is activated either automatically after recording or manually by pressing a button.

 

    •  The recording memory can thus be preserved for 24 hours.

 

TO KNOW MORE ABOUT MIKROTRON HIGH SPEED CAMERA DISTRIBUTOR IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

 

Source – MIKROTRON.DE

MV ASIA INFOMATRIX PTE LTD

3 Raffles Place, #07-01 Bharat Building,
Orchard Road
Singapore – 048617
Tel: +65 63296431
Fax: +65 63296432

E-mail: info@mvasiaonline.com / menzinfo@starhub.net.sg

VISION SYSTEM INSPECTS X-RAY DOSIMETER BADGES – HELMHOLTZ-ZENTRUM


In Germany, the inspection of x-ray dosimeters worn by people who may be exposed to radiation is a governmental responsibility. Only a handful of institutions are qualified to perform such tasks; one of them, the Helmholtz-Zentrum (Munich, Germany), is responsible for the analysis of approximately 120,000 film badge dosimeters a month.

Previously these 120,000 film badges were evaluated manually. To speed this inspection and increase reliability, the Helmholtz-Zentrum has developed a machine-vision system to automatically inspect these films. The film from each dosimeter badge is first mounted on a plastic adhesive foil, which is wound into a coil. This coil is then mounted on the vision system so that each film element can be inspected automatically (see figure). To analyze each film, a DX4 285 FireWire camera from Kappa optronics (Gleichen, Germany) is mounted on a bellows stage above the film reel.

Data from this camera is then transferred to a PC and processed using HALCON 9.0 from MVTec Software (Munich, Germany). The resulting high-dynamic-range images are then displayed using an ATI FireGL V3600 graphics board from AMD (Sunnyvale, CA, USA) on a FlexScan MX190S display from Eizo (Ishikawa, Japan). Before the optical density of the film is measured, its presence and orientation must be determined. As each film moves under the camera system’s field of view, this presence-and-orientation task is computed using HALCON’s shape-based matching algorithm.

Both the camera and a densitometer are used to measure the optical density of the film. The densitometer measures the brightness at each of seven points on the film with high precision and is used to calibrate the camera measurement for every film image. To increase the dynamic range of the gray-level image of the film, two images with different exposure times are acquired and combined into a high-dynamic-range image. Because the background lighting is not homogeneous, shading correction is performed to eliminate any lighting variation. Any lens vignetting and any pixel-to-pixel sensitivity variation are eliminated by flat-field correction. The optical density is converted into a photon dose using a linear algebraic function to calculate the x-ray dose to which the film was exposed.
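
The article does not spell out the formulas; the corrections described commonly take the following form, shown here as a hedged numpy sketch (all images assumed to be float arrays, and the calibration constants are placeholders, not Helmholtz-Zentrum values):

    import numpy as np

    def flat_field_correct(image, flat, dark):
        # Standard flat-field correction: 'flat' is an image of a uniformly
        # lit target, 'dark' a frame taken without light. This removes
        # vignetting and pixel-to-pixel sensitivity variation.
        gain = (flat - dark).mean() / (flat - dark)
        return (image - dark) * gain

    def optical_density(corrected, incident=255.0):
        # Optical density D = log10(I0 / I).
        return np.log10(incident / np.clip(corrected, 1e-6, None))

    def dose_from_density(density, a=1.0, b=0.0):
        # Linear conversion to photon dose; a and b stand in for the
        # calibration constants derived from the densitometer readings.
        return a * density + b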

Every film reading must be correlated with the unique specimen number associated with each badge. Since these numbers are deposited onto the film material, approximately 10,000 characters had to be trained and saved to an OCR database using HALCON. After the film is identified, the system must also detect which type of dosimeter cassette was used to house the film. Since each cassette uses a different x-ray filter, the shadow cast on the film can be either rectangular or round; a grayscale analysis of these shadows can therefore be used to distinguish between the different types of cassette used to house the film. To pinpoint the specific causes of x-ray exposure, the system is also programmed to detect whether any potential exposure was caused by errors in film developing or by x-ray contamination. If the imaging system detects contamination events, these are then reported manually.

 

TO KNOW MORE ABOUT MACHINE VISION SYSTEM IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

 

Source – MVTEC.COM


INDUSTRIAL CAMERAS – LETTING ROBOTIC ARMS SEE


Robotic arms are widely used in industrial automation. They complete tasks which humans cannot accomplish, which are considered too time-consuming or dangerous, or which require precise positioning and highly repetitive movements. Tasks are completed to a high standard with speed, reliability and precision. Robotic arms are used in all areas of industrial manufacturing, from the automobile industry to mold manufacturing and electronics, but also in fields where the technology might be less expected, such as agriculture, healthcare and the service industries.

ROBOTIC ARMS “SEE” WITH MACHINE VISION

Like humans, robotic arms need “eyes” to see and feel what they grasp and manipulate: machine vision makes this possible. Industrial cameras and image processing software work together to enable the robot to move efficiently and precisely in three-dimensional space, which allows it to perform a variety of complex tasks: welding, painting, assembly, pick-and-place for printed circuit boards, packaging and labeling, palletizing, product inspection, and high-precision testing. Not all industrial cameras are compatible with or can be installed in robotic arms, but The Imaging Source’s GigE industrial cameras provide an optimal solution.

GIGE INDUSTRIAL CAMERAS FROM THE IMAGING SOURCE – THE COST EFFECTIVE AND HIGHLY VERSATILE IMAGING SOLUTION

THE IMAGING SOURCE’S GIGE INDUSTRIAL CAMERAS are best known for their outstanding image quality, easy integration and rich set of features. They ship with highly sensitive CCD or CMOS sensors from Sony and Aptina, which offer very low noise levels, provide multiple options in terms of resolution and frame rate, guarantee precise position capture, and output first-rate image quality. External Hirose ports make the digital I/O, strobe and trigger inputs and flash outputs easily accessible. Binning and ROI features (CMOS only) enable increased frame rates and improved signal-to-noise ratios, as the rough calculation below illustrates. The cameras’ extremely compact and robust industrial housing means straightforward integration into robotic assemblies.
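
Why binning helps, in rough numbers (assuming shot-noise-limited imaging; actual gains depend on the sensor and readout):

    import math

    # 2x2 binning combines four pixels into one output pixel.
    pixels_combined = 4

    # Signal adds linearly while shot noise adds in quadrature,
    # so the signal-to-noise ratio improves by sqrt(N):
    snr_gain = math.sqrt(pixels_combined)
    print(f"SNR gain: {snr_gain:.1f}x")  # 2.0x

    # The sensor also reads out a quarter of the pixels,
    # which is what allows the higher frame rates.
    print(f"pixels to read out: 1/{pixels_combined} of the full frame")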

In addition, The Imaging Source’s GigE industrial cameras are shock-resistant, so camera shake and blurred images can be avoided. The cameras ship with camera-end locking screws, and the built-in Gigabit Ethernet interface allows for very long cable lengths (up to 100 meters) for maximum flexibility.

The Imaging Source’s GigE industrial cameras come bundled with highly compatible end-user software and SDKs, which make setup and integration with robotic arms fast and simple. Trained personnel without extensive robot-programming experience can reprogram the cameras to complete new tasks in a snap. These characteristics, along with their competitive price, make The Imaging Source’s GigE industrial cameras the perfect solution for robotic arm applications.

Suitable cameras for robotic arms:

  • GigE color industrial cameras
  • GigE monochrome industrial cameras

TO KNOW MORE ABOUT IMAGING SOURCE MACHINE VISION CAMERAS IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

Source – THEIMAGINGSOURCE.COM
