LAB AUTOMATION WITH VISION

April 2019

 

Thanks to technical, scientific and medical progress, human life expectancy has increased considerably in recent decades. Precise, highly technical and increasingly automated equipment in large hospitals and labs now provides valuable support in numerous measuring and analytical tasks.

Credits: freepik.com

WIDE RANGE OF LAB APPLICATIONS

The concept of lab automation can be interpreted in many ways and includes various tasks: from simple applications such as weighing, to complex robotic and analytical systems, process tracking, and storage systems. This results in numerous possible camera applications in the medical, scientific, pharmaceutical and analytical fields. Some of these are obviously recognizable, such as those in an imaging urine sediment analysis device, while others run in the background and provide information for the medical diagnosis we receive from a physician. Others in turn support processes that are internal to the devices and not directly connected with the actual detection process. These applications range from the simple input of a barcode to the support of laser technologies; from the path traversed by blood – starting with the prick of the needle when the blood is drawn, through the various test processes, to the result – up to complex processes in cell technology that offer scientists insights into the origins of diseases, thus advancing diagnostic and therapeutic innovations.

AUTOMATION TREND IN MEDICINE AND RESEARCH

Hospital and research labs increasingly follow the trend towards automation. The essential drivers of this development are:

1. RISING COST PRESSURE:

Health systems and research institutions are subject to growing economic strains, and try to counteract this pressure with cost reductions in their services. Automation through modern technologies with inexpensive system components makes it possible to lower equipment costs in the lab, relieves the staff, and frees up capacity that can be utilized elsewhere.

2. HIGH SPEED:

The faster processing of analyses enables more analyses per unit of time for clinical and analytical contract laboratories, giving them an advantage over competitors since they can serve their customers faster. Automation can also help generate more results per unit of time in research, which shortens project periods and makes new developments or technologies available sooner.

3. BETTER QUALITY MANAGEMENT AND STANDARDIZATION:

Many examinations, once manually executed, are increasingly handled by machines, whose technological features make it possible to complete these tasks with greater precision and improved reproducibility. Thanks to an applied vision system and automated microscopy, for example, researchers can now view detailed and precise image data on their office monitors without having to look through eyepieces in darkrooms. Furthermore, the captured image data offers the capability of documentation and archiving, which meets the growing demands of quality management systems. Automated systems also aren’t subject to the process-related variances of manual work steps, giving them greater reproducibility and paving the way for advancing standardization. Digital image data can be viewed across different locations if desired, e.g. for a scientific exchange or an external diagnostic consultation. The conditions for a reliable diagnostic statement are therefore improved by camera-supported examinations and analyses.

4. WIDER AVAILABILITY:

Lab automation efficiently makes new technologies accessible to many users. This makes it possible for research to determine the pathogenic processes of diseases more quickly. As a result, for example, diseases can be recognized earlier with the help of molecular-biological analyses in in-vitro diagnostics, which may reduce or even prevent their onset and the associated costly therapies that are so strenuous for patients. Devices that are easy to use and inexpensive enable diagnostics even in regions with economic and infrastructure challenges. This means medical care can be improved in epidemic regions, where staff are often less well trained, lab equipment is of a lower standard overall, and the financial means of affected patients are low. Here we can expect an increasing number of so-called POC systems (POC = point of care) and lab-on-a-chip technologies.

APPLICATION AREAS FOR LAB AUTOMATION

Below are some examples of typical application areas for automated, camera-based applications in labs:

1. PROCESS AUTOMATION

This includes general camera applications that generate images and data not for purely analytical but for process-supporting purposes, e.g. barcode/matrix code reading, as applied in most devices for in-vitro diagnostics (IVD). This could involve the simple identification of a patient’s sample vial, or the transmission of data from the reagent used, which the device needs in order to calculate the analyses and to document the batch for quality management purposes. In an automatic exchange with a lab information system, the right results are thus attributed to the requisition for a patient sample and managed digitally.
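Barcode decoding itself is handled by the camera software, but the captured code must still be validated before results are attributed to a patient sample. As a minimal, device-independent sketch (using the standard EAN-13 check-digit scheme; no specific IVD device is implied), the validation step might look like this in Python:

```python
def ean13_check_digit(digits12: str) -> int:
    """Compute the EAN-13 check digit for a 12-digit payload.
    Odd positions (1st, 3rd, ...) are weighted 1, even positions 3."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(digits12))
    return (10 - total % 10) % 10

def is_valid_ean13(code: str) -> bool:
    """Validate a full 13-digit code, e.g. as read from a sample-vial label."""
    return (len(code) == 13 and code.isdigit()
            and ean13_check_digit(code[:12]) == int(code[12]))
```

A code that fails this check would be rejected before any exchange with the lab information system takes place.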

Many lab devices work with liquid test material. Depending on the application area, different parameters in this so-called liquid handling process must be determined and/or checked: for example, the state of the liquid (no air may be pipetted, since it would falsify the analysis result), the type of vial, the color of the lid that codes the test material inside (e.g. whether it is a serum or whole-blood vial), or color properties, layers, or irregularities (bubbles, foam) in the liquid. Cameras can offer advantages here since they need no contact with the sample and, in contrast to other methods such as capacitive determination of the liquid state, don’t require removing the lid. This prevents problems such as contamination and enables higher throughput.
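In the simplest case, a camera-based liquid check of this kind reduces to analyzing a brightness profile down the vial: air appears bright, liquid dark. The following sketch uses an invented synthetic profile and an arbitrary threshold; a real device would calibrate both:

```python
def find_fill_level(profile, dark_threshold=80):
    """Given a top-to-bottom brightness profile of a vial (values 0-255),
    return the row index where air (bright) gives way to liquid (dark),
    or None if the vial appears empty."""
    for row, value in enumerate(profile):
        if value < dark_threshold:
            return row
    return None

def has_air_below_level(profile, level, dark_threshold=80):
    """Flag bright rows below the liquid surface: a crude indicator of
    bubbles or foam that would falsify a pipetting step."""
    return any(v >= dark_threshold for v in profile[level:])

profile = [200, 198, 195, 60, 55, 52, 180, 58, 50]  # synthetic profile
level = find_fill_level(profile)                    # surface at row 3
bubbly = has_air_below_level(profile, level)        # row 6 is bright
```

In this toy profile the bright row below the surface would cause the sample to be flagged before pipetting.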

2. AUTOMATED MICROSCOPY

Automated microscopy includes, for instance, applications of light and fluorescence microscopy for in-vitro diagnostics (IVD), in life sciences, pharmaceutical research and in digital pathology.

Different manufacturers use camera systems in their devices to diagnose autoimmune diseases, or diagnose diseases of the blood and hematopoietic organs in hematology, as well as in digital pathology. Pathologists examine tissue sections or cell samples for pathological changes. To this end, they prepare slides which can be examined by microscope to draw conclusions about diseases and provide valuable information for the diagnosis and therapy options, which may not be discernible through other means such as radiology.

There is a wide selection of additional automatic microscope systems with different purposes: from a small device the size of half a shoe box used for simple cell counting, to systems used directly in incubators that enable time-dependent live-cell imaging without manual intervention, all the way to the high-content screening systems used e.g. in pharmaceutical substance screening – anything is possible.

WHICH CAMERA FOR WHICH LAB APPLICATION?

In addition to the above-mentioned fields, there is a wide variety of other potential applications and use cases for cameras across the scientific field, for example in protein and nucleic acid analytics, microbiology, particle analytics and more. It’s important to offer cameras with the right features to cover a wide range of applications in the various specialty areas. Independently of the camera’s specific product features, it should offer easy and flexible integration with an efficient and comfortable SDK, and, of course, provide high quality and reliability. Technically excellent support, quickly and readily available, also simplifies the integration process for the system developer.


TO KNOW MORE ABOUT MEDICAL AUTOMATION PRODUCT DEALERS IN SINGAPORE, ASIA, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

WHAT IS EMBEDDED VISION?


 

March 2019

 

  

In recent years, a miniaturization trend has been established in many areas of electronics. For example, ICs have become more and more integrated and circuit boards in the electrical industry have become smaller and more powerful. This has also made PCs, mobile phones and cameras more and more compact and also more powerful. This trend can also be observed in the world of vision technology.


 

A classic machine vision system consists of an industrial camera and a PC:

Both were significantly larger a few years ago. But within a short time, smaller and smaller PCs became possible, and in the meantime the industry saw the introduction of single-board computers (SBCs), i.e. complete computers built on a single board. At the same time, camera electronics became more compact and cameras successively smaller. On the way to even higher integration, small cameras without housings are now offered, which can easily be integrated into compact systems.

Due to these two developments, the reduction in size of the PC and the camera, it is now possible to design highly compact camera vision systems for new applications. These systems are called embedded (vision) systems.

 

Design and use of an embedded vision system

An embedded vision system consists, for example, of a camera (a so-called board-level camera) connected to a processing board. Processing boards take over the tasks of the PC in the classic machine vision setup. As processing boards are much cheaper than classic industrial PCs, vision systems can become smaller and more cost-effective. The interfaces for embedded vision systems are primarily USB, Basler’s BCON for MIPI, or BCON for LVDS.

Embedded vision systems are used in a wide range of applications and devices, such as in medical technology, in vehicles, in industry and in consumer electronics. Embedded systems enable new products to be created and thereby create innovative possibilities in several areas.

 

Which Embedded Systems Are Available?

A so-called SoC (system on chip) lies at the heart of all embedded processing solutions. This is a single chip that integrates the CPU (possibly with multiple cores), graphics processor, controllers, other special processors (DSP, ISP) and further components.

It is only thanks to these efficient SoC components that embedded vision systems have recently become available in such a small size and at such low cost.

As embedded systems, there are popular single-board computers (SBC), such as the Raspberry Pi® or DragonBoard®. These are mini-computers with the established interfaces (USB, Ethernet, HDMI, etc.) and a range of features similar to traditional PCs or laptops, although the CPUs are of course less powerful.

Embedded vision solutions can also be designed with a so-called SoM (system on module, also called computer on module or CoM). In principle, an SoM is a circuit board which contains the core elements of an embedded processing platform, such as the SoC, storage, power management, etc. An individual carrier board is required for the customization of the SoM to each application (e.g. with the appropriate interfaces). This is connected to the SoM via specific connectors and can be designed and manufactured relatively simply. The SoMs (or the entire system) are cost-effective on the one hand since they are available off-the-shelf, while on the other hand they can also be individually customized through the carrier board.

Completely individual processing boards in the form of a full custom design may also be a sensible choice for high quantities.

 

Characteristics of Embedded Vision Systems versus Standard Vision Systems

Most of the above-mentioned single board computers and SoMs do not include the x86 family processors common in standard PCs. Rather, the CPUs are often based on the ARM architecture.

The open-source Linux operating system is widely used as an operating system in the world of ARM processors. For Linux, there is a large number of open-source application programs, as well as numerous freely-available program libraries. Increasingly, however, x86-based single-board computers are also spreading. A consistently important criterion for the computer is the space available for the embedded system.

For the software developer, the program development for an embedded system is much more complex than for a standard PC. While the PC used in standard software development is also the main target platform (meaning the type of computer which the program is later intended to run on), this is different in the case of embedded software, where the target system generally can’t be used for the development due to its limited resources (CPU performance, storage). This is why the development of embedded software also uses a standard PC on which the program is coded and compiled with tools that may get very complex. The compiled program must then be copied to the embedded system and subsequently be debugged remotely.

When developing the software, it should be noted that the hardware concept of the embedded system is oriented to a specific application and thus differs significantly from the universally usable PC.

However, the boundary between embedded and desktop computer systems is sometimes difficult to define. Just think of the popular Raspberry Pi, which on the one hand has many features of an embedded system (ARM-based, single-board construction), but on the other hand can cope with very different tasks and, with the connection of a monitor, mouse and keyboard, is therefore a universal computer.

 

What Are the Benefits of Embedded Vision Systems?

In some cases, much depends on how the embedded vision system is designed. An SBC (single-board computer) is often a good choice, as this is a standard product. It is a small compact computer that is easy to use. This solution is also useful for developers who have had little to do with embedded vision.

On the other hand, however, the single-board computer is a system which contains unused components and thus generally does not allow the leanest system configuration. For that reason, this approach is not very economical in terms of manufacturing costs and is more suitable for small unit numbers, where the development costs must be kept low while the manufacturing costs are of secondary importance.

The leanest setup is obtained with a full-custom design, a system that is highly optimized for individual applications. But this involves high integration costs and the associated high development expenditures. This solution is therefore suitable for large unit numbers.

An approach with a commercially available system on module (SoM) and an appropriately customized carrier board presents a compromise between an SBC and a full-custom design (also see above: “Which Embedded Systems Are Available?”). The manufacturing costs are not as optimized as in a full-custom design (after all, a setup with a carrier board plus a more or less generic SoM is a bit more complex), but at least the hardware development costs are lower, since a significant part of the hardware development is already completed with the SoM. This is why a module-based approach is a very good choice for medium unit numbers, where the manufacturing and development costs must be well balanced.
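The volume tradeoff between SBC, SoM plus carrier board, and full-custom design can be made concrete with a toy cost model. All figures below are invented for illustration and are not vendor prices:

```python
def total_cost(dev_cost, unit_cost, quantity):
    """Toy model: one-off development cost plus per-unit manufacturing cost."""
    return dev_cost + unit_cost * quantity

# Invented example figures: an SBC needs little development but has
# expensive units; full-custom is the reverse; an SoM sits in between.
options = {
    "SBC":         (5_000,   120),   # (development cost, cost per unit)
    "SoM+carrier": (30_000,   70),
    "full-custom": (150_000,  40),
}

def cheapest(quantity):
    """Return the approach with the lowest total cost at a given volume."""
    return min(options, key=lambda name: total_cost(*options[name], quantity))
```

With these made-up numbers, the SBC wins at small volumes, the SoM-based design at medium volumes, and the full-custom design at large volumes, mirroring the reasoning above.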

The benefits of embedded vision systems at a glance:

  • Leaner system design
  • Light weight
  • Cost-effective, because there is no unnecessary hardware
  • Lower manufacturing costs
  • Low energy consumption
  • Small footprint

 


 


To Know More About Machine Vision Cameras Dealer in Singapore Asia, Contact MVAsia Infomatrix Pte Ltd at +65 6329-6431 or Email us at info@mvasiaonline.com

 

Source – www.baslerweb.com

BITFLOW PREDICTS VISION-GUIDED ROBOTICS TO BECOME MAJOR DISRUPTIVE FORCE IN GLOBAL MANUFACTURING


As the plant floor has become more digitally connected, the relationship between robots and machine vision has merged into a single, seamless platform, setting the stage for a new generation of more responsive vision-driven robotic systems. BitFlow, Inc., a global innovator in frame grabbers used in industrial imaging, predicts vision-guided robots will be one of the most disruptive forces in all areas of manufacturing over the next decade.

“Since the 1960s robots have contributed to automation processes, yet they’ve done so largely blind,” said Donal Waide, Director of Sales for BitFlow, Inc. “Vision-equipped robots are different. Now, just like a human worker, robots can see a specific part to validate whether it is being placed correctly in a pick and place application, for example. Cost savings will be realized since less hard fixturing is required and the robot is more flexible in its ability to locate a variety of different parts with the same hardware.”


HOW ROBOTIC VISION WORKS

Using a combination of camera, cables, frame grabber and software, a vision system will identify a part, its orientation and its relationship to the robot. Next, this data is fed to the robot and motion begins, such as pick and place, assembly, screw driving or welding tasks. The vision system will also capture information that would be otherwise very difficult to obtain, including small cosmetic details that let the robot know whether or not the part is acceptable. Error-proofing reduces expensive quality issues with products. Self-maintenance is another benefit. In the event that alignment of a tool is off because of damage or wear, vision can compensate by performing machine offset adjustment checks on a periodic basis while the robot is running.
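The hand-off from the vision system to the robot described above is essentially a coordinate transform: a part pose measured in image pixels is mapped into the robot’s frame before motion begins. A minimal 2D sketch follows; the calibration values (scale, rotation, offset) are invented, whereas a real system obtains them from a calibration routine:

```python
import math

def image_to_robot(px, py, scale=0.05, theta=math.radians(90),
                   tx=200.0, ty=50.0):
    """Map an image point (pixels) to robot coordinates (mm) with a
    rigid 2D transform: rotate by theta, scale (mm per pixel), translate.
    scale/theta/tx/ty are invented stand-ins for calibration results."""
    x = scale * (px * math.cos(theta) - py * math.sin(theta)) + tx
    y = scale * (px * math.sin(theta) + py * math.cos(theta)) + ty
    return x, y
```

Once the part’s centroid has been located in the image, this mapping gives the robot a target position for the pick-and-place move.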

DUAL MARKET GROWTH

It should come as no surprise that the machine vision and robotics markets are moving in tandem. According to the Association for Advancing Automation (A3), robot sales in North America last year surpassed all previous records. Customers purchased 34,904 total units, representing $1.896 billion in total sales. Meanwhile, total machine vision transactions in North America increased 14.8%, to $2.262 billion. The automotive industry accounts for approximately 50% of total sales.

THE ROLE OF FRAME GRABBERS

Innovations in how vision-guided robots perceive and respond to their environments are exactly what manufacturers are looking for as they develop automation systems to improve quality, productivity and cost efficiencies. These types of advancements rely on frame grabbers being paired with high-resolution cameras to digitize analog video, thus converting the data to a form that can be processed by software.

BitFlow has responded to the demands of the robotics industry by introducing frame grabbers based on the CoaXPress (CXP) machine vision standard, currently the fastest and most powerful interface on the market. In robotics applications, the five-to-seven-meter restriction of a USB cable connection is insufficient. BitFlow CXP frame grabbers allow up to 100 meters between the frame grabber and the camera, without any loss in quality. To minimize cabling costs and complexity, BitFlow frame grabbers require only a single piece of coax to transmit high-speed data, as well as to supply power and send control signals.

BitFlow’s latest model, the Aon-CXP frame grabber, is engineered for simplified integration into a robotics system. Although small, the Aon-CXP receives 6.25 Gb/s of data over its single link, almost twice the real-world data rate of the USB3 Vision standard and significantly quicker than the latest GigE Vision data rates. The Aon-CXP is designed for use with a new series of single-link CXP cameras that are smaller, less expensive and cooler running than previous models, making them ideal for robotics.
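Link rates like these translate directly into achievable frame rates. A back-of-the-envelope check (protocol overhead is ignored, and the 3.2 Gb/s USB3 figure is an assumed real-world rate, not a measured one):

```python
def max_frame_rate(link_bits_per_s, width, height, bits_per_pixel=8):
    """Upper bound on frames per second a link can carry,
    ignoring protocol overhead."""
    bits_per_frame = width * height * bits_per_pixel
    return link_bits_per_s / bits_per_frame

# Hypothetical 2048 x 2048, 8-bit camera on each interface:
cxp_fps = max_frame_rate(6.25e9, 2048, 2048)   # single-link CXP, 6.25 Gb/s
usb3_fps = max_frame_rate(3.2e9, 2048, 2048)   # assumed real-world USB3 rate
```

Under these assumptions the single CXP link carries roughly 186 frames per second versus roughly 95 over USB3, which is consistent with the “almost twice” claim above.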



TO KNOW MORE ABOUT BITFLOW FRAME GRABBER CARDS DEALER SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

Source – WWW.ROBOTICSTOMORROW.COM

AN INTRODUCTION TO MACHINE VISION SYSTEMS


Machine vision is the incorporation of computer vision into industrial manufacturing processes, although it does differ substantially from computer vision. In general, computer vision revolves around image processing. Machine vision, on the other hand, uses digital input and output to manipulate mechanical components. Devices that depend on machine vision are often found at work in product inspection, where they often use digital cameras or other forms of automated vision to perform tasks traditionally performed by a human operator. However, the way machine vision systems ‘see’ is quite different from human vision.

THE COMPONENTS OF A MACHINE VISION SYSTEM CAN VARY, BUT THERE ARE SEVERAL COMMON FACTORS FOUND IN MOST. THESE ELEMENTS INCLUDE:

    • Digital or analog cameras for acquiring images
    • A means of digitizing images, such as a camera interface
    • A processor

 

Credits: freepik.com

When these three components are combined into one device, it’s known as a smart camera. A machine vision system can consist of a smart camera with the following add-ons:

    • Input and output hardware
    • Lenses
    • Light sources, such as LED illuminators or halogen lamps
    • An image processing program
    • A sensor to detect and trigger image acquisition
    • Actuators to sort defective parts

 

HOW MACHINE VISION SYSTEMS WORK

Although each of these components serves its own individual function and can be found in many other systems, when working together they each have a distinct role in a machine vision system.

To understand how a machine vision system works, it may be helpful to envision it performing a typical function, such as product inspection. First, the sensor detects whether a product is present. If a product passes by the sensor, it triggers a camera to capture the image, and a light source to highlight key features. Next, a digitizing device called a frame grabber takes the camera’s image and translates it into digital output, which is then stored in computer memory so it can be manipulated and processed by software.

In order to process an image, computer software must perform several tasks. First, the image is reduced in gradation to a simple black and white format. Next, the image is analyzed by system software to identify defects and proper components based on predetermined criteria. After the image has been analyzed, the product will either pass or fail inspection based on the machine vision system’s findings.
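The binarize-analyze-decide sequence just described can be sketched in a few lines of Python, with a nested list standing in for a grayscale image; the threshold and defect limit are invented for illustration:

```python
def binarize(image, threshold=128):
    """Reduce a grayscale image (values 0-255) to black (0) / white (1)."""
    return [[1 if p >= threshold else 0 for p in row] for row in image]

def inspect(image, max_defect_pixels=2, threshold=128):
    """Pass/fail: count dark pixels (assumed defects on a bright part)
    and compare against a predetermined criterion."""
    binary = binarize(image, threshold)
    defects = sum(row.count(0) for row in binary)
    return defects <= max_defect_pixels

good_part = [[200, 210], [205, 220]]   # uniformly bright: no defects
bad_part  = [[200,  40], [ 30,  50]]   # three dark (defect) pixels
```

Here `inspect(good_part)` passes while `inspect(bad_part)` fails, and in a real system the result would drive the actuators that sort out defective parts.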

GENERAL APPLICATIONS

Beyond product inspection, machine vision systems have numerous other applications. Systems that depend on visual stock control and management, such as barcode reading, counting, and store interfaces, often use machine vision systems. Large-scale industrial product runs also employ machine vision systems to assess the products at various stages in the process and also work with automated robotic arms. Even the food and beverage industry uses machine vision systems to monitor quality. In the medical field, machine vision systems are applied in medical imaging as well as in examination procedures.



TO KNOW MORE ABOUT INDUSTRIAL MACHINE VISION SYSTEM SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

Source – WWW.THOMASNET.COM

A LOOK AT THE PROGRESSION OF MACHINE VISION TECHNOLOGY OVER THE LAST THREE YEARS


Machine vision represents a diverse and growing global market, one that can be difficult to keep up with, in terms of the latest technology, standards, and product developments, as they become available from hundreds of different organizations around the world.

If you are looking for an example of how fast the market moves, and how quickly trends and new technologies emerge, our Innovators Awards program provides a good reference point. In 2015, we launched our first annual Innovators Awards program, which celebrates the disparate and innovative technologies, products, and systems found in the machine vision and imaging market. Comparing the products that received distinction in 2015 with this past year’s crop of honorees, it does not take long to draw some obvious conclusions. Let’s start with the most noticeable: the cameras that received awards.

In 2015, five companies received awards for cameras. These cameras performed various functions and offered disparate capabilities, including pixel shifting, SWIR sensitivity, multi-line CMOS time delay integration, high-speed operation, and high dynamic range operation. In 2018, 13 companies received awards for their cameras, but the capabilities and features of these cameras look much different.


CAMERAS THAT RECEIVED AWARDS IN 2018 OFFERED THE FOLLOWING FEATURES:

Polarization, 25GigE interface, 8K line scan, scientific CMOS sensor, USB 3.1 interface, fiber interface, embedded VisualApplets software, 3-CMOS prism design, and subminiature design. Like in 2015, a few companies were also honored for high-speed cameras, but overall, it is evident that most of the 2018 camera honorees are offering much different products than those from our inaugural year.

There are two other main categories that stick out, in terms of 2018 vs. 2015, the first of which is software products. In 2015, two companies received awards for their software—one for a deep learning software product and another for a machine learning-based quality control software. In 2018, eight companies received awards for software.

THESE SOFTWARE PRODUCTS OFFERED THE FOLLOWING FEATURES OR CAPABILITIES:

Deep learning (three honorees), data management, GigE Vision simulation, neural network software for autonomous vehicles, machine learning-based desktop software for autonomous vehicle vision system optimization, and a USB3 to 10GigE software converter.

Lastly, the category of embedded vision looked much different in 2018 than it did in 2015. In the embedded vision category—which I am combining with smart cameras due to overlap—there were two companies that received awards in 2015, both of which were for smart cameras that offered various capabilities. This year, however, there were 12 companies that were honored for their embedded vision innovations, for products that offered features including: embedded software running on Raspberry Pi, computer vision and deep learning hardware and software platform, embedded vision development kits, embedded computers, 3D bead inspection, as well as various smart cameras.

Throughout the other categories, there were equal or similar numbers of honorees from both years, but several interesting technologies and applications popped up among the 2018 products. These include a lens for virtual reality/augmented reality applications, a mobile hyperspectral camera, a 3D color camera, and various lighting products targeting multispectral and hyperspectral imaging applications.

This is all to say that, looking back from 2015 to today, machine vision technology has grown quite a bit. With the rapid pace of advancements, the growing needs of customers and end users, the miniaturization and falling costs of components, and so on, it is exciting to think about what machine vision products in 2021 might look like.



 

TO KNOW MORE ABOUT VISION INSPECTION SYSTEMS IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

Source – VISION-SYSTEMS.COM

FIVE MYTHS ABOUT ROBOTIC VISION SYSTEMS


Vision systems for robotic manufacturing applications have significantly evolved over the last decade. While the vision systems of old were unreliable, clunky and expensive, today’s systems are anything but. Proper vision systems can make the difference between an efficient robotic system and one that is not working optimally.

HERE ARE 5 MYTHS AND TRUTHS ABOUT VISION SYSTEMS.

Credits: freepik.com

MYTH #1: VISION SYSTEMS ARE COMPLICATED

In actuality, modern vision systems are very simple to install and use. Most of the algorithms and communications are built in, so it can be very easy and quick to make adjustments without the help of a trained engineer. New users are often surprised just how easy it is to use and maintain their vision systems.

MYTH #2: VISION SYSTEMS ARE NOT RELIABLE

If a vision system is properly applied, it will be highly robust, repeatable and reliable. Today’s vision system components are very robust, even in harsh environments. They are built to operate in rugged applications. Unlike a human, a vision system will see accurately every time. It never gets tired, takes a break or goes home for the evening.

MYTH #3: ALL VISION SYSTEMS ARE THE SAME

There is no truly out-of-the-box solution for vision systems. Each application is unique, and many factors need to be considered. Anyone who tells you there’s a plug-and-play option for your operations is not selling you a solution that’s properly engineered for your needs. Customized vision systems are the only ones that will work efficiently and reliably.

MYTH #4: VISION SYSTEMS ARE ALWAYS THE BEST SOLUTION

While vision systems are helpful in many robotic applications, there are some jobs in which vision may not be the answer. For example, operations that have drastic changes from part to part moving quickly on a single line may not benefit from a vision system because more discriminating inspection may be necessary. In addition, a vision system helps provide tight tolerances, so applications with loose tolerances may be just fine with sensors and not need to be upgraded to a vision system.

MYTH #5: VISION SYSTEMS ARE TOO EXPENSIVE

Just 10 years ago, typical vision systems cost an average of $30,000. Today, that same system may cost only $5,000 to $15,000. The evolution of vision technologies has brought down costs considerably. In fact, many companies see an ROI relatively quickly because a vision system requires fewer special fixtures and conveyors, decreases downtime for fixture changeover, and improves operations overall.

An efficient manufacturer must get products in and out of a cell quickly and reliably. Vision systems paired with robotic operations can give an operation a competitive advantage by providing opportunities to produce more and streamline the process for optimum profitability.



TO KNOW MORE ABOUT MACHINE VISION SYSTEMS IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

Source – ROBOTICSTOMORROW.COM

MACHINE VISION TRENDS TO WATCH IN 2018 AND BEYOND


MACHINE VISION TECHNOLOGY has found its way into applications inside and outside of factory settings, riding a wave of progress in automation technology and growing into a sizable global industry. Quite a bit of future technology will depend on machine vision, and the market will grow accordingly.

In 2017, according to a recent report, the global machine vision market was valued at $7.91 billion. By 2023, the global market is expected to reach $12.29 billion – a compound annual growth rate (CAGR) of 7.61%. This robust growth is driven by a number of broader economic factors.
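As a quick sanity check, the CAGR implied by the two market figures can be recomputed directly (a minimal sketch; the function name is mine, not from the report):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by growing from start_value
    to end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# $7.91B in 2017 to $12.29B in 2023 is a six-year span
rate = cagr(7.91, 12.29, 6)
print(f"Implied CAGR: {rate:.2%}")  # close to the reported 7.61%
```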

WHAT’S DRIVING LONG-TERM GROWTH IN MACHINE VISION?

The main drivers of growth in the machine vision market are the need for quality inspection and automation inside factories, growing demand for AI- and IoT-integrated systems that depend on machine vision, increasing adoption of Industry 4.0 technology that uses vision to improve the productivity of robotic automation, and government initiatives to support smart factories across the globe.

Machine vision software will be one of the fastest growing segments between 2017 and 2023. The main reason for this is the expected increase in integration of AI into industrial machine vision software to enable deep learning in robotics technology.

PC-based INDUSTRIAL MACHINE VISION PRODUCTS, the oldest form of industrial machine vision, will retain a large portion of machine vision market share because of their ease of use and processing power.


WHAT TRENDS ARE WORTH WATCHING NOW?

While there are several main factors in the expected long-term growth of the global machine vision market, there are a few trends to keep an eye on now that are changing the way machine vision technology is deployed.

    •  Industrial Internet of Things (IIoT): while AI and IoT technology are long-term drivers of growth, the IIoT is connecting production technology with information technology in today’s factories to increase productivity. The IIoT depends heavily on machine vision to collect the information it needs.
    •  Non-Industrial Applications: driverless cars, autonomous farm equipment, drone applications, intelligent traffic systems, guided surgery and other non-industrial uses of machine vision are rapidly growing in popularity, and often call for different functionality in machine vision than industrial applications. These non-industrial uses of machine vision are being deployed today and could be an important part of machine vision growth.
    •  Ease of Use: Machine vision systems can often be complex from the user’s perspective. As mentioned above, PC-based machine vision systems will remain popular, despite their age, because of their ease of use. The desire for ease of use may drive further standardization in machine vision products, which could make them even easier to deploy inside and outside of factory settings.

The machine vision market is poised for long-term growth. The IIoT, growing non-industrial applications and ease of use are all helping buoy today’s machine vision market, but there are several other factors affecting long-term market expansion.

With market growth comes innovation. There are EXCITING THINGS ON THE HORIZON FOR MACHINE VISION AND VISION TECHNOLOGY.


 


Source – VISIONONLINE.ORG

HOW DANFOSS IXA AND EDMUND OPTICS ARE CREATING A CLEANER ENVIRONMENT


The future depends on monitoring and regulating air pollution, which is an essential step towards creating a cleaner environment.

MONITORING MARITIME POLLUTION WITH OPTICS

Monitoring and regulating air pollution is an essential step towards creating a cleaner environment. Danfoss IXA, a high-tech company based in Denmark, is developing a device called MES 1001, a marine emission sensor based on ultraviolet absorption spectroscopy which monitors the NO, NO2, SO2 and NH3 emissions produced by cargo ships to ensure that they are complying with all environmental regulations. The optical sensor is placed inside the exhaust system of ships, so the involved optics will be exposed to extreme conditions and must be able to withstand temperatures up to 500°C and very high pressures simultaneously.

Danfoss IXA was looking for a partner to develop optics fulfilling their demanding requirements, and in EDMUND OPTICS (EO) they found a partner who was prepared to take on this challenge which went beyond their normal capabilities. EO created custom test beds for verifying the unique requirements of the sensor, which enabled EO to develop a robust system to meet Danfoss IXA’s specifications.

Danfoss IXA develops sensors and systems for the maritime industry, focusing on energy optimization and the measurement of emission gases. They are a part of the Danfoss Group, a global enterprise which produces a wide range of technologies that address a variety of markets including food supply, energy efficiency, and climate-friendly solutions.

Smokestack emissions from international shipping are a severe problem for human health, contributing to premature mortality across the world through lung damage and cardiovascular disease.


CREATING A CLEANER ENVIRONMENT STARTS AT SEA

 

The International Maritime Organization (IMO) has recently decided that commercial ships must comply with low sulfur fuel requirements globally by 2020. In addition, the current Nitrogen Oxide emission control area along the North American coastline will be expanded to cover the Baltic and North Seas in 2021. There currently aren’t convenient and reliable ways for the IMO to monitor ships’ emissions and enforce these regulations. A multitude of local and regional initiatives seeking to limit air emissions from ships further underlines the fact that the industry needs to adapt to a world where strict emission requirements are part of the game. Danfoss IXA is developing the MES 1001, a comprehensive marine emission sensor suitable for accurately measuring a ship’s air emissions in real time.

THE CHALLENGE

 

Danfoss IXA approached several providers of optical components to jointly design the optical system for the new MES 1001 device. This project turned out to be very challenging due to the extremely high temperature and pressure requirements. High temperatures can cause optics to fail through melting and thermal stresses, which severely limits the types of optical materials that can be used. High temperatures can also cause adhesives used in the optical assembly to outgas, contaminating the system. The high pressure requirements made the sealing of the optical system critically important. Most of the optics partners reached their limits in terms of design, metrology for these harsh conditions, or working across different continents and time zones.

THE SOLUTION

 

EDMUND OPTICS (EO), with its global presence and large staff of optical engineers and designers, is always keen to face new challenges. One of the reasons that Danfoss IXA selected EO as a partner is its ability to ramp up products from prototype to volume production. When approached by Danfoss, EO dedicated R&D and project management resources to developing an optical assembly for the MES 1001, even though EO had never before designed systems to work at temperatures as high as 500°C. EO investigated many different materials and mounting options, recognizing this project as a learning experience and an opportunity to expand its capabilities. Custom test beds for verifying the optical system’s unique requirements were created, and proper sealants and optomechanics were identified to allow the assembly to survive the high pressures. The development process initially faced many issues, including cracking optics and outgassing adhesives, but by iterating the design multiple times and researching different materials, these issues were solved, and Edmund Optics eventually delivered an optical assembly that could survive the harsh environment inside a ship’s exhaust system. Edmund Optics is proud to be a part of this product, which will positively impact the environment and support a global effort to reduce harmful emissions.

Danfoss IXA “greatly appreciated EO’s professional way of involving [them] along the development process as well as their ability to adapt to changing requirements as [Danfoss IXA] learned more about the exact conditions in which the sensor would be used.” During that time Danfoss IXA “found the support from EO’s project managers extremely fruitful and very efficient in bringing the development process to success.”

The robust optical system is a critical component of the new MES 1001 device, which was launched in 2017. It was exciting for EO to work on this cutting-edge technology in such a close collaboration with Danfoss IXA’s skilled research and development team. The MES 1001 will allow the IMO and other organizations to enforce maritime emissions requirements and help lead to a cleaner environment across the globe.

 



Source – EDMUNDOPTICS.COM

OPTIMAL LIGHTING FOR LINE SCAN CAMERA APPLICATIONS


The speed of line scan cameras has greatly increased in recent years. MODERN LINE SCAN CAMERAS operate with integration times in the range of 15 µs. In order to achieve excellent image quality, in some cases illuminance levels of over 1 million lux are required. One of the most important criteria for assessing image quality is noise disturbance (white noise). There are various noise sources in image processing systems, and the most dominant one is called “shot noise”.

Shot noise has a physical cause and has nothing to do with the quality of the camera: it arises from the nature of light itself, which is delivered in discrete photons. The image quality depends on the number of photons which hit the object and, ultimately, on the number of photons which reach the camera sensor.

In a set-up with a defined signal transmission there are three parameters which influence the ‘shot noise’ when capturing an image:

  • integration time (scanning speed)
  • aperture (depth of focus and maximum definition)
  • amount of light on the scanned object

The choice of lens aperture greatly determines the required light intensity. If, for instance, the aperture is changed from 4 to 5.6, twice the amount of light is required in order to maintain the same signal-to-noise ratio (SNR) – see fig. 01. Stopping down the aperture in this way also yields more depth of focus, and image quality improves because vignetting effects are reduced with the majority of lenses.
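The relationship between f-number, light requirement and shot-noise-limited SNR can be sketched numerically (a minimal illustration; the function names are mine, not from the article):

```python
import math

def light_factor(f_old: float, f_new: float) -> float:
    """Factor by which illumination must increase when stopping down
    from f-number f_old to f_new to keep the photon count constant."""
    return (f_new / f_old) ** 2

def shot_noise_snr(photons: float) -> float:
    """Shot-noise-limited SNR: photon arrivals are Poisson distributed,
    so SNR = N / sqrt(N) = sqrt(N)."""
    return math.sqrt(photons)

print(light_factor(4.0, 5.6))   # ~2: one stop down roughly doubles the light needed
print(shot_noise_snr(10_000))   # 100.0
```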


LIGHT FOR ALL CURRENT APPLICATIONS

LEDs are available in various colors: red, green, blue, yellow or amber. Even UV and IR LEDs are obtainable. The choice of a specific color, and thus a specific wavelength, determines how object properties on surfaces with diverse spectral responses are made visible.

In the past, red light was often used wherever high intensity was required. Today, however, the most significant performance increases in LED technology occur with white LEDs. These high-performance LEDs are used, for example, in car headlights and street lamps. The core of a white LED actually consists of a blue LED. Using fluorescent substances, part of the light from the blue LED is converted into other visible spectral ranges in order to produce a ‘white’ light.

UV LEDs are frequently used to make fluorescent effects visible. In many cases a wavelength of approx. 400 nm is sufficient. UV LEDs with shorter wavelengths may be suitable for curing paint, adhesives or varnishes. In comparison to blue or white LEDs, UV LEDs are less efficient, though focusing through a reflector can improve this. IR lighting is implemented for food inspection, using wavelengths of 850 nm or 940 nm. When sorting recyclable material, wavelengths from 1,200 nm to 1,700 nm are used to identify the different types. In this range, however, IR LEDs cannot yet match classic halogen lamps with appropriate filters where beam output is concerned.

KEEP COOL

The compact design of LEDs enables a very short warm-up phase, but it presupposes good thermal dissipation in order to maintain appropriate working temperatures. As a rule: the better the cooling, the longer the LED durability. Apart from durability, LED temperature also influences spectral behavior (possible color shifting) and general output (luminance).

In systems where precise color reproduction is required, it is recommended to keep the lighting’s temperature steady at a predetermined value. At present, efficient control systems can regulate the LED temperature to within less than 2°C.
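The regulation principle can be sketched as a simple closed loop (an illustration only; real lighting controllers use dedicated hardware, and the interface below is hypothetical):

```python
def cooling_drive(current_temp_c: float, setpoint_c: float, gain: float = 0.5) -> float:
    """Proportional controller: returns a cooling drive level in [0, 1].
    A positive error (LED hotter than the setpoint) increases cooling."""
    error = current_temp_c - setpoint_c
    return min(1.0, max(0.0, gain * error))

# LED at 47 C with a 45 C setpoint: drive the cooler at full power
print(cooling_drive(47.0, 45.0))  # 1.0
# LED below the setpoint: no cooling needed
print(cooling_drive(44.0, 45.0))  # 0.0
```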

Modern lighting systems, such as the Corona II lighting system developed by Chromasens, provide numerous cooling options. This includes passive cooling with thermal dissipation via convection, compressed air cooling, water cooling and ventilation. Active ventilation, compressed air or water cooling are good cooling methods for measuring applications situated in surroundings with high temperatures. By monitoring the temperature of the LEDs and regulating the cooling system, shifts in color reproduction can be completely avoided or at least greatly reduced.

FOCUS ON THE ESSENTIAL

If a flat object at a known and fixed distance is to be illuminated, selecting the adequate focus is relatively simple. Selecting the right lighting is more complicated if the object is not at a predetermined distance from the light or has no flat surface. In such a case, ensuring permanently sufficient image brightness is a challenge. Here, the use of reflector technology helps to collect more of the light from an LED (a greater coverage angle of the reflected light) and to distribute the light better over depth.

In contrast to background or bright field lighting, focused lighting is normally used for top lighting. Customary lighting systems use rod lenses or Fresnel lenses in order to achieve the necessary lighting intensity. CHROMASENS adopts a novel and completely unique approach. While the use of rod lenses causes color deviations due to refraction, the mirror (reflector) principle developed and patented by Chromasens has no such trouble.

Shiny or reflective materials are a challenge for lighting, as unwanted reflections often appear in the image. In combination with a polarizing filter rotated 90 degrees in front of the camera, these unwanted light reflections can be prevented. When using such filters, certain factors have to be considered. One is the temperature stability of the filter; in this respect, many polarizing filters can only be used to a limited extent. Another criterion is effectiveness: with such a setup, only about 18-20 % of the original amount of light reaches the sensor. The amount of light provided by the lighting must therefore be great enough to minimize noise and still achieve sufficiently good image quality.
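Since only about 18-20 % of the light passes the crossed-polarizer setup, the required increase in source intensity can be estimated directly (a simple sketch; the function name is mine):

```python
def required_light_boost(transmission: float) -> float:
    """Factor by which the source intensity must increase so the sensor
    still receives the same photon count (and thus the same shot-noise
    SNR) when a filter passes only the given transmission fraction."""
    return 1.0 / transmission

# Crossed polarizers passing roughly 18-20 % of the light:
print(round(required_light_boost(0.18), 1))  # 5.6
print(round(required_light_boost(0.20), 1))  # 5.0
```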

SUMMARY

When selecting the correct LIGHTING FOR LINE SCAN CAMERA APPLICATIONS, the following factors ought to be considered:

  • The lens aperture and the amount of light significantly influence the signal-to-noise ratio
  • LED systems offer definite advantages compared to traditional lighting technologies such as halogen or fluorescent lamps
  • Good cooling ensures long durability, consistent spectral behavior and a high level of brightness
  • The use of reflectors assures optimal lighting, even from different distances
  • Color LEDs, UV and IR LEDs are extremely versatile
  • Polarizing filters prevent unwanted light reflections on shiny surfaces; the amount of light provided by the lighting must still be sufficient

 


 

Source – CHROMASENS.DE

COAXIAL BRIGHTFIELD LIGHT FOR 3DPIXA APPLICATIONS


Choosing the right illumination for the application is critical for acquiring the high quality images needed for calculating 3D data. We compare the imaging results of a directional coaxial brightfield illumination with a Corona tube light in terms of color image quality and height map for different samples. It can be shown that for materials that exhibit considerable amounts of subsurface scattering, the coaxial lighting geometry benefits 3D measurement using the 3DPIXA. In practice, it has to be kept in mind that introducing the beam splitter into the light path results in a shift of the working distance of the camera system, and a slight reduction of image quality.

1. INTRODUCTION

 

An illumination scheme where the source rays are reflected from a flat sample directly into the camera is called a brightfield. With line scan cameras there are two possible ways to realize such a setup: either by tilting the camera and light source such that their angles with respect to the surface normal are equal but opposite, or by using a beam splitter. The first method is not recommended as it can lead to occlusion and keystone effects. Thus we want to discuss the brightfield setup using a beam splitter.

Figure 1 shows the principle of this setup in comparison to a setup with a tubelight. The tubelight is the superior illumination choice for a wide array of possible applications. It reduces the intensity of specular reflections and evenly illuminates curved glossy materials. Most of the time the tubelight should be your first choice and only some materials require the use of a coaxial brightfield illumination.

One such example is material that exhibits strong subsurface scattering, which means that light beams partially penetrate the material in a certain direction, are scattered multiple times, and then exit at a different location, possibly in a different direction. The result is a translucent material appearance. Examples of such materials are marble, skin, wax and some plastics.

Using tube light on such materials results in a very homogeneous appearance with little texture, which is problematic for 3D reconstruction. Using coaxial brightfield illumination results in relatively more direct reflection from the surface to the camera, as compared to a tube light illumination. This first surface reflection contributes to the image texture; the relative amount of sub-surface scattered light entering the camera is thereby reduced.

There are some specific properties that have to be taken into consideration when using a coaxial setup with a 3DPIXA. Firstly, only a maximum of 25% of the source intensity can reach the camera as the rest is directed elsewhere in the two transits of the beam splitter. Secondly, the glass is an active optical element that influences the imaging and 3D calculation quality. In chapter 3 we have a closer look at these factors and offer some guidelines for mechanical system design to account for resulting effects. Prior to that, we discuss the effects of the brightfield illumination on a selection of a few samples to give an idea when this type of illumination setup should be used.


2. COMPARING BRIGHTFIELD AND TUBELIGHT ILLUMINATION

 

In this chapter we want to give you some impressions of the differences between using a coaxial illumination in comparison to a tubelight using different samples. As a tubelight we used the CHROMASENS CORONA II Tube light (CP000200-xxxT) and for the brightfield we used a CORONA II Top light (CP000200-xxxB) with diffusor glass together with a beam splitter made from 1.1 mm “borofloat” glass.

In figure 2 we show a scanned image of a candle made of paraffin, a material that exhibits strong subsurface scattering. With coaxial illumination (right image) the surface texture is clearly visible and the height image shows the slightly curved shape of the candle. In comparison, the tube light image (left) contains very little texture, and height information could not be recovered for most of the surface (black false-colored region). The texture is only visible with coaxial illumination because under this condition the light reflected from the surface is more dominant in the final image than the subsurface scattered light. However, the ratio between these two effects varies with the surface inclination. The more the surface normal deviates from the camera observation angle, the less light is reflected directly from the first surface, and the image texture therefore decreases. For the candle sample, more than 15° of deviation resulted in failure to recover height information, as can be seen at the outer edges of the candle in the right image.

Figure 3 shows a second sample, in which balls sit on a substrate. The substrate area in the tube light image (left) shows low texture, resulting in partially poor height reconstruction (black points in the false-colored image overlay). With coaxial illumination (right image), the amount of source light reflected back into the camera from the surface of the material is larger than the subsurface scattered light. The image texture is higher and height reconstruction performance improves.

However, if the height of the balls is the focus in the application rather than inspecting the substrate, the situation becomes more complex as the coaxial illumination results in specular reflection on the ball tops. If these areas are saturated, it negatively affects height measurements as well.

The best illumination therefore strongly depends on the measurement task and materials used and can often only be determined by testing. If you are unclear which light source is best for your application, please feel free to contact our sales personnel to discuss options and potentially arrange for initial testing with your samples at our lab.

3. OPTICAL INFLUENCE

 

The beam splitter is essentially a plane-parallel glass plate which offsets each ray passing through it without changing its direction. The size of this offset depends on the incidence angle, the thickness of the glass and its refractive index. The beam splitter should therefore be as thin as stability allows. In the following analysis we assume a beam splitter of d = 1.1 mm “borofloat” glass.

The result of the beam splitter’s influence is a shift, in all three spatial coordinates, of the point from which the sharpest image can be acquired. The change along the sensor direction (called the x-direction) leads to a magnification change of the imaging system that is negligibly small (<0.4%, with a small dependence on camera type).

The change along the scan direction (called the y-direction) only offsets the starting point of the image. If the exact location of the scan line is important (e.g. when looking at a roll), the camera needs to be displaced relative to the intended scan line by

Δy = d*(0.30n – 0.12).

The equation is valid for all glass thicknesses d and is a linear approximation of the real dependency on n, where n is the refractive index of the glass material introduced into the light path. The approximation is valid in the interval of n= [1.4, 1.7] and for all types of 3DPIXAs. The direction of the displacement is towards the end of the beam splitter that is nearer to the sample, so in the scheme in figure 1 the camera has to be moved to the left.

The change of the working distance differs along the x- and y-axes of the system because the 45° tilt of the beam splitter leads to astigmatism. In the y-direction the working distance is increased by

Δzy = d*(0.24n + 0.23).

As above, the formula is valid for all d and n = [1.4, 1.7]. The change of the working distance along the x-direction is not constant, but changes depending on the position of the imaged point, which leads to field curvature. Both astigmatism and field curvature slightly lower the image quality, which influences the imaging of structures near the resolution limit. But they should not influence the 3D algorithm, as generally only height structures that are several pixels in size can be computed.

In addition to the optical effects discussed above, the beam splitter also changes the absolute height values computed by the 3D algorithm (i.e. the absolute distance to the camera). The exact value of this height change is slightly different for each camera. Generally the measured distance between camera and sample decreases, so that structures appear nearer to the camera than they really are. This change is constant over the whole height range (simulations show a 0.2% change) and also constant over the whole field of view. In summary, relative height measurements are not influenced at all, and absolute measurements are shifted by a constant offset.

As the precise change of the calculated height is not known, the zero plane of the height map can’t be used to adjust the camera to the correct working distance. We advise you instead to set up your camera using the free working distance given in the data sheet, corrected by the Δzy given above.
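Both correction formulas can be evaluated directly. A short sketch for the 1.1 mm borofloat beam splitter, assuming a refractive index of about 1.47 (my assumption for borofloat in the visible range; check the exact value for your wavelength):

```python
def scanline_offset_mm(d_mm: float, n: float) -> float:
    """Lateral shift of the scan line, Δy = d*(0.30n − 0.12).
    Linear approximation, valid for n in [1.4, 1.7]."""
    return d_mm * (0.30 * n - 0.12)

def working_distance_shift_mm(d_mm: float, n: float) -> float:
    """Working distance increase in the scan direction, Δzy = d*(0.24n + 0.23)."""
    return d_mm * (0.24 * n + 0.23)

d, n = 1.1, 1.47  # 1.1 mm borofloat plate; n = 1.47 is an assumed value
print(f"scan line offset:       {scanline_offset_mm(d, n):.2f} mm")
print(f"working distance shift: {working_distance_shift_mm(d, n):.2f} mm")
```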

4. SUMMARY

 

On certain translucent materials (those exhibiting considerable subsurface scattering of light), using coaxial illumination can result in a significant increase in image texture which greatly benefits the 3D height reconstruction. However, the additional glass of the beam splitter in the optical path of the camera system when using coaxial illumination influences the optical quality negatively. Further, the working distance of the system changes slightly and the absolute measured distances are set off by a constant value. This does not affect relative measurements, which are generally recommended with the 3DPIXA.

 
