HOW MACHINE VISION AND DEEP LEARNING ENABLE FACTORY AUTOMATION


MARCH, 2020


The pace of technological change over the last decade has been nearly unprecedented in human history, and it is poised to become even more breathtaking in the years ahead: blockchain, robotics, edge computing, artificial intelligence (AI), big data, 3D printing, sensors, machine vision, and the internet of things are just some of the massive technological shifts on the cusp for industries.

Strategically planning for the adoption and leveraging of some or all of these technologies will be crucial in the manufacturing industry. In the United States, manufacturing accounts for $2.17 trillion in annual economic activity, but by 2025 – just half a decade away – McKinsey forecasts that “smart factories” could generate as much as $3.7 trillion in value. In other words, the companies that can quickly turn their factories into intelligent automation hubs will be the ones that win long term from those investments.

“If you’re stuck to the old way and don’t have the capacity to digitalize manufacturing processes, your costs are probably going to rise, your products are going to be late to market, and your ability to provide distinctive value-add to customers will decline,” Stephen Ezell, an expert in global innovation policy at the Information Technology and Innovation Foundation, says in a report from Intel on the future of AI in manufacturing.

These technologies, as applied in a factory or manufacturing setting, are no longer nice to have; they are business critical. According to a recent research report from Forbes Insights, 93% of respondents from the automotive and manufacturing sectors classified AI as ‘highly important’ or ‘absolutely critical to success’. And yet, 56% of these respondents plan to increase spending on artificial intelligence by less than 10%.

The disconnect between recognizing the importance of new technologies that allow for more factory automation and the willingness to spend on them will be the difference between those companies that win and those that lose. Perhaps this reticence to invest in something like AI could be attributed to the lack of understanding of its ROI, capabilities, or real-world use cases. Industry analyst Gartner, Inc. still slots many of AI’s applications into the “peak of inflated expectations” after all.

But AI, specifically deep learning or examples-based machine vision, combined with traditional rules-based machine vision, can give a manufacturing factory and its teams superpowers. Take a process such as the complex assembly of a modern smartphone or other consumer electronic device. The combination of rules-based machine vision and deep learning can help robotic assemblers identify the correct parts, spot differences such as missing screws or misaligned casings, detect whether a part is present, missing, or assembled in the wrong place on the product, and more quickly determine whether those deviations are actual problems. And they can do this at an unfathomable scale.
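
To make the rules-based half of that combination concrete, here is a minimal sketch of a fixed-rule presence check using OpenCV template matching to verify screws at known positions. The template file, coordinates, and threshold are illustrative assumptions, not details of any particular vendor’s product.

```python
import cv2

# Hypothetical golden template of a correctly seated screw, plus the known
# screw positions on the assembly. All values here are illustrative.
SCREW_TEMPLATE = cv2.imread("screw_template.png", cv2.IMREAD_GRAYSCALE)
SCREW_LOCATIONS = [(120, 340), (880, 340), (120, 1260), (880, 1260)]  # (x, y) in pixels
MATCH_THRESHOLD = 0.8  # correlation below this flags a missing or misplaced part

def check_screws(frame_gray):
    """Rules-based check: does each expected screw location match the template?"""
    h, w = SCREW_TEMPLATE.shape
    missing = []
    for (x, y) in SCREW_LOCATIONS:
        roi = frame_gray[y:y + h, x:x + w]
        if roi.shape != (h, w):  # expected location falls outside the image
            missing.append((x, y))
            continue
        # ROI and template are the same size, so the result is a single score
        score = cv2.matchTemplate(roi, SCREW_TEMPLATE, cv2.TM_CCOEFF_NORMED)[0, 0]
        if score < MATCH_THRESHOLD:
            missing.append((x, y))
    return missing  # an empty list means every screw is accounted for
```

A deep learning model would take over where such fixed rules break down, for example judging whether a cosmetic deviation is an acceptable variation or a true defect.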

The combination of machine vision and deep learning is the on-ramp for companies to adopt smarter technologies that will give them the scale, precision, efficiency, and financial growth for the next generation. But understanding the nuanced differences between traditional machine vision and deep learning, and how they complement rather than replace each other, is essential to maximizing those investments.

Also Read: THREE TRENDS DRIVING INDUSTRIAL AUTOMATION

TO KNOW MORE ABOUT HIGH RESOLUTION STANDALONE SMART CAMERAS DEALER SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

THREE TRENDS DRIVING INDUSTRIAL AUTOMATION

FEBRUARY, 2020


Since its inception in the 1980s, machine vision has concerned itself with two things: improving the technology’s power and capability and making it easier to use. Today, machine vision is turning to higher-resolution cameras with greater intelligence to empower new automated solutions both on and off the plant floor — all with a simplicity of operation approaching that of the smartphone, which significantly reduces engineering requirements and associated costs.

And, just like in other industries which are benefiting from rapid advancements in technology like big data, the cloud, artificial intelligence (AI), and mobile, so too will manufacturers, logistics operations, and other enterprises benefit from three key advances in machine vision for automation.

RAPIDLY IMPROVING SENSOR TECHNOLOGY

While 1-, 2-, and 5-megapixel (MP) cameras continue to make up the bulk of machine vision camera shipments, we’re seeing considerable interest in even higher-resolution smart cameras, up to 12 MP. High-resolution sensors mean that a single smart camera inspecting an automobile engine can do the work of several lower-resolution smart cameras while maintaining high-accuracy inspections.

Cognex’s patent-pending High Dynamic Range Plus (HDR+) image processing technology provides even better image fidelity than typical HDR. It helps smart cameras inspect multiple areas across large objects where lighting uniformity is less than ideal. In the past, lighting variations could be mistaken for defects, or the feature of interest was not visible at all. Today, HDR+ helps reduce the effects of lighting variations, enabling applications in challenging environments that were beyond the capability of machine vision technology just a few years ago.
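
Cognex’s HDR+ runs on camera, but the underlying idea of combining multiple exposures can be sketched in a few lines with OpenCV’s exposure fusion. The file names are placeholders; this is a generic software illustration, not the HDR+ algorithm itself.

```python
import cv2
import numpy as np

# Fuse bracketed exposures of the same scene so both dark and bright regions
# remain usable. The image files are placeholders for three captures of one part.
exposures = [cv2.imread(name) for name in ("dark.png", "mid.png", "bright.png")]

merger = cv2.createMergeMertens()      # Mertens fusion: no camera calibration needed
fused = merger.process(exposures)      # float32 result, values roughly in [0, 1]
cv2.imwrite("fused.png", np.clip(fused * 255, 0, 255).astype(np.uint8))
```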

While advanced smart cameras run HDR+ technology on field-programmable gate arrays (FPGAs) to improve the quality of the acquired image at frame-rate speeds, complementary sensor technologies, such as time-of-flight (ToF) sensors, are being incorporated to enable “distance-based dynamic focus”.

The new high-powered integrated torch (HPIT) image formation system, which combines ToF distance measurement with high-speed liquid lens technology, is also making an impact by enabling dynamic autofocus at frame rate. The newest barcode readers incorporate HPIT capability for applications such as high-speed tunnel sortation and warehouse management, where package and product sizes can vary significantly, requiring the camera to adapt quickly to different focal ranges.
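
The control idea behind distance-based dynamic focus is simple enough to sketch. The `tof` and `lens` objects and their methods below are hypothetical stand-ins for a device driver; real HPIT hardware closes this loop on-board at frame rate.

```python
# A sketch of distance-based dynamic focus. The thin-lens relation says the
# optical power needed to focus at distance d is P = 1/d (d in meters).
# tof.read_distance_mm() and lens.set_diopters() are hypothetical driver calls.

def focus_power_diopters(distance_mm, offset_diopters=0.0):
    distance_m = max(distance_mm, 1.0) / 1000.0   # guard against zero readings
    return 1.0 / distance_m + offset_diopters

def on_frame_trigger(tof, lens):
    """Called once per frame trigger, before exposure starts."""
    d_mm = tof.read_distance_mm()                  # ToF range to the passing package
    lens.set_diopters(focus_power_diopters(d_mm))  # liquid lens settles in milliseconds
```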

INTEGRATION WITH DEEP LEARNING

Just like AI’s impact in other industries, deep learning vision software for factory automation is allowing enterprises to automate inspections that previously could only be performed manually, and to solve complex inspection challenges that are cumbersome or time-consuming to address with traditional rule-based machine vision.

The biggest use case driving investment in deep learning is the potential to redeploy, in many cases, hundreds of human inspectors by replacing manual checks with deep learning-based inspection systems. For the first time, manufacturers have a technology that offers an inspection solution with performance comparable to that of a human.

One example of how deep learning will benefit organizations is in defect detection inspection. Every manufacturer wants to eliminate industrial defects as much as possible and as early as possible in the manufacturing process to reduce downstream impacts that cost time and money.

Defect detection is challenging because it is nearly impossible to account for the sheer amount of variation in what constitutes a defect or what anomalies might fall within the range of acceptable variation.

As a result, many manufacturers utilize human inspectors at the end of the process to perform a final check for unacceptable product defects. With deep learning, quality engineers can train a machine vision system to learn what is an acceptable or unacceptable defect from a data set of reference pictures rather than program the vision system to account for the thousands of defect possibilities.
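
A minimal sketch of that training workflow, assuming labeled reference pictures sorted into `dataset/ok` and `dataset/defect` folders, could use transfer learning in PyTorch. This illustrates the examples-based approach generically; commercial deep learning vision tools wrap this kind of loop behind their own interfaces.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed folder layout: dataset/ok/*.png and dataset/defect/*.png,
# one subfolder per class, as ImageFolder expects.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("dataset", transform=tfm)
loader = DataLoader(data, batch_size=16, shuffle=True)

# Start from a pretrained backbone and retrain the final layer for ok/defect.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                      # a handful of epochs for illustration
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```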

THE INTERNET OF THINGS

An important development for smart camera vision systems enabling Industry 4.0 initiatives is Open Platform Communications Unified Architecture (OPC UA). With contributions from all major machine vision trade associations around the world, OPC UA is an industrial interoperability standard developed to help machine-to-machine communication.

Combined with advanced sensor technology and trends such as deep learning, OPC UA will help transition machine vision technology from a point solution to bridge the industrial world inside the plant and the physical world outside it. Today, vision systems and barcode readers are key sources of data for modern enterprises.
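
As a sketch of what that bridging looks like in practice, the snippet below exposes inspection results as OPC UA nodes using the open-source python-opcua package. The endpoint, namespace, and node names are illustrative assumptions.

```python
from opcua import Server  # python-opcua package (pip install opcua)

# Publish vision-system results over OPC UA so PLCs and MES software can
# subscribe to them. All identifiers below are placeholders.
server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/visionsystem/")
idx = server.register_namespace("http://example.com/vision")  # placeholder URI

objects = server.get_objects_node()
camera = objects.add_object(idx, "SmartCamera1")
pass_fail = camera.add_variable(idx, "LastInspectionPassed", True)
read_rate = camera.add_variable(idx, "BarcodeReadRate", 0.0)

server.start()
try:
    # In a real system these values would be updated after every inspection.
    pass_fail.set_value(False)
    read_rate.set_value(99.2)
finally:
    server.stop()
```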

TO KNOW MORE ABOUT HIGH RESOLUTION STANDALONE SMART CAMERAS DEALER SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

THERMAL IMAGING FOR SAFER AUTONOMOUS VEHICLES

FEBRUARY, 2020


For the automotive industry, pedestrian safety has been a serious concern since the horseless carriage. Londoner Arthur Edsall was the first driver to strike and kill a pedestrian in 1896 at a speed of four miles per hour. It took the U.S. Congress almost seventy years to impose automotive safety standards and mandate the installation of safety equipment and another thirty years before airbags became a required safety feature. Automotive safety standards in the United States are promulgated by a process of reviewing accidents after they have occurred.

In 2019, the National Transportation Safety Board (“NTSB”) finally addressed this standards-promulgation process in its Most Wanted List of transportation safety improvements, calling for an increase in the implementation of collision-avoidance systems in all new highway vehicles. This change in policy grew out of a 2015 study (SIR-15/01) that described the benefits of forward-collision-avoidance systems and their ability to prevent thousands of accidents.

After that report was published, an agreement was reached with the National Highway Traffic Safety Administration (“NHTSA”) and the Insurance Institute for Highway Safety that would require compliance with the Automatic Emergency Braking standard (“AEB”) on all manufactured vehicles by 2022. However, the agreement did not identify the specific technology that would enable AEB, and the question remains whether such technology is readily available and economically viable for industry-wide adoption.

RAPIDLY IMPROVING SENSOR TECHNOLOGY

The pace of technology over the last thirty years has been astronomical, yet technology to make driving safer has not kept pace. A computer that not too long ago was the size of a garage now fits into the palm of your hand. Driving today should be safer than ever, but the reality is that without the implementation of available modern technologies, the uncertainties of the road will always be with us. According to the NHTSA, there were 37,461 traffic fatalities in the United States in 2016.

In 2015, there were a total of 6,243,000 passenger car accidents. Globally, there is a fatality every twenty-five seconds and an injury every 1.25 seconds. In the United States, there is a fatality every thirteen minutes and an injury every thirteen seconds. These statistics are mind-blowing. Compare this with recent events in the aviation industry: when two Boeing 737 MAX 8 airplanes crashed, killing 346 people, the same number that die in automobile accidents every 144 minutes, the entire 737 MAX 8 fleet was grounded.
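
The 144-minute figure follows directly from the global fatality rate quoted above, as a quick check shows:

```python
# At one global road fatality every 25 seconds, how long until road deaths
# equal the 346 lives lost in the two 737 MAX crashes?
seconds_per_fatality = 25
crash_deaths = 346

minutes = crash_deaths * seconds_per_fatality / 60
print(f"{minutes:.0f} minutes")   # -> 144 minutes, matching the figure above
```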

The cost of automotive accidents is high. According to the National Safety Council, in the United States the annual cost of health care resulting from cigarette smoking is approximately $300 billion, whereas the annual cost of health care for injuries arising from automobile accidents is roughly $415 billion.

Technology to protect automobile occupants has reduced the number of driver and passenger fatalities. However, the number of people who die as a result of an accident outside the automobile continues to climb at an alarming rate. Pedestrians are at the greatest risk, especially after dark.

The NHTSA reports that in 2018, 6,227 pedestrians were killed in United States traffic accidents, with seventy-eight percent of pedestrian deaths occurring at dusk, dawn, or night. In the United States, pedestrian fatalities have increased forty-one percent since 2008. Solutions to address pedestrian fatalities are needed to meet the standards by 2022.

TECHNOLOGY IN THE DRIVER’S SEAT

Ultimately, it is safer cars and safer drivers that make driving safer, and automotive designers need to deploy every possible technological tool to improve driver awareness and make cars more automatically responsive to impending risks. Today’s safest cars can be equipped with a multitude of cameras and sensors to make them hyper-sensitive to the world around them and intelligent enough to take safe evasive action as needed. Microprocessors can process images and identify subject matter 1,000,000 times faster than a human being.

Advanced Driver Assist Systems (“ADAS”) are becoming the norm, spotting potential problems ahead of the automobile and making auto travel safer for drivers, passengers, and pedestrians, not to mention the more than one million ‘reported’ animals struck by automobiles in the United States annually, resulting in $4.2 billion in insurance claims each year. The advances we have seen so far are the first steps in evolving towards a future of truly autonomous vehicles that will revolutionize both personal and commercial transportation.

Drivers need no longer rely on eyes alone to maintain situational awareness. Early generations of vision-assisting cameras were innovative, but they were not particularly intelligent and could do little to perceive the environment around the car and communicate information that could be used for driver decision-making.

Today, with tools such as radar, light detection and ranging (“LIDAR”), cameras, and ultrasound installed, a car knows much more about the environment than the driver does and can control the vehicle faster and more safely than a human driver. Risky driving conditions such as rain, fog, snow, and glare are less hazardous when a driver is assisted by additional onboard sensors and data processors.

One of the most advanced automotive sensors is a thermal sensor that allows a driver and the automobile to perceive the heat signature of anything ahead of the driver. Previously used mainly for military and commercial applications, early forms of night vision first came to the mainstream automotive market in the 2000 Cadillac DeVille, albeit as a cost-prohibitive accessory priced at almost $3,000.

Since then, thermal cameras and sensors have become smaller, lighter, faster and cheaper. After years of exclusive availability in luxury models, thermal sensors are now ready to take their place among other automotive sensors to provide a first line of driving defense that reaches far beyond the reach of headlights in all vehicles, regardless of the cost of the vehicle.

TO KNOW MORE ABOUT HIGH RESOLUTION STANDALONE SMART CAMERAS DEALER SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

LAB AUTOMATION WITH VISION

APRIL, 2019

 

Thanks to technical, scientific and medical progress, human life expectancy has increased considerably in recent decades. Precise, highly technical and increasingly automated equipment in large hospitals and labs now provides valuable support in numerous measuring and analytical tasks.


WIDE RANGE OF LAB APPLICATIONS

The concept of lab automation in general can be interpreted in many ways, and includes various tasks: from simple applications such as weighing, to complex robotic and analytical systems, process tracking, and storage systems. This results in numerous possible camera applications in the medical, scientific, pharmaceutical and analytical fields. Some of these are obviously recognizable, such as those in an imaging urine sediment analysis device, while others run in the background and provide information for the medical diagnosis we receive from a physician. Others in turn support processes that are internal to the devices, and not directly connected with the actual detection process. These applications range from the simple input of a barcode to the support of laser technologies; from the path traversed by blood, starting with the prick of the needle when the blood is drawn, to the various test processes and then the result, up to complex processes in cell technology which offer scientists insights into the origins of diseases, thus advancing diagnostic and therapeutic innovations.

AUTOMATION TREND IN MEDICINE AND RESEARCH

Hospital and research labs increasingly follow the trend towards automation. The essential drivers for this development are:

1. RISING COST PRESSURE:

Health systems and research institutions are subject to growing economic strains and try to counteract this pressure with cost reductions in their services. Automation through modern technologies with inexpensive system components makes it possible to lower equipment costs, relieve staff, and free up capacity that can be utilized elsewhere.

2. HIGH SPEED:

The faster processing of analyses enables more analyses per time spent for clinical and analytical contract laboratories, giving them an advantage over competitors since they can serve their customers faster. Automation can also help generate more results per time in research, which shortens project periods and makes new developments or technologies available sooner.

3. BETTER QUALITY MANAGEMENT AND STANDARDIZATION:

Many examinations, once manually executed, are increasingly handled by machines, whose technological features make it possible to complete these tasks with greater precision and improved reproducibility. Thanks to an applied vision system and automated microscopy, for example, researchers can now view detailed and precise image data on their office monitors without having to look through eyepieces in darkrooms. Furthermore, the captured image data offers the capability of documentation and archiving, which meets the growing demands of quality management systems. Automated systems also aren’t subject to the process-related variances of manual work steps, giving them greater reproducibility and paving the way for advancing standardization. Digital image data can be viewed across different locations if desired, e.g. for a scientific exchange or an external diagnostic consultation. The conditions for a reliable diagnostic statement are therefore improved by camera-supported examinations and analyses.

4. WIDER AVAILABILITY:

Lab automation efficiently makes new technologies accessible to many users. This makes it possible for research to determine the pathogenic processes of diseases more quickly. As a result, for example, diseases can be recognized earlier with the help of molecular-biological analyses in in-vitro diagnostics, which may reduce or even prevent their onset and the associated costly therapies that are so strenuous for patients. Devices that are easy to use and inexpensive enable diagnostics even in regions with economic and infrastructure-related challenges. This means medical care can be improved in epidemic regions, since staff in those areas are often less well-trained, lab equipment has a lower standard overall, and the financial means of the affected patients are low. Here we can expect an increasing number of so-called POC (point-of-care) systems and lab-on-a-chip technologies.

APPLICATION AREAS FOR LAB AUTOMATION

Below are some examples of typical application areas for automated, camera-based applications in labs:

1. PROCESS AUTOMATION

This includes general camera applications that generate imaging and data, not for purely analytical but for process-supporting purposes, e.g. barcode/matrix code reading, as applied in most devices for in-vitro diagnostics (IVD). This could involve the simple identification of a patient’s sample vial or the transmission of data from the used reagent, which the device needs in order to calculate the analyses and a batch documentation for purposes of quality management. In an automatic exchange with a lab information system, the right results are thus attributed to the requirements of a patient sample and managed digitally.
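
As an illustration of such barcode-driven sample tracking, the snippet below decodes a vial label from a camera image with the open-source pyzbar library; the file name and the hand-off to the lab information system are placeholder assumptions.

```python
import cv2
from pyzbar.pyzbar import decode  # pip install pyzbar

# Decode the barcode on a sample vial and hand the ID onward. "vial.png" and
# the commented-out LIS call are placeholders for the device's actual software.
frame = cv2.imread("vial.png", cv2.IMREAD_GRAYSCALE)

for symbol in decode(frame):
    sample_id = symbol.data.decode("ascii")
    print(symbol.type, sample_id)        # e.g. CODE128 SAMPLE-004711
    # lab_information_system.attach_results(sample_id, ...)  # hypothetical call
```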

Many lab devices work with liquid test material. Depending on the application area, different parameters in this so-called liquid handling process must be determined and/or checked. These may include, for example, the state of the liquid (no air may be pipetted, since it would falsify the analysis result), the type of vial, the color of the lid coding the test material inside (e.g. whether it is a serum or whole-blood vial), or color properties, layers, or irregularities (bubbles, foam) in the liquid. Cameras may offer advantages since they don’t need contact with the sample and don’t necessitate removal of the lid, in contrast to other methods such as capacitive determination of the liquid state. This prevents problems such as contamination, and enables higher flow rates.
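
A contact-free fill-level check of the kind described can be sketched with simple image processing, assuming a backlit vial at a fixed, calibrated position so the liquid appears darker than the headspace. The threshold and region of interest are illustrative.

```python
import cv2
import numpy as np

def fill_level_fraction(frame_gray, vial_roi):
    """Estimate what fraction of a vial is filled, from a backlit grayscale image."""
    x, y, w, h = vial_roi                      # hand-calibrated vial position (pixels)
    roi = frame_gray[y:y + h, x:x + w]
    _, dark = cv2.threshold(roi, 100, 255, cv2.THRESH_BINARY_INV)
    rows_liquid = dark.mean(axis=1) > 128      # rows that are mostly dark = liquid
    if not rows_liquid.any():
        return 0.0                             # vial appears empty
    meniscus_row = int(np.argmax(rows_liquid)) # first liquid row from the top
    return 1.0 - meniscus_row / h              # fraction of the vial that is filled
```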

2. AUTOMATED MICROSCOPY

Automated microscopy includes, for instance, applications of light and fluorescence microscopy for in-vitro diagnostics (IVD), in life sciences, pharmaceutical research and in digital pathology.

Different manufacturers use camera systems in their devices to diagnose autoimmune diseases, or diagnose diseases of the blood and hematopoietic organs in hematology, as well as in digital pathology. Pathologists examine tissue sections or cell samples for pathological changes. To this end, they prepare slides which can be examined by microscope to draw conclusions about diseases and provide valuable information for the diagnosis and therapy options, which may not be discernible through other means such as radiology.

There is a wide selection of additional automatic microscope systems with different purposes. From a small device the size of half a shoe box, used for simple cell counting, to systems that are used directly in incubators and enable time-dependent live cell imaging without manual intervention, all the way to the high-content screening systems that are used e.g. in pharmaceutical substance screening – anything is possible.

WHICH CAMERA FOR WHICH LAB APPLICATION?

In addition to the above-mentioned fields, there is a wide variety of other potential applications and use cases for cameras across the scientific field, for example in protein and nucleic acid analytics, microbiology, particle analytics and more. It’s important to offer cameras with the right features to cover a wide range of applications in the various specialty areas. Independently of the camera’s specific product features, it should offer easy and flexible integration with an efficient and comfortable SDK, and, of course, provide high quality and reliability. Technically excellent support, quickly and readily available, also simplifies the integration process for the system developer.

Also Read: WHAT IS EMBEDDED VISION?

TO KNOW MORE ABOUT MEDICAL AUTOMATION PRODUCT DEALERS IN SINGAPORE, ASIA , CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

WHAT IS EMBEDDED VISION?


 

MARCH, 2019

 

  

In recent years, a miniaturization trend has been established in many areas of electronics. For example, ICs have become more and more integrated and circuit boards in the electrical industry have become smaller and more powerful. This has also made PCs, mobile phones and cameras more and more compact and also more powerful. This trend can also be observed in the world of vision technology.


 

A classic machine vision system consists of an industrial camera and a PC:

Both were significantly larger a few years ago. But within a short time, smaller and smaller PCs became possible, and in the meantime the industry saw the introduction of single-board computers (SBCs), i.e. complete computers built on a single board. At the same time, camera electronics became more compact and cameras successively smaller. On the way to even higher integration, small cameras without housings are now offered, which can be easily integrated into compact systems.

Due to these two developments, the reduction in size of the PC and the camera, it is now possible to design highly compact camera vision systems for new applications. These systems are called embedded (vision) systems.

 

Design and use of an embedded vision system

An embedded vision system consists, for example, of a so-called board-level camera connected to a processing board. Processing boards take over the tasks of the PC from the classic machine vision setup. As processing boards are much cheaper than classic industrial PCs, vision systems can become smaller and also more cost-effective. The interfaces for embedded vision systems are primarily USB, Basler’s BCON for MIPI, or BCON for LVDS.
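
On the processing-board side, the capture-and-process loop often looks no different from desktop OpenCV code, which is part of the appeal. A minimal sketch, assuming a USB board-level camera enumerated as device 0:

```python
import cv2

# A minimal capture loop of the kind that runs on an embedded processing board
# (e.g. a Raspberry Pi with a USB board-level camera). Device index and
# resolution below are placeholders.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    edges = cv2.Canny(frame, 80, 160)   # stand-in for the actual vision task
    # results would be sent onward via GPIO, fieldbus, or network here

cap.release()
```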

Embedded vision systems are used in a wide range of applications and devices, such as in medical technology, in vehicles, in industry and in consumer electronics. Embedded systems enable new products to be created and thereby create innovative possibilities in several areas.

 

Which Embedded Systems Are Available?

A so-called SoC (system on chip) lies at the heart of all embedded processing solutions. This is a single chip that integrates the CPU (which may comprise multiple cores), graphics processors, controllers, other special processors (DSP, ISP) and other components.

Due to these efficient SoC components, embedded vision systems have become available in such a small size and at a low cost only recently.

Popular embedded systems include single-board computers (SBCs) such as the Raspberry Pi® or DragonBoard®. These are mini-computers with the established interfaces (USB, Ethernet, HDMI, etc.) and a range of features similar to traditional PCs or laptops, although their CPUs are, of course, less powerful.

Embedded vision solutions can also be designed with a so-called SoM (system on module, also called computer on module or CoM). In principle, an SoM is a circuit board which contains the core elements of an embedded processing platform, such as the SoC, storage, power management, etc. An individual carrier board is required for the customization of the SoM to each application (e.g. with the appropriate interfaces). This is connected to the SoM via specific connectors and can be designed and manufactured relatively simply. The SoMs (or the entire system) are cost-effective on the one hand since they are available off-the-shelf, while on the other hand they can also be individually customized through the carrier board.

Completely individual processing boards in the form of a full custom design may also be a sensible choice for high quantities.

 

Characteristics of Embedded Vision Systems versus Standard Vision Systems

Most of the above-mentioned single board computers and SoMs do not include the x86 family processors common in standard PCs. Rather, the CPUs are often based on the ARM architecture.

The open-source Linux operating system is widely used as an operating system in the world of ARM processors. For Linux, there is a large number of open-source application programs, as well as numerous freely-available program libraries. Increasingly, however, x86-based single-board computers are also spreading. A consistently important criterion for the computer is the space available for the embedded system.

For the software developer, the program development for an embedded system is much more complex than for a standard PC. While the PC used in standard software development is also the main target platform (meaning the type of computer which the program is later intended to run on), this is different in the case of embedded software, where the target system generally can’t be used for the development due to its limited resources (CPU performance, storage). This is why the development of embedded software also uses a standard PC on which the program is coded and compiled with tools that may get very complex. The compiled program must then be copied to the embedded system and subsequently be debugged remotely.

When developing the software, it should be noted that the hardware concept of the embedded system is oriented to a specific application and thus differs significantly from the universally usable PC.

However, the boundary between embedded and desktop computer systems is sometimes difficult to define. Just think of the popular Raspberry Pi, which on the one hand has many features of an embedded system (ARM-based, single-board construction), but on the other hand can cope with very different tasks and, with the connection of a monitor, mouse and keyboard, is therefore a universal computer.

 

What Are the Benefits of Embedded Vision Systems?

In some cases, much depends on how the embedded vision system is designed. An SBC (single-board computer) is often a good choice, as this is a standard product. It is a small compact computer that is easy to use. This solution is also useful for developers who have had little to do with embedded vision.

On the other hand, however, the single-board computer is a system which contains unused components and thus generally does not allow the leanest system configuration. For that reason, this approach is not very economical in terms of manufacturing costs and is more suitable for small unit numbers, where the development costs must be kept low while the manufacturing costs are of secondary importance.

The leanest setup is obtained with a full-custom design, a system that is highly optimized for individual applications. But this involves high integration costs and the associated high development expenditures. This solution is therefore suitable for large unit numbers.

An approach with a conventionally available system on module (SoM) and an appropriately customized carrier board presents a compromise between an SBC and a full-custom design (also see above: “Which Embedded Systems Are Available?”). The manufacturing costs are not as optimized as in a full-custom design (after all, a setup with a carrier board plus a more or less generic SoM is a bit more complex), but at least the hardware development costs are lower, since the significant part of the hardware development is already completed with the SoM. This is why a module-based approach is a very good choice for medium-level unit numbers, in which the manufacturing and development costs must be well-balanced.

The benefits of embedded vision systems at a glance:

  • Leaner system design
  • Light weight
  • Cost-effective, because there is no unnecessary hardware
  • Lower manufacturing costs
  • Low energy consumption
  • Small footprint

 

Also Read: HOW HIGH-SPEED CAMERAS CAN BE USED TO STUDY HUMAN MOTION SEQUENCES

 


To Know More About Machine Vision Cameras Dealer in Singapore Asia , Contact MVAsia Infomatrix Pte Ltd at +65 6329-6431 or Email us at info@mvasiaonline.com

 

Source – www.baslerweb.com

BITFLOW PREDICTS VISION-GUIDED ROBOTICS TO BECOME MAJOR DISRUPTIVE FORCE IN GLOBAL MANUFACTURING


As the plant floor has become more digitally connected, the relationship between robots and machine vision has merged into a single, seamless platform, setting the stage for a new generation of more responsive vision-driven robotic systems. BitFlow, Inc., a global innovator in frame grabbers used in industrial imaging, predicts vision-guided robots will be one of the most disruptive forces in all areas of manufacturing over the next decade.

“Since the 1960s robots have contributed to automation processes, yet they’ve done so largely blind,” said Donal Waide, Director of Sales for BitFlow, Inc. “Vision-equipped robots are different. Now, just like a human worker, robots can see a specific part to validate whether it is being placed correctly in a pick-and-place application, for example. Cost savings will be realized since less hard fixturing is required and the robot is more flexible in its ability to locate a variety of different parts with the same hardware.”


HOW ROBOTIC VISION WORKS

Using a combination of camera, cables, frame grabber and software, a vision system will identify a part, its orientation and its relationship to the robot. Next, this data is fed to the robot and motion begins, such as pick and place, assembly, screw driving or welding tasks. The vision system will also capture information that would be otherwise very difficult to obtain, including small cosmetic details that let the robot know whether or not the part is acceptable. Error-proofing reduces expensive quality issues with products. Self-maintenance is another benefit. In the event that alignment of a tool is off because of damage or wear, vision can compensate by performing machine offset adjustment checks on a periodic basis while the robot is running.
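
A stripped-down version of that identify-and-locate step might look like the following, where a part is segmented, its in-plane rotation taken from a rotated bounding box, and its position mapped into robot coordinates through a homography from a prior hand-eye calibration. The calibration file and thresholding choice are assumptions for illustration.

```python
import cv2
import numpy as np

# 3x3 pixel-to-robot homography, assumed to come from a prior hand-eye
# calibration; the file name is a placeholder.
H = np.load("pixel_to_robot_homography.npy")

def locate_part(frame_gray):
    """Find the largest part in view; return its robot-frame position and angle."""
    _, mask = cv2.threshold(frame_gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                      # nothing to pick
    part = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(part)      # center, size, rotation
    robot_xy = cv2.perspectiveTransform(
        np.array([[[cx, cy]]], dtype=np.float32), H)[0, 0]
    return {"robot_x": float(robot_xy[0]),
            "robot_y": float(robot_xy[1]),
            "angle_deg": float(angle)}                   # fed to the robot controller
```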

DUAL MARKET GROWTH

It should come as no surprise that the machine vision and robotics markets are moving in tandem. According to the Association for Advancing Automation (A3), robot sales in North America last year surpassed all previous records. Customers purchased 34,904 total units, representing $1.896 billion in total sales. Meanwhile, total machine vision transactions in North America increased 14.8%, to $2.262 billion. The automotive industry accounts for approximately 50% of total sales.

THE ROLE OF FRAME GRABBERS

Innovations in how vision-guided robots perceive and respond to their environments are exactly what manufacturers are looking for as they develop automation systems to improve quality, productivity and cost efficiencies. These types of advancements rely on frame grabbers being paired with high-resolution cameras to digitize analog video, thus converting the data to a form that can be processed by software.

BitFlow has responded to the demands of the robotics industry by introducing frame grabbers based on the CoaXPress (CXP) machine vision standard, currently the fastest and most powerful interface on the market. In robotics applications, the five-to-seven-meter restriction of a USB cable connection is insufficient. BitFlow CXP frame grabbers allow up to 100 meters between the frame grabber and the camera, without any loss in quality. To minimize cabling costs and complexity, BitFlow frame grabbers require only a single piece of coax to transmit high-speed data, as well as to supply power and send control signals.

BitFlow’s latest model, the Aon-CXP frame grabber, is engineered for simplified integration into a robotics system. Although small, the Aon-CXP receives 6.25 Gb/s worth of data over its single link, almost twice the real-world data rate of the USB3 Vision standard and significantly quicker than the latest GigE Vision data rates. The Aon-CXP is designed for use with a new series of single-link CXP cameras that are smaller, less expensive and cooler running than previous models, making them ideal for robotics.

Also Read: AN INTRODUCTION TO MACHINE VISION SYSTEMS


TO KNOW MORE ABOUT BITFLOW FRAME GRABBER CARDS DEALER SINGAPORE , CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

Source – WWW.ROBOTICSTOMORROW.COM

AN INTRODUCTION TO MACHINE VISION SYSTEMS


Machine vision is the incorporation of computer vision into industrial manufacturing processes, although it does differ substantially from computer vision. In general, computer vision revolves around image processing. Machine vision, on the other hand, uses digital input and output to manipulate mechanical components. Devices that depend on machine vision are often found at work in product inspection, where they often use digital cameras or other forms of automated vision to perform tasks traditionally performed by a human operator. However, the way machine vision systems ‘see’ is quite different from human vision.

THE COMPONENTS OF A MACHINE VISION SYSTEM CAN VARY, BUT THERE ARE SEVERAL COMMON FACTORS FOUND IN MOST. THESE ELEMENTS INCLUDE:

    • Digital or analog cameras for acquiring images
    • A means of digitizing images, such as a camera interface
    • A processor

 


When these three components are combined into one device, it’s known as a smart camera. A machine vision system can consist of a smart camera with the following add-ons:

    • Input and output hardware
    • Lenses
    • Light sources, such as LED illuminators or halogen lamps
    • An image processing program
    • A sensor to detect and trigger image acquisition
    • Actuators to sort defective parts

 

HOW MACHINE VISION SYSTEMS WORK

Although each of these components serves its own individual function and can be found in many other systems, when working together they each have a distinct role in a machine vision system.

To understand how a machine vision system works, it may be helpful to envision it performing a typical function, such as product inspection. First, the sensor detects if a product is present. If there is indeed a product passing by the sensor, the sensor will trigger a camera to capture the image, and a light source to highlight key features. Next, a digitizing device called a frame grabber takes the camera’s image and translates it into digital output, which is then stored in computer memory so it can be manipulated and processed by software.

In order to process an image, computer software must perform several tasks. First, the image is reduced in gradation to a simple black and white format. Next, the image is analyzed by system software to identify defects and proper components based on predetermined criteria. After the image has been analyzed, the product will either pass or fail inspection based on the machine vision system’s findings.
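
A toy version of that three-step flow (binarize, analyze against predetermined criteria, pass or fail) in OpenCV; the expected component count and minimum blob area are invented criteria for illustration:

```python
import cv2

EXPECTED_COMPONENTS = 4   # illustrative criterion for a known-good part
MIN_BLOB_AREA = 500       # ignore specks smaller than this (pixels)

def inspect(frame_gray):
    # Step 1: reduce the image to a simple black and white format.
    _, bw = cv2.threshold(frame_gray, 128, 255, cv2.THRESH_BINARY)
    # Step 2: analyze the image against predetermined criteria.
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) >= MIN_BLOB_AREA]
    # Step 3: pass or fail the product based on the findings.
    return len(blobs) == EXPECTED_COMPONENTS
```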

GENERAL APPLICATIONS

Beyond product inspection, machine vision systems have numerous other applications. Systems that depend on visual stock control and management, such as barcode reading, counting, and store interfaces, often use machine vision systems. Large-scale industrial product runs also employ machine vision systems to assess the products at various stages in the process and also work with automated robotic arms. Even the food and beverage industry uses machine vision systems to monitor quality. In the medical field, machine vision systems are applied in medical imaging as well as in examination procedures.

Also Read: A LOOK AT THE PROGRESSION OF MACHINE VISION TECHNOLOGY OVER THE LAST THREE YEARS


TO KNOW MORE ABOUT INDUSTRIAL MACHINE VISION SYSTEM SINGAPORE , CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

Source – WWW.THOMASNET.COM

A LOOK AT THE PROGRESSION OF MACHINE VISION TECHNOLOGY OVER THE LAST THREE YEARS


Machine vision represents a diverse and growing global market, one that can be difficult to keep up with, in terms of the latest technology, standards, and product developments, as they become available from hundreds of different organizations around the world.

If you are looking for an example of how fast the market moves, and how quickly trends and new technologies emerge, our Innovators Awards program provides a good reference point. In 2015, we launched our first annual Innovators Awards program, which celebrates the disparate and innovative technologies, products, and systems found in the machine vision and imaging market. In comparing the products that received distinction in 2015 to this past year’s crop of honorees, it does not take long to draw some obvious conclusions. First, let’s start with the most noticeable: the cameras that received awards.

In 2015, five companies received awards for cameras. These cameras performed various functions and offered disparate capabilities, including pixel shifting, SWIR sensitivity, multi-line CMOS time delay integration, high-speed operation, and high dynamic range operation. In 2018, 13 companies received awards for their cameras, but the capabilities and features of these cameras look much different.


CAMERAS THAT RECEIVED AWARDS IN 2018 OFFERED THE FOLLOWING FEATURES:

Polarization, 25GigE interface, 8K line scan, scientific CMOS sensor, USB 3.1 interface, fiber interface, embedded VisualApplets software, 3-CMOS prism design, and subminiature design. Like in 2015, a few companies were also honored for high-speed cameras, but overall, it is evident that most of the 2018 camera honorees are offering much different products than those from our inaugural year.

There are two other main categories that stick out, in terms of 2018 vs. 2015, the first of which is software products. In 2015, two companies received awards for their software—one for a deep learning software product and another for a machine learning-based quality control software. In 2018, eight companies received awards for software.

THESE SOFTWARE PRODUCTS OFFERED THE FOLLOWING FEATURES OR CAPABILITIES:

Deep learning (three honorees), data management, GigE Vision simulation, neural network software for autonomous vehicles, machine learning-based desktop software for autonomous vehicle vision system optimization, and a USB3 to 10GigE software converter.

Lastly, the category of embedded vision looked much different in 2018 than it did in 2015. In the embedded vision category—which I am combining with smart cameras due to overlap—there were two companies that received awards in 2015, both of which were for smart cameras that offered various capabilities. This year, however, there were 12 companies that were honored for their embedded vision innovations, for products that offered features including: embedded software running on Raspberry Pi, computer vision and deep learning hardware and software platform, embedded vision development kits, embedded computers, 3D bead inspection, as well as various smart cameras.

Throughout the other categories, there were equal or similar numbers of honorees from both years, but several interesting technologies and applications popped up among the 2018 products. These include a lens for virtual reality/augmented reality applications, a mobile hyperspectral camera, a 3D color camera, and various lighting products targeting multispectral and hyperspectral imaging applications.

This is all to say that, looking back from 2015 to today, machine vision technology has grown quite a bit. With the rapid pace of advancements, the growing needs of customers and end users, the miniaturization and falling costs of components, and so on, it is exciting to think about what machine vision products in 2021 might look like.

Also Read: FIVE MYTHS ABOUT ROBOTIC VISION SYSTEMS.


 

TO KNOW MORE ABOUT VISION INSPECTION SYSTEMS IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

Source – VISION-SYSTEMS.COM

FIVE MYTHS ABOUT ROBOTIC VISION SYSTEMS


Vision systems for robotic manufacturing applications have significantly evolved over the last decade. While the vision systems of old were unreliable, clunky and expensive, today’s systems are anything but. Proper vision systems can make the difference between an efficient robotic system and one that is not working optimally.

HERE ARE 5 MYTHS AND TRUTHS ABOUT VISION SYSTEMS.


MYTH #1: VISION SYSTEMS ARE COMPLICATED

In actuality, modern vision systems are very simple to install and use. Most of the algorithms and communications are built in, so it can be very easy and quick to make adjustments without the help of a trained engineer. New users are often surprised just how easy it is to use and maintain their vision systems.

MYTH #2: VISION SYSTEMS ARE NOT RELIABLE

If a vision system is properly applied, it will be highly robust, repeatable and reliable. Today’s vision system components are very robust, even in harsh environments. They are built to operate in rugged applications. Unlike a human, a vision system will see accurately every time. It never gets tired, takes a break or goes home for the evening.

MYTH #3: ALL VISION SYSTEMS ARE THE SAME

There is no truly out-of-the-box solution for vision systems. Each application is unique, and many factors need to be considered. Anyone who tells you there’s a plug-and-play option for your operations is not selling you a solution that’s properly engineered for your needs. Customized vision systems are the only ones that will work efficiently and reliably.

MYTH #4: VISION SYSTEMS ARE ALWAYS THE BEST SOLUTION

While vision systems are helpful in many robotic applications, there are some jobs in which vision may not be the answer. For example, operations that have drastic changes from part to part moving quickly on a single line may not benefit from a vision system because more discriminating inspection may be necessary. In addition, a vision system helps provide tight tolerances, so applications with loose tolerances may be just fine with sensors and not need to be upgraded to a vision system.

MYTH #5: VISION SYSTEMS ARE TOO EXPENSIVE

Just 10 years ago, typical vision systems cost an average of $30,000. Today, that same system may cost only $5,000 to $15,000. The evolution of vision technologies has brought down the cost considerably. In fact, many companies can see an ROI relatively quickly because a vision system requires fewer special fixtures and conveyors, decreases downtime for fixture changeout, and improves operations overall.

An efficient manufacturer must get products in and out of a cell quickly and reliably. Vision systems paired with robotic operations can put an operation at a competitive advantage by providing opportunities to produce more and streamline the process for optimum profitability.

Also read: MACHINE VISION TRENDS TO WATCH IN 2018 AND BEYOND


TO KNOW MORE ABOUT MACHINE VISION SYSTEMS IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

Source – ROBOTICSTOMORROW.COM

MACHINE VISION TRENDS TO WATCH IN 2018 AND BEYOND


Machine vision technology has found its way into applications inside and outside of factory settings, riding a wave of progress in automation technology and growing into a sizable global industry. Quite a bit of future technology will depend on machine vision, and the market will grow accordingly.

In 2017, according to a recent report, the global machine vision market was valued at $7.91 billion. By 2023, the global market is expected to reach $12.29 billion – a compound annual growth rate (CAGR) of 7.61%. This robust growth is driven by a number of broader economic factors.
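
Those two figures are mutually consistent: compounding $7.91 billion at 7.61% per year over the six years from 2017 to 2023 lands almost exactly on the forecast, as a quick check shows.

```python
# Sanity check of the growth figures quoted above.
value_2017 = 7.91          # USD billions
cagr = 0.0761
years = 6                  # 2017 -> 2023

value_2023 = value_2017 * (1 + cagr) ** years
print(f"{value_2023:.2f}")  # ~12.28, matching the quoted $12.29 billion
```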

WHAT’S DRIVING LONG-TERM GROWTH IN MACHINE VISION?

The main drivers of growth in the machine vision market are the need for quality inspection and automation inside factories, growing demand for AI- and IoT-integrated systems that depend on machine vision, increasing adoption of Industry 4.0 technology that uses vision to improve the productivity of robotic automation, and government initiatives to support smart factories across the globe.

Machine vision software will be one of the fastest growing segments between 2017 and 2023. The main reason for this is the expected increase in integration of AI into industrial machine vision software to enable deep learning in robotics technology.

PC-based industrial machine vision products, the oldest form of industrial machine vision, will retain a large portion of machine vision market share because of their ease of use and processing power.


WHAT TRENDS ARE WORTH WATCHING NOW?

While there are several main factors in the expected long-term growth of the global machine vision market, there are a few trends to keep an eye on now that are changing the way machine vision technology is deployed.

    •  Industrial Internet of Things (IIoT): While AI and IoT technology are long-term drivers of growth, the IIoT is connecting production technology with information technology in today’s factories to increase productivity. The IIoT depends heavily on machine vision to collect the information it needs.
    •  Non-Industrial Applications: driverless cars, autonomous farm equipment, drone applications, intelligent traffic systems, guided surgery and other non-industrial uses of machine vision are rapidly growing in popularity, and often call for different functionality in machine vision than industrial applications. These non-industrial uses of machine vision are being deployed today and could be an important part of machine vision growth.
    •  Ease of Use: Machine vision systems can often be complex from the user’s perspective. As mentioned above, PC-based machine vision systems will remain popular, despite their age, because of their ease of use. The desire for ease of use may drive further standardization in machine vision products, which could make them even easier to deploy inside and outside of factory settings.

The machine vision market is poised for long-term growth. The IIoT, growing non-industrial applications, and ease of use are all helping buoy today’s machine vision market, but there are several other factors affecting long-term market expansion.

With market growth comes innovation. There are exciting things on the horizon for machine vision and vision technology.

Also read: LENS SEES CLEARLY EVEN THROUGH SHOCK AND VIBRATION

 

TO KNOW MORE ABOUT MACHINE VISION CAMERAS BLOG SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

Source – VISIONONLINE.ORG