WHAT IS EMBEDDED VISION?

In recent years, a miniaturization trend has been established in many areas of electronics. For example, ICs have become more and more integrated and circuit boards in the electrical industry have become smaller and more powerful. This has also made PCs, mobile phones and cameras more and more compact and powerful. This trend can also be observed in the world of vision technology.

A classic machine vision system consists of an industrial camera and a PC: both were significantly larger a few years ago. But within a short time, ever smaller PCs became possible, and in the meantime the industry saw the introduction of single-board computers, i.e. computers built on a single board. At the same time, camera electronics became more compact and cameras successively smaller. On the way to even higher integration, small cameras without housings are now offered, which can easily be integrated into compact systems.

Due to these two developments, the reduction in size of the PC and the camera, it is now possible to design highly compact camera-based vision systems for new applications. Such systems are called embedded (vision) systems.

DESIGN AND USE OF AN EMBEDDED VISION SYSTEM

An embedded vision system consists, for example, of a camera, a so-called board-level camera, which is connected to a processing board. Processing boards take over the tasks of the PC from the classic machine vision setup. As processing boards are much cheaper than classic industrial PCs, vision systems can become smaller and also more cost-effective. The interfaces for embedded vision systems are primarily USB or Basler BCON for LVDS.

Embedded vision systems are used in a wide range of applications and devices, such as in medical technology, in vehicles, in industry and in consumer electronics. Embedded systems enable new products to be created and thereby create innovative possibilities in several areas.

WHICH EMBEDDED SYSTEMS ARE AVAILABLE?

As embedded systems, there are popular single-board computers (SBCs), such as the Raspberry Pi®. The Raspberry Pi® is a mini-computer with established interfaces and offers a similar range of features as a classic PC or laptop.

Embedded vision solutions can also be implemented with so-called system on modules (SoM) or computer on modules (CoM). These modules represent a computing unit. For the adaptation of the desired interfaces to the respective application, a so-called individual carrier board is needed. This is connected to the SoM via specific connectors and can be designed and manufactured relatively simply. The SoMs or CoMs (or the entire system) are cost-effective on the one hand since they are available off-the-shelf, while on the other hand they can also be individually customized through the carrier board.

For large manufactured quantities, individual processing boards are a good idea.

All of these modules, single-board computers and SoMs alike, are based on a system on chip (SoC). This is a component that integrates the processor(s), controllers, memory modules, power management and other components on a single chip.

It is only thanks to these efficient components, the SoCs, that embedded vision systems are available in such a small size and at such low cost today.

CHARACTERISTICS OF EMBEDDED VISION SYSTEMS VERSUS STANDARD VISION SYSTEMS

Most of the above-mentioned single-board computers and SoMs do not include the x86 family processors common in standard PCs. Rather, the CPUs are often based on the ARM architecture.

The open-source Linux operating system is widely used in the world of ARM processors. For Linux, there is a large number of open-source application programs, as well as numerous freely available program libraries.

Increasingly, however, x86-based single-board computers are also spreading.

A consistently important criterion for the computer is the space available for the embedded system.

For the software developer, program development for an embedded system differs from that for a standard PC. As a rule, the target system does not provide a suitable user interface which can also be used for programming. The software developer must connect to the embedded system via an appropriate interface if available (e.g. a network interface) or develop the software on a standard PC and then transfer it to the target system.

When developing the software, it should be noted that the hardware concept of the embedded system is oriented to a specific application and thus differs significantly from the universally usable PC.

However, the boundary between embedded and desktop computer systems is sometimes difficult to define. Just think of the mobile phone, which on the one hand has many features of an embedded system (ARM-based, single-board construction), but on the other hand can cope with very different tasks and is therefore a universal computer.

WHAT ARE THE BENEFITS OF EMBEDDED VISION SYSTEMS?

Much depends on how the embedded vision system is designed. A single-board computer is often a good choice, as it is a standard product: a small, compact computer that is easy to use. This solution is also useful for developers who have had little to do with embedded vision.

On the other hand, however, the single-board computer is a system which contains unused components and thus generally does not allow the leanest system configuration. This solution is suitable for small to medium quantities. The leanest setup is obtained through a customized system. Here, however, higher integration effort is a factor. This solution is therefore suitable for large unit numbers.

The benefits of embedded vision systems at a glance:

  •  Lean system design
  •  Light weight
  •  Cost-effective, because there is no unnecessary hardware
  •  Lower manufacturing costs
  •  Lower energy consumption
  •  Small footprint

WHICH INTERFACES ARE SUITABLE FOR AN EMBEDDED VISION APPLICATION?

Embedded vision is the technology of choice for many applications. Accordingly, the design requirements are widely diversified. Depending on the specification, Basler offers a variety of cameras with different sensors, resolutions and interfaces.

The two interface technologies that Basler offers for embedded vision systems are:

  •  USB3 Vision for easy integration and
  •  Basler BCON for LVDS for a lean system design

Both technologies work with the same Basler pylon SDK, making it easier to switch from one interface technology to the other.

USB3 VISION

USB 3.0 is the right interface for a simple plug-and-play camera connection and is ideal for connecting cameras to single-board computers. The Basler pylon SDK gives you easy access to the camera (for example, to images and settings) within seconds, since USB 3.0 cameras are standard-compliant and GenICam-compatible.
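
To illustrate how quickly such a camera can be addressed, here is a minimal sketch using pypylon, Basler's open-source Python wrapper for the pylon SDK. It assumes a single USB3 Vision camera is attached and pypylon is installed (pip install pypylon):

    # Minimal grab loop with pypylon (open-source Python wrapper for pylon).
    # Assumes exactly one camera is connected.
    from pypylon import pylon

    camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
    camera.Open()
    camera.StartGrabbingMax(10)  # grab 10 frames, then stop automatically

    while camera.IsGrabbing():
        result = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
        if result.GrabSucceeded():
            frame = result.Array  # image as a numpy array
            print(frame.shape, frame.mean())
        result.Release()

    camera.Close()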

Benefits

  •  Easy connection to single-board computers with USB 2.0 or USB 3.0 connection
  •  Field-tested solutions with Raspberry Pi®, NVIDIA Jetson TK1 and many other systems
  •  Cost-effective solutions for SoMs with associated base boards
  •  Stable data transfer with a bandwidth of up to 350 MB/s

BCON FOR LVDS

BCON, Basler's proprietary LVDS-based interface, allows a direct camera connection to processing boards and thus also to on-board logic modules such as FPGAs (field programmable gate arrays) or comparable components. This allows a lean system design to be achieved, and you benefit from a direct board-to-board connection and data transfer.

The interface is therefore ideal for connecting to a SoM on a carrier/adapter board or to an individually developed processor unit.

If your system is FPGA-based, you can fully use its advantages with the BCON interface.

BCON is designed with a 28-pin ZIF connector for flat flex cables. It carries the 5 V power supply together with the LVDS lanes for image data transfer and image triggering. You can configure the camera via lanes that work with the I²C standard.

Basler's pylon SDK is tailored to work with the BCON for LVDS interface. Therefore, it is easy to change settings such as exposure control, gain, and image properties using your software code and pylon's API. The image acquisition part of the application must be implemented individually, as it depends on the hardware used.
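
As a flavor of what such settings changes look like in code, the hedged snippet below again uses pypylon; the exact GenICam node names (for example ExposureTime versus ExposureTimeAbs) vary by camera family, so treat the names here as assumptions for your model:

    # Adjusting exposure and gain through GenICam nodes via pypylon.
    # Node names vary by camera family; ExposureTime/Gain are assumptions here.
    from pypylon import pylon

    camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
    camera.Open()
    camera.ExposureTime.SetValue(10000.0)  # exposure time in microseconds
    camera.Gain.SetValue(6.0)              # gain (dB on many models)
    camera.Close()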

Benefits

  •  Image processing directly on the camera. This results in the highest image quality, without compromising the very limited resources of the downstream processing board.
  •  Direct connection via LVDS-based image data exchange to FPGA
  •  With the pylon SDK, camera configuration is possible via the standard I²C bus without further programming. Compatibility with the GenICam standard is ensured.
  •  The image data software protocol is openly and comprehensively documented
  •  Development kit with reference implementation available
  •  Flexible flat flex cable and small connector for applications with maximum space limitations
  •  Stable, reliable data transfer with a bandwidth of up to 252 MB/s

HOW CAN AN EMBEDDED VISION SYSTEM BE DEVELOPED AND HOW CAN THE CAMERA BE INTEGRATED?

Developing an embedded vision system may be unfamiliar territory for developers who have not had much to do with embedded vision, but there are many ways to approach it. In particular, the switch from a standard machine vision system to an embedded vision system can be made easy. In addition to its embedded product portfolio, Basler offers many tools that simplify integration.

Find out how you can develop an embedded vision system and how easy it is to integrate a camera in our simpleshow video.

MACHINE LEARNING IN EMBEDDED VISION APPLICATIONS

Embedded vision systems often have the task of classifying images captured by the camera: on a conveyor belt, for example, into round and square biscuits. In the past, software developers spent a lot of time and energy developing intelligent algorithms designed to classify a biscuit by its characteristics (features) as type A (round) or type B (square). In this example this may sound relatively simple, but the more complex the features of an object, the more difficult classification becomes.

Machine learning algorithms (e.g. convolutional neural networks, CNNs), however, do not require handcrafted features as input. If the algorithm is presented with large numbers of images of round and square biscuits, together with the information about which image shows which variety, it automatically learns how to distinguish the two types of biscuits. If the algorithm is then shown a new, unknown image, it decides on one of the two varieties based on its "experience" of the images already seen. The algorithms run particularly fast on graphics processing units (GPUs) and FPGAs.
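
As a rough sketch of what such a learned classifier looks like in code, the following defines a tiny convolutional network for the two biscuit classes in PyTorch. The architecture, the 64 x 64 grayscale input size and the random stand-in data are illustrative assumptions, not a production recipe:

    # Toy CNN for two-class image classification (round vs. square biscuits).
    # Architecture and 64x64 grayscale input are illustrative assumptions.
    import torch
    import torch.nn as nn

    class BiscuitNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            )
            self.classifier = nn.Linear(16 * 16 * 16, 2)  # logits: round / square

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = BiscuitNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Dummy batch standing in for labeled biscuit images (0 = round, 1 = square).
    images = torch.randn(8, 1, 64, 64)
    labels = torch.randint(0, 2, (8,))

    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print("training loss on dummy batch:", loss.item())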

 

TO KNOW MORE ABOUT BASLER CAMERA DISTRIBUTOR IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

 

MV ASIA INFOMATRIX PTE LTD

3 Raffles Place, #07-01 Bharat Building,
Orchard Road
Singapore – 048617
Tel: +65 63296431
Fax: +65 63296432

E-mail: info@mvasiaonline.com / menzinfo@starhub.net.sg

HIGH-SPEED CAMERA TECHNOLOGY

Bayer Filter

  •  Nearly all color sensors follow the same principle, named after its inventor, Dr. Bryce E. Bayer.
  •  The light-sensitive cells or pixels on the sensor can only distinguish different levels of light. For this reason, tiny color filters (red, green and blue) are placed in front of the pixels as part of the production process.
  •  In a subsequent image processing step, the filtered output values are combined into a "color pixel" again (see the sketch after this list).
  •  To come closer to the perception of the human eye (which is much more sensitive to green than to other colors), twice as many green filters are used.
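
As a hedged illustration of that recombination (demosaicing) step, OpenCV can interpolate a raw Bayer frame into a full-color image in a single call; the synthetic mosaic, the 1280 x 1024 size and the RG filter layout below are assumptions for the example:

    # Demosaicing a raw Bayer frame into a BGR color image with OpenCV.
    # A synthetic mosaic with an assumed RG layout stands in for sensor output.
    import numpy as np
    import cv2

    raw = np.random.default_rng(0).integers(0, 256, (1024, 1280), np.uint8)
    bgr = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)  # interpolate full color per pixel
    print(raw.shape, "->", bgr.shape)               # (1024, 1280) -> (1024, 1280, 3)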

Burst Trigger Mode

  •  Generally, a trigger event tells the camera when to start recording; after a predefined amount of time (or when the memory is full) the recording stops.
  •  Depending on the application, yet another trigger event tells the camera when to terminate the recording.
  •  In burst trigger mode, however, the camera records as long and as often as the trigger is active (comparable to the triggering mechanism of a machine gun); see the sketch below.
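
A minimal sketch of that control flow, with the trigger input and sensor readout simulated in plain Python:

    # Burst trigger mode sketch: frames are stored only while the trigger signal
    # is active, and recording resumes with every new burst. trigger_active()
    # and grab_frame() are simulated stand-ins for real camera I/O.
    import random

    def trigger_active() -> bool:      # stand-in for sampling the trigger input
        return random.random() < 0.3

    def grab_frame() -> bytes:         # stand-in for one sensor readout
        return b"frame"

    memory, capacity = [], 100
    while len(memory) < capacity:      # record until the on-board memory is full
        if trigger_active():           # level-sensitive, like a machine-gun trigger
            memory.append(grab_frame())

    print(f"stored {len(memory)} frames across several bursts")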

CCD / CMOS comparison

  •  Abbreviations for the two main sensor technologies, describing the inner structure of the chip:
  •  "CMOS": complementary metal-oxide semiconductor
  •  "CCD": charge-coupled device

CCD:

A CCD sensor provides a determined electrical charge per pixel, i.e. a certain number of electrons according to the previous exposure.

These charges have to be read out pixel by pixel by a subsequent electronic circuit, converted into a voltage and recalculated into a binary value.

This operation is rather time-consuming. In addition, the whole frame has to be grabbed, which requires comprehensive postprocessing.

CMOS:

CMOS sensors can be produced more cheaply and offer the possibility of on-board preprocessing; the information of every pixel can be provided in digitized form.

  •  Thus the camera can be designed smaller, and random access to particular parts of the image ("ROI", region of interest) is possible.
  •  Needing fewer external circuits results in reduced power consumption of the camera, and the stored frames can be read out much faster.

Dynamic Range Adjustment

  •  The human eye has a very extensive dynamic range, i.e. it can evaluate very low lighting conditions (like candle- or starlight) as well as extreme light impressions (reflected sunlight on a water surface).
  •  This corresponds to a (logarithmic) dynamic range of 90 dB. That means two objects whose quantity of light differs by a factor of 1,000,000,000 can both be seen clearly.
  •  Unlike this, a CMOS camera has a linear dynamic range of about 60 dB, which equals a ratio of 1:1000.
  •  If, for instance, a recording setup requires identifying dim component labels next to large welding reflections, image details within the reflection area cannot be seen.
  •  Cameras with dynamic range adjustment enable the user to adjust the linear response in certain areas: overexposed objects become darker without losing intensity on the dark ones.
  •  Thus minimal variations of luminosity can be detected even in areas of intense reflective light. (The conversions behind these dB figures are sketched below.)
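
For reference, these decibel figures convert to intensity ratios as follows. Note that the 1,000,000,000 factor quoted for the eye matches the power-ratio convention, while the 1:1000 figure quoted for 60 dB matches the amplitude convention, so the two bullets implicitly use different dB conventions:

    % Decibel-to-ratio conversions behind the figures quoted above.
    \[
    \mathrm{DR}_{\mathrm{dB}} = 10\,\log_{10}\frac{I_{\max}}{I_{\min}}
    \;\Rightarrow\; \frac{I_{\max}}{I_{\min}} = 10^{90/10} = 10^{9}
    \quad \text{(power convention, 90 dB)}
    \]
    \[
    \mathrm{DR}_{\mathrm{dB}} = 20\,\log_{10}\frac{A_{\max}}{A_{\min}}
    \;\Rightarrow\; \frac{A_{\max}}{A_{\min}} = 10^{60/20} = 1000
    \quad \text{(amplitude convention, 60 dB)}
    \]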

Fixed Pattern Noise (FPN)

  •  Every single pixel or photodiode in a CMOS camera has a construction-related tolerance.
  •  Even without any exposure to light, the diodes generate slightly varying output values.
  •  To avoid a corruption of the image, a process similar to the white balance in digital photography compares a reference picture with a dark frame.
  •  This frame contains only the detected differences and is used to correct the subsequent images of the sensor (see the sketch below).
  •  Only after this kind of postprocessing is, for example, a plain white area displayed homogeneously white.
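
A hedged numpy sketch of this dark-frame correction; real cameras do the equivalent per pixel in hardware, and the synthetic arrays below merely stand in for sensor readouts:

    # Fixed pattern noise correction sketch: subtract a per-pixel dark reference.
    # Synthetic arrays stand in for real sensor readouts.
    import numpy as np

    rng = np.random.default_rng(0)
    fpn = rng.normal(10.0, 2.0, size=(512, 512))       # per-pixel offsets (tolerances)

    dark_frame = fpn + rng.normal(0, 0.5, (512, 512))  # captured with shutter closed
    raw_image = 120.0 + fpn + rng.normal(0, 0.5, (512, 512))  # plain gray scene

    corrected = raw_image - dark_frame                 # remove the fixed pattern
    print("std before:", raw_image.std().round(2), "after:", corrected.std().round(2))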

Gigabit Ethernet (GigE)

  •  This data transfer technology allows transmission among various devices (servers, printers, mass storage, cameras) within a network.
  •  While standard Ethernet is too slow for the transfer of comprehensive image data, Gigabit Ethernet (GigE) with a maximum transfer rate of 1000 Mbit/s or 1 gigabit per second (about 125 MB/s) ensures dependable image transfer in machine vision cameras.

 

GigE Vision

  •  GigE Vision is an industrial standard developed by the AIA (Automated Imaging Association) for high-performance machine vision cameras, optimised for the transfer of large amounts of image data.
  •  GigE Vision is based on the network structure of Gigabit Ethernet and includes a hardware interface standard (Gigabit Ethernet) and communication protocols, as well as standardised communication and control modes for cameras.
  •  The GigE Vision camera control is based on a command structure named GenICam.
  •  This establishes a common camera interface to enable communication with third-party vision cameras without any customisation.

 

ImageBLITZ automatic trigger

  •  To capture an unpredictable or unmeasurable event for "in-frame" triggering purposes, Mikrotron invented the ImageBLITZ operation mode.
  •  In most cases no further equipment or elaborate trigger sensing devices for camera control are needed; the picture itself is the trigger.
  •  Within certain limits, ImageBLITZ is adjusted to react only to the expected changes in a predefined area of the picture.

 

Multi Sequence Mode

  •  In this mode the available memory of the camera is divided into many individual sequences. Following each trigger event (e.g. a keystroke, or a light barrier being set off), a predefined number of frames is saved.
  •  For repeatedly occurring events, the different variations can be compared and provide a valuable basis for the analysis of malfunctions or technical processes.
  •  Even a previously determined number of frames before and after the trigger event can be saved within every recorded sequence (see the ring-buffer sketch below).
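
The pre-trigger portion behaves like a ring buffer that is continuously overwritten until the trigger fires. A hedged sketch with simulated frame numbers:

    # Pre/post trigger recording sketch: a ring buffer holds the newest PRE
    # frames; when the trigger fires, they are saved with the next POST frames.
    from collections import deque

    PRE, POST = 5, 10
    ring = deque(maxlen=PRE)              # continuously overwritten history
    saved, post_countdown = None, 0

    for t in range(100):                  # t stands in for successive frames
        if saved is not None and post_countdown > 0:
            saved.append(t)               # still collecting post-trigger frames
            post_countdown -= 1
        if t == 42 and saved is None:     # simulated trigger event
            saved = list(ring) + [t]      # pre-trigger history plus trigger frame
            post_countdown = POST
        ring.append(t)

    print("saved sequence:", saved)       # frames 37..52: 5 before, 1 at, 10 after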

 

Sobel Filter

  •  In several machine vision applications, such as motion analysis, positioning or pattern matching, it is essential to determine certain edges, outlines or coordinates.
  •  The Sobel filter uses an edge-detection algorithm to detect just those edges and produces a chain of pixels (just on/off) that traces the edges.
  •  This process allows the data stream to be cut down by more than 80% already in the FPGA chip of the camera. Less data has to be transferred and processed, so the effective transfer rate rises considerably (see the sketch below).
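
A hedged OpenCV sketch of the principle (the camera does this in its FPGA, not in Python); the synthetic test image and the threshold of 100 are assumptions:

    # Sobel edge detection: gradient magnitude, then a binary on/off edge map
    # that is far smaller to transfer than the full gray image.
    import numpy as np
    import cv2

    # Synthetic frame: dark background with a bright rectangle to outline.
    gray = np.zeros((240, 320), np.uint8)
    cv2.rectangle(gray, (80, 60), (240, 180), 200, -1)

    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient
    magnitude = cv2.magnitude(gx, gy)

    edges = (magnitude > 100).astype(np.uint8) * 255  # on/off pixel chain
    print("edge pixels:", int((edges > 0).sum()), "of", edges.size)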

 

Suspend to Memory Mode

  •  The operation of the camera is reduced to the preservation of recorded images.
  •  Due to the resulting low power consumption, the charge of the storage battery lasts significantly longer.
  •  This mode is activated either automatically after recording or manually by pressing a button.
  •  Thus the recording memory can be preserved for 24 hours.

 

TO KNOW MORE ABOUT MIKROTRON HIGH SPEED CAMERA DISTRIBUTOR IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

 

Source – MIKROTRON.DE

VISION SYSTEM INSPECTS X-RAY DOSIMETER BADGES – HELMHOLTZ-ZENTRUM

In Germany, the inspection of x-ray dosimeters worn by people who may be exposed to radiation is a governmental responsibility. Only a handful of institutions are qualified to perform such tasks. One of them, the Helmholtz-Zentrum (Munich, Germany), is responsible for the analysis of approximately 120,000 film badge dosimeters a month.

Previously these 120,000 film badges were evaluated manually. To speed up this inspection and increase reliability, the Helmholtz-Zentrum has developed a machine-vision system to automatically inspect these films. The film from each dosimeter badge is first mounted on a plastic adhesive foil, which is wound into a coil. This coil is then mounted on the vision system so that each film element can be inspected automatically (see figure). To analyze each film, a DX4 285 FireWire camera from Kappa optronics (Gleichen, Germany) is mounted on a bellows stage above the film reel.

Data from this camera is then transferred to a PC and processed using HALCON 9.0 from MVTec Software (Munich, Germany). Resulting high-dynamic-range images are then displayed using an ATI FireGL V3600 graphics board from AMD (Sunnyvale, CA, USA) on a FlexScan MX 190 S display from Eizo (Ishikawa, Japan). Before the optical density of the film is measured, its presence and orientation must be determined. As each film moves under the camera system's field of view, this presence and orientation task is computed using HALCON's shape-based matching algorithm.

Both the camera and a densitometer are used to measure the optical density of the film. The densitometer measures the brightness at each of seven points on the film with high precision and is used to calibrate the camera measurement for every film image. To increase the dynamic range of the gray-level image of the film, two images with different exposure times are acquired and combined into a high-dynamic-range image. Because the background lighting is not homogeneous, shading correction is performed to eliminate any lighting variation. Any lens vignetting and variations caused by pixel-to-pixel sensitivity differences are eliminated by flat-field correction. The optical density is converted into a photon dose using a linear algebraic function to calculate the x-ray dose to which the film was exposed.
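
The flat-field step can be sketched in a few lines of numpy. This is an illustration of the general technique, not the Helmholtz-Zentrum's actual HALCON code, and the synthetic dark and flat frames stand in for captured references:

    # Flat-field correction sketch: remove shading/vignetting and pixel
    # sensitivity variation using a dark frame and a flat (evenly lit) frame.
    import numpy as np

    rng = np.random.default_rng(1)
    shape = (480, 640)
    dark = rng.normal(8, 1, shape)     # stands in for a captured dark frame
    flat = rng.normal(200, 5, shape)   # stands in for a captured flat frame
    raw = rng.normal(120, 5, shape)    # stands in for a film image

    gain = (flat - dark).mean() / (flat - dark)   # per-pixel sensitivity correction
    corrected = (raw - dark) * gain
    print("corrected mean:", corrected.mean().round(1))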

Every film reading must be correlated with the unique specimen number associated with each badge. Since these numbers are deposited onto the film material, approximately 10,000 characters needed to be trained and saved to an OCR database using HALCON. After the film is identified, the system must also detect which type of dosimeter cassette has been used to house the film. Since each cassette uses a different x-ray filter, the shadow cast on the film can be either rectangular or round. Thus, a grayscale analysis of these shadows can be used to detect the differences between the different types of cassettes that were used to house the film. To pinpoint the specific causes of x-ray exposure, the system is also programmed to detect whether any potential exposure is caused by errors in film developing or x-ray contamination. If the imaging system detects contamination events, these are then reported manually.

 

TO KNOW MORE ABOUT MACHINE VISION SYSTEM IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

 

Source – MVTEC.COM

INDUSTRIAL CAMERAS – LETTING ROBOTIC ARMS SEE

Robotic arms are widely used in industrial automation. They complete tasks which humans cannot accomplish, which are considered too time-consuming or dangerous, or which require precise positioning and highly repetitive movements. Tasks are completed in high quality with speed, reliability and precision. Robotic arms are used in all areas of industrial manufacturing, from the automobile industry to mold manufacturing and electronics, but also in fields where the technology might be less expected, such as agriculture, healthcare and service industries.

ROBOTIC ARMS “SEE” WITH MACHINE VISION

Like humans, robotic arms need "eyes" to see and feel what they grasp and manipulate: machine vision makes this possible. Industrial cameras and image processing software work together to enable the robot to move efficiently and precisely in three-dimensional space, which enables it to perform a variety of complex tasks: welding, painting, assembly, picking and placing for printed circuit boards, packaging and labeling, palletizing, product inspection, and high-precision testing. Not all industrial cameras are compatible with or can be installed in robotic arms, but The Imaging Source's GigE industrial cameras provide an optimal solution.

GIGE INDUSTRIAL CAMERAS FROM THE IMAGING SOURCE – THE COST-EFFECTIVE AND HIGHLY VERSATILE IMAGING SOLUTION

The Imaging Source's GigE industrial cameras are best known for their outstanding image quality, easy integration and rich feature set. They ship with highly sensitive CCD or CMOS sensors from Sony and Aptina, which offer very low noise levels, provide multiple options in terms of resolution and frame rate, guarantee precise position capture and output first-rate image quality. External Hirose ports make the digital I/O, strobe, trigger inputs and flash outputs easily accessible. Binning and ROI features (CMOS only) enable increased frame rates and improved signal-to-noise ratios. The cameras' extremely compact and robust industrial housing means straightforward integration into robotic assemblies.

In addition, The Imaging Source's GigE industrial cameras are shock-resistant, so camera shake and blurred images can be avoided. The cameras ship with camera-end locking screws, and the built-in Gigabit Ethernet interface allows for very long cable lengths (up to 100 meters) for maximum flexibility.

The Imaging Source's GigE industrial cameras come bundled with highly compatible end-user software and SDKs, which makes setup and integration with robotic arms fast and simple. Trained personnel without extensive robot programming experience can reprogram the cameras to complete new tasks in a snap. These characteristics, along with their competitive price, make The Imaging Source GigE industrial cameras the perfect solution for robotic arm applications.

Suitable cameras for robotic arms:

  • GigE color industrial cameras
  • GigE monochrome industrial cameras

TO KNOW MORE ABOUT IMAGING SOURCE MACHINE VISION CAMERAS, SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

Source – THEIMAGINGSOURCE.COM

VISION HELPS SPOT FAILURES ON THE RAIL – NETWORK RAIL LTD.

An automated vision inspection system relieves rail workers from the task of manually inspecting rail infrastructure

Traditionally, rail infrastructure has been inspected manually by foot patrols walking the entire length of a rail network to visually determine whether any flaws exist that could result in failures. Needless to say, the method is extremely labor intensive and time consuming.

To minimize the disruption to train services, the manual inspection process is usually performed overnight and at weekends. However, due to the increase in passenger and freight traffic on rail networks, the time that can be allocated to access the rail infrastructure by foot patrols is now at a premium. Hence rail infrastructure owners are under pressure to find more effective means to perform the task.

To reduce the time required to inspect its rail network, UK infrastructure owner Network Rail (London, England; http://www.networkrail.co.uk) is now deploying a new vision-based inspection system that looks set to replace the earlier manual inspection process. Not only will the system help to increase the availability — and assure the safety — of its rail network, it will also enable the organization to determine the condition of the network with greater consistency and accuracy.

Developed by Omnicom Engineering (York, UK; http://www.omnicomengineering.co.uk), the OmniVision system has been designed to automatically detect the same types of flaws that would be spotted by foot patrols. These include missing fasteners that hold the rail in place on sleepers and faults in weak points in the infrastructure such as at rail joints where lengths of rail are bolted together. The system will also detect the scarring of rail heads, incorrectly fitted rail clamps and any issues with welds that join sections of rail together to form one continuous rail.

SYSTEM ARCHITECTURE

The OmniVision system comprises an array of seven 2048 x 1 pixel line scan cameras, four 3D profile cameras, a sweeping laser scanner and two thermal cameras. Fitted to the underside of a rail inspection car, the vision system illuminates the rail with an array of LED line lights and acquires images of the track and its surroundings as the car moves down the track at speeds of up to 125 mph (Figure 1). The on-board vision system is complemented by an off-train processing system located at Network Rail in Derby that processes the data to determine the integrity of the rail network.

For every 0.8 mm that the inspection vehicle travels, two sets of three line scan cameras housed in rugged enclosures capture images of the two rails. Two vertically positioned cameras image the top surface, or head, of each of the rails, while the other four are positioned at an angle to capture images of the web of the rail. A seventh, centrally located line scan camera captures images of the area between the two rails, from which the condition of the ballast and the rail sleepers, and the location and condition of other rail assets that complement the signaling system, can be determined.
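
As a back-of-the-envelope check (our illustration, not a figure quoted by Omnicom), sampling one line every 0.8 mm at the top speed of 125 mph implies a line rate of roughly 70 kHz per camera:

    % Line rate implied by 0.8 mm sampling at 125 mph (illustrative arithmetic).
    \[
    v = 125\ \mathrm{mph} \approx 55.9\ \mathrm{m/s},
    \qquad
    f_{\mathrm{line}} = \frac{v}{0.8\ \mathrm{mm}}
                      = \frac{55.9\ \mathrm{m/s}}{0.0008\ \mathrm{m}}
                      \approx 7.0 \times 10^{4}\ \mathrm{lines/s}.
    \]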

The cameras transfer image data to frame grabbers in a PC-based 19in rack system on board the train over a Camera Link interface. The frame grabbers were designed in-house to ensure that the data transfer rate from the cameras could be maintained at approximately 145 MBytes/s and that no artifacts within the images are lost through compression. Once captured, the images from each of the cameras are then written to a set of 1TByte solid state drives.

Within the same rugged enclosure as the line scan cameras, the pair of thermal cameras mounted at 45° angles point to the inside web of each of the rails. Their purpose is to capture thermal data at points such as rail joints which can expand and contract depending on ambient temperature. Both the thermal cameras are interfaced via GigE connections to a frame grabber in the on-board 19in rack and the images from them are also stored on 1TByte solid state drives.

Further down the inspection vehicle, two pairs of 3D profile cameras capture a profile of the rails and the area surrounding them for every 200mm that the vehicle travels. Data from the four cameras are transferred to the 19in rack-mounted system over a GigE interface to a dedicated frame grabber and the data again stored on TByte drives. Data acquired by the cameras is used to build a 3D image of the rails and the fasteners used to hold the rails to the sleepers and the ballast around them.

In addition to the line scan, thermal and 3D profile cameras, the system also employs a centrally mounted sweeping laser scanner situated on the underside of the inspection vehicle, which covers a distance of 5m on either side of the rails. Data from the laser scanner, which is transferred to the 19in rack-mounted system over an Ethernet interface and also stored on a set of terabyte drives, is used to determine whether the height of the surrounding ballast is too high or deficient.

PROCESSING DATA

In operation, a vehicle fitted with the imaging system acquires around 5TBytes of image data in a single shift over a distance of around 250 miles. Once acquired, the image data from all the cameras is indexed with timing and GPS positional data such that the data can be correlated prior to processing. Data acquired from the cameras during a shift is then transmitted to the dedicated processing environment at Network Rail, where it is transferred onto a 500TByte parallel file storage system at an aggregate data rate of around 2GB/s for a single data set.

Because the image data is tagged with the location and time at which it was acquired, it is possible to establish the start and end of a specific patrol, or part of a single shift. The indexed imagery associated with each patrol is then subdivided into sections representing several hundred meters of rail infrastructure, after which it is farmed out to a dedicated cluster set of Windows-based servers, known as the image processing factory. Once one set of image data relating to one section of rail has been analyzed by the processing cluster of 20 multi-core PC-based servers and the results returned, a following set of data is transferred into the processors until an entire patrol has been analyzed.

To process the images acquired by the cameras, the OmniVision system uses the image processing functions in MVTec's (Munich, Germany; WWW.MVTEC.COM) HALCON software library. Typically, the images acquired by the line scan cameras are first segmented to determine regions of interest, such as the location of the rail. Once the location of the rail has been found, it is possible to establish an area of interest around the rail where items such as fasteners, clamps and rail joints should be located. A combination of edge detection and shape-based matching algorithms is then used to determine whether a fastener, clamp or rail joint has been identified by comparing the image of the objects with models stored on the database of the system (Figure 2).
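
HALCON's shape-based matching itself is proprietary, but the underlying idea of locating a known part by comparing the image against a stored model can be illustrated with plain normalized cross-correlation in OpenCV. This is a simplified analogue, not the system's actual code; HALCON's matcher is edge-based and tolerant to rotation and occlusion, unlike this sketch, and the synthetic scene, template and 0.8 threshold are assumptions:

    # Simplified analogue of model-based matching: locate a stored "fastener"
    # template in a rail image with normalized cross-correlation.
    import numpy as np
    import cv2

    # Synthetic scene and model: a bright blob standing in for a fastener.
    scene = np.zeros((200, 300), np.uint8)
    cv2.circle(scene, (180, 90), 12, 255, -1)
    template = scene[70:110, 160:200].copy()        # stored model of the part

    scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, location = cv2.minMaxLoc(scores)

    if best > 0.8:                                  # assumed acceptance threshold
        print("part found at", location, "score", round(best, 2))
    else:
        print("no match: flag as candidate defect for manual review")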

To verify whether objects such as fasteners or clamps are present, missing, or obscured by ballast, a more detailed analysis is performed on the data acquired by the 3D profile cameras as a secondary check. To do so, the 3D profile data is analyzed using HALCON's 3D pattern matching algorithm to determine the 3D position and orientation of the objects even if they are partially occluded by ballast (Figure 3). Should the software be unable to match the 3D data with a 3D model of the object, the potential defect, known as a candidate, is flagged for further analysis and returned to a database for manual verification.

The system can also determine the condition of welds in the rail. As the vision inspection system moves over each of the welds, the line scan cameras capture an image of each one. From the images, the software can perform shape-based matching to identify locations where a potential joint failure may exist. Any potential failure of the weld is also flagged as a potential candidate for further investigation. Similarly, the 3D-based model created from data captured by the laser scanner can also be analyzed by the software to determine if the height of the ballast in and around the track is within acceptable limits.

IDENTIFYING DEFECTS

Through OmniVision’s Viewer application – which runs on a set of eight PCs connected to the server – track inspectors are visually presented with a breakdown of the defects along with the images associated with them. This allows them to navigate through, review and prioritize any defects that the system may have detected. Once a defect has been identified, the operators can then schedule the necessary repairs to be carried out manually by on-track teams.

To date, three Omnicom vision systems have been fitted to Network Rail inspection vehicles and effectively used to determine the condition of the UK’s West Coast mainline network. Currently, two additional systems are being commissioned and by the end of this year, Network Rail plans to roll the system out to cover the East Coast main line between London and Edinburgh and the Great Western mainline from London to Wales. When fully operational, the fleet of inspection vehicles will inspect more than 15,000 miles of Network Rail’s rail network per fortnight, all year round.

TO KNOW MORE ABOUT MACHINE VISION SYSTEM SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

Source – MVTEC.COM

3D-SHAPE GMBH – AT THE FRONTIERS OF FEASIBILITY

Why 3D-Shape GmbH equips white light interferometers with MIKROTRON CAMERAS

How do you monitor the topography of micro parts when the demands of efficient production require short cycle times and high quality? 3D-Shape GmbH performs complex surface measurements using the principle of white light interferometry, advanced sensors, 3D image processing and powerful camera technology from Mikrotron GmbH.

Fast three-dimensional image processing technologies are becoming more and more important for quality control of complicated components. They are now superior to tactile measurement systems in speed, flexibility, precision and analytical possibilities. In keeping with the adage “a picture is worth a thousand words,” image analysis allows you to discover complex connections and many object parameters at a single glance.

A few years ago, high-precision measurements within industrial production lines were unimaginable. Today, reliable quality control with measurement uncertainties of only a few nanometers is possible even with short cycle times. This is true for applications in the electronics, aircraft and automotive industries through to the mold construction of micro parts with the highest precision.

THE MEASURING PRINCIPLE OF WHITE LIGHT INTERFEROMETRY

Through rapid innovation cycles in processor and camera technology as well as in precision optics and image processing software, interferometry is increasingly coming into focus. With white light interferometry, the topographies of both rough and smooth objects can be measured and captured in a very precise way. Simply put, light from a source is split into two parts by a semi-transparent mirror (beam splitter); one part illuminates the measurement object, the other a reference mirror.

When the beams reflected from the object and from the reference mirror recombine, they interfere, and the resulting brightness variations are recorded on the image sensor of the camera as the scan proceeds. These are analyzed by special software, and each pixel is assigned a height value. This creates a highly differentiated height profile in the nanometer range. If the process is carried out at various layers, complex structures are recorded in their full height.
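
In textbook form (this is the standard white light interferometry model, not 3D-Shape's proprietary algorithm), the intensity recorded at one pixel while the scanner moves through position z follows a fringe pattern under a coherence envelope, and the position of the envelope's maximum yields the height assigned to that pixel:

    % Standard white light interferogram at one pixel; gamma is the coherence
    % envelope, z0 the surface height at that pixel, lambda0 the mean wavelength.
    \[
    I(z) = I_{0}\left[1 + \gamma(z - z_{0})\,
           \cos\!\left(\frac{4\pi\,(z - z_{0})}{\lambda_{0}}\right)\right],
    \qquad
    \hat{z} = \arg\max_{z}\,\gamma(z - z_{0}) = z_{0}.
    \]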

PERFORMING HIGH-SPEED MEASUREMENTS WITH THE HIGHEST PRECISION

Due to its compact design, the KORAD3D system can be directly integrated into the production line.

The KORAD3D sensor product family, produced by 3D-Shape GmbH, utilizes the principle of white light interferometry. The sensors measure fields from 0.24 × 0.18 mm at minimum to 50 × 50 mm at maximum; they are compact, can be integrated into production line systems and cover a wide range of applications. They determine flatness and roughness on sealing surfaces, provide 3D imaging of milling and drilling tools, give information on wear on cutting inserts, and check the period length and step height of the smallest contacts in electronic devices. The achievable accuracy depends directly on the required measurement field size, the optics used and the camera resolution.

The most important factor influencing the measurement accuracy and measurement speed of a KORAD3D system is the performance of the built-in camera. Larger measurement fields are advantageous in ensuring the system can be used for a variety of applications. However, the greater the measurement field, the less accurate the measurement. A key requirement for the camera is therefore megapixel resolution, in addition, of course, to other important aspects of image quality such as contrast, noise behavior and the sensitivity of the camera. At the same time, the camera must be able to deliver a high frame rate. In many applications the entire structure is recorded layer by layer, at very short cycle times within the production line; doubling the measuring depth, however, also doubles the measuring time. The resulting large amounts of data need to be handled, which can only be achieved by a camera that captures and transfers images in real time.

MONITORING WITH KORAD3D IN THE µm RANGE

In order to process the ball-grid arrays further without errors, it is important to ensure that they are all placed with their top ends inside one area. The bumps in the arrays, arranged like a "nail board," can be checked with the KORAD3D for different characteristics down to the µm range.

Every single contact pin in the ball-grid array is precisely checked in size and shape down to the µm range. In just about one second, the topography of the entire group is captured.

CONVINCING ON ALL LEVELS

When 3D-Shape was looking for a camera that best meets all these requirements, only a few were shortlisted. The Erlangen-based company operates at the frontier of the physically possible and therefore needed a camera with the latest technology. A crucial tip came from a sensor manufacturer. According to the head of development, ultimately the Mikrotron EoSens® was the only camera that met all the requirements, so the company decided to equip the KORAD3D measuring system family with this camera.

At a full-frame resolution of 1,280 × 1,024 pixels, the camera delivers up to 500 images per second via the high-performance Base/Full Camera Link (160/700 MB/s) interface. This specification convinced the Erlangen-based company: for a customer in the electronics industry, they could monitor not-yet-populated circuit boards at a frame rate of 180 fps (frames per second). But even higher frame rates of up to 500 fps are used in applications.

Another important argument for the EoSens® was its outstanding light sensitivity of 2,500 ASA. It is based, among other things, on the large single-pixel area of 14 × 14 µm and the high pixel fill factor of 40%. The investment required for lighting systems was thus reduced, and a higher range of brightness and contrast for image processing could be set.

In addition, there is the switchable exposure optimization. It adapts the usually linear image dynamics of CMOS sensors to the nonlinear dynamics of the human eye at two freely selectable levels. Bright areas are thereby suppressed, and details can be brought out even with extreme light-dark differences in all areas. In the most demanding image processing tasks, this is a great advantage.

Given the cycle times the KORAD3D system has to maintain, every contribution to accelerating data processing is important. This includes the ROI function, which can be freely defined and customized to fit the size and location of individual tasks or the image region to be evaluated. The amount of data is thus reduced and the analysis accelerated, which simultaneously allows greatly increased frame rates. The built-in multiple-ROI function allows the user to define up to three different image fields in the overall picture. 3D-Shape GmbH is not making use of this in current applications, but is already looking at interesting solutions applying it in future.

To keep the measurement accuracy of the topographies created by the KORAD3D system within a narrow range, a number of imaging-quality features must work together to form a performance-boosting whole. The global shutter of the EoSens® completely freezes the captured frame and stores it in real time while the next image is already being exposed. This provides images of dynamic processes free of distortion and smear effects. In addition to the C-Mount lens mount, there is also an F-Mount option. The latter allows the operator to connect the camera and lens into a fixed, calibrated unit, which increases the precision of the analysis. In addition to this range of outstanding performance data, the compact design of the camera, which simplifies system integration, wins customers over.

TO KNOW MORE ABOUT MIKROTRON HIGH SPEED CAMERA, SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

Source – MIKROTRON.DE

XENICS AT BIOS AND PHOTONICS WEST 2017

Leuven, 28 January 2017. Xenics, Europe's leading developer and manufacturer of advanced infrared detectors, cameras and customized imaging solutions from the SWIR to the LWIR realm, comes to BiOS and Photonics West 2017 with a host of new developments. One of them is the new Tigris-640 cooled MWIR camera, which will be demonstrated at their booth. This Stirling-cooled midwave infrared camera is the successor of the Onca MWIR camera and is designed for high-end thermography and thermal imaging in R&D environments. In addition to the Tigris-640, the company's new (extended) SWIR camera, the XEVA-2.5-320, will be demonstrated at the Xenics booth. Xenics exhibits in booths 8620 (BiOS) and 2237 (Photonics West) at the Moscone Center.

NEW COOLED MWIR CAMERA – TIGRIS-640

Xenics launches its new Tigris-640 MWIR camera series at BiOS and Photonics West. The Tigris series will replace the Onca MWIR cameras which are end-of-life. The Tigris-640 aims at applications where high speed, high thermal sensitivity, on-board thermography or broadband detectors are required.

The Tigris-640 is a cooled midwave infrared (MWIR) camera equipped with a state-of-the-art InSb or MCT detector with 640 x 512 pixels and a pixel pitch of 15 µm. Both detectors are optionally available as BroadBand (BB) detectors, meaning that their spectral sensitivity is extended into the SWIR band. The Tigris-640 comes with a motorized filter wheel and is equipped with a variety of interfaces including GigE Vision, Camera Link, analog out, HD-SDI and a configurable trigger input or output.

The main difference between the two available detectors, apart from the detector material, is their A/D conversion and their speed. The Tigris-640-MCT camera offers 14-bit images at a maximum full frame rate of 117 Hz. The Tigris-640-InSb comes with a digital detector that works in 13-, 14- or 15-bit mode at a maximum full frame rate of 250 Hz. Both frame rates can be increased by using a Window of Interest (WOI).

EXTENDED SWIR InGaAs CAMERA UP TO 2.5 µm

The new XEVA-2.5-320 is on display at BiOS and Photonics West 2017. The Xeva-2.5-320 is a SWIR camera designed for use in R&D applications like laser beam analysis and profiling, semiconductor inspection, hyperspectral imaging, etc., where an extended SWIR range up to 2.5 µm is necessary.

The Xeva-2.5-320 SWIR camera is equipped with a Type II Superlattice (T2SL) detector that is sensitive from 1.0 to 2.5 µm. It features a resolution of 320 x 256 pixels with a 30 µm pixel pitch. It outputs 14-bit data and is available in a 100 Hz or 350 Hz version. The Xeva-2.5-320 is equipped with a TE4 cooler. Together with its excellent thermo-mechanical design, the operating temperature can be brought down to 203 K, guaranteeing low noise and dark current values and resulting in excellent image quality. Other features include standard Camera Link or USB 2.0 interfaces, user-friendly Xeneth software, and an optional software development kit.

TO KNOW MORE ABOUT XENICS INFRARED CAMERA DISTRIBUTOR, SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM

Source – XENICS.COM

Mikrotron High Speed Camera

MVAsia – Menzel Vision and Robotics Private Limited – authorized dealer for Mikrotron high speed cameras.
The precision-engineered range of Mikrotron high speed cameras is manufactured from supreme-quality raw materials. The clarity and sensitivity of the picture is truly defined by the Mikrotron high speed camera. The camera is elegant in design and delivers a resolution of 3 megapixels. The range is available at highly competitive prices. The exclusive range includes the EoSens 3CL high speed CMOS camera, the Mikrotron CMOS high speed camera and many more.

Features

  • Resolution of 3 megapixels, up to 523 frames per second at 1696 (H) × 1710 (V) pixel resolution
  • Excellent flexibility in speed and resolution
  • Effective photosensitivity
  • 1.5 seconds of onboard recording memory at full resolution and speed
  • GigE Vision
  • Multi sequence mode
  • Burst trigger mode
  • Steplessly adjustable frame rate, up to 285,000 frames per second at reduced resolution
  • Pixel-based fixed pattern noise correction
  • 1200 ASA monochrome, 1000 ASA RGB
  • Crash-proof up to 100 g
MIKROTRON has also long been active in the field of machine vision. With steady regard to the requirements of the market, industrial machine vision components have been developed ever since. In order to offer complete machine vision solutions, high-speed cameras and high-speed recording systems for all types of application have been developed since the year 2000 to complete the product portfolio in this segment.
Our high-speed cameras meet the highest demands in quality and reliability, thanks to innovation in development, high standards in production and strict quality checks. Furthermore, the close cooperation of all divisions under the same roof gives us the opportunity to react to the wishes and demands of our customers and business partners with great flexibility and on short notice.
Mikrotron Range of Products:

  • Mikrotron High-Speed Recording Systems
  • Mikrotron High-Speed Recording Cameras
  • Mikrotron Machine Vision Cameras
  • Mikrotron Frame Grabbers
  • Mikrotron Accessories

 
The MotionBLITZ® high-speed recording cameras Cube and mini are among the world's most compact high-speed recording cameras in their category. Cube and mini give precise insight into fast motion processes, especially in cases where there is only limited space to place a high-speed recording camera. Their outstanding performance features make them efficient analysis tools to monitor and optimize processes. In extreme situations such as difficult lighting conditions, varying temperatures, vibrations or jolting, and not only in case of space limitations, they reliably deliver comprehensive pictures.
With its built-in display, the high-speed recording camera eosens TS3 offers user-friendly operation as a handheld high-speed snapshot camera. Due to a substantial product range and the various performance characteristics of the high-speed cameras, it is possible to find exactly the appropriate device for every individual application.

Our customers use MIKROTRON high-speed cameras in all areas of industrial development and production, in natural sciences and research as well as in sports sciences.

  • MotionBLITZ EoSens® mini: extremely compact high-speed recording camera for high-speed recordings of up to 6.5 seconds
  • MotionBLITZ® Cube: compact high-speed recording camera for high-speed recordings of up to 13 seconds
  • eosens TS3: handheld high-speed recording camera for high-speed recordings of up to 13 seconds
To know more about Mikrotron high speed cameras, contact MV Asia Infomatrix Pte Ltd at +65 6329-6431 or email us at info@mvasiaonline.com.

Adept Robotics Singapore – Automation Products Specifically Designed for Food Packaging Applications

 

As an innovator in industrial automation, Menzel has consistently led the industry in the development of innovative and powerful automation products for manufacturing, packaging, material handling and factory automation. We pioneered direct-drive robots, flexible feeding, database-driven application software, integrated vision and conveyor tracking, digital servo control networks, and other products and technologies critical to the flexible automation industry.

Systems Help Food and Beverage Companies Meet New Government Safety and Sanitation Mandates, Lower Labor Costs and Future-Proof Packaging Lines

Adept Technology, Inc., a leading provider of intelligent robots and autonomous mobile solutions and services, today announced the availability of primary and secondary food packaging robots and peripherals designed specifically to help food processing companies meet new government safety and sanitation mandates, lower plant operating costs, decrease waste and future-proof packaging lines.

 Robotic automation is a proven solution for both primary and secondary food-packaging lines, which typically require rapid, repetitive, labor-intensive product handling. A primary or secondary packaging line that combines Adept’s robotic components, application software, and game-changing SoftPIC gripper/grasper technology can yield significant productivity gains while improving safety and sanitary conditions.

“Food processors are looking to robotic automation with integrated vision and conveyor control to provide the speed and efficiency of hard automation but with flexibility for quick changeover between products,” states Glenn Hewson, Adept Senior Vice President of Marketing. “Our game-changing SoftPIC grippers are designed specifically for primary packaging of raw protein and fruits and vegetables, and when combined with Adept’s industry-leading vision guidance and conveyor tracking they enable a variety of products to be packaged on the same line.”

Hewson further adds, “Robotic automation can also help processors dramatically lower distribution labor costs by packing mixed-product cases for specific store orders on the secondary packaging line and eliminating repacking at distribution warehouses.”

Adept’s differentiating food packaging innovations include the Adept Quattro™ s650HS, the world’s only USDA-approved robot; PackXpert™, a powerful software solution for the rapid development of robotic packaging applications; SoftPIC™, advanced gripping/grasping technology that supports handling different products in a variety of packaging patterns from a single production line; and industry-leading integrated vision and conveyor tracking for applications where robots must package products that are presented randomly on a moving conveyor belt.

 

Know more about Adept Robotics Singapore automation products: contact MV Asia Infomatrix Pte Ltd at +65 6329-6431 or email us at info@mvasiaonline.com.

 

About MV Asia Infomatrix Pte Ltd

Menzel Infomatrix is a one-stop source not only for world-class complete imaging solutions but also for the separate components needed to integrate a solution. We offer proven solutions for image monitoring, processing and analysis needs, such as automatic machine vision systems, robotic machine vision software, inspection machine vision systems, open source machine vision software, etc.

We also export research microscopes, image analysis systems, zoom lenses and telescopes. Our products, such as automatic machine vision systems, robotic machine vision software, inspection machine vision systems and open source machine vision software, are widely used in areas like forensics, aerial surveillance, medical imaging, particle sizing and counting, object recognition, pharmaceutical research, etc.

 

Two-Link Frame Grabber Dealer Singapore – Karbon-CXP2

BitFlow has a frame grabber model for almost every camera manufactured. Whether your camera is Camera Link, differential (LVDS or RS-422) or even analog, BitFlow can provide the interface. At the high end is the Karbon-CL, a quad-Base Camera Link frame grabber that can take in up to 160 bits at 85 MHz and DMA at speeds of up to 2.0 GB/s.

BitFlow Inc. has introduced the Karbon-CXP2, a two-link frame grabber for one two-link camera or up to two single-link cameras in smaller vision systems.

The device offers video acquisition speeds of up to 6 Gb/s and sends control commands and triggers at 20 Mb/s over a single piece of 75-Ω coaxial cable in lengths of up to 135 m. A maximum of 13 W of power can be transmitted to the camera along the cable. Engineers leveraging the Karbon-CXP2 can repurpose existing coaxial infrastructure to reduce system complexity.

Cameras can be synchronized or unsynchronized, and a separate I/O is in place for each camera. The system supports serial communications to all cameras.
