Image-Sensor-Based Fast Industrial-Robot Positioning System for Assembly Implementation

In this paper, an innovative image-technique-based fast industrial-robot positioning system for assembly implementation is presented. Two laser projectors, namely, a laser pointer and a line laser, are combined with an image sensor to scan and obtain a 3D point cloud model of the object to be assembled. The system is then integrated with a 6-axis industrial robot to define the insertion position in a semi-autonomous manner. This system can save up to 80% of the manual teaching time otherwise spent defining robot insertion positions with a teach pendant. The whole system has been tested on four insertion tasks that require high precision, and the experimental results confirm the effectiveness and efficiency of the system for high-precision tasks. This innovative system can be used in many industrial-robot-based automated assembly tasks.


Introduction
Industrial robots have been used in many industries for decades. (3)(4)(5)(6) The engineer uses the teach pendant to move the end effector of the robot to the required positions with specified orientations in a point-to-point manner. This manual robot teaching is very time-consuming and is cost-effective only if the taught tasks are repeated many times over a long production life cycle, such as in car assembly. Some advanced robots with embedded force control modules can be moved manually, which saves time. However, precise alignment, which requires both position and orientation matching between the part in the gripper of the robot and the workpiece that receives the part, is difficult to achieve because of the small tolerances in most assembly tasks of 3C (computer, communication, and consumer electronics) products. Therefore, a tremendous amount of time is spent defining part insertion positions if the traditional teaching method is adopted.
The difficulty of precise alignment stems from two major factors: tuning multiple degrees of freedom is nonintuitive and inefficient, and observing the alignment manually from the side (rather than from a bird's-eye view along the insertion direction) makes the adjustment inaccurate.
In order to reduce the time needed to teach robot insertion positions, we present an innovative vision-technique-based semi-autonomous positioning system for assembly implementation. The proposed system comprises a number of vision subsystems: an image sensor that captures the background image of the workpiece, a point laser projector and its paired camera that form a distance measurement device, and a line laser projector and its paired camera that form a 3D object scanning device. The proposed semi-autonomous positioning system finds the final insertion position of the end effector through a series of vision subsystem operations, reducing the teaching time by up to 80%.

System Setup
Figure 1 illustrates the setup of the proposed system, in which a Logitech webcam is combined with laser projectors to form a noncontact 3D measurement system; the whole system is attached to the end effector of a Denso 6-axis robot. The end effector also carries a gripper and an attached Basler camera to conduct semi-autonomous positioning for insertion tasks. The camera installed at the center of the gripper provides visualization to the user. In this system, determining the precise position of the laser segments on the captured image is essential and is the key to obtaining a precise scanned point cloud model. To enhance the segmentation of the laser rays, a laser filter is used to block unwanted light, allowing only laser light of specific wavelengths to reach the camera's image sensor. The filter is fixed in front of the camera entrance, as shown in Fig. 1. Both lasers used in this experiment have a wavelength of 650 nm; therefore, a long-pass filter is used to pass only light with wavelengths of 600 nm and longer, as can be seen in Fig. 2.

Image sensor calibration
The camera used in this experiment is a Logitech HD webcam, which can be modeled as a pinhole camera. For each pixel in the camera image, there is a line of 3D points that projects onto that pixel through the pinhole.
To obtain those lines, we need the intrinsic values of the camera; therefore, we used Zhang's method of camera calibration and the backprojection method from OpenCV. Figure 3 shows the calibration process, and the obtained intrinsic values of the camera are listed in Table 1.
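For illustration, the calibration step can be sketched with OpenCV's implementation of Zhang's method as below; the checkerboard geometry and image paths are assumptions for the sketch, not the authors' exact procedure.

```python
# Sketch of Zhang's camera calibration with OpenCV.
# Assumed inputs: images of a 9x6 inner-corner checkerboard under calib/.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners per row and column (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):  # illustrative path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix; dist holds the lens distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```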
The backprojection from the pixel value (u, v) to the line of 3D points is then calculated as

    P(s) = s K^{-1} (u, v, 1)^T,    (1)

where K is the intrinsic matrix of the camera and s is a scale factor. Since the intrinsic values of the camera sensor are constant, for every single pixel (u, v), we can find a line of infinitely many 3D points, where the position of each point along the line is determined by the value of the scale factor s.
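A minimal sketch of this backprojection follows; the intrinsic matrix below is illustrative (the focal length of 800 px is an assumption, with the principal point taken from Table 1).

```python
# Back-project a pixel (u, v) to a point on its 3D ray, following Eq. (1).
import numpy as np

def backproject(K: np.ndarray, u: float, v: float, s: float) -> np.ndarray:
    """Return the 3D point at scale s along the ray through pixel (u, v)."""
    pixel_h = np.array([u, v, 1.0])        # homogeneous pixel coordinates
    return s * np.linalg.inv(K) @ pixel_h  # point in camera coordinates

K = np.array([[800.0, 0.0, 314.962],       # focal length 800 px is assumed
              [0.0, 800.0, 232.344],
              [0.0, 0.0, 1.0]])
for s in (0.1, 0.5, 1.0):                  # sample three points on one ray
    print(backproject(K, 320.0, 240.0, s))
```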

Hand-eye calibration
Since the camera is fixed on the robot end effector, the camera calibration results can be used to obtain the transformation from the camera coordinates to the robot's end-effector coordinates or the robot's base coordinates. (7) In this hand-eye calibration, the calibration board is fixed at one position to act as the world coordinates. The same calibration process is applied at 13 different camera positions in the world coordinates. By pairing these camera positions and orientations in the world coordinates with their corresponding end-effector positions and orientations in the robot's base coordinates, we can generate the transformation. Figure 4 shows the calibration process. The camera position and orientation in the end-effector coordinates are listed in Table 2.
The transformation matrix shown in Eq. (2), used to transform a 3D point from the camera coordinates to the end-effector coordinates, is constant and can be derived from the values in Table 2:

    P_E = ^{E}T_C P_C,    ^{E}T_C = \begin{pmatrix} R & t \\ 0 & 1 \end{pmatrix},    (2)

where R and t are the fixed rotation and translation of the camera in the end-effector coordinates, and P_C and P_E are homogeneous 3D points in the camera and end-effector coordinates, respectively.
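As a minimal sketch, the eye-in-hand transform of Eq. (2) can be recovered with OpenCV's calibrateHandEye (available from OpenCV 4.1); the synthetic poses below stand in for the 13 recorded pairs and are assumptions for the sketch.

```python
# Sketch of eye-in-hand calibration: recover the constant camera-to-gripper
# transform from paired robot and camera poses.
import cv2
import numpy as np

rng = np.random.default_rng(0)

def rand_pose():
    """Random rotation matrix and translation vector (synthetic data)."""
    R, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, 3))
    return R, rng.uniform(-0.2, 0.2, (3, 1))

R_cg, t_cg = rand_pose()  # ground-truth camera-in-gripper pose to recover
R_tb, t_tb = rand_pose()  # fixed calibration board in the base frame

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(13):       # 13 stations, as in the paper
    Rg, tg = rand_pose()  # gripper pose in the base frame
    # board-in-camera = inv(cam-in-gripper) @ inv(gripper-in-base) @ board-in-base
    R = R_cg.T @ Rg.T @ R_tb
    t = R_cg.T @ (Rg.T @ (t_tb - tg) - t_cg)
    R_g2b.append(Rg); t_g2b.append(tg)
    R_t2c.append(R); t_t2c.append(t)

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
T = np.eye(4)             # assemble the constant 4x4 transform of Eq. (2)
T[:3, :3], T[:3, 3] = R_est, t_est.ravel()
print(np.allclose(R_est, R_cg, atol=1e-6), np.allclose(t_est, t_cg, atol=1e-6))
```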

Laser segmentation
The image obtained by the camera with the laser filter must undergo a segmentation process to separate the laser pixels from the background. This is done with the adaptive threshold method from OpenCV, in which pixels are classified as either "dark" or "light". The thresholded image of a line laser will contain lines wider than one pixel. Since a single-pixel-wide line is needed to calculate the laser's 3D position, we implemented a thinning operation to reduce the lines to one pixel in width. In an image sensor frame, the thinning operation uses a 3 × 3 structuring element to remove the outer pixels of the laser line, and the process is applied repeatedly until a single-pixel-wide line remains. Figure 5 shows a frame with the sequential processes from thresholding to thinning. The thinned image is then used to detect lines by the line detection method from OpenCV. (8)
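The segmentation pipeline can be sketched as follows; here the iterative 3 × 3 thinning described above is replaced by OpenCV-contrib's Zhang-Suen thinning as a stand-in, and the file name and parameter values are illustrative assumptions.

```python
# Sketch of laser segmentation: adaptive threshold -> thinning -> lines.
import cv2
import numpy as np

img = cv2.imread("laser_frame.png", cv2.IMREAD_GRAYSCALE)  # assumed input

# Classify pixels as "light" (laser) or "dark" relative to the local mean.
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 11, -10)

# Thin the laser stripe to one pixel wide (needs opencv-contrib-python).
thin = cv2.ximgproc.thinning(binary)

# Detect line segments on the thinned image.
lines = cv2.HoughLinesP(thin, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=5)
print(0 if lines is None else len(lines), "segments found")
```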

3D measurement with laser pointer
Figure 6 shows the concept of using a laser pointer to transform a 2D point into a 3D point with distance information. Since the camera and the laser pointer are fixed with a distance h between their centerlines, the distance d from the laser projector to its projection point on the object surface can be determined from the pixel position of the pointer on the image. After obtaining the pixel position of the pointer projection using the above laser segmentation method, we can compute the distance d by triangulation, assuming the laser axis is parallel to the camera's optical axis:

    d = f h / (u - u_0),    (3)

where f is the focal length of the camera in pixels, and u and u_0 are the pixel coordinates of the laser spot and the principal point, respectively, along the direction of the baseline. The 3D coordinates of the laser projection in the laser coordinates are x = 0, y = 0, and z = d; therefore, a transformation from the laser coordinates to the robot's base coordinates is applied to the point P = (0, 0, d) to obtain the final 3D point P_B with respect to the robot's base:

    P_B = ^{B}T_E ^{E}T_L P,    (4)

where (ϕ, θ, ψ) is the rotation vector of the transformation between the base coordinates and the end-effector coordinates. (9) The precisions of x and y depend only on the robot arm precision. In this study, we used a Denso 6-axis robot arm with a precision of ±0.2 mm. The depth precision is shown in Fig. 7 and Table 3.
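A minimal sketch of the triangulation in Eq. (3) follows; the baseline and focal length values are illustrative assumptions.

```python
# Distance to the laser spot by triangulation (Eq. (3)).
import numpy as np

def laser_distance(u: float, u0: float, f_px: float, h: float) -> float:
    """u: spot pixel column, u0: principal point,
    f_px: focal length in pixels, h: camera-laser baseline in mm."""
    return f_px * h / (u - u0)

d = laser_distance(u=420.0, u0=314.962, f_px=800.0, h=30.0)  # assumed values
P = np.array([0.0, 0.0, d, 1.0])  # homogeneous point in the laser frame
print("d =", d, "mm")
```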

3D scanner with line laser
The line laser projects a plane of light that intersects the object surface and forms projection points. As shown in Fig. 8, since we fixed the laser and the camera on the robot's end effector, the laser plane is also fixed in the camera coordinates. To calculate the laser plane equation, we projected the line laser onto two calibration boards and then used the backprojection method from the camera calibration to find the 3D positions, in the camera coordinates, of the intersection points between the laser and the calibration boards. Three randomly selected points are used to calculate the plane equation. Figure 9 shows an example of obtaining a laser plane. (10) After obtaining the plane equation, we applied Eq. (1) to every single pixel of the laser projection (shown in red) on the image to obtain the corresponding ray of light. The 3D position of the laser projection on the object is the intersection between the laser plane and this ray (see Fig. 9).
The final 3D position of a point belonging to the laser projection, in the camera coordinates, can be derived by substituting Eq. (1) into the plane equation n · P + c = 0 and solving for the scale factor:

    P_C = s K^{-1} (u, v, 1)^T,    s = -c / (n · K^{-1} (u, v, 1)^T),    (5)

where n and c are the normal vector and offset of the laser plane in the camera coordinates. To verify the result of the scanner, we measured the real size of the scanned object and compared it with the scanned result. Figure 10 shows the scanned results of the model object and the corresponding errors.
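A minimal sketch of the plane fit from three points and the ray-plane intersection of Eq. (5); the intrinsics and sample points are illustrative assumptions.

```python
# Fit the laser plane from three points, then intersect a pixel's ray with it.
import numpy as np

def plane_from_points(p1, p2, p3):
    """Plane (n, c) with n . P + c = 0 through three 3D points."""
    n = np.cross(p2 - p1, p3 - p1)
    n /= np.linalg.norm(n)
    return n, -float(n @ p1)

def intersect_ray_plane(K, u, v, n, c):
    """3D point where the backprojected ray of pixel (u, v) meets the plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction, Eq. (1)
    s = -c / float(n @ ray)                         # scale factor, Eq. (5)
    return s * ray

K = np.array([[800.0, 0.0, 314.962],                # assumed intrinsics
              [0.0, 800.0, 232.344],
              [0.0, 0.0, 1.0]])
n, c = plane_from_points(np.array([0.00, 0.0, 0.30]),
                         np.array([0.10, 0.0, 0.35]),
                         np.array([0.00, 0.1, 0.30]))
print(intersect_ray_plane(K, 320.0, 240.0, n, c))
```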
Using the hand-eye calibration result, the obtained 3D points can then be transformed from the camera coordinates to the robot's base coordinates:

    P_B = ^{B}T_E ^{E}T_C P_C.    (6)
Figure 11 shows some scanning results in the robot's base coordinates.

Implementation
In order to test the robustness and reliability of the developed system for industrial robot tasks, we conducted four part-insertion experiments on four different objects. These tasks all require a high positional precision of 0.25-0.5 mm.

Dual in-line memory module (DIMM) insertion
DIMM insertion requires the industrial robot to insert a DIMM into a DIMM socket on a circuit board that is randomly placed in the workspace. First, the laser pointer is used to locate several points on the circuit board surface and to calculate the normal vector of this surface, which guides the robot's end effector to the desired position. The line laser then scans across the board to locate the DIMM socket position as well as to determine the DIMM height. The shape-based matching (SBM) method from Halcon (11) is then applied to precisely match the DIMM heads and locate the final insertion position from the scanned point cloud, as shown in Fig. 12.
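The paper uses Halcon's shape-based matching for this step; as a rough stand-in, the localization can be sketched with OpenCV template matching on a height image rendered from the point cloud. The file names and acceptance threshold below are illustrative assumptions.

```python
# Locate the socket head in a height image by normalized cross-correlation.
import cv2

height_img = cv2.imread("scan_height.png", cv2.IMREAD_GRAYSCALE)  # assumed
template = cv2.imread("socket_head.png", cv2.IMREAD_GRAYSCALE)    # assumed

scores = cv2.matchTemplate(height_img, template, cv2.TM_CCOEFF_NORMED)
_, best, _, top_left = cv2.minMaxLoc(scores)  # max score = best match

if best > 0.8:  # illustrative acceptance threshold
    h, w = template.shape
    center = (top_left[0] + w // 2, top_left[1] + h // 2)
    print("insertion pixel:", center, "score:", best)
```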
The result showed a successful implementation of the system for a socket insertion task. The system can handle this high-precision application with a position error of ±0.5 mm. The GUI of the system can be seen in Fig. 13.

Dot matrix LED insertion
This task requires the insertion of a dot matrix LED into an 8051 MCU board, as shown in Fig. 14. Line laser scanning generated the point cloud model shown in Fig. 15. The back of the dot matrix LED has two rows of 12 pins, and the insertion midpoint is located at the center between the 6th and 7th pins. Therefore, to determine the insertion position, two pins on each side are selected. The SBM method is then applied to precisely match the pair of circular pinholes and locate the final insertion position from the scanned point cloud, as shown in Fig. 16.

Clock oscillator insertion
This task is the insertion of a 4-pin clock oscillator into a motor control board, both of which are shown in Fig. 17. The line laser scanning and the point cloud model are shown in Fig. 18.

Conclusions
We presented a noncontact 3D measurement system comprising an image sensor combined with a line laser projector, and the system was integrated with a 6-axis industrial robot. The system reduces the manual robot teaching time by up to 80% by using the image-technique-enhanced semi-autonomous positioning system. The results of experiments on four insertion tasks indicated that the proposed system can successfully work in assembly tasks with a tolerance level of ±0.5 mm, which satisfies the requirements of most insertion tasks of 3C products. The system error may become larger when the laser segmentation is affected by undesirable light exposure or light reflection. The system used half of the maximum image sensor resolution because our computing system cannot keep up with the processing load at the maximum resolution. In future work, we aim to improve the computing system and further enhance the laser segmentation to obtain more precise and stable point cloud models.

Fig. 5. (Color online) Laser segmentation after applying an adaptive threshold (upper right) and the thinning process (lower left).

Fig. 18. (Color online) Line laser scanning and the point cloud model.

Fig. 22. (Color online) Line laser scanning and the point cloud model.

Fig. 23. CAD file of the flash memory IC socket.

Table 1
Intrinsic values of the camera.
Scale factor in x-axis (m_x)      8.3 nm
Scale factor in y-axis (m_y)      8.3 nm
Principal point in x-axis (u_0)   314.962
Principal point in y-axis (v_0)   232.344

Fig. 3. (Color online) Camera calibration process.

Table 2
Camera pose in the tool (end-effector) coordinates.