Application of Real-time Positioning System with Visual and Range Sensors to Security Robot

The objective of this study is to apply a global navigation satellite system (GNSS)–real-time kinematic (RTK) positioning system, StarGazer, a visual sensor, and a range sensor to a wheeled mobile robot (WMR) for security service. The GNSS–RTK positioning system provides the outdoor location of the WMR and records the path along which the WMR moves. StarGazer, an infrared positioning system, is applied to indoor navigation. We use a webcam to search for people who appear along the patrol path, and an intruder's face is recognized using the Fisherface algorithm. A laser sensor is utilized to detect an obstacle's position, and a fuzzy controller is applied to obstacle avoidance control. Experimental results show that the proposed control scheme enables the WMR to perform surveillance and security patrol tasks in both outdoor and indoor environments.


Introduction
To move towards Industry 4.0, many countries actively promote intelligent automation and robotic technologies. In robot-related industries, many types of robots have been developed to meet people's needs, such as sweeping robots, industrial robots, military robots, and medical robots. Many advanced technologies are integrated into these robots, such as autonomous mobility, perception, and network communication technologies, which help robots complete the tasks given to them by humans. Robots have thus gradually become involved in our daily lives. However, there is still much room for improvement; thus, improving robot control is one of the most interesting topics in recent research. In addition, because of the low birth rate and aging population in many countries, such countries face a labor shortage, which will lead to an increasing reliance on robots to carry out work that requires manpower. Many communities and companies have security guards who are responsible for protecting the safety of people and facilities. The drawback is that hiring a security guard is not only expensive, but a security guard also cannot perform patrol tasks 24/7.
In mobile robot research, wheeled mobile robots (WMRs) are the most commonly used. The advantages of WMRs are their easy control and fast movement; thus, a WMR is used in this study. Designing an intelligent control system that enables the WMR to independently assist human patrolling in different environments is the main motivation of this study. An intelligent robot system combines many advanced techniques and theories, such as path planning, image processing, and obstacle avoidance technology. For path planning, much research effort has produced solutions such as the Dijkstra algorithm, A* algorithm, and ant algorithm. (1)(2)(3) In the majority of studies on obstacle avoidance, fuzzy controllers are used to control WMRs. (4)(5)(6)(7) Many kinds of sensors, such as infrared, ultrasonic, and laser sensors, can be used to detect obstacles. In this study, we integrate the global navigation satellite system (GNSS)–real-time kinematic (RTK) positioning system, StarGazer, a laser sensor, the A* algorithm, a fuzzy controller, and image processing into an intelligent system for a WMR to perform security patrol. The security robot designed in this study can reduce the labor cost of hiring a security guard.
The GNSS–RTK positioning system is used to provide the current position and set the goal position of the WMR. This system can receive Global Positioning System (GPS) and BeiDou Navigation Satellite System (BDS) observation data; the number of useful satellites is more than twice that of a single-satellite system, so the positioning accuracy is significantly improved. (8)(9)(10) A laser sensor (11) and a fuzzy controller are used to detect the surroundings and avoid obstacles in real time. A webcam (12) and image processing are used to acquire images of people during the patrol process. If the WMR detects an intruder along its patrol path, it captures a photo of the intruder using the webcam and obtains the intruder's location using the positioning system; it then sends the location information and the photo to the control center by Wi-Fi and asks for help. In image processing, the method of detecting the upper-body region of people is applied. (13) We use a human–machine interface built with LabVIEW 2010 that integrates MATLAB R2013a code to realize the control scheme proposed in this study.

WMR System
In this study, the hardware structure includes a WMR, a GNSS–RTK positioning system antenna, a GNSS–RTK positioning system receiver, a Microsoft LifeCam Studio webcam, and a SICK LMS100 2D laser scanner. Ihomer is used as the WMR in this study. (7) The WMR is 480 mm in length, 455 mm in width, and 250 mm in height, and weighs 17 kg. The WMR is a two-wheeled differentially driven robot. The two wheels located on the left and right sides under the body of the WMR are the driving wheels, and the two small wheels located at the front and rear under the body are casters. The WMR consists of a digital signal processor (DSP) development board, a pointer voltmeter, six ultrasonic sensors, eleven short-range infrared sensors, four long-range infrared sensors, two 18 V DC motors, and two 12 V batteries. The DSP development board is the control board of the WMR and is used to control its movement. The two 12 V batteries provide power to the WMR, and the pointer voltmeter displays the power state of the WMR. In this study, we use a laser sensor instead of the ultrasonic and infrared sensors. Compared with ultrasonic and infrared sensors, laser sensors have the advantages of fast scanning, high resolution, and immunity to surrounding effects, and have been widely used in recent years. The two 18 V DC motors drive the two driving wheels of the WMR. Each motor controls one driving wheel, and the maximum speed of the robot is 1.6 m/s. The WMR is a nonlinear system, so we assume that the WMR is located on a Cartesian (global) coordinate system. Figure 1 shows a model of the WMR moving to a certain destination. Here, {X, Y} is the initial frame and {X_R, Y_R} is the WMR frame. The coordinate of the current position of the WMR is (x, y), the coordinate of the destination position is (x_d, y_d), and d is the distance between the current position and the destination position of the WMR.
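Because the WMR is a two-wheeled differentially driven robot, the body's linear and angular velocities follow directly from the two wheel speeds. A minimal sketch in Python (the function name and the wheel-separation parameter are illustrative; the paper does not give the wheel separation):

```python
def body_velocity(v_left, v_right, wheel_sep):
    """Differential-drive kinematics: linear speed v (m/s) of the body
    center and angular speed omega (rad/s) from the two wheel speeds.
    wheel_sep is the distance between the driving wheels (m)."""
    v = (v_right + v_left) / 2.0        # average of the wheel speeds
    omega = (v_right - v_left) / wheel_sep  # speed difference turns the body
    return v, omega
```

Equal wheel speeds drive the robot straight ahead; opposite wheel speeds rotate it in place.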
(6) The position errors are e_x = x_d − x and e_y = y_d − y. The distance d between the current position of the WMR and the destination position is

d = √(e_x² + e_y²) = √((x_d − x)² + (y_d − y)²),

and the angle difference relative to the destination is

θ_e = atan2(y_d − y, x_d − x) − θ,

where θ is the heading angle of the WMR. By calculating the linear speed and angular speed of the robot for each movement, we can obtain every movement of the WMR. The WMR can be driven to the expected position and direction when the movement of the robot is computed at each iteration.
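The distance and angle-difference computations above can be sketched as follows (a minimal illustration; the function and variable names are not from the paper's code):

```python
import math

def motion_errors(x, y, theta, x_d, y_d):
    """Distance and heading error of the WMR with respect to the
    destination (x_d, y_d). (x, y) is the current position and
    theta the heading angle (rad) in the global frame."""
    e_x = x_d - x
    e_y = y_d - y
    d = math.hypot(e_x, e_y)                 # distance to the destination
    theta_e = math.atan2(e_y, e_x) - theta   # angle difference
    # normalize the angle difference to (-pi, pi]
    theta_e = math.atan2(math.sin(theta_e), math.cos(theta_e))
    return d, theta_e
```

At each iteration the controller would drive d and theta_e toward zero.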
The Microsoft LifeCam Studio 1080p Full-HD webcam (12) is used to search for people who appear along the patrol path. The webcam is installed on top of the WMR. The laser sensor used in this study is the SICK LMS100 2D laser scanner, which is placed on top of the WMR; its scanning angle is 270°, operating range is 0.5–20 m, angular resolution is 0.5°, scanning frequency is 50 Hz, and operating voltage range is 10.8–30 V DC. Ethernet is used to connect the SICK LMS100 2D laser scanner to a notebook computer. An indoor navigation positioning system called StarGazer is installed on top of the WMR. StarGazer and landmarks are the two major components of the localization system, as shown in Fig. 2. StarGazer transmits and receives infrared signals that are used to obtain the coordinate location and heading angle of the WMR. Passive landmarks are installed on the ceiling, and the applicable range of each landmark is 3 m. The advantage of StarGazer is its short computation time; it can process twenty measurements per second. The coordinate error is 2 cm and the angular error is 1°. The localization system provides positioning messages to the control process in real time, so we do not need to rely on encoders for positioning, and the WMR can be localized in an absolute reference frame. The localization system transmits the location and direction from the receiver to a computer through a USB connection. (14) For outdoor patrol, a GNSS–RTK positioning system is used in this study. The GNSS–RTK positioning system consists of two parts, a GNSS receiver and a GNSS antenna, which are placed on top of the WMR. It uses a low-cost GNSS chip, and its price is about one-tenth that of a traditional RTK system. In addition, the system uses a GPS/BDS dual-satellite system.
Because the number of useful satellites is more than twice that of a single-satellite system, it can provide highly accurate observations and can effectively solve the problem of satellite signals being partially blocked when the vehicle is moving. The measurement equation of the carrier phase can be written as (8)(9)(10)

φ_i^g = (f/c)R_i^g + f(Δt_i − Δt^g) + a^g − (f/c)d_ion/trop + N_i^g + v_φ,i^g,

where f is the carrier frequency (Hz); c is the speed of light (m/s); R_i^g is the geometric range from receiver i to satellite g (m); Δt_i and Δt^g are the receiver i and satellite g clock errors, respectively (s); a^g is the timing deviation of the satellite transmitting frequency; d_ion/trop is the influence of the ionospheric and tropospheric delays on the distance (m); N_i^g is the phase ambiguity (cycles); and v_φ,i^g is the phase noise (cycles). The principle of the differential mode for raw observations uses linear combinations of the equations to eliminate common-mode errors. The double-difference equation of the carrier phase measurements at the same time t, with two receivers i and j observing the same two satellites g and h, which constitutes the ground differential phase measurement equation, is (8)(9)(10)

∇Δφ_ij^gh = (f/c)∇ΔR_ij^gh − (f/c)∇Δd_ion/trop + ∇ΔN_ij^gh + ∇Δv_ij^gh,

where ∇ΔR_ij^gh is the double-differenced geometric straight-line distance from satellites g and h to receivers i and j, ∇Δd_ion/trop is the double-differenced ionospheric and tropospheric delay, ∇ΔN_ij^gh is the double-differenced phase ambiguity of receivers i and j, and ∇Δv_ij^gh is the double-differenced observation error of receivers i and j. For static positioning, we placed the receiver and antenna at a fixed point for 1 h, and the experimental results show that the average error of the x-axis and y-axis coordinates is about 1 cm. For dynamic positioning, we moved the WMR at a speed of 0.46 m/s in a 4 × 4 m² environment, and the experimental results show that the average error of the x-axis and y-axis coordinates is about 0.5 m, as shown in Fig. 3.
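The error-canceling effect of the double-difference combination can be illustrated numerically: the receiver and satellite clock terms drop out, leaving geometry and ambiguity. A simplified sketch (simplified units, illustrative names; atmospheric delay and noise omitted):

```python
F = 1.0  # carrier frequency in these simplified units

def phase(R, dt_rcv, dt_sat, N):
    """Simplified carrier phase: geometric range, receiver/satellite
    clock errors, and integer ambiguity."""
    return F * R + F * (dt_rcv - dt_sat) + N

def double_difference(p_ig, p_jg, p_ih, p_jh):
    """Between-receiver differences for satellites g and h, then the
    between-satellite difference: all clock errors cancel."""
    return (p_ig - p_jg) - (p_ih - p_jh)
```

Whatever clock errors the receivers and satellites have, they do not appear in the double-differenced observable.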

Image Processing
For people detection, the important information includes the human head, face, torso, and skin. Most studies use the human face to detect people. In our study, we used the method of detecting the upper-body region of people, in which the upper-body region is defined as the head and shoulder area. This method is more robust against changes in pose, such as head rotations or tilts. Figure 4 shows the set of feature types, (13) where the white and black rectangles correspond to positive and negative weights, respectively. The feature types consist of four edge features, eight line features, and two center-surround features. Each feature is computed by summing up the pixels within the smaller rectangles. (13) Figure 5 shows the results of detection based on the face of a person, and Fig. 6 shows the results of detection based on the upper-body region. As can be seen from the results, the method of detecting the upper-body region shows higher capability than that of detecting the face. The former is more robust against changes in pose; it consistently detects people under all pose changes, whereas the latter is directly affected by pose changes, which make it fail to detect the side and back views. Using the method of detecting the upper-body region, we detected 10 front views of people and obtained a detection rate of about 90%.
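The rectangle sums underlying these features are typically computed with an integral image, so each feature costs only a few array lookups regardless of rectangle size. A minimal numpy sketch (function names are illustrative, not from the paper's code):

```python
import numpy as np

def integral_image(img):
    """ii[r, c] holds the sum of all pixels above and to the left of
    (r, c), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of the h-by-w rectangle with top-left pixel (top, left),
    using four lookups in the integral image."""
    br = ii[top + h - 1, left + w - 1]
    tl = ii[top - 1, left - 1] if top > 0 and left > 0 else 0
    tr = ii[top - 1, left + w - 1] if top > 0 else 0
    bl = ii[top + h - 1, left - 1] if left > 0 else 0
    return br - tr - bl + tl

def edge_feature(ii, top, left, h, w):
    """A two-rectangle edge feature: positively weighted left half
    minus negatively weighted right half."""
    half = w // 2
    return rect_sum(ii, top, left, h, half) - rect_sum(ii, top, left + half, h, half)
```

A line or center-surround feature is built the same way from three or four rectangle sums.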
Face capture uses the Haar-like method; (15) boosted classifiers can then achieve a higher success rate in detecting the face. The Haar-like method uses a rectangular frame whose size and rotation angle can be varied. Assume that the first rectangle encompasses the black and white rectangles and the second rectangle represents the black area. The black areas have negative weights and the white areas have positive weights. Haar-like features are shown in Fig. 7, with a special diagonal line feature used in Refs. 14, 16, and 17. The Fisherface algorithm (18) is applied to face recognition. First, image preprocessing using the wavelet transform is carried out to significantly increase the recognition ability; then, histogram equalization increases the global contrast of the images. Second, we find the locations of the eyes, calculate the distance between them, and crop the image to an appropriate size. Third, the face is identified by the principal component analysis (PCA) (19) and linear discriminant analysis (LDA) (18) of the Fisherface algorithm. The flowchart is shown in Fig. 8.
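The PCA + LDA combination of the Fisherface algorithm can be sketched in a few lines of numpy; this is a simplified illustration of the technique, not the paper's implementation, and the function names are our own:

```python
import numpy as np

def fisherfaces(X, y, n_pca):
    """Fisherface training sketch: PCA to n_pca dimensions, then LDA
    in the reduced space. X is (n_samples, n_pixels); y holds integer
    class labels. Returns the data mean and combined projection W."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA via SVD of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W_pca = Vt[:n_pca].T                      # (n_pixels, n_pca)
    Z = Xc @ W_pca
    # LDA: between-class (Sb) and within-class (Sw) scatter
    classes = np.unique(y)
    gm = Z.mean(axis=0)
    Sb = np.zeros((n_pca, n_pca))
    Sw = np.zeros((n_pca, n_pca))
    for c in classes:
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sb += len(Zc) * np.outer(mc - gm, mc - gm)
        Sw += (Zc - mc).T @ (Zc - mc)
    # Fisher criterion: leading eigenvectors of Sw^-1 Sb
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1][:len(classes) - 1]
    W_lda = evecs.real[:, order]
    return mean, W_pca @ W_lda                # combined projection

def classify(x, mean, W, templates, labels):
    """Nearest-template classification in the Fisherface space; an
    error above a threshold would flag the face as an intruder."""
    p = (x - mean) @ W
    dists = np.linalg.norm(templates - p, axis=1)
    k = int(np.argmin(dists))
    return labels[k], dists[k]
```

Here `templates` are the class means of the projected training faces; the distance returned by `classify` plays the role of the classification error compared against the intruder threshold.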
Face recognition is shown in Figs. 9 and 10. The training samples are the faces of four persons, as shown in Fig. 9. Each person has twenty images captured at different times. When the webcam detects someone's face, it becomes the test sample to be compared with the training samples. If the classification error value exceeds a threshold, the person is determined to be an intruder, and a warning message is sent to the control center.

Figure 11 shows the flowchart of the control sequence in our study. First, we set the coordinate location of the goal of the WMR, and the positioning system receives the coordinate location of the WMR. Second, we use the webcam and the laser sensor to search the environment to obtain environment information, from which the WMR determines whether obstacles or people appear along its patrol path. If obstacles appear along the patrol path, the fuzzy controller enables the WMR to avoid them until it finishes patrolling. If people appear along the patrol path, the WMR captures and saves their photos, sends them to the control center by Wi-Fi, and asks for help.

Control Scheme
For path planning, the A* algorithm (1)(2)(3) is one of the most efficient shortest-path algorithms. Compared with the Dijkstra algorithm, the A* algorithm is a heuristic search algorithm, so the number of searched nodes is smaller and the efficiency is much higher. To apply the A* algorithm to path planning, the cost evaluation function must be given. (3) Starting from the idea of finding a minimum-cost path, we intuitively apply the Euclidean distance heuristic. To show the effectiveness of the A* algorithm, we carry out simulation experiments using MATLAB software. We randomly generate two simulation environments, 20 × 20 and 40 × 40, each with a single starting point and a single goal point, as shown in Fig. 12. Here, the sign "o" represents the goal, the sign "x" represents the obstacles, and the sign "*" represents the starting point.
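A* search with the Euclidean distance heuristic on a grid map can be sketched as follows; this is a minimal Python illustration of the technique, not the paper's MATLAB implementation:

```python
import heapq, math

def a_star(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle)
    with the Euclidean distance heuristic. Returns the path as a
    list of (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: math.hypot(p[0] - goal[0], p[1] - goal[1])
    open_set = [(h(start), 0.0, start)]       # (f = g + h, g, cell)
    came_from = {start: None}
    g_best = {start: 0.0}
    closed = set()
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:                       # walk back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1.0                  # unit cost per move
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None
```

The Euclidean heuristic never overestimates the remaining cost, so the search expands fewer nodes than Dijkstra while still returning a shortest path.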
To make the WMR perform its patrol task successfully, it must be able to avoid obstacles and not be influenced by them during the patrol process. We use the laser rangefinder to detect obstacles in the patrol path and the fuzzy controller to make the WMR avoid them. The SICK LMS100 laser rangefinder is placed on top of the WMR. The 0° angle of the laser rangefinder on the WMR is marked as R, and the measured distance is dr. The 45° angle is marked as FR, and the measured distance is dfr. The 90° angle is marked as F, and the measured distance is df. The schematic diagram is shown in Fig. 13.
The three inputs of the fuzzy controller are the distances measured by the laser rangefinder at 0, 45, and 90°. The input variable at 0° is represented by four linguistic labels: "near", "medium", "far", and "very far". The input variable at 45° is represented by two linguistic labels: "medium" and "far". The input variable at 90° is represented by three linguistic labels: "near", "medium", and "far". The output of the fuzzy controller is the turning angle. The output variable is represented by seven linguistic labels: "TLVL (turn left very large)", "TLL (turn left large)", "TL (turn left)", "TZ (go forward)", "TR (turn right)", "TRL (turn right large)", and "TRVL (turn right very large)". Representative rules of the fuzzy rule base are as follows:
Rule 1: If the input variables 0°, 45°, and 90° are near, medium, and near, respectively, then the output is TLL.
Rule 2: If the input variables 0°, 45°, and 90° are near, medium, and medium, respectively, then the output is TL.
Rule 3: If the input variables 0°, 45°, and 90° are near, medium, and far, respectively, then the output is TZ.
Rule 4: If the input variables 0°, 45°, and 90° are near, far, and near, respectively, then the output is TLL.
Rule 5: If the input variables 0°, 45°, and 90° are near, far, and medium, respectively, then the output is TL.
Rule 6: If the input variables 0°, 45°, and 90° are near, far, and far, respectively, then the output is TZ.
...
Rule 22: If the input variables 0°, 45°, and 90° are very far, far, and near, respectively, then the output is TLL.
Rule 23: If the input variables 0°, 45°, and 90° are very far, far, and medium, respectively, then the output is TRL.
Rule 24: If the input variables 0°, 45°, and 90° are very far, far, and far, respectively, then the output is TRVL.
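A simplified version of this inference can be sketched as follows. The membership function shapes, distance ranges, and crisp output angles are assumptions for illustration, since the paper specifies only the linguistic labels and rules; only the first three rules are encoded:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative membership functions; ranges in meters are assumed.
MF = {
    "near":   lambda d: tri(d, -0.5, 0.0, 1.0),
    "medium": lambda d: tri(d, 0.5, 1.5, 2.5),
    "far":    lambda d: tri(d, 2.0, 3.0, 4.0),
}
# Assumed crisp turning angles (deg) for the seven output labels.
OUT = {"TLVL": -90, "TLL": -60, "TL": -30, "TZ": 0,
       "TR": 30, "TRL": 60, "TRVL": 90}

# Rules 1-3 from the text: (label at 0°, 45°, 90°) -> output label.
RULES = [
    (("near", "medium", "near"),   "TLL"),
    (("near", "medium", "medium"), "TL"),
    (("near", "medium", "far"),    "TZ"),
]

def turn_angle(dr, dfr, df):
    """Min-inference with weighted-average defuzzification over the
    rules above; a sketch, not the paper's full 24-rule base."""
    num = den = 0.0
    for (l0, l45, l90), out in RULES:
        w = min(MF[l0](dr), MF[l45](dfr), MF[l90](df))  # firing strength
        num += w * OUT[out]
        den += w
    return num / den if den > 0 else 0.0   # no rule fires: go forward
```

Each rule fires in proportion to how well the three measured distances match its labels, and the turning command is the weighted average of the fired output angles.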

Experimental Results
At the beginning, after we set the goal position of the WMR at 25.14963°N, 121.77742°E, the system starts to receive satellite data using the GNSS–RTK positioning system, as shown in Fig. 15. At the same time, the WMR starts to patrol. The experimental environment is the top floor of the recreation building of National Taiwan Ocean University. Figure 16 shows that the WMR patrols along a building until it reaches the goal position, and Fig. 17 shows the patrol trajectory. The images of the patrol process are transferred to the control center, as shown in Fig. 18. When the WMR patrols along a building, it detects whether an intruder appears along its patrol path. If an intruder appears, the WMR captures a photo of the intruder using the webcam, obtains the intruder's location using the GNSS–RTK positioning system, and then sends the photo and the location information to the control center via Wi-Fi to ask for help. The experimental results are shown in Fig. 19.

Conclusions
In this study, we proposed a control scheme that controls a WMR to perform security services. We successfully integrated the GNSS–RTK positioning system, StarGazer, a laser rangefinder, a webcam, a fuzzy controller, and image processing into a WMR. The security robot designed in this study can reduce labor costs. The positioning system was used to provide the current position of the WMR and set its goal position. The GNSS–RTK positioning system can receive GPS/BDS observation data; the number of useful satellites is more than twice that of a single-satellite system, so the positioning accuracy is significantly improved. The laser rangefinder and fuzzy controller were used to detect the surroundings and avoid obstacles in real time. Several image processing methods were applied to acquire images during the patrol task, detect an intruder appearing along the patrol path, recognize the face, and capture a photo of the intruder. In image processing, we used the method of detecting the upper-body region of people, which is more robust against pose changes. Our experimental results showed that the WMR can complete the patrol task and successfully detect an intruder during the patrol process, validating the effectiveness of the proposed control scheme.