Smart Trash Can Robot System with Integration of Internet of Things and Mobile Applications

People in the modern world have become busier with both work and housework. In addition to the various types of robotic cleaners, a smart robotic trash collection and dumping system that provides on-call service would be quite helpful, since the user would not need to physically get up and put trash into a trash can. When the bin reaches its maximum capacity, the system also dumps the trash automatically without the user's instruction. Commercially available smart trash cans usually focus on determining whether trash is recyclable, and most cannot move autonomously. In this research, we utilize fingerprint mapping for wireless indoor positioning to implement an autonomous vehicle with a mounted trash can. The user can summon the trash-can-mounted vehicle indoors via a mobile application under an IoT and cloud computing environment. The vehicle positions itself in front of the user for trash collection through automatic obstacle avoidance navigation with smart path planning from deep learning. The system can also monitor the amount of accumulated trash and dump the trash at a fixed location before returning to its start point. This research on a smart trash-collecting robot can provide significant assistance to people who are busy, those with impaired or limited mobility, and the elderly.


Introduction
With the significant development in communication technologies, mobile applications and application services for the internet of things (IoT) (1) have become important technologies for information and communication technology (ICT) applications in i-enabled services. Massive amounts of data can be imported into an online cloud and processed with deep learning towards big data analytics for applications in knowledge discovery. The economic and social effects derived from such processes are gradually changing the shape of i-enabled services. Current mobile and IoT applications mostly focus on the development of new business models, since such models have been successfully applied to expand markets and create high popularity and output value. Nevertheless, as the application-level market matures, the development of higher application levels, newer devices (such as smart automatic devices), (2) or new concepts (such as the internet of everything, IoE) (3) becomes important for future mobile and IoT applications. If deep knowledge can be imported into high-level smart living applications, present mobile and IoT applications can be expanded to a new level to create new business opportunities and provide significant social, industrial, and national contributions.
With the modern lifestyle, people who are busy at work may not have the time or energy to clean their home environment, and cleaning robots have become good assistants in performing cleaning chores around the household. More and more families use cleaning robots instead of manually cleaning the house. In addition, many households have trash cans in and around the house, such as in the kitchen, living room, and bathroom. Cleaning robots mainly collect dust and small debris from the floor. Larger objects such as tissues, straws, and fruit peels need to be picked up by the user and placed in the trash can, as there is not yet a trash collection service robot for such tasks, as shown in Fig. 1. Furthermore, a cleaning robot can hold only a limited amount of debris. When its load is almost full, the cleaning robot cannot empty itself, and the user must empty its container manually into a larger bin to allow continued operation. Therefore, a smart trash-collecting robot to replace manual trash collection and dumping is a necessary addition to a modern household. The goal of this research is to provide an autonomous trash collection/dumping robot, iTrashCan. The iTrashCan robot can be summoned with one key press, upon which it automatically detects the user's location (4) and moves there by automatic obstacle avoidance navigation. The robot automatically opens/closes the lid for trash disposal through installed sensors. After trash collection, the robot returns to its starting position to recharge. With the iTrashCan robot, the user does not have to get up and walk to the trash can when he or she is otherwise occupied, and the elderly or the disabled can also dispose of trash without effort. As a result, the iTrashCan robot can improve the user's quality of life.
Finally, to increase the effectiveness of path finding for the iTrashCan, in this research, we apply a deep learning mechanism so that the robot remembers all traversed paths and derives the best path through long short-term memory (LSTM), such that the destination can be reached in the shortest time. The integration of an autonomous, trash-can-carrying vehicle with smart path finding mechanisms provides the convenience brought about by communication technology and facilitates the effective cleaning of the environment. The iTrashCan robot designed in this work can be used in trash collection/dumping applications in public areas, (5) for example, in department stores, supermarkets, parks, streets, and other places where there is a high demand for trash collection. (6) Trash collection/dumping can be performed automatically without human intervention by placing several large-scale iTrashCan robots in designated areas. Apart from conserving human resources, the proposed system can significantly contribute to maintaining the cleanliness of the environment.

Related Work
In recent years, the concept of Industry 4.0 has begun another wave of industrial revolution. The goal is achieved by performing real-time big data collection through the Internet, (7) connecting to the core network via base stations or gateways of peripheral networks, (8) passing the data into the cloud computing platform, and applying deep learning from artificial intelligence for big data analytics. (9) The obtained information and knowledge are provided as feedback to terminals within industries and manufacturing plants for optimal control. (10) In this century, diverse information and communication application products have been developed to cater to various jobs, and this trend has generated huge demand for smart robots that can deal with various tasks in place of a human being. Such smart robots are expected to conserve human resources and contribute to the successful completion of required tasks. The smart trash collection/dumping robot, iTrashCan, can eliminate the inconvenience and low efficiency associated with trash collection/dumping owing to space or time constraints. It can also provide convenient trash collection/dumping services to households with elderly or disabled members. Apart from having sweeping and cleaning robots in the house, the trash collection/dumping robot helps with the cleaning of the household and the surrounding environment, improves the quality of life, and realizes the philosophy that "the convenience of technology leads to the power of happiness".
Commercially available smart trash cans are mostly focused on the recognition of recyclable trash and do not include the function to automatically open/close the lid. The advantage of the proposed research lies in utilizing the concepts of IoT technologies and indoor positioning to provide an autonomous trash collection/dumping robot, iTrashCan, which can be summoned with one key press from a mobile application. The iTrashCan eliminates the need to place multiple trash cans in different rooms within a household. The robot can accurately determine the user's location through indoor positioning and move to the user by automatic obstacle avoidance navigation. The iTrashCan robot can also open the lid for the user to dispose of the trash. Each traversed path is stored in a cloud database. With the deep learning LSTM mechanism, (11) an optimized path can be derived from the collection of paths stored in memory, such that the destination can be reached in the shortest time. In locations with large trash disposal requirements, such as department stores, markets, parks, and streets, the placement of several large-scale iTrashCans can reduce the need for human operators, since the robots can handle the trash collection/dumping by themselves.

IoT and cloud platform
In this work, we utilized the open-source tool phpMyAdmin as the service platform for background cloud computing. Through the cloud platform, (12) the user controls the iTrashCan via the control panel application on a mobile device. The control panel application uploads the corresponding values to the cloud platform for storage; the iTrashCan continuously reads the data from the cloud to perform the corresponding tasks and uploads its status updates in return. The iTrashCan acts according to the most recent key pressed by the user, as shown in Fig. 2, arrows 1 and 2.
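The command flow above can be sketched as follows. This is a minimal illustration of the polling pattern only: an in-memory dictionary stands in for the phpMyAdmin-managed cloud table, and the command names are hypothetical, as the paper does not list the exact stored values.

```python
# In-memory stand-in for the cloud database table; reads and writes here
# correspond to the exchanges in Fig. 2, arrows 1 and 2.
cloud_table = {"command": "standby", "status": "standby", "trash_level": 0}

def app_press_button(command):
    """Mobile app side: store the most recent key press in the cloud table."""
    cloud_table["command"] = command

def robot_poll_once():
    """Robot side: read the latest command, act on it, and upload the status."""
    command = cloud_table["command"]
    if command == "call":
        cloud_table["status"] = "moving to user"
    elif command == "dump":
        cloud_table["status"] = "moving to dump point"
    elif command == "charge":
        cloud_table["status"] = "returning to charger"
    else:
        cloud_table["status"] = "standby"
    return cloud_table["status"]

app_press_button("call")
print(robot_poll_once())  # -> moving to user
```

In the actual system, the dictionary reads/writes would be HTTP requests against the cloud database, and the robot would repeat the poll in a loop.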
An ultrasonic sensor is installed on the lid of the iTrashCan to detect the amount of trash in the trash can, and the values are continuously updated onto the cloud platform via Wi-Fi. As the amount of trash in the iTrashCan increases, the detected values decrease. The iTrashCan also uploads its movement status onto the cloud platform. The user can obtain the iTrashCan's real-time status through the control panel application and see if the robot is moving or standing by, as shown in Fig. 2, arrows 3 and 4.
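Because the lid-mounted sensor measures the distance down to the top of the trash, the reading shrinks as the can fills. A sketch of the conversion from distance to a fill percentage follows; the calibration constants are illustrative assumptions, as the paper does not state the sensor geometry.

```python
# Hypothetical calibration constants for illustration only.
EMPTY_DEPTH_CM = 40.0  # lid-to-bottom reading when the can is empty
FULL_DEPTH_CM = 5.0    # reading at which the can is considered full

def fill_percentage(distance_cm):
    """Convert a lid-mounted ultrasonic reading to a 0-100% fill level.

    The reading decreases as trash accumulates, so the level is inverted
    and clamped to the calibrated range before being uploaded to the cloud.
    """
    level = (EMPTY_DEPTH_CM - distance_cm) / (EMPTY_DEPTH_CM - FULL_DEPTH_CM)
    return round(100 * max(0.0, min(1.0, level)))

print(fill_percentage(40.0))  # empty can -> 0
print(fill_percentage(22.5))  # halfway -> 50
print(fill_percentage(5.0))   # full -> 100
```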

Indoor positioning
In this work, we used the received signal strength indicator (RSSI) based on the Wi-Fi signal strength (13,14) with multiple ESP8266 Wi-Fi receivers to measure the signal strength emitted by the user's mobile device and perform RSSI-based indoor positioning. (15) RSSI is given in decibel-milliwatts (dBm), which represents the ratio of a given power to 1 mW. The RSSI value, RSSI_i (dBm), for a given power P_i (mW) is calculated using Eq. (1), where there are n Wi-Fi base stations and m mesh grid points. We can obtain a certain mesh grid point L_i given by RSSI_i as shown in Eq. (2), where RSSI_ij_min stands for the minimum RSSI that Wi-Fi base station i received from signals emitted at mesh grid point j, and RSSI_ij_max represents the corresponding maximum. Finally, solving Eq. (3) yields a validated mesh grid point g, meaning that the user's mobile device is located at mesh grid point L_g. For example, Fig. 3 illustrates an instance with n = 4 and m = 9.
To use RSSI-based indoor positioning, the dimensions of the indoor space, such as the length and width, must be measured beforehand, reference points within the space need to be specified and marked, and multiple Wi-Fi receivers need to be installed (in the example, the space is rectangular with four receivers and marked with 9 reference points, as shown in Fig. 3). The strength information from the four receivers is obtained at each reference point by utilizing the concept of fingerprinting. Since the signal strength fluctuates, the maximum and minimum values are collected. The collected information is saved as reference data on the cloud server and referred to as the fingerprint map, as shown in Fig. 3. When the user performs positioning at a particular reference point, the obtained RSSI values can be matched to the reference data, thus determining the user's location with respect to the reference points. Since the distances and angles between the points are fixed, the iTrashCan can travel from its standby position to the user's reference point accurately once the user's position is known, thus achieving the action of summoning the trash can.
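The matching step described above (Eqs. (2) and (3)) can be sketched as follows. The map values below are illustrative placeholders rather than measured data, and only three of the nine grid points are shown; each grid point stores a (min, max) RSSI range per base station, and a point matches when every measured value falls inside its stored range.

```python
# Illustrative fingerprint map: grid point -> (min, max) RSSI in dBm for
# each of the four ESP8266 base stations of Fig. 3. Not measured data.
FINGERPRINT_MAP = {
    1: [(-52, -44), (-70, -62), (-66, -58), (-80, -71)],
    2: [(-60, -52), (-63, -55), (-72, -64), (-75, -67)],
    3: [(-71, -63), (-54, -46), (-79, -70), (-68, -60)],
}

def locate(measured):
    """Return the grid points whose stored [min, max] ranges contain the
    measured RSSI from every base station."""
    matches = []
    for point, ranges in FINGERPRINT_MAP.items():
        if all(lo <= rssi <= hi for rssi, (lo, hi) in zip(measured, ranges)):
            matches.append(point)
    return matches

print(locate([-48, -65, -60, -74]))  # every value falls in point 1's ranges -> [1]
```

In practice the measured vector comes from the cloud table that the receivers continuously update, and an empty result would trigger a re-measurement.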

Hardware architecture of iTrashCan
The overall architectural diagram of iTrashCan is shown in Fig. 4. By connecting to the cloud database with the ESP8266 Wi-Fi microchip, iTrashCan performs indoor positioning to determine the relative distance and orientation of the user's current location. The iTrashCan can accurately rotate and move towards the user's vicinity using the wheel speed sensor and the compass sensor attached to the bottom of the system. If obstacles are encountered en route, the robot stops and uses the ultrasonic ranging sensor to scan 0 to 180° directly ahead to obtain the distance data, as well as the angle required to avoid the obstacle. The obstacle avoidance steps are shown in Fig. 5. With this approach, the user does not need to be concerned with the iTrashCan getting trapped on the way to its destination owing to unforeseen obstacles.
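The avoidance step of Fig. 5 (sweep the ultrasonic sensor across the forward half-plane, then steer toward the clearest direction) can be sketched as follows. The sweep granularity and safety distance are assumptions for illustration, as the paper does not specify them.

```python
# Hypothetical clearance threshold; readings at or below it are blocked.
SAFE_DISTANCE_CM = 30.0

def pick_heading(scan):
    """Given {angle_deg: distance_cm} from a 0-180 degree ultrasonic sweep,
    return the angle with the greatest clearance, or None if every
    direction is blocked (the robot then stops and rescans)."""
    clear = {angle: dist for angle, dist in scan.items() if dist > SAFE_DISTANCE_CM}
    if not clear:
        return None
    return max(clear, key=clear.get)

scan = {0: 25.0, 45: 80.0, 90: 20.0, 135: 150.0, 180: 60.0}
print(pick_heading(scan))  # -> 135 (largest clear distance)
```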
When the iTrashCan arrives at the destination, it begins reading the ultrasonic sensor on the lid. When the sensor is triggered by an approaching object, the servo motor opens the lid so that the user can dispose of the trash, as shown in Fig. 6. Once the user has disposed of the trash and the sensor has not been triggered for a certain period of time, the lid closes, and the ultrasonic sensor inside the lid measures the amount of trash within the iTrashCan; the trash quantity is then uploaded to the cloud, as shown in Fig. 7. In this manner, the user can dispose of the trash without touching the lid. The user can also monitor the amount of trash via the control panel application at any time, thus avoiding the situation in which the user opens the lid only to find that the trash can is full.
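The touch-free lid behavior can be sketched as a small state machine. The 3 s close timeout is an assumed value, since the paper only states that the lid closes after the sensor has been quiet "for a certain period of time".

```python
# Assumed timeout; the paper does not state the exact close delay.
CLOSE_TIMEOUT_S = 3.0

class Lid:
    def __init__(self):
        self.open = False
        self.last_trigger = None

    def update(self, object_near, now):
        """Advance the lid state given the proximity reading at time `now`
        (seconds). Returns True while the lid is open."""
        if object_near:
            self.open = True          # hand or trash detected: open the lid
            self.last_trigger = now
        elif self.open and now - self.last_trigger >= CLOSE_TIMEOUT_S:
            self.open = False         # quiet long enough: close, then measure
        return self.open

lid = Lid()
print(lid.update(True, 0.0))   # object detected -> True (lid opens)
print(lid.update(False, 1.0))  # within timeout -> True (stays open)
print(lid.update(False, 4.0))  # timeout elapsed -> False (lid closes)
```

After the close transition, the real system would read the inner ultrasonic sensor and upload the trash quantity to the cloud, as in Fig. 7.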
As shown in Fig. 8, the servo motor, in conjunction with the supporting structure, tips the trash can slightly forward during collection, making it easier for the user to dispose of the trash. The same configuration can shake the trash can after disposal to distribute the trash evenly within the container and avoid erroneous readings of the trash amount. When the user determines that the amount of trash exceeds the maximum threshold, he or she can command the iTrashCan to dump the trash at a designated location. As shown in Fig. 9, the lid of the trash can opens, and the container tilts forward at a larger angle and is shaken to dislodge the contents, thus accomplishing the task of dumping the trash. The user is spared the tasks of taking out and emptying the trash can manually. At the present stage, servo motors are used for lid opening and trash dumping; in the future, we hope to replace the servo motors with hydraulic motors, so that the lid can be opened and closed smoothly and larger trash loads can be handled when dumping.

Figure 10 shows the main page of the control panel application for the iTrashCan. Before receiving any command, the application shows the "Standby" icon. The user can press the bottom left button to call the iTrashCan when he or she wishes to dispose of trash, upon which the iTrashCan begins to detect the user's location. At this stage, the control panel application displays the "detect user position" icon, as shown in Fig. 11(a). Once the iTrashCan has calculated the user's position through cloud computing, it moves towards the user with automatic obstacle avoidance, and the control panel application displays the "moving to destination" icon, as shown in Fig. 11(b). The iTrashCan then automatically positions itself in front of the user, so that the user can dispose of the trash.
When used for the first time, the indoor positioning must be calibrated at each mesh grid point on the fingerprint map. A single calibration takes 60 s, during which the radio signal strength emitted by each corresponding Wi-Fi base station is continuously detected, and the indoor positioning table in the cloud database is then updated at once, as shown in Fig. 11(c). Through the control panel application, the user can manually control the iTrashCan using the up, down, left, right, and stop buttons, as shown in Fig. 12. When the iTrashCan has low battery power, the user can press the top left button on the control panel application to send the iTrashCan back to the charging station, where it recharges in standby mode. The iTrashCan utilizes ultrasonic sensors to detect the amount of trash within the container, and the value is continuously updated and sent to cloud storage. The control panel application reads the most recent trash amount and displays it on the user interface, enabling the user to monitor the iTrashCan's trash level in real time on a mobile device, as shown in Fig. 13. In Fig. 13, from left to right, the displays show increasing amounts of trash within the trash can. When the user sees from the control panel application that the trash can is almost full, he or she can press the bottom right button to command the iTrashCan to automatically dump the trash at a designated location. (16)

Deep learning for optimized path
In this work, we incorporate deep learning (17) from machine learning. (18) With the repeated calls made by the user to the iTrashCan, the path from each call is stored in the cloud database with four input parameters, namely, the execution time, start point, end point, and path. Using the LSTM approach, (19) an optimal output can be derived, that is, a predicted optimal traversal path, (20) as shown in Fig. 14. As shown in Fig. 15, when the iTrashCan detects the user's position, the LSTM approach retrieves five previously recorded paths that lead to the user. Among these paths, red path 1 has the shortest expected execution time; therefore, the iTrashCan selects red path 1 as the traversal path to reach the destination in the shortest time.
Therefore, in this work, we propose the addition of the LSTM approach as follows. Once the shortest distance has been determined after indoor positioning, the system checks whether the location was visited previously; if so, the paths for the same destination are retrieved and recorded. An optimal path for the same position can be calculated by continuously accumulating the path information, as shown in Fig. 16. In this manner, the path selection for the iTrashCan is optimized, and the process becomes faster and more independent. At the present stage, the deep learning part is performed by computer simulation on a GPU workstation before the result is transferred to the iTrashCan for actual testing. Progressive modifications are performed to achieve the required goal.
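The retrieval-and-selection step described above can be sketched as follows. The record layout mirrors the four stored parameters (execution time, start point, end point, path), but the entries are illustrative placeholders, not measured data, and the LSTM prediction itself is omitted: the sketch shows only the simpler fallback of picking the fastest recorded path.

```python
# Illustrative path history; paths are lists of reference points.
records = [
    {"time_s": 38.0, "start": 3, "end": 1, "path": [3, 2, 1]},
    {"time_s": 55.5, "start": 3, "end": 1, "path": [3, 6, 5, 2, 1]},
    {"time_s": 42.0, "start": 3, "end": 1, "path": [3, 2, 5, 4, 1]},
    {"time_s": 30.0, "start": 3, "end": 9, "path": [3, 6, 9]},
]

def best_path(start, end, history):
    """Return the fastest recorded path between start and end, or None if
    the destination has never been visited (navigate fresh and record)."""
    candidates = [r for r in history if r["start"] == start and r["end"] == end]
    if not candidates:
        return None
    return min(candidates, key=lambda r: r["time_s"])["path"]

print(best_path(3, 1, records))  # -> [3, 2, 1]
```

In the full system, each new traversal is appended to the history, so the selection improves as the path information accumulates.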

Technical requirements of system
As shown in Fig. 17, the technological background (21,22) for the iTrashCan encompasses the following four areas: the IoT, mobile application and database system, virtualization management, and deep learning.

Experiment setup
The experiment location is a hallway with dimensions of 12.65 m in length by 9.7 m in width. The space is marked with 9 mesh grid points that are numbered and referred to as reference points in the following experiments. Each division is 3.16 m on the long axis and 2.43 m on the short axis, as shown in Figs. 18 and 19. The ESP8266 receiver modules are mounted 60 cm above the ground, as shown in Fig. 20. Since the human body and other structures in the environment interfere with the Wi-Fi signal, to reduce the interference and distortion, the mobile device used as the signal emitter is mounted on a tripod 120 cm above the ground to simulate a user holding the mobile device at chest level, as shown in Fig. 21. Each reference point needs to have its fingerprint collected and stored in a table on the cloud server. The data are collected by placing the tripod with the mobile phone at every reference point for 3 min. Since the signal strength fluctuates, the maximum and minimum detected strengths are collected, as shown in Table 1.

Experimental results
The four ESP8266 receivers mounted on the walls continuously upload the RSSI values to the cloud. If the tripod with the mobile phone is placed at a reference point and a call is made to the robot, the current RSSI values are compared with the table containing the reference data to determine at which reference point the mobile device is located. For example, Table 2 contains the RSSI data obtained by placing the mobile device at reference point 1, where the mobile device uploads the RSSI signal strengths to the cloud, and the cloud returns the calculated results to the mobile device.

Experiment 1:
In this experiment, reference point 3 is set as the standby location for the iTrashCan robot, and reference point 9 is the trash-gathering location. In the robot-controlling mobile application, after the call button is pressed, the robot starts to calculate the reference point where the user is located. When the robot has successfully determined that the user is located at reference point 1, the robot moves from the standby position towards reference point 1 and opens the trash can lid once it has reached the user, as shown in Fig. 22. The robot can also move from the standby position to reference point 9 to dump the trash, as shown in Fig. 23.

Experiment 2:
In this experiment, reference point 3 is set as the standby location for the iTrashCan robot, and reference point 9 is the trash-gathering location. In the robot-controlling mobile application, after the call button is pressed, the robot starts to calculate the reference point where the user is located. When the robot has successfully determined that the user is located at reference point 4, the robot moves from the standby position towards the direction of reference point 4 and opens the trash can lid once it has reached the user, as shown in Fig. 24. The robot can also move to reference point 9 to dump the trash as shown in Fig. 25. Furthermore, reference point 3 is set as the standby location for the iTrashCan robot, and reference point 1 is the trash-gathering location. In the robot-controlling mobile application, after the call button is pressed, the robot starts to calculate the reference point where the user is located. When the robot has successfully determined that the user is located at reference point 5, the robot moves from the standby position towards the direction of reference point 5 and opens the trash can lid once it has reached the user, as shown in Fig. 26. The robot can also move to reference point 1 to dump the trash as shown in Fig. 27.

Experiment 3:
In this experiment, reference point 3 is set as the standby location for the iTrashCan robot and reference point 6 is the trash-gathering location. In the robot-controlling mobile application, after the call button is pressed, the robot starts to calculate the reference point where the user is located. When the robot has successfully determined that the user is located at reference point 7, the robot moves from the standby position towards the direction of reference point 7 and opens the trash can lid once it has reached the user, as shown in Fig. 28. The robot can also move to reference point 6 to dump the trash as shown in Fig. 29.

Experiment 4:
In this experiment, the standby location for the iTrashCan robot is set at reference point 3, and an obstacle is located at reference point 2. In the robot-controlling mobile application, after the call button is pressed, the robot starts to calculate the reference point where the user is located. When the robot has successfully determined that the user is located at reference point 1, the robot moves from the standby position towards the direction of reference point 1. The robot encounters the obstacle at reference point 2 and performs automatic obstacle avoidance as shown in Fig. 30. Once the robot has bypassed the obstacle, it moves to the front of the user and opens the lid to collect the trash, as shown in Fig. 31.

The standby location for the iTrashCan robot is then set at reference point 3 and an obstacle is located at reference point 5. In the robot-controlling mobile application, after the call button is pressed, the robot starts to calculate the reference point where the user is located. When the robot has successfully determined that the user is located at reference point 7, the robot moves from the standby position towards the direction of reference point 7. The robot encounters the obstacle at reference point 5 and performs automatic obstacle avoidance as shown in Fig. 32. Once the robot has bypassed the obstacle, it moves to the front of the user and opens the lid to collect the trash, as shown in Fig. 33.

Experiment 5:
Some variables, such as obstacles and the floor texture, affect the robot's movement in the environment. Even though the robot can automatically avoid obstacles and arrive in front of the user, obstacle avoidance requires significant time to detect the obstacle and turn to evade it. Various floor textures also have different friction factors, which may cause skidding or reduce speed owing to high resistance. Therefore, in this work, we use LSTM to plan the optimal path. Among the nine reference points, reference point 1 is set as the start point and reference point 9 as the end point, with obstacles placed at reference points 5 and 6.
With the deep learning LSTM model, the four input parameters are the execution time, start point, end point, and path. The outputs are the predicted optimal path and the predicted execution time. Ten LSTM cells were established with the learning rate set at 0.006, cross entropy was used as the loss function, and Adam was used as the optimizer. The experimental settings are shown in Table 3.

For the training, once the start and end points have been set, different paths are generated manually, as shown in Fig. 34. The execution time is recorded once the end point has been reached, and the values are uploaded to the cloud database. Once the number of manually generated paths exceeds 100, LSTM can be used to perform the training. During the training stage, the root-mean-square error (RMSE) is used to calculate the training error: the predicted data and the actual data are compared, and the errors are calculated for each group of 5 training results. The test errors are greater than 0.5 at the start, but after 100 sets of training data, the training errors converge to 0.05. Random placement testing was performed 10 times around reference point 1, and in nine of the ten predictions, the selected optimal path was path 7.
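The RMSE used as the training error can be computed as in the following sketch; the predicted and actual execution times shown are illustrative values for one group of 5 training results, not the measured data from the experiment.

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between predicted and actual execution times."""
    assert len(predicted) == len(actual)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted))

# Illustrative group of 5 training results, in seconds.
predicted = [40.0, 36.5, 51.0, 44.0, 39.5]
actual = [41.0, 36.0, 50.0, 44.5, 39.0]
print(round(rmse(predicted, actual), 3))  # -> 0.742
```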

Conclusion
Long-term care issues have been the focus of attention in recent years. In this work, we have combined mobile applications with IoT technology to integrate a trash can with an autonomous vehicle. The system, with its automatic trash collection/dumping function, can provide convenience to people who are busy with work, the elderly, and people with movement difficulties. We utilized fingerprint maps and Wi-Fi indoor positioning to significantly improve the accuracy of determining the target location for the autonomous vehicle and to increase the overall efficiency. In addition, the iTrashCan incorporates a deep learning mechanism; through the LSTM model, the system automatically remembers frequently used paths and performs arcing of paths, such that its movements are more humanlike. New paths are also recorded for future path planning, which significantly reduces the moving time and enables the iTrashCan to reach the destination faster, increasing its overall performance efficiency. In the future, we aim to integrate more functions, such as trash recognition to automatically classify and compress trash, trash bag replacement and sealing functions, and increased trash can capacity. The system can be used in public areas with large trash processing demands, such as department stores, parks, markets, and streets, where large-scale autonomous vehicles mounted with trash cans can perform trash collection and dumping without manual intervention. Apart from reducing the need for manual labor, the proposed system can also contribute to the maintenance of a clean environment.