
Print: ISSN 0914-4935
Online: ISSN 2435-0869
Sensors and Materials is an international peer-reviewed open access journal that provides a forum for researchers working in multidisciplinary fields of sensing technology.
Sensors and Materials is covered by Science Citation Index Expanded (Clarivate Analytics), Scopus (Elsevier), and other databases.


Publisher
 MYU K.K.
 Sensors and Materials
 1-23-3-303 Sendagi,
 Bunkyo-ku, Tokyo 113-0022, Japan
 Tel: 81-3-3827-8549
 Fax: 81-3-3827-8547




Sensors and Materials, Volume 36, Number 2(3) (2024)
Copyright (C) MYU K.K.
pp. 729-743
S&M3557 Research Paper of Special Issue
https://doi.org/10.18494/SAM4788
Published: February 29, 2024

A Smart Assembly Line Design Using Human–Robot Collaborations with Operator Gesture Recognition by Decision Fusion of Deep Learning Channels of Three Image Sensing Modalities from RGB-D Devices

Ing-Jr Ding and Ya-Cheng Juang

(Received September 3, 2023; Accepted February 19, 2024)

Keywords: assembly line, human–robot collaboration, RGB-D image sensor, operator gesture recognition, deep learning, decision fusion

Machine vision with image sensors is widely employed in smart manufacturing, most prominently in automatic optical inspection (AOI), where an image acquisition camera optically scans a target device for quality defects. With the rapid progress of image sensor techniques, RGB-D image sensor devices that capture operator assembly gestures have been developed to enable intelligent interaction between a robot and an operator. In this study, we propose a smart assembly-line design for intelligent manufacturing and factory applications that incorporates a human–robot collaboration (HRC) working mode. In the proposed HRC assembly line, the operator and a manipulator (robotic arm) work together, and an appropriately deployed RGB-D image device (the well-known Intel RealSense camera in this work) acquires the operator's assembly gesture data for operator gesture recognition. The manipulator then performs the feedback action corresponding to the recognized operation gesture (e.g., grabbing the scissors and moving them to the operator when the gesture of winding the tape is recognized). For operator gesture recognition, we first construct three deep learning recognition channels with different sensing modalities: an RGB convolutional neural network (CNN)-long short-term memory (LSTM) channel with RGB gesture images as input, a depth CNN-LSTM channel with depth gesture images as input, and a 3D-(x, y, z) LSTM channel with raw skeleton data as input. A decision fusion scheme is then developed to hybridize the recognition decision outputs of these three separate deep learning gesture recognition channels. Various weight combination strategies for fusing the decisions of the three channels are evaluated for operator gesture recognition effectiveness.
Experiments on the classification of ten categories of operator assembly gestures show that the half-quarter-quarter strategy, with the weight allocation (wRGB, wDepth, w3D) = (0.5, 0.25, 0.25) over the channel decisions, achieves the highest recognition accuracy.
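The weighted decision fusion described in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the function name `fuse_decisions` and the three-class example values are hypothetical; only the half-quarter-quarter weights (0.5, 0.25, 0.25) come from the paper.

```python
import numpy as np

def fuse_decisions(p_rgb, p_depth, p_3d, weights=(0.5, 0.25, 0.25)):
    """Weighted decision fusion of per-channel class scores.

    Each of p_rgb, p_depth, and p_3d is a length-K array of class
    probabilities from one recognition channel (RGB CNN-LSTM, depth
    CNN-LSTM, and 3D skeleton LSTM, respectively). The default weights
    follow the paper's half-quarter-quarter strategy,
    (wRGB, wDepth, w3D) = (0.5, 0.25, 0.25).
    """
    w_rgb, w_depth, w_3d = weights
    fused = (w_rgb * np.asarray(p_rgb)
             + w_depth * np.asarray(p_depth)
             + w_3d * np.asarray(p_3d))
    # The fused decision is the class with the highest combined score.
    return int(np.argmax(fused))

# Hypothetical example: three channels scoring 3 gesture classes.
p_rgb = [0.6, 0.3, 0.1]
p_depth = [0.2, 0.5, 0.3]
p_3d = [0.3, 0.4, 0.3]
print(fuse_decisions(p_rgb, p_depth, p_3d))
# fused scores: [0.425, 0.375, 0.2] -> prints 0
```

In this scheme, a channel's influence on the final decision is proportional to its weight, so the RGB channel (weight 0.5) can override the other two channels only when their scores do not strongly agree on a different class.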

Corresponding author: Ing-Jr Ding


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Cite this article
Ing-Jr Ding and Ya-Cheng Juang, A Smart Assembly Line Design Using Human–Robot Collaborations with Operator Gesture Recognition by Decision Fusion of Deep Learning Channels of Three Image Sensing Modalities from RGB-D Devices, Sens. Mater., Vol. 36, No. 2, 2024, pp. 729–743.




Forthcoming Special Issues

Applications of Novel Sensors and Related Technologies for Internet of Things
Guest editors: Teen-Hang Meen (National Formosa University), Wenbing Zhao (Cleveland State University), and Cheng-Fu Yang (National University of Kaohsiung)
Call for papers


Special Issue on Advanced Data Sensing and Processing Technologies for Smart Community and Smart Life
Guest editor: Tatsuya Yamazaki (Niigata University)
Call for papers


Special Issue on Advanced Sensing Technologies and Their Applications in Human/Animal Activity Recognition and Behavior Understanding
Guest editor: Kaori Fujinami (Tokyo University of Agriculture and Technology)
Call for papers


Special Issue on International Conference on Biosensors, Bioelectronics, Biomedical Devices, BioMEMS/NEMS and Applications 2023 (Bio4Apps 2023)
Guest editors: Dzung Viet Dao (Griffith University) and Cong Thanh Nguyen (Griffith University)
Conference website
Call for papers


Special Issue on Piezoelectric Thin Films and Piezoelectric MEMS
Guest editor: Isaku Kanno (Kobe University)
Call for papers


Special Issue on Advanced Micro/Nanomaterials for Various Sensor Applications (Selected Papers from ICASI 2023)
Guest editor: Sheng-Joue Young (National United University)
Conference website
Call for papers


Copyright (C) MYU K.K. All Rights Reserved.