On Thursday and Friday, 22-23 August 2024, the Department of Biomedical Engineering (Departemen Teknik Biomedik) held a Student Session with Prof. Akihiko Hanafusa and Prof. Nobuo Watanabe. The activity took place in room AJ201. The following is the documentation of the Student Session at the Department of Biomedical Engineering:
Embedded Image Processing for Facial Expression Feature Extraction in Driver Drowsiness Detection System
Tazkia Ghifara Aulia, Achmad Arifin, Norma Hermawan, Sukma Firdaus Biomedical Engineering Department Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
Abstract
Traffic accidents in Indonesia mostly occur due to drivers' lack of vigilance, such as drowsiness. A system that can detect early signs of driver drowsiness is needed to prevent accidents. Previous studies have shown a correlation between the percentage of eye closure (PERCLOS) and driver drowsiness and have utilized it for detection and prediction. However, real-world driving is a complex activity, making detection based on a single feature prone to misclassification. This study designs a drowsiness detection system based on facial expressions, with features extracted from the eyes, eyebrows, and mouth. The eye landmarks are used to calculate PERCLOS and to detect microsleep. Additionally, landmarks around the eyebrows and mouth are used to calculate the Percentage of Eyebrow Raising (PEBR) and the number of yawns. The system is applied to subjects who have been driving for approximately 3 hours. Deep learning is used to learn various patterns of driver drowsiness and classify the subjects' drowsiness levels. The system was tested on 10 subjects who performed a 3-hour driving simulation and 3 subjects who drove in real conditions. Drivers exhibited increasing PERCLOS values as their drowsiness levels increased. However, in some subjects, a decrease in PERCLOS values was accompanied by an increase in PEBR values, indicating resistance to drowsiness. Yawning was more frequent at drowsiness levels 1 and 3, occurring when drivers began to feel bored and fatigued, eventually leading to drowsiness. A drowsiness classification model was produced with an accuracy of up to 0.89, and it accurately detected drowsiness in three out of four direct testing scenarios. The final data can be developed into an IoT drowsiness detection system with server uploads and integration with other parameters, such as postural movements and physiological conditions, to enhance safety driving systems.
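As an illustration of how an eye-closure feature such as PERCLOS can be derived from facial landmarks, the sketch below computes an eye aspect ratio per frame and the fraction of closed-eye frames in a window. It is not the authors' implementation; the landmark ordering, closure threshold, and window handling are assumptions.

import numpy as np

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmark points around one eye, in the usual 68-point order
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def perclos(ear_series, closed_threshold=0.2):
    # Fraction of frames in the window where the eye is considered closed
    ear_series = np.asarray(ear_series, dtype=float)
    return float(np.mean(ear_series < closed_threshold))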
Embedded System for Drowsiness Detection of Drivers Based on Image Processing and Head Movement Analysis for Safety Driving
Athira Ramadhanova Putri Sanfa, Achmad Arifin, Norma Hermawan, Sukma Firdaus Biomedical Engineering Department Sepuluh Nopember Institute of Technology, Surabaya, Indonesia
The primary cause of traffic accidents in Indonesia is human error, with driver drowsiness due to fatigue being a significant factor. To mitigate such accidents, an effective system for detecting driver fatigue is essential. Prior research has demonstrated a correlation between postural parameters and driver drowsiness, though these studies relied on sensors affixed to the back of the driver's head. This study aims to enhance driver comfort and system efficacy by proposing a non-contact drowsiness detection system based on image processing. The primary postural parameter analyzed is head movement. Data for model training were collected using a Portable Driving Simulator (PDS), while system testing was conducted both on the PDS and in real-world driving conditions. The YuNet machine learning model was employed for its optimal and rapid face detection capabilities, tailored for embedded systems. Head pose was then estimated in the form of Euler angles (yaw, pitch, and roll) and used for classifying head movements. These movements were detected as indicators of drowsiness using a dynamic threshold, specifically the mean plus the standard deviation for each driver. The study identified yaw and pitch angles as significantly correlated with drowsiness, with correlation values of 0.54 and 0.41, respectively. The Long Short-Term Memory (LSTM) model used for classification achieved an accuracy of 0.64. The results of the system, in the form of arrays of yaw and pitch percentages, can be combined with the outputs from other drowsiness detection methods to enhance accuracy and be integrated into a comprehensive modern electronic safety driving system.
Keywords: Drowsiness Detection, Fatigue, Head Movement Image Based, Image Processing, Safety Driving Assist.
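The per-driver dynamic threshold described above (mean plus standard deviation) can be sketched as follows. This only illustrates the thresholding idea; the scale factor and the use of absolute angles are assumptions, not the authors' code.

import numpy as np

def movement_percentage(angle_series, k=1.0):
    # angle_series: yaw or pitch angles (degrees) for one driver over a time window
    a = np.abs(np.asarray(angle_series, dtype=float))
    threshold = a.mean() + k * a.std()       # per-driver dynamic threshold
    return float(np.mean(a > threshold))     # fraction of frames flagged as head movement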
Design of Hearing Impairment Safety Helmet for Deaf People
Willy Wijaya, Achmad Arifin, Fauzan Arrofiqi Biomedical Engineering Department, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
Sensory disabilities, especially hearing disabilities, have a significant prevalence worldwide. In Indonesia, approximately 7.03% of people with disabilities are deaf. They face accessibility barriers in the use of private vehicles, especially motorcycles, due to the Driver's License (SIM) requirement of hearing ability. In an effort to improve the mobility and safety of deaf people, research has been conducted on assistive devices that convert car horn sounds into vibrations. This research proposes a special helmet for deaf motorcyclists that can detect horns and the distance of other motorists on the highway. The helmet is designed to process the sound signal obtained from the sensor before it enters the microcontroller, which then produces an output in the form of vibration through a vibrating motor. The signal processing chain involves a non-inverting amplifier, a bandpass filter, a full-wave rectifier, and an LPF envelope detector. The helmet was tested on five normal-hearing subjects who were conditioned as deaf. The test data showed that the accuracy of the ultrasonic sensor reading reached 90%, while the accuracy of the sound sensor reached 70%; in road testing, the accuracy reached 40%. This research aims to provide an accessibility solution for deaf motorcyclists, improve their quality of life, and provide a sense of security for other motorists. With this approach, it is hoped that deaf people can be more independent in using their personal vehicles. This assistive helmet innovation not only improves the mobility and safety of deaf riders, but also demonstrates a commitment to inclusion and accessibility in the transportation sector. This research is expected to be the foundation for further work so that the system can adapt more robustly to the environment and be tested directly on deaf subjects.
Keywords: Deaf, Driving Safety, Hearing Impairment Helmet, Motorcycle Rider, Sound Sensor, Ultrasonic Sensor.
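The analog processing chain described in this abstract (amplification, band-pass filtering, full-wave rectification, LPF envelope) can be mirrored digitally. The sketch below is only a minimal digital analogue for illustration; the band limits and envelope cutoff are assumptions, not the values used in the helmet.

import numpy as np
from scipy.signal import butter, filtfilt

def horn_envelope(x, fs, band=(300.0, 3000.0), env_cut=10.0):
    # Band-pass around typical horn frequencies, full-wave rectify, then low-pass to get the envelope
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    bp = filtfilt(b, a, x)
    rect = np.abs(bp)                          # full-wave rectification
    b2, a2 = butter(2, env_cut / (fs / 2), btype="low")
    return filtfilt(b2, a2, rect)              # LPF envelope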
Hybrid Rehabilitation System of Exoskeleton and Functional Electrical Stimulation (FES) for Restoring Elbow Joint Movements with Fuzzy Co-activation Model
Sirsta Hayatu Elparani1, Achmad Arifin1, Fauzan Arrofiqi1, Andra Risciawan2, Muhammad Lukman Hakim2, Moh. Ismarintan Zazuli2 1Biomedical Engineering Department, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia 2Manufaktur Robot Industri, Surabaya, Indonesia
People who have suffered a stroke and have upper limb disabilities require rehabilitation to restore motor functions. Common rehabilitation methods involve the use of exoskeletons and Functional Electrical Stimulation (FES). This study combines FES and a robotic exoskeleton system for upper limb rehabilitation. The control system used includes a proportional controller to regulate the exoskeleton motor in producing flexion-extension movements of the elbow joint, and a Fuzzy Coactivation Model to control FES in generating stimulation intensity for the Biceps brachii (elbow flexor) and Triceps brachii (elbow extensor) muscles. The stimulation intensity is automatically determined through this model based on muscle function, coactivation variables, and feedback control managed by the PID controller. Muscle stimulation is applied simultaneously at certain points where coactivation is induced by FES, occurring when the subject's arm movement reaches maximum flexion and extension angles. Coactivation makes the trajectory of the FES system more stable, so that the arm movement is more precise. Experiments were conducted on six normal subjects performing flexion-extension movements along a sinusoidal trajectory. The results demonstrate the successful integration of the hybrid system and Fuzzy Coactivation Model in generating the desired stimulation intensity. Testing revealed that the hybrid system with sub-threshold stimulation achieved an RMSE of 1.57 ± 0.004 degrees without any induced movement of the elbow joint due to FES. Meanwhile, the hybrid system with supra-threshold stimulation achieved an RMSE of 1.57 ± 0.01 degrees, detecting elbow joint movement due to FES without causing disturbances to the exoskeleton. Further development can involve subjects with upper-limb paralysis, by adjusting the stimulation intensity and Range of Motion (ROM) of the subject.
Keywords: Post-stroke, Hybrid Exoskeleton-FES, Fuzzy Coactivation Map, PID Controller.
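The feedback loop mentioned above relies on a PID controller. The following is a generic discrete PID sketch of the kind that could close the loop on the measured elbow angle, with its output mapped to stimulation intensity; the gains, sample time, and the mapping to the flexor/extensor pair are assumptions, and the fuzzy coactivation model itself is not shown.

class PID:
    # Minimal discrete PID controller (illustrative; gains and sample time are assumptions)
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

In a hybrid scheme like the one described, such a controller could track the reference elbow angle, with its output split between the agonist and antagonist channels according to the coactivation model.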
Fuzzy System for Two-Muscle Stimulation Intensity Control of Hybrid FES – Robotic Rehabilitation System: An Experimental Test in Controlling Shoulder Joint Movement
Nabila Shafa Oktavia1, Achmad Arifin1, Fauzan Arrofiqi1, Andra Risciawan2, Muhammad Lukman Hakim2, Moh. Ismarintan Zazuli2 1Biomedical Engineering Department, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia 2Manufaktur Robot Industri, Surabaya, Indonesia
Stroke is a disease caused by the blockage or rupture of blood vessels in the brain. One in four people worldwide is at risk of having a stroke, and more than two-thirds of stroke survivors experience motor impairment, particularly in the upper body, which can hinder daily activities. One rehabilitation method currently often used for post-stroke patients is the hybrid exoskeleton-FES method. However, previous studies still stimulate the agonist and antagonist muscles separately, whereas normal agonist-antagonist muscles typically work together through coactivation. This study aims not only to design a fuzzy control method for muscle coactivation in hybrid exoskeleton-FES rehabilitation but also to propose a method that reduces the constant parameters of previous studies, which yielded varying results across subjects. The exoskeleton provides physical therapy by exerting external force and is controlled using a proportional controller to follow the trajectory. Meanwhile, functional electrical stimulation (FES) is controlled using fuzzy logic to provide neuromuscular therapy to both muscles with a coactivation model for shoulder abduction-adduction movements. FES in this system is provided in the form of sub-threshold and supra-threshold stimuli and tested three times each on six subjects within a hybrid system following a sinusoidal path. Analysis is conducted on the RMSE and the torque generated in each trial to detect movements and additional forces on the subjects. The test results show that with sub-threshold stimulation, the RMSE of the hybrid system was not significantly different from that of the exoskeleton without a stimulus, indicating that no shoulder joint movement was detected in response to the stimulus. Meanwhile, with supra-threshold stimulation, the hybrid system was more stable, with a lower RMSE than in the sub-threshold tests. Additionally, the comparison of torque graphs between the two stimulation conditions showed that supra-threshold stimulation was able to reduce oscillations and improve system stability at the determined target angle with the coactivation model.
Keywords: Hybrid Exoskeleton-FES, Coactivation, Shoulder Joint, Fuzzy, Post Stroke
Experimental Study of Hip-Knee Joint Movement Interaction with Cycle-to-Cycle Control Method for Hemiplegia Patients
Ni Luh Putu Nadila Aprilia Eka Natasa Putri, Achmad Arifin, Fauzan Arrofiqi Biomedical Engineering Department Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
Stroke is the second leading cause of death and the third leading cause of disability worldwide, with a prevalence of 10.9 per thousand population in Indonesia in 2018. Hemiplegia due to stroke results in asymmetrical and unstable gait disturbances. Rehabilitation is crucial to help hemiplegic patients regain their walking ability. Previous studies have shown that electrical stimulation is effective for recovering lower body mobility but have focused only on the hip and ankle joints. This study examines an electrical stimulation-based rehabilitation system on the thigh segment, demonstrating the interaction between the hip and knee joints in supporting body weight and maintaining balance while walking. Using Functional Electrical Stimulation (FES) with a cycle-to-cycle control-based Fuzzy Logic Controller (FLC), this study helps hemiplegic patients regain physical movement and muscle contraction in muscles such as the Iliopsoas, Rectus Femoris, Vastus Lateralis, Biceps Femoris Long Head, and Biceps Femoris Short Head. FLC1 regulates hip flexion with a Maximum Hip Flexion in the Swing Phase (MHFsw) of 27.6±2.3° and hip flexion at initial contact (HIC) of 25.2±1.5°. FLC2 regulates knee extension with a Maximum Knee Extension in the Swing Phase (MKEsw) of 15.1±1.7° and Maximum Knee Flexion in the Swing Phase (MKFsw) of 55.3±1.9°. Out of 30 cycles, the achievement rates for hip flexion, hip joint angle at initial contact, knee extension, and knee flexion are 66.67%, 56.67%, 73.33%, and 50%, respectively. Further development could involve using SMD components for a more compact system, employing supportive footwear for better gait detection in hemiplegic subjects, and integrating continuous stimulation to mitigate muscle spasticity.
Keywords: Functional Electrical Stimulation, Hip Joint, Knee Joint, Cycle-to-Cycle, Fuzzy Logic Control
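Cycle-to-cycle control, as described above, adjusts the stimulation burst duration for the next gait cycle from the angle error measured in the previous cycle. The sketch below illustrates that update rule with a plain proportional correction standing in for the fuzzy logic controller; the gain and duration limits are assumptions.

def next_burst_duration(prev_duration, target_angle, achieved_angle,
                        gain=0.01, min_dur=0.1, max_dur=1.0):
    # Cycle-to-cycle adjustment: correct the next cycle's stimulation burst duration
    # from the previous cycle's joint-angle error (a proportional rule stands in here
    # for the fuzzy logic controller described in the abstract).
    error = target_angle - achieved_angle          # degrees
    duration = prev_duration + gain * error        # seconds
    return max(min_dur, min(max_dur, duration))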
Experimental Study on Cycle-to-Cycle Control of Knee-Ankle Joint Interaction Movement Using Wearable Fuzzy FES for Hemiplegic Gait
Yasmin Nur Fadhila Hutahaean, Achmad Arifin, Fauzan Arrofiqi, M. Adib Syamlan Biomedical Engineering Department, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
Stroke is a disease often caused by the narrowing of blood vessels. It can significantly impact the motor functions of its sufferers. One condition resulting from a stroke is hemiplegia, a post-stroke symptom characterized by muscle weakness on one side of the body, making it difficult to move that side. The gait of hemiplegia sufferers is marked by abnormal walking patterns, such as foot circumduction, which impairs normal walking. One rehabilitation technique commonly used to treat hemiplegia is Functional Electrical Stimulation (FES). FES stimulates specific muscles with electric pulses to help them function according to desired goals. Several sensors, such as Force Sensitive Resistors (FSR), accelerometers, and gyroscopes, are used to capture data on body movement during walking. Using a cycle-to-cycle method and a Fuzzy Logic Controller (FLC), the duration of stimulation bursts delivered by the FES is determined based on the error and desired range values from the previous cycle. In this study, the stimulated muscles include the vastus muscle, which contributes to knee extension; the medial gastrocnemius muscle, which contributes to knee flexion and ankle plantarflexion; the tibialis anterior muscle, which produces ankle dorsiflexion; and the soleus muscle, which produces ankle plantarflexion. The experiment involved 30 cycles for each gait pattern. The effectiveness of the system was evaluated by measuring the angle reached by each joint. Based on the research results, the Maximum Knee Extension at Swing Phase (MKESw) parameter achieved a stimulation result of (4.5° ± 6.3°), the Maximum Knee Flexion at Swing Phase (MKFSw) parameter achieved (33.5° ± 7°), the Maximum Ankle Dorsiflexion at Swing Phase (MADSw) parameter achieved (5.1° ± 4.1°), and the Maximum Ankle Plantarflexion at Swing Phase (MAPSw) parameter achieved (-17° ± 1.2°), successfully reaching the target for each movement.
Keywords: Hemiplegic, Rehabilitation, Functional Electrical Stimulation, Cycle-to-cycle control, Knee-Ankle Joint.
Experimental Study of Electrical Wheelchair Usage Using Myoelectric Signals Control for Users with Disabilities
Muhammad Rafli Ramadhani1, Achmad Arifin1, Fauzan Arrofiqi1, Andra Risciawan2, Muhammad Lukman Hakim3 1Biomedical Engineering Department, ITS, Surabaya, Indonesia. 2Manufaktur Robot Industri, Surabaya, Indonesia 3Industrial Mechanical Engineering Department, ITS, Surabaya, Indonesia
Paralysis is a condition that refers to the loss of motor or sensory function. Electric wheelchairs are used to assist the mobility of people with paralysis so that they can be more independent. With technological advancements, electric wheelchairs can be controlled using muscle (myoelectric) signals. Elvina et al. (2020) developed an electric wheelchair navigated by myoelectric signals from the carpi radialis muscle with subject-intention speed control. This research focuses on applying an electric wheelchair with subject intention using myoelectric signals in patients with paralysed legs. For this purpose, myoelectric signals were observed and recorded from the carpi radialis muscle of the subject while stretching and flexing. The performance of the adaptive threshold in myoelectric signal processing, the myoelectric signal classification performance for wheelchair motion, and the wheelchair performance in the interaction of disabled users with the system were then tested. Subjects were asked to perform a series of scenarios aimed at moving the wheelchair. The results showed that the system's ability to classify wheelchair movement commands using myoelectric signals reached 92.22%. Tests on wheelchair performance show that the PID control system can balance the motion of the right and left motors. However, differences in the characteristics of the motors used cause spikes and instability in motor rotation speed, producing a jolt when the wheelchair moves. The data and conclusions can be utilised for the future development of electric wheelchairs for patients with disabilities.
Keywords: Electric Wheelchair, Myoelectric, Subject Intention, Disability.
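The adaptive threshold on the myoelectric signal can be illustrated as below: the detection threshold is derived from each user's resting baseline rather than from a fixed constant. This is a minimal sketch with a single binary command and assumed names, not the classification scheme actually implemented.

import numpy as np

def emg_command(envelope_value, baseline_window, k=3.0):
    # Adaptive threshold on a rectified/smoothed EMG envelope:
    # the threshold follows the user's resting baseline (mean + k*std),
    # so contraction detection adapts across subjects and sessions.
    baseline = np.asarray(baseline_window, dtype=float)
    threshold = baseline.mean() + k * baseline.std()
    return "MOVE" if float(envelope_value) > threshold else "STOP"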
Sensor and Computer Vision-Based Wheelchair Navigation System for Detecting Obstacle and Tactile Paving
Fathin Hanum Al’alimah1, Achmad Arifin1, Norma Hermawan1, Andra Risciawan2, Muhammad Lukman Hakim3, Nabila Alya Rahma1 1Biomedical Engineering Department, ITS, Surabaya, Indonesia 2Manufaktur Robot Industry (MRI), Surabaya, Indonesia 3Industrial Mechanical Engineering Department, ITS, Surabaya, Indonesia
Many individuals with visual impairments use white canes to navigate tactile pathways and detect obstacles in their surroundings. However, using a white cane as a navigation aid for visually impaired individuals who also use a wheelchair requires significant effort to maneuver the wheelchair while holding the cane. This study proposes a wheelchair navigation system based on sensors and computer vision, equipped with obstacle and tactile path detection. HC-SR04 ultrasonic sensors are employed to estimate the distance between the wheelchair and surrounding obstacles, contributing to safe maneuvering decisions. The computer vision system uses OpenCV and TensorFlow for object detection, achieving a mean Average Precision (mAP50) of 94.86% across five classified classes. System testing shows that using the white cane results in the longest completion time of 104.86 seconds and the highest number of navigation errors, totaling eight. The NASA-TLX score of 79.16 indicates the highest workload in this condition. The designed system demonstrates significant performance improvement with a completion time of 44.65 seconds, which is close to the normal condition time of 42.38 seconds. However, there were still four navigation errors, and the NASA-TLX score of 48.89 indicates a low workload, though slightly higher than the normal condition. The combination of the white cane and the designed system resulted in a completion time of 70.12 seconds with six navigation errors, and the NASA-TLX score in this condition was 70, indicating a medium workload. This research demonstrates that the designed system can reduce the workload of using a wheelchair for visually impaired people. This research is expected to serve as a foundation for future studies in utilizing parallel processing to enable more efficient execution of multiple tasks concurrently.
Keywords: visual impairment, tactile paving, wheelchair navigation, ultrasonic sensor, computer vision
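For illustration, a common way to read an HC-SR04 range on an embedded Linux board is sketched below. This is a generic sketch, not the project's firmware; the GPIO library, pin numbers, and timeout are assumptions.

import time
import RPi.GPIO as GPIO   # assumes a Raspberry Pi-style controller; pin numbers are hypothetical

TRIG, ECHO = 23, 24
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm(timeout=0.03):
    # HC-SR04: send a 10 microsecond trigger pulse, then time how long the echo pin stays high
    GPIO.output(TRIG, True)
    time.sleep(10e-6)
    GPIO.output(TRIG, False)
    t0 = time.time()
    while GPIO.input(ECHO) == 0 and time.time() - t0 < timeout:
        pass
    start = time.time()
    while GPIO.input(ECHO) == 1 and time.time() - start < timeout:
        pass
    end = time.time()
    return (end - start) * 34300.0 / 2.0   # speed of sound ~343 m/s, halved for the round trip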
Development of an Interactive Learning Assistive System for Students with Intellectual Disabilities Using Robot with Audiovisual Intervention
Welly Herlambang Kusuma, Achmad Arifin, Fauzan Arrofiqi Biomedical Engineering Department Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
Intellectual disabilities cause students to experience difficulties in the learning process. To fulfill the right to education, the government has established policies for the implementation of Special Schools. However, this does not guarantee smooth learning processes, as students often struggle to focus and are easily distracted by visual disturbances. Previously, an interactive learning system based on robots was developed and proven to improve students’ understanding. However, the system lacked support for two-way interaction between the robot and the students. This research aims to develop the system by adding a two-way interaction feature through the enhancement of verb spelling materials and the use of answer cards. During the learning process using the robot, students showed interest in the robot’s movement capabilities and the audiovisual materials presented through the robot. The subjects paid close attention to the robot. The learning outcomes showed a significant increase in the average accuracy rate of letter arrangement by the subjects, from 25.55% during conventional learning to 63.86% with the robot-assisted learning. The standard deviation for robot-assisted learning was 26.54, compared to 18.47 for conventional learning. The learning outcomes with the robot were effective for subjects 1 and 3 and fairly effective for subject 2. Future research plans include real-time data collection and processing with IoT and the enhancement of material levels, such as arithmetic and social skills in using money.
Keywords: Intellectual Disabilities, Fuzzy Logic, Audiovisual Intervention, Robotics, Special Needs Education
Design of Two-Way Indonesian Sign Language System Based on Smart-Glove with Artificial Neural Network Classification Method
Veny Eka Nurfita Sari, Achmad Arifin, Fauzan Arrofiqi Biomedical Engineering Department Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
Humans communicate as part of their daily activities, but not everyone can communicate easily, as experienced by deaf and hard-of-hearing people. The Indonesian Sign Language System (SIBI) is the sign language standard used by the deaf-mute in Indonesia; unfortunately, SIBI used conventionally is considered less effective. In previous research, a Smart-Glove system was developed that can be used to interpret SIBI. The system is a wearable glove used by the deaf-mute that interprets the SIBI alphabet into voice or text. The Smart-Glove system is able to translate many alphabetic characters correctly and interprets them well into voice and text. However, certain alphabetic characters have a low translation accuracy. Therefore, in this research, the same device is built but with a different classification method, based on an Artificial Neural Network (ANN). The results of this study take the form of text that appears on a smartphone screen connected to the glove via Bluetooth. The accuracy rate of letter detection reached 87.61%, while numbers reached 80.56%. In addition to text, another output in the form of voice, implemented using text-to-speech, has an accuracy rate of 100%. Feedback from the interlocutor can also use the speech-to-text feature, which has an accuracy rate of 100%. This tool can help the deaf-mute communicate with others and receive feedback from that communication, especially in learning activities using SIBI. The next development plan can focus on interpretation purposes, such as interpreting verbs.
Keywords: deaf-mute, smart-glove, Indonesian Sign Language System (SIBI), sensor, sign language.
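As a sketch of the ANN classification stage, the snippet below trains a small multilayer perceptron on glove sensor readings; scikit-learn's MLP stands in for whatever network architecture was actually used, and the feature layout, layer sizes, and scaling step are assumptions.

from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_glove_classifier(X, y):
    # X: one row of flex/IMU sensor readings per gesture sample; y: the letter or number signed.
    # Both are placeholders for data recorded from the glove.
    model = make_pipeline(
        StandardScaler(),   # sensor channels have different ranges
        MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    )
    return model.fit(X, y)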
Development of Toileting Detection System for Children with Cerebral Palsy (CP) Based on Body Gestures Using Deep Learning
Muhammad Asyarie Fauzie Musthofa, Achmad Arifin, Nada Fitrieyatul Hikmah
Children with disabilities often have difficulty expressing their desire to go to the toilet. This can lead to health problems and other issues such as defecating out of place. In this research, a toileting prediction system for children with disabilities was designed based on body gestures using deep learning. A camera captures videos of children with disabilities and is installed in front of the subject during school activities. The data taken from the subject consist of images of landmark points that form a gesture and the results of feature extraction, each processed with a different machine-learning model. The CNN model used is VGG-16, which shows the best performance with 96.3% validation accuracy and 92.3% training accuracy, while the logistic regression model uses a dataset split of 281 training samples and 121 testing samples and achieves an accuracy of 0.86 with a loss of 0.341. Both models are able to classify gestures into different classes. However, there are differences in classification results between CNN and logistic regression on the same data because of the different ways they process data: CNN recognizes visual patterns in images, whereas logistic regression uses a linear combination of features that may not be as expressive as the non-linear approach used by CNN. The toileting detection system based on body gestures with deep learning still has room for development. One area that can be developed is finding other parameters that can describe signs of wanting to go to the toilet, as well as increasing the accuracy of the system. Future development plans are to learn more about the subjects' habits and disabilities, add more subjects with disabilities, and develop the system to run in real time.
Keywords: Disabled Child Toileting, Disabilities Gesture, Deep Learning, Gesture Classification, body landmark, and computer vision.
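A minimal sketch of a VGG-16-based gesture classifier of the kind described above is given below, using transfer learning in Keras. The frozen base, head size, input resolution, and training settings are assumptions for illustration only, not the authors' exact configuration.

import tensorflow as tf

def build_gesture_cnn(num_classes, input_shape=(224, 224, 3)):
    # Transfer learning on VGG-16: frozen convolutional base, small classification head
    base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       input_shape=input_shape)
    base.trainable = False
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model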
Toileting Expression Detection System for Children with Multidisability Based on Feature Extraction using Support Vector Machine
Salsabiela Khairunnisa Siregar, Achmad Arifin, Nada Fitrieyatul Hikmah Biomedical Engineering Department, ITS, Surabaya, Indonesia
Children with disabilities often face challenges in expressing their needs, including when they wish to use toilet facilities. These barriers can lead to health issues or other problems, such as inappropriate toileting behaviors. Toilet training helps children learn to control their urges to urinate or defecate, and its success relies on gradual teaching methods tailored to the child's abilities. Therefore, this study aims to identify toileting parameters based on the facial expressions of children with disabilities using a camera system. The camera is positioned in front of the subjects during school activities to capture expression changes. The acquired images are selected to create a dataset based on the expression changes observed during toileting events. Feature extraction is performed using 51 facial landmark points to obtain angles, distances, and inclinations between these points. TOP5 and TOP10 datasets are generated using the features with the highest Pearson correlation values. The classification process using a Support Vector Machine (SVM) demonstrated that the model with the TOP5 dataset achieved the highest accuracy of 96% with tuned parameter settings and 5-fold cross-validation. This model exhibits robust performance with high precision, recall, and F1-score metrics. Despite this, the toileting expression detection system faces challenges in data collection, such as the extensive conditioning required and some misclassification errors that need to be addressed for improved results. To enhance the system's capability and generalization in detecting toileting expressions, it is essential to increase the number and diversity of datasets by involving subjects with various types of disabilities.
Keywords: Toileting for Disabilities, Disability Behavior, Facial Landmarks, Disability Expression Classification, Computer Vision, Support Vector Machine.
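The feature ranking by Pearson correlation followed by SVM classification can be sketched as follows; the kernel, C value, and cross-validation settings here are placeholders rather than the tuned values referred to in the abstract.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def topk_pearson_svm(X, y, k=5):
    # Rank landmark-derived features (angles, distances, inclinations) by absolute
    # Pearson correlation with the toileting label, keep the top k, then evaluate
    # an SVM with 5-fold cross-validation.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    top = np.argsort(-np.abs(r))[:k]
    scores = cross_val_score(SVC(kernel="rbf", C=1.0), X[:, top], y, cv=5)
    return top, scores.mean()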
Detection System for Autism Spectrum Disorders in Children Through Facial Landmarks and Machine Learning Methods on Emotional Response Images
Indah Nabila Rachmah, Achmad Arifin, Nada Fitrieyatul Hikmah Biomedical Engineering Department, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
Despite the increasing prevalence of Autism Spectrum Disorder (ASD) globally, current screening methods for early detection remain time-consuming and costly, highlighting a critical research gap. This study aims to develop a novel detection system for ASD in children, utilizing facial landmarks and machine learning methods to analyze emotional response images. Our approach involves capturing spontaneous facial expressions using non-invasive imaging sensors, followed by the extraction of key facial landmarks. Two classification models, Support Vector Machine (SVM) and Convolutional Neural Network (CNN), are implemented to analyze and classify these facial landmarks. The SVM model achieved an accuracy of 92.54%, precision of 93.43%, recall of 92.56%, and an F1 score of 92.42%, while the CNN model achieved an accuracy of 84.16%, precision of 83.06%, recall of 85.83%, and an F1 score of 84.42%. This study demonstrates the potential of integrating advanced machine learning techniques with facial landmark extraction for reliable, non-invasive early detection of ASD. The promising performance of the SVM and CNN models suggests that our approach could significantly improve early diagnosis and intervention strategies for children with ASD. Furthermore, the use of diverse, real-time datasets and effective pre-processing steps ensures high accuracy, reliability, and generalizability of the detection system. The proposed system's potential for practical application in early ASD diagnosis is enhanced by the successful integration of non-invasive imaging techniques and the development of a user-friendly GUI, paving the way for real-time detection and automatic reporting, ultimately contributing to better developmental outcomes and quality of life for affected children.
Keywords: Autism Spectrum Disorder (ASD), Facial Landmarks, Machine Learning, Emotional Response Images, Non-Invasive Imaging
Digital Infant Scale with Fuzzy Decision Support for Growth Monitoring
Syania Fadhillah, Achmad Arifin, Nada Fitrieyatul Hikmah Biomedical Engineering Department Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
Monitoring children’s growth and development is crucial, especially during the first 1000 days of life, due to the rapid changes that occur. Various factors, including nutrition and environmental conditions, significantly impact early childhood growth and development. Nutritional issues often arise from an imbalance between intake and the body’s needs, which can lead to stunted growth and low birth weight. Anthropometric analysis, particularly the measurement of body weight and length, is essential to assess the nutritional status of children. However, this data has not been optimally utilized in the field for further analysis. This study focuses on developing a digital scale that combines a load cell sensor connected to the ADS1232 module for weight measurement and a rotary encoder for length measurement. Further analysis was conducted on a personal computer (PC) to calculate Body Mass Index for Age (BMI/A), Weight for Age (W/A), and Length for Age (L/A) using the Fuzzy Decision Support System (FDSS), with a focus on children aged 0-12 months. The results of the scale measurements showed high accuracy, with an error rate of 33.86 grams for body weight and 0.26 cm for body length. Among the 49 study participants, normal nutritional status was found in 44 subjects based on W/A, 41 subjects based on L/A, and 41 subjects based on BMI/A. These classification results adhere to applicable anthropometric standards, thus confirming the effectiveness of FDSS in assessing infant conditions. Further development of this study includes improving the design and materials used to meet commercial standards, integrating Internet of Things (IoT) technology to support monitoring and classification systems, and focusing more deeply on measuring and monitoring infant growth and development.
Keywords: Monitoring Child’s Growth, Fuzzy Decision, Body Weight, Body Length, Baby Weighing Scale
Speech Reconstruction of Cleft Lip and Palate Subjects Using Linear Prediction Coding and Neural Network Based on Desktop Application
Maula Rifqi Setiawan1, Achmad Arifin1, Fauzan Arrofiqi1, Akhmad Anggoro2 1Biomedical Engineering Department, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia 2Kementerian Komunikasi dan Informatika, Jakarta, Indonesia
Individuals born with Cleft Lip and Palate (CLP) often face various challenges, especially in the articulation of voice formation. Because of these problems, individuals with CLP find it difficult to socialize with their surroundings. Therefore, a voice reconstruction system is needed for people with CLP. Methods widely used in voice reconstruction systems are Linear Predictive Coding (LPC) and Neural Networks (NN). To perform voice reconstruction, this study uses two systems: a voice recognition system and a voice conversion system. Voice recognition is done using LPC to extract features from the voice signal; the LPC coefficients are then recognized using a trained NN. Voice conversion is done by mapping coefficients, where the LPC coefficients from the source are changed or brought closer to the LPC coefficients of the target using the NN training results. The voice recognition system achieved an average accuracy of 22.6%, showing that the proposed system cannot yet be realized because the distribution of the features used cannot represent all the words to be recognized. The voice conversion system achieved a fairly good mean squared error, with an average of 0.00353, but the Mel Cepstral Distance shows that the converted sound still cannot approach the target, with an average distance of 1233.61 dB between the converted and original sound. This is caused by several factors. To improve accuracy and reduce the possibility of undetected features in speech recognition, feature extraction can be done with other methods that represent the features of each word well. To improve the sound quality of voice conversion, further research can be done on pitch extraction and on determining voiced and unvoiced segments using machine learning.
Keywords: Cleft lip and palate, LPC, Neural Network, Speech Reconstruction
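The LPC feature extraction step can be sketched as below using librosa; the sampling rate, LPC order, and framing parameters are illustrative assumptions, and the resulting coefficient vectors would be the NN's input.

import numpy as np
import librosa

def lpc_features(path, order=12, frame_length=1024, hop_length=512):
    # Frame the recording and compute LPC coefficients per frame; these coefficient
    # vectors are the features a neural network would be trained on.
    y, sr = librosa.load(path, sr=16000)
    frames = librosa.util.frame(y, frame_length=frame_length, hop_length=hop_length)
    return np.array([librosa.lpc(frames[:, i], order=order)
                     for i in range(frames.shape[1])])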
Design of Chatbot for Early Detection and Monitoring for Type 2 Diabetes Mellitus Based on Lifestyle
Muhammad Thoriq Afif Ramadhan1, Norma Hermawan1, Achmad Arifin1 1Biomedical Engineering Department, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
Type 2 diabetes mellitus (T2DM) is a global health problem with a high prevalence, accounting for 90%-95% of all diabetes cases. T2DM diagnosis is often delayed due to undetected early symptoms, and an unhealthy lifestyle increases the risk of this disease. This research aims to design a WhatsApp-based chatbot for early detection and monitoring of T2DM using the Random Forest (RF) algorithm. The methods used include a literature review, chatbot system design, creation of an educational website, and a monitoring website for calorie intake, blood sugar, blood pressure, cholesterol, and physical activity. The chatbot uses a microservice architecture in Docker containers, integrating the chatbot service, website service, and RF prediction model. The RF prediction model was tested with 1 to 21 Principal Components (PC), and the RF model with 8 PCs was used because performance improvements were not significant after the 8th PC. System usability testing using the System Usability Scale (SUS) with 17 subjects showed an average SUS score of 76.875, indicating high usability. Younger subjects showed good understanding, while older subjects experienced some difficulties but still showed a fairly high level of confidence. The chatbot and monitoring website system were well received by various age groups. The implementation of the chatbot is expected to raise user awareness of a healthy lifestyle in preventing T2DM and facilitate more effective health monitoring. To improve system quality, it is recommended to add references related to monitoring data for integration into the dataset as additional features. The early detection system for T2DM can be developed with a voice note feature for verbal interaction and integration of food or drink calorie data, as well as collaboration with clinical or hospital systems for patient monitoring.
Keywords: Chatbot, Early detection, Lifestyle Monitoring, Random Forest, Type 2 Diabetes Mellitus.
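A minimal sketch of the prediction pipeline (PCA reduced to 8 components feeding a Random Forest) could look like this; the scaler, tree count, and feature/label names are assumptions, not the deployed configuration.

from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def build_t2dm_model():
    # X would be questionnaire/lifestyle features per respondent; y the T2DM label.
    # The 8 principal components follow the abstract; other settings are illustrative.
    return make_pipeline(
        StandardScaler(),
        PCA(n_components=8),
        RandomForestClassifier(n_estimators=200, random_state=0),
    )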
Digital Microscope Design with Image Processing Features for Acute Lymphoblastic Leukemia Cell Detection Using Raspberry Pi
Angeline Dwi Sanjaya, Rachmad Setiawan, Nada Fitrieyatul Hikmah Biomedical Engineering Department, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
Blood holds extensive information about an individual’s health, which can be analyzed through blood cell morphology and concentration analysis. Accurate blood cell analysis is crucial for precise diagnostic results, especially in identifying conditions like Acute Lymphoblastic Leukemia (ALL), the most prevalent childhood cancer in Indonesia, according to the Indonesian Pediatric Society (IDAI). A common approach to detecting ALL involves analyzing blood smears obtained through blood tests and bone marrow aspiration. These smears are then examined using light or digital microscopes to assess cell morphology. However, manual examination methods can suffer from biased accuracy and lengthy processing times, while digital microscopes often come with high implementation costs and complex systems. Based on the shortcomings of the existing examination methods, this study presents a digital microscope system equipped with a YOLOv4 (You Only Look Once) algorithm for detecting ALL cells. Our system aims to enhance detection accuracy and speed while balancing complexity and cost. The microscope employs a 10X eyepiece lens, a 100X objective lens, and a Raspberry Pi camera module. The resulting system, called “RNA BioLens,” was further deployed as an application that achieved a detection accuracy of 97.67% for ALL cells in blood smear images.
Keywords: Digital Microscope, Acute Lymphoblastic Leukemia (ALL), Raspberry Pi, You Only Look Once (YOLOv4)
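For illustration, a trained YOLOv4 detector can be run on the Raspberry Pi with OpenCV's DNN module as sketched below; the configuration/weights file names, input size, and thresholds are placeholders, not the project's actual files.

import cv2

# Load a Darknet-format YOLOv4 model (file names are hypothetical)
net = cv2.dnn.readNetFromDarknet("yolov4-all.cfg", "yolov4-all.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

def detect_all_cells(image, conf_threshold=0.5, nms_threshold=0.4):
    # Returns (class_id, confidence, bounding box) for each detected cell candidate
    class_ids, confidences, boxes = model.detect(image, conf_threshold, nms_threshold)
    return list(zip(class_ids, confidences, boxes))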
Classification of Brain Tumor Types Meningioma, Glioma, and Pituitary Based on Hybrid VGG-16 and SVM for Preoperative Diagnosis
Ariq Dreiki Hajjanto, Tri Arief Sardjono, Nada Fitrieyatul Hikmah Biomedical Engineering Department, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
According to the 2020 Global Cancer Statistics, there were 308,102 new cases of brain and CNS tumors, resulting in 251,329 deaths worldwide. In Indonesia alone, the estimated incidence and mortality in 2016 were 6,337 and 5,405 cases, respectively. The various types of brain tumors, with variations in location, size, and malignancy levels, make manual localization and classification complex for medical professionals, leading to errors in brain tumor determination because a vast amount of imaging results must be analyzed. Conventional classification accuracy can be influenced by factors such as individual subjectivity in recognizing tumor locations, timing, precision, fatigue, and other human factors. Therefore, a method is needed to produce accurate tumor diagnoses using machine learning. However, machine learning approaches are highly susceptible to overfitting due to limited datasets or the architectural models used, as well as to lengthy computational processes. Hence, this study proposes a hybrid classification system utilizing the VGG-16 architecture and Support Vector Machine (SVM). VGG-16 excels in hierarchical feature extraction and spatial invariance, enabling higher accuracy in tumor identification. The output features of brain tumor types from VGG-16 are reduced using Principal Component Analysis (PCA) and then classified with an SVM, optimized by testing combinations of kernels and hyperparameters. The architecture performance is evaluated using performance metrics and comparison with previous models, allowing for an objective assessment of the achieved results. The study results for each metric of accuracy, precision, recall, f1-score, and specificity were 96.9%, 97.3%, 96.67%, 96.67%, and 99.97%, respectively, using the polynomial kernel with hyperparameters C, degree, and coef0 of 10, 3, and 0.5.
Keywords: VGG-16, Support Vector Machine, Principal Component Analysis, Brain Tumor, Disease
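A compact sketch of this hybrid pipeline (VGG-16 as a frozen feature extractor, PCA for dimensionality reduction, and a polynomial-kernel SVM with the reported hyperparameters C=10, degree=3, coef0=0.5) is given below; the input resolution, pooling choice, and PCA dimensionality are assumptions.

import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Frozen VGG-16 as a feature extractor for MRI slices
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                  pooling="avg", input_shape=(224, 224, 3))

def extract_features(images):
    x = tf.keras.applications.vgg16.preprocess_input(np.asarray(images, dtype="float32"))
    return vgg.predict(x)   # one 512-dimensional feature vector per image

def build_classifier():
    # PCA dimensionality is illustrative; the SVM hyperparameters follow the abstract
    return make_pipeline(PCA(n_components=128),
                         SVC(kernel="poly", C=10, degree=3, coef0=0.5))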
Non-contact Obstructive Sleep Apnea Detection Mattress Using Supervised Machine Learning Based On Heart Rate Variability And Respiratory Rate Analysis
Rima Amalia, Rachmad Setiawan, Nada Fitrieyatul Hikmah Biomedical Engineering Department, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
Obstructive Sleep Apnea (OSA) is a serious and potentially life-threatening sleep disorder traditionally diagnosed using polysomnography (PSG), a complex and time-consuming process. This study introduces an innovative approach utilizing electrocardiogram (ECG) signals for a simpler, non-invasive OSA detection method. The proposed system employs a mattress embedded with non-contact electrodes made of conductive fabric to monitor patient ECG signals while sleeping in various positions. Signal processing is conducted using the discrete wavelet transform (DWT), leading to heart rate variability (HRV) outputs. HRV features, including SDNN, RMSSD, SDSD, pNN50, LF/HF ratio, SD1, and SD2, are analyzed across time, frequency, and non-linear domains. Additionally, RR-interval mapping is processed using DWT, followed by a Moving Average (MAV) filter and peak-to-peak detection to extract respiratory rate features. These features are then processed using the k-Nearest Neighbors (KNN) algorithm for OSA detection. The system demonstrates high performance, achieving an accuracy of 98.33%, specificity of 100%, and sensitivity of 96%. This research successfully develops a highly accurate and efficient OSA detection system that is non-invasive and comfortable for users, offering a promising alternative for home-based OSA monitoring and diagnosis.
Keywords: ECG, Mattress, Non-contact, Obstructive sleep apnea, Heart Rate Variability, Respiratory Rate.
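The time-domain HRV features named above can be computed directly from the RR-interval series, as in this brief sketch (RR intervals assumed to be in milliseconds; frequency-domain and non-linear features are not shown).

import numpy as np

def hrv_time_features(rr_ms):
    # Time-domain HRV features from RR intervals (milliseconds): SDNN, RMSSD, SDSD, pNN50
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "SDNN": rr.std(ddof=1),
        "RMSSD": np.sqrt(np.mean(diff ** 2)),
        "SDSD": diff.std(ddof=1),
        "pNN50": 100.0 * np.mean(np.abs(diff) > 50.0),
    }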
Vest For Detecting The Risk Of Sudden Cardiac Death In Ventricular Fibrillation Using An Artificial Neural Network
Retno Wulandari, Rachmad Setiawan, Nada Fitrieyatul Hikmah Biomedical Engineering Department, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
Sudden Cardiac Death (SCD) is a tragic event that occurs suddenly due to heart failure, with the main symptom being Ventricular Fibrillation in the heart. SCD, if not treated immediately, can result in death. To address this challenge, wearable ECG monitoring technology needs to be developed. This wearable ECG monitoring system uses non-contact electrodes on a vest with three leads. The two electrode materials tested are conductive textile and copper foil tape. Copper foil tape produces ECG signals with lower noise than conductive fabric but is less ergonomic for long-term use. The non-contact ECG signal capture circuit includes an amplifier, Low Pass Filter, High Pass Filter, Notch Filter, and Non-Inverting Adder connected to a Mikromedia 7 FPI Capacitive with an STM32F746ZG chipset. Six parameters used for detection are RR interval, TpTe/QT, JTp/JTe, TpTe/JTp, TpTe/QRS, and TpTe/(QT×QRS). Data processing using a five-level Discrete Wavelet Transform is employed to obtain these parameters. Testing was conducted on several subjects aged 20-50 years. The classification method using an Artificial Neural Network (ANN) showed an accuracy of 97.4%, sensitivity of 96%, specificity of 98.4%, and precision of 97.9%. Sitting and lying supine positions produce the best ECG signals because the pressure between the skin and electrodes is more consistent. The cotton material provides the least noise compared to polyester, linen, and knitwear. The research results show that the designed vest can effectively detect the risk of SCD, making a significant contribution to cardiac health and the prevention of sudden cardiac death.
Keywords: Artificial Neural Network (ANN), contactless electrocardiogram, Sudden Cardiac Death (SCD) prediction, ventricular arrhythmic markers.
Integrated ECG and EEG Signals as Biofeedback and Biomarker for Stress Assessment
Yasmin Fakhira Ichsan, Rachmad Setiawan, Nada Fitrieyatul Hikmah Biomedical Engineering Department, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
Stress is a physiological response to negative emotions or threatening situations that can significantly impact both physical and mental well-being. Elevated stress levels are associated with complex mental health issues and various physical diseases. While previous research has proven the effectiveness of physiological parameters in stress detection, the primary focus has often been limited to stress detection without considering the potential benefits as biomarkers and biofeedback in stress assessment. This study takes a holistic approach by exploring stress detection through Electrocardiography (ECG) and Electroencephalography (EEG), connected to a Mikromedia 7 FPI Capacitive with an STM32F746ZG chipset. ECG signals are captured using surface electrodes following Einthoven's Triangle rule, processed into Heart Rate Variability (HRV), and evaluated with time-domain, frequency-domain, and non-linear analysis. EEG signals are acquired via an EEG headband with electrode insertion points at Fp1 and F3 and evaluated in the frequency domain. Testing was conducted on a group of healthy males and females aged 18-25 years, with induced stress stages validated by the STAI-Y1 questionnaire. HRV and EEG features were evaluated using Analysis of Variance (ANOVA) for feature selection before being input into an Artificial Neural Network (ANN) for stress level classification (low, medium, high). The ANN classification method with Stratified K-Fold achieved 98.964% accuracy on training data and 98.913% on testing data, with a consistency of 94% between objective and subjective stress classification according to the STAI-Y1 questionnaire and 80% according to the DASS21 questionnaire. This research is expected to serve as a practical biomarker for mental health professionals and provide effective biofeedback for individuals in managing stress.
Keywords: Stress Assessment, ECG, EEG, Artificial Neural Network.
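A minimal sketch of the classification stage (ANOVA-based feature selection followed by stratified k-fold evaluation of a small neural network) might look like the following; the feature count, network size, and fold count are assumptions, and scikit-learn's MLP stands in for the ANN used.

from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_stress_classifier(X, y, k_features=10, n_splits=5):
    # X: combined HRV and EEG band-power features; y: stress level (low/medium/high)
    model = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=k_features),          # ANOVA F-test feature selection
        MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    )
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    return cross_val_score(model, X, y, cv=cv).mean()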