Latifah Munirah Kamarudin
Preferred Name
Latifah Munirah Kamarudin
Official Name
Kamarudin, Latifah Munirah
Alternative Name
Kamarudin, Latifah Munirah
Kamarudin, Latifah M.
Kamarudin, L. M.
Kamarudin, Munirah L.
Kamarudin, L.
Main Affiliation
Scopus Author ID
57192974774
Researcher ID
G-8267-2016
Now showing 1 - 10 of 18
Publication: Human Location Classification for Outdoor Environment (2019-12-03)
Talib M.T.M.; Nishizaki H.
Outdoor localisation offers great capabilities in security and perimeter surveillance applications. Localising people becomes more challenging in nonlinear environments. GPS and CCTV are the two localisation techniques usually used to localise humans outdoors; however, both have weaknesses that result in low localisation accuracy. Device-free localisation (DFL), combined with the Internet of Things (IoT), is therefore more appropriate: it can detect the human body in all environmental conditions and does not suffer from the signal loss that affects GPS. Such a system offers excellent potential for human localisation because people can be detected wirelessly without any attached tracking device. In developing a DFL system, the main concern is localisation accuracy. Although existing DFL systems give significant localisation results, the accuracy is still low due to the large variation in RSSI values. Hence, a Radio Tomographic Imaging-based ANN classification (RTI-ANN) approach is proposed to increase the localisation accuracy. The Artificial Neural Network (ANN) is designed to learn from the Radio Tomographic Imaging (RTI) input for classification purposes. Although RTI gives good localisation results, it suffers from a smearing effect. To eliminate this smearing area and the background noise, pre-processing of the RTI image is required; thus, a technique for extracting the valuable information from the RTI image is proposed. By extracting this information, about 61% to 66% of the smearing noise is removed, depending on the size of the RTI image, and only data directly associated with human attenuation are used for training the ANN. The experimental results show that the ANN system can localise a human in the correct zone for the given dataset.
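For readers unfamiliar with the RTI-ANN idea, the sketch below illustrates the general pipeline under stated assumptions: an RTI image is thresholded so that only pixels plausibly caused by human attenuation remain, and the cleaned, flattened image is fed to a small neural-network classifier that predicts the occupied zone. The array names, threshold value and network size are illustrative placeholders, not values taken from the paper.

```python
# Minimal sketch of an RTI-image -> ANN zone classifier, assuming RTI frames
# and zone labels are available as NumPy arrays. The threshold and network
# size are illustrative assumptions, not the paper's settings.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
rti_images = rng.random((200, 20, 20))      # placeholder 20x20 RTI frames
zone_labels = rng.integers(0, 4, size=200)  # placeholder zone labels (4 zones)

def extract_attenuation_pixels(image, threshold=0.7):
    """Keep only pixels plausibly caused by human attenuation;
    zero out the low-valued smearing/background area."""
    cleaned = np.where(image >= threshold, image, 0.0)
    return cleaned.ravel()

X = np.array([extract_attenuation_pixels(img) for img in rti_images])
X_train, X_test, y_train, y_test = train_test_split(X, zone_labels, test_size=0.3, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
ann.fit(X_train, y_train)
print("zone classification accuracy:", ann.score(X_test, y_test))
```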
Publication: An Experimental Study of Deep Learning Approach for Indoor Positioning System Using WI-FI System (2021-01-01)
Sa’ahiry A.H.A.; Nishizaki H.
Global navigation satellite systems (GNSS) are known for their capability to detect the whereabouts of a desired target such as a vehicle or a place. However, these technologies have the disadvantage that a precise location can only be obtained outside a building: as the signal propagates indoors, it becomes weaker due to attenuation. Wi-Fi systems are the best alternative to GNSS in an indoor environment since the architecture is already massively deployed in many recent buildings. However, Wi-Fi also has its disadvantages: its signal is non-linear due to factors such as multipath and indoor signal blockage, which limits the system accuracy. In this paper, a deep learning approach with standalone Wi-Fi technology is used to obtain precise indoor positioning via the fingerprinting method. The overall results show that the average distance error between the actual and estimated locations is 20 cm and the highest error is 62 cm in an experimental area of 180 cm by 120 cm along the x and y axes. This shows that deep learning is a viable method for accurate and precise indoor positioning.
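A minimal fingerprinting sketch of the idea described above, assuming RSSI vectors from several access points are regressed to (x, y) coordinates in centimetres by a small neural network; the access-point count, grid size and network shape are assumptions made for illustration only.

```python
# Sketch: Wi-Fi RSSI fingerprint -> (x, y) position regression.
# Data shapes and network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_access_points = 500, 6
rssi_fingerprints = rng.uniform(-90, -30, size=(n_samples, n_access_points))
positions_cm = rng.uniform([0, 0], [180, 120], size=(n_samples, 2))  # x in 0-180 cm, y in 0-120 cm

X_train, X_test, y_train, y_test = train_test_split(
    rssi_fingerprints, positions_cm, test_size=0.2, random_state=1)

model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=1000, random_state=1)
model.fit(X_train, y_train)

pred = model.predict(X_test)
errors_cm = np.linalg.norm(pred - y_test, axis=1)
print(f"mean distance error: {errors_cm.mean():.1f} cm, worst case: {errors_cm.max():.1f} cm")
```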
Publication: 3D grape bunch model reconstruction from 2D images (2023-12-01)
Woo Y.S.; Li Z.; Tamura S.; Buayai P.; Nishizaki H.; Makino K.; Mao X.
A crucial step in the production of table grapes is berry thinning. This is because the market value of table grape production is significantly influenced by bunch compactness, bunch form and berry size, all of which are primarily regulated by this task. Grape farmers must count the number of berries in the working bunch and decide which berry should be eliminated during thinning, a process requiring extensive viticultural knowledge. However, the use of 2D pictures for automatic berry counting and identifying the berries to be removed has limitations, as the number of visible berries might vary greatly depending on the direction of view. In addition, it is extremely important to understand the 3D structure of a bunch when considering future automation via robotics. For the reasons stated, obtaining a field-applicable 3D grape bunch model is needed. Thus, the contribution of this study is a novel technology for reconstructing a 3D model of a grape bunch with uniquely identified berries from 2D images captured in the real grape field environment.
Publication: Predictive Analysis of In-Vehicle Air Quality Monitoring System Using Deep Learning Technique (2022-10-01)
Cheik Goh Chew; Mao X.; Nishizaki H.
In-vehicle air quality monitoring systems have been seen as a promising paradigm for monitoring drivers’ conditions while they are driving, because some vehicle cabins contain pollutants that can cause drowsiness and fatigue in drivers. However, designing an efficient system that can predict in-vehicle air quality is challenging due to the continuous variation of parameters in cabin environments. This paper presents a new approach using deep learning techniques that can deal with the varying parameters inside the vehicle environment. Two deep learning models, namely Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), are applied to classify and predict the air quality using time-series data collected from the built-in sensor hardware. Both are compared with conventional machine learning models, including Support Vector Regression (SVR) and the Multi-layer Perceptron (MLP). The results show that the GRU has excellent prediction performance, with the highest coefficient of determination (R²) of 0.97.
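The sketch below shows one way such a GRU-based predictor could be set up in PyTorch, assuming sliding windows of past sensor readings are used to predict the next air-quality value; the window length, feature count and layer sizes are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal GRU regression sketch for time-series air-quality prediction.
# Window length (30), feature count (5) and hidden size are assumptions.
import torch
import torch.nn as nn

class GRUPredictor(nn.Module):
    def __init__(self, n_features=5, hidden_size=32):
        super().__init__()
        self.gru = nn.GRU(input_size=n_features, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):            # x: (batch, window, n_features)
        _, h_last = self.gru(x)      # h_last: (1, batch, hidden_size)
        return self.head(h_last[-1]) # one predicted air-quality value per sample

model = GRUPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# placeholder batch: 64 windows of 30 time steps with 5 sensor channels
x = torch.randn(64, 30, 5)
y = torch.randn(64, 1)
for _ in range(5):                   # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print("training loss:", loss.item())
```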
Publication: RSSI-based for device-free localization using deep learning technique (2020-06-01)
Sukor A.S.A.; Rahim N.A.; Sudin S.; Nishizaki H.
Device-free localization (DFL) has become a hot topic in the paradigm of the Internet of Things. Traditional localization methods focus on locating users through attached wearable devices, which raises privacy concerns and causes physical discomfort, especially for users who need to wear and activate those devices daily. DFL instead uses the received signal strength indicator (RSSI) to characterize the user's location based on the user's influence on wireless signals. Existing work relies on statistical features extracted from the wireless signals; however, some features may not perform well in different environments and need to be manually designed for a specific application. Data processing is therefore an important step towards producing robust input data for the classification process. This paper presents experimental procedures that use a deep learning approach to automatically learn discriminative features and classify the user's location. Extensive experiments performed in an indoor laboratory environment demonstrate that the approach can achieve 84.2% accuracy compared with other basic machine learning algorithms.
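The comparison described above can be illustrated with the following sketch, which fits a deeper neural network and two basic machine-learning baselines on the same (synthetic) RSSI data; the data shapes and the number of location classes are assumptions for illustration, not the paper's experimental setup.

```python
# Sketch: neural network on raw RSSI link vectors vs. basic ML baselines.
# 24 links and 6 location classes are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
rssi_links = rng.uniform(-90, -30, size=(600, 24))   # 24 wireless links per sample
locations = rng.integers(0, 6, size=600)             # 6 location classes

X_train, X_test, y_train, y_test = train_test_split(
    rssi_links, locations, test_size=0.25, random_state=2)

classifiers = {
    "deep MLP": MLPClassifier(hidden_layer_sizes=(128, 64, 32), max_iter=800, random_state=2),
    "k-NN baseline": KNeighborsClassifier(n_neighbors=5),
    "SVM baseline": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, "accuracy:", clf.score(X_test, y_test))
```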
Publication: Non-Contact Breathing Signal Classification Using Hybrid Scalogram Image Representation Feature (2022-01-01)
Muhammad Husaini; Nishizaki H.; Kamarudin I.K.; Ibrahim M.A.; Toyoura M.; Mao X.
When monitoring human vital signs, breathing is one of the most critical physiological metrics. In areas with limited resources and a shortage of trained medical professionals, automated analysis of abnormal breathing patterns may prove advantageous to healthcare systems. In this paper, we applied the architectures of five transfer learning models to classify individuals' breathing patterns using our proposed method, which uses hybrid scalogram image-based features. As a pre-processing step, we implemented the Sleep Breathing Detection Algorithm (SBDA) to extract the actual breathing signals from ultra-wideband (UWB) radar. The signals were then converted to hybrid scalogram image-based representations before being classified using the VGG16, DenseNet, Xception, ResNet and MobileNet models. The performance of the proposed method was validated against two other image representations: a standard image and a spectrogram image. The overall results show that the proposed method obtained the highest classification accuracy on the test set for all pre-trained models.
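As a rough illustration of the scalogram-plus-transfer-learning idea, the sketch below converts a breathing signal into a wavelet scalogram and passes it through a VGG16 backbone whose classifier head is replaced for the breathing-pattern classes; the synthetic signal, wavelet scales, image preparation and two-class head are assumptions, not the exact hybrid-feature pipeline of the paper.

```python
# Sketch: breathing signal -> wavelet scalogram -> CNN classifier head.
# Signal, scales, resizing and the 2-class head are illustrative assumptions.
import numpy as np
import pywt
import torch
import torch.nn as nn
from torchvision import models

# placeholder breathing signal (20 s at 30 Hz) and its scalogram
t = np.linspace(0, 20, 600)
breathing = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)
coeffs, _ = pywt.cwt(breathing, scales=np.arange(1, 65), wavelet="morl")
scalogram = np.abs(coeffs)

# tile the single-channel scalogram into a 3-channel tensor for the CNN
img = torch.tensor(scalogram, dtype=torch.float32)
img = img.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)          # (1, 3, 64, 600)
img = torch.nn.functional.interpolate(img, size=(224, 224))  # VGG16 input size

vgg = models.vgg16(weights=None)        # pass weights="IMAGENET1K_V1" for transfer learning
vgg.classifier[6] = nn.Linear(4096, 2)  # e.g. normal vs. abnormal breathing
logits = vgg(img)
print("class scores:", logits.detach().numpy())
```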
Publication: Non-Contact Breathing Monitoring Using Sleep Breathing Detection Algorithm (SBDA) Based on UWB Radar Sensors (2022-07-01)
Muhammad Husaini; Kamarudin I.K.; Ibrahim M.A.; Nishizaki H.; Toyoura M.; Mao X.
The use of ultra-wideband radar for sleep breathing monitoring is hampered by the difficulty of obtaining breathing signals from non-stationary subjects, which arises from imprecise clutter removal and poor body-movement removal algorithms for extracting accurate breathing signals. This paper therefore proposes a Sleep Breathing Detection Algorithm (SBDA) to address this challenge. First, SBDA combines a variance feature with the Discrete Wavelet Transform (DWT) to tackle clutter signals, using Daubechies wavelets with five levels of decomposition to satisfy the signal-to-noise ratio of the signal. Second, SBDA implements a curve-fit-based sinusoidal pattern algorithm for detecting periodic motion, comparing R-squared values to differentiate between chest and body movements. Finally, SBDA applies the Ensemble Empirical Mode Decomposition (EEMD) method to extract the breathing signal before transforming it to the frequency domain with the Fast Fourier Transform (FFT) to obtain the breathing rate. The analysis was conducted on 15 subjects with normal and abnormal sleep-monitoring ratings, and all results were compared with two existing methods from previous literature using polysomnography (PSG) devices. The results show that SBDA effectively monitors breathing using IR-UWB, achieving the lowest average percentage error (only 6.12%) compared with the two existing methods implemented on this dataset.
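A minimal sketch of three of the SBDA stages named above (DWT denoising, sinusoidal curve fitting scored by R-squared, and FFT-based rate estimation), applied to a synthetic slow-time radar signal; the EEMD stage is omitted and all numeric settings here are illustrative assumptions, not the paper's parameters.

```python
# Sketch of DWT denoising, sinusoid fit with R-squared, and FFT rate estimate.
import numpy as np
import pywt
from scipy.optimize import curve_fit

fs = 20.0                                   # slow-time sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.random.randn(t.size)  # ~15 breaths/min

# 1) DWT denoising: Daubechies wavelet, 5 levels, soft-threshold the detail coefficients
coeffs = pywt.wavedec(signal, "db4", level=5)
threshold = 0.5 * np.std(coeffs[-1])
coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: t.size]

# 2) Sinusoidal curve fit and R-squared: a high R2 suggests periodic chest motion
def sinusoid(x, amp, freq, phase, offset):
    return amp * np.sin(2 * np.pi * freq * x + phase) + offset

params, _ = curve_fit(sinusoid, t, denoised, p0=[1.0, 0.25, 0.0, 0.0])
residuals = denoised - sinusoid(t, *params)
r_squared = 1 - np.sum(residuals**2) / np.sum((denoised - denoised.mean())**2)

# 3) FFT: the dominant frequency in the breathing band gives the breathing rate
spectrum = np.abs(np.fft.rfft(denoised))
freqs = np.fft.rfftfreq(denoised.size, d=1 / fs)
band = (freqs > 0.1) & (freqs < 0.7)        # ~6 to 42 breaths per minute
rate_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"R-squared of sinusoidal fit: {r_squared:.2f}, breathing rate: {rate_bpm:.1f} breaths/min")
```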
Publication: Two-stream deep convolutional neural network approach for RGB-D face recognition (2021-07-21)
Shunmugam P.; Nishizaki H.
Two-dimensional face recognition has been researched for the past few decades, and with the recent development of Deep Convolutional Neural Network (DCNN) approaches it has achieved impressive recognition accuracy. However, challenges such as pose variation, scene illumination, facial emotions and facial occlusions still exist in two-dimensional face recognition. This problem can be addressed by adding depth images as input, since they provide valuable information that helps model facial boundaries, capture the global facial layout and supply low-frequency patterns. RGB-D images are therefore more robust than RGB images. Unfortunately, the lack of sufficient RGB-D face databases for training DCNNs is the main reason this direction remains under-explored. In this research, a new RGB-D face database is constructed using the Intel RealSense D435 depth camera, which provides 1280 x 720-pixel depth images. Twin DCNN streams are developed and trained, one on RGB images and the other on depth images, and their outputs are finally combined through fused softmax layers. The proposed DCNN model achieves an accuracy of 95% on the newly constructed RGB-D database.
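The two-stream idea can be sketched as follows: one small CNN processes the RGB image, a second processes the depth map, and their class scores are fused by averaging per-stream softmax outputs; the stream architecture, input size and score-averaging fusion are illustrative assumptions rather than the exact network of the paper.

```python
# Sketch: two CNN streams (RGB and depth) fused at the softmax level.
import torch
import torch.nn as nn

def make_stream(in_channels, n_classes):
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, n_classes),
    )

class TwoStreamRGBD(nn.Module):
    def __init__(self, n_identities=10):
        super().__init__()
        self.rgb_stream = make_stream(3, n_identities)    # RGB image stream
        self.depth_stream = make_stream(1, n_identities)  # depth map stream

    def forward(self, rgb, depth):
        p_rgb = torch.softmax(self.rgb_stream(rgb), dim=1)
        p_depth = torch.softmax(self.depth_stream(depth), dim=1)
        return (p_rgb + p_depth) / 2                      # fused class probabilities

model = TwoStreamRGBD()
rgb = torch.randn(4, 3, 128, 128)     # placeholder RGB batch
depth = torch.randn(4, 1, 128, 128)   # placeholder depth batch
probs = model(rgb, depth)
print("fused prediction per sample:", probs.argmax(dim=1).tolist())
```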
Publication: Real-time in-vehicle air quality monitoring system using machine learning prediction algorithm (2021-08-01)
Goh C.C.; Nishizaki H.; Mao X.; Kanagaraj E.; Elham M.F.
This paper presents the development of a real-time, cloud-based in-vehicle air quality monitoring system that enables prediction of the current and future cabin air quality. The designed system provides predictive analytics using machine learning algorithms that can estimate the drivers’ drowsiness and fatigue based on the air quality present in the car cabin. It consists of five sensors that measure the level of CO2, particulate matter, vehicle speed, temperature and humidity. Data from these sensors were collected in real time from the vehicle cabin and stored in a cloud database. Predictive models using a multilayer perceptron, support vector regression and linear regression were developed to analyse the data and predict the future condition of the in-vehicle air quality. The performance of these models was evaluated using the Root Mean Square Error, Mean Squared Error, Mean Absolute Error and coefficient of determination (R²). The results showed that support vector regression achieved excellent performance, with the highest linearity between the predicted and actual data and an R² of 0.9981.
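A minimal sketch of the model comparison described above, fitting SVR, linear regression and an MLP to synthetic sensor readings and scoring them with MSE, RMSE, MAE and R²; the data, weights and feature names are illustrative assumptions, not the paper's collected dataset.

```python
# Sketch: compare SVR, linear regression and MLP with RMSE/MSE/MAE/R2.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(3)
# assumed columns: CO2, particulate matter, vehicle speed, temperature, humidity
X = rng.random((400, 5))
y = X @ np.array([0.6, 0.3, 0.05, 0.2, 0.1]) + 0.05 * rng.standard_normal(400)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=3)

models = {
    "SVR": SVR(kernel="rbf"),
    "Linear regression": LinearRegression(),
    "MLP": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=3),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    mse = mean_squared_error(y_test, pred)
    print(f"{name}: MSE={mse:.4f}, RMSE={np.sqrt(mse):.4f}, "
          f"MAE={mean_absolute_error(y_test, pred):.4f}, R2={r2_score(y_test, pred):.4f}")
```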