Haniza Yazid
Preferred name: Haniza Yazid
Official name: Haniza, Yazid
Alternative names: Yazid, H.; Yazid, Haniza
Main affiliation:
Scopus Author ID: 22137274300
Researcher ID: D-3830-2015
Now showing 1 - 10 of 32 publications
Publication: Automated Microaneurysms Detection and Classification using Multilevel Thresholding and Multilayer Perceptron (2020-04-01)
Mazlan N.; Arof H.; Mohd Isa H.
Purpose: The purpose of this paper is to propose an automatic detection method for microaneurysms (MAs) in fundus retina images. In this work, the e-ophtha database of 100 images was utilised to test the performance of the proposed method. The approach covers pre-processing, segmentation, post-processing, feature extraction, and classification phases. Methods: In pre-processing, the images were filtered and their contrast enhanced. The images were then segmented using the H-maxima and thresholding technique. Morphological operations were carried out to enhance the images before feature extraction and MA candidate detection. The detected MA candidates were classified into three classes, background (B), MAs, and retinal blood vessels (RBVs), using a multilayer perceptron (MLP). Results: The performance of the classifiers was evaluated in terms of accuracy, sensitivity, and specificity. The MLP classifier achieved better performance than the support vector machine, with the highest accuracy of 92.28% under condition 2. Conclusion: This study demonstrated a methodology for automatic detection of MAs using an MLP. The proposed methodology successfully classified the MAs, B, and RBVs, and was fast enough to be implemented in real time.
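The segmentation and classification stages outlined in this abstract can be sketched as follows. This is a minimal illustration using scikit-image and scikit-learn; the function name, parameter values (h, footprint size, MLP layout), and feature handling are assumptions, not the paper's implementation.

```python
# Minimal sketch of the MA-candidate pipeline outlined above: contrast
# enhancement, H-maxima + thresholding segmentation, morphological clean-up,
# and an MLP classifier. All parameter values are illustrative assumptions.
from skimage import exposure, filters
from skimage.morphology import h_maxima, binary_opening, disk
from sklearn.neural_network import MLPClassifier

def segment_ma_candidates(green_channel, h=0.05):
    """Return a binary mask of bright local maxima kept above a global threshold."""
    enhanced = exposure.equalize_adapthist(green_channel)   # contrast enhancement
    peaks = h_maxima(enhanced, h).astype(bool)               # H-maxima: maxima of height >= h
    bright = enhanced > filters.threshold_otsu(enhanced)     # global thresholding
    return binary_opening(peaks & bright, disk(1))           # morphological clean-up

# Features extracted from each candidate region (e.g. area, mean intensity)
# would then be classified into background / MA / retinal blood vessel:
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500)
# clf.fit(candidate_features, candidate_labels)
```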
Publication: Analysis of feature representation in dictionary learning and sparse coding algorithms for low resolution image (2020-07-09)
Mun Ng S.
Super-Resolution (SR) is used to recover a high-resolution (HR) image from a low-resolution (LR) image. SR is important in biometric identification, and face recognition is an area of particular interest. However, the performance of current systems is affected by the resolution of the input images. Thus, this paper focuses on the analysis of feature representations in dictionary learning and sparse coding methods for LR images. The input image is the greyscale Lena image. A total of 23 features were extracted from the image patches to develop different learned dictionaries using the k-singular value decomposition (k-SVD) algorithm. The denoised images were then produced using the Douglas-Rachford algorithm. Most of the feature representations produced a final image with Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) values of approximately 29 dB to 30 dB and 0.8300 to 0.8600, respectively. However, the denoised image produced with the gradient direction feature obtained only 27.6676 dB and 0.7881 for PSNR and SSIM. Therefore, the choice of features extracted for the dictionary learning and sparse coding algorithm determines the PSNR and SSIM of the denoised image produced at the end of the process.
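The PSNR and SSIM figures quoted above can be computed with scikit-image's metrics module. The following is a small sketch; the file names and the [0, 1] data range are assumptions.

```python
# Sketch of the image quality metrics used in the analysis above (PSNR, SSIM).
# File names and the data range are illustrative assumptions.
from skimage import io, img_as_float
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = img_as_float(io.imread("lena_gray.png"))      # hypothetical reference image
denoised = img_as_float(io.imread("lena_denoised.png"))   # hypothetical reconstruction

psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
ssim = structural_similarity(reference, denoised, data_range=1.0)
print(f"PSNR = {psnr:.4f} dB, SSIM = {ssim:.4f}")
```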
Publication: Performance analysis of Otsu thresholding for sign language segmentation (2021-06-01)
Tan Z.Y.
A sign language recognition system generally consists of three main processes: segmentation, modelling, and classification. Image segmentation plays a crucial role as the initial step in sign language recognition. Despite the many sign language recognition algorithms proposed in the literature and their well-understood usage, their performance analyses are relatively limited. As such, the main motivation of this paper is to critically analyse the feasibility of successful sign language segmentation under variation of dynamic scene parameters such as noise, hand size, and the intensity difference between hand and background. The focus is on image thresholding using the Otsu technique, since it is the most commonly used initial step in sign language segmentation. The analysis was developed based on the Monte Carlo statistical method, which showed that the success of sign language segmentation depends on hand size, hand-background intensity difference, and measurement noise. The results showed that sign alphabets with a closed-hand shape, such as A, E, I, M, N, S, and T, are easier to segment, while sign alphabets with an extended-finger shape, such as C, D, F, G, H, K, L, P, R, U, V, W, and Y, are harder to segment. Experiments using real images demonstrate the capability of the derived conditions to correctly predict the outcome of sign language segmentation using the Otsu technique. In conclusion, the success of sign language segmentation can be predicted beforehand from obtainable scene parameters.
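A minimal sketch of the Otsu segmentation step analysed in this paper is given below; the input file name and the assumption that the hand is brighter than the background are illustrative.

```python
# Global Otsu thresholding applied to a hand image, the initial segmentation
# step analysed above. File name and the bright-hand assumption are illustrative.
from skimage import io, color
from skimage.filters import threshold_otsu

frame = io.imread("sign_A.png")              # hypothetical input frame
gray = color.rgb2gray(frame)
mask = gray > threshold_otsu(gray)           # binary hand/background mask
# 'mask' would feed the modelling and classification stages of the recogniser.
```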
Publication: Performance analysis of diabetic retinopathy detection using fuzzy entropy multi-level thresholding (Elsevier Ltd, 2023)
Mohammed Saleh Ahmed Qaid
Diabetic Retinopathy (DR) is one of the major causes of blindness. Many DR detection systems have been developed to segment retinal images, determine the type and number of lesions, and classify DR and its severity level. Even though several researchers have already proposed automated diagnosis systems with different image segmentation algorithms, their accuracy and reliability are generally unexplored. The accuracy of an automated diagnosis system usually depends on the segmentation technique, and it is heavily dependent on retinal and image parameters: the intensity differences between the background (BG) and blood vessels (BV), between BV and bright lesions, and between BV and dark lesions, as well as the noise level. In this work, the accuracy of the automated diagnosis system in detecting DR and its severity levels has been analysed. The focus is on fundus image segmentation based on fuzzy entropy multi-level thresholding. The analysis aimed to develop conditions that guarantee accurate detection of DR and its severity level. Firstly, a retinal image model was developed that represents the retina under variation of all retinal and image parameters; overall, 45,000 images were generated using this model. Secondly, a feasibility and consistency analysis was performed based on a specifically designed Monte Carlo statistical method to quantify the successful detection of DR and its severity levels. The conditions that guarantee accurate DR detection are: BG to BV > 30% and BV to dark lesions (MAs) > 15% for mild DR; BG to BV > 40% and BV to dark lesions (MAs and HEM) > 20% for moderate DR; and BG to BV > 30%, BV to dark lesions (MAs and HEM) > 15%, and BV to bright lesions (EX) > 55% for severe DR. Finally, the validity of these conditions was verified by comparing their accuracy against real retinal images from publicly available datasets. The verification results demonstrated that these conditions can be used to predict the success of DR detection.
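The detection-success conditions reported in this abstract can be expressed as a simple pre-check. The sketch below encodes them directly; the function name and input conventions (intensity differences expressed in percent) are assumptions, not part of the paper's code.

```python
# Encoding the reported detection-success conditions as a feasibility pre-check.
def dr_detection_feasible(bg_bv, bv_dark, bv_bright, severity):
    """Return True if the reported conditions predict accurate DR detection."""
    if severity == "mild":
        return bg_bv > 30 and bv_dark > 15
    if severity == "moderate":
        return bg_bv > 40 and bv_dark > 20
    if severity == "severe":
        return bg_bv > 30 and bv_dark > 15 and bv_bright > 55
    raise ValueError(f"unknown severity: {severity}")

print(dr_detection_feasible(bg_bv=35, bv_dark=18, bv_bright=60, severity="severe"))  # True
```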
Publication: Performance analysis on denoising filters with new edge-directed interpolation for fingerprint images
In image processing tasks such as image interpolation, denoising filters are one of the most fundamental requirements: they are important for preserving image quality and edge properties. In this paper, we analyse the performance of seven different image denoising filters on fingerprint images. The fingerprint images were filtered and interpolated using the New Edge-Directed Interpolation (NEDI) method. The image quality assessment (IQA) metrics used to assess the denoising filter quality were the Edge-Based Image Quality Assessment (EBIQA), Non-Shift Edge Based Ratio (NSER), and Gradient Conduction Mean Square Error (GCMSE). The best image denoising filter was the conservative smoothing filter, while the worst was the Laplacian filter.
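Conservative smoothing, the best-performing filter in this comparison, clips each pixel to the range spanned by its neighbours. A minimal sketch is given below; the window size and the SciPy-based implementation are assumptions.

```python
# Sketch of conservative smoothing: each pixel is clipped to the min/max of
# its neighbourhood (excluding itself). Window size is an illustrative choice.
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def conservative_smoothing(image, size=3):
    """Clip each pixel to the range spanned by its neighbours."""
    footprint = np.ones((size, size), bool)
    footprint[size // 2, size // 2] = False        # exclude the centre pixel
    local_min = grey_erosion(image, footprint=footprint)
    local_max = grey_dilation(image, footprint=footprint)
    return np.clip(image, local_min, local_max)
```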
Publication: Effect of direct statistical contrast enhancement technique on document image binarization (2022-01-01)
Alkhayyat A.
Background: Contrast enhancement plays an important role in the image processing field. Contrast correction adjusts the darkness or brightness of the input image and improves its quality. Objective: This paper proposes a novel method based on statistical data from the local mean and local standard deviation. Method: The proposed method modifies the mean and standard deviation of the neighbourhood at each pixel and divides the image into three categories: background, foreground, and problematic (contrast and luminosity) regions. Experimental results from both visual and objective aspects show that the proposed method can normalise the contrast variation problem effectively compared to Histogram Equalization (HE), Difference of Gaussian (DoG), and Butterworth Homomorphic Filtering (BHF). Seven types of binarization methods were tested on the corrected images and produced positive results. Result: Finally, a comparison in terms of Signal-to-Noise Ratio (SNR), Misclassification Error (ME), F-measure, Peak Signal-to-Noise Ratio (PSNR), Misclassification Penalty Metric (MPM), and Accuracy was carried out. Every binarization method showed improved results when applied to the corrected image compared with the original image. The SNR of the proposed method, 9.350, is higher than that of the three other methods. The average improvements across five types of evaluation were: Otsu 41.64%, Local Adaptive 7.05%, Niblack 30.28%, Bernsen 25%, Bradley 3.54%, Nick 1.59%, and Gradient-Based 14.6%. Conclusion: The method presented in this paper effectively solves the contrast problem and produces better-quality images.
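The local statistics that drive the proposed enhancement can be computed per pixel as sketched below; the window size and the SciPy-based implementation are assumptions, not the paper's code.

```python
# Per-pixel local mean and standard deviation over a sliding window, the
# statistics used to label background, foreground, and problematic regions.
import numpy as np
from scipy.ndimage import uniform_filter

def local_mean_std(image, size=15):
    """Return per-pixel mean and standard deviation over a size x size window."""
    img = image.astype(float)
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img * img, size)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return mean, std
```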
Publication: Contrast Correction Using Hybrid Statistical Enhancement on Weld Defect Images (2022-01-01)
Alkhayyat A.; Salimi M.N.
Luminosity and contrast variation are among the most challenging problems in the image processing field, and correcting them significantly improves image quality. Enhancement is implemented by adjusting dark or bright intensities to improve image quality and increase segmentation performance. Recently, numerous methods have been proposed to normalise luminosity and contrast variation. A new approach based on a direct technique using statistical data, known as Hybrid Statistical Enhancement (HSE), is presented in this study. The HSE method uses the mean and standard deviation of local and global neighbourhoods and classifies each pixel into three groups: foreground, border, and problematic (contrast and luminosity) regions. A dataset of weld defect images was utilised to demonstrate the effectiveness of the HSE method. The results from both visual and objective aspects showed that the HSE method can normalise luminosity and correct the contrast variation problem effectively. The proposed method was compared with two popular enhancement methods, the Homomorphic Filter (HF) and the Difference of Gaussian (DoG). To demonstrate the effectiveness of HSE, several image quality assessments were presented and the results discussed. The HSE method achieved better results than the other methods in terms of Signal-to-Noise Ratio (8.920), Standard Deviation (18.588), and Absolute Mean Brightness Error (9.356). In conclusion, the HSE method provides effective and efficient background correction and image quality improvement.
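Two of the quality measures cited above can be computed as sketched below. The definitions follow the common textbook forms (AMBE as the absolute difference of mean brightness, contrast as the intensity standard deviation), which may differ from the exact formulations used in the paper.

```python
# Simple quality measures for an enhancement result: AMBE and a contrast
# indicator based on the standard deviation of intensities.
def ambe(original, enhanced):
    """Absolute difference of mean brightness before and after enhancement."""
    return abs(float(original.mean()) - float(enhanced.mean()))

def contrast_std(enhanced):
    """Standard deviation of intensities; higher generally means more contrast."""
    return float(enhanced.std())
```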
Publication: Analysis of Optical Character Recognition using EasyOCR under Image Degradation (2023)
Muhamad Aqil Mirza Salehudin; Khairul Azami Sidek
This project explores EasyOCR's performance on Latin characters under image degradation. Variables such as character-background intensity difference, Gaussian blur, and relative character size were tested. EasyOCR excels at distinguishing visually unique lowercase and uppercase characters but tends to favour uppercase for similar shapes such as C, S, U, or Z. The results showed that high character-background intensity differences affected the OCR output, with confidence scores ranging from 3% to 80%. Higher differences caused confusion between characters such as o and 0, or i and 1. Increased Gaussian blur hindered recognition overall but improved it for certain letters such as v. Image size had a significant impact, with character detection failing as sizes decreased to 40% to 30% of the original. These findings provide insights into EasyOCR's capabilities and limitations with Latin characters under image degradation.
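The kind of degradation experiment described above can be reproduced with EasyOCR and OpenCV as sketched below; the test image, blur kernel, and sigma value are assumptions.

```python
# Degrade a character image with Gaussian blur, then read it with EasyOCR
# and inspect the recognised text and confidence score.
import cv2
import easyocr

reader = easyocr.Reader(['en'], gpu=False)            # Latin-character model
image = cv2.imread("char_sample.png")                  # hypothetical test image
blurred = cv2.GaussianBlur(image, (5, 5), 2)           # simulated degradation

for bbox, text, confidence in reader.readtext(blurred):
    print(f"recognised {text!r} with confidence {confidence:.2f}")
```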
Publication: Object detection using image processing techniques: coconut as a case study (2007)
The use of computers to analyse images has much potential, but the variability of the objects makes it a challenging task. In this thesis, the main idea is to detect an object (a coconut) in an image. Several techniques are utilised, namely a separable filter, the Circular Hough Transform (CHT), chord intersection, and moment invariants. Before applying these techniques, preprocessing and image segmentation steps need to be performed. Histogram equalisation is utilised in the preprocessing step, while edge detection and morphological filtering are employed in the image segmentation step. Single-object detection was used to evaluate two techniques, the CHT and chord intersection; the CHT achieves a higher detection percentage (87.5%) than the chord intersection technique (85%). For multiple-object detection, the CHT technique was used; detection rates were 87.5% for the first object, 92.5% for the second, 77.5% for the third, and 67.5% for the last. The moment invariant technique was used to extract the shape of the object and detect its presence; of the 50 images tested, 90% showed positive results. This research can be adopted in a climbing robotic system that automatically plucks coconuts from a tree. Using image processing techniques, the gripping process becomes easier and more convenient than manual plucking.
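Circle detection with the Circular Hough Transform, one of the techniques evaluated in this thesis, can be sketched with OpenCV as follows; the input file and all parameter values (radius range, accumulator thresholds) are assumptions.

```python
# Circular Hough Transform on a pre-processed greyscale image, marking each
# detected circle. Parameter values are illustrative, not the thesis settings.
import cv2
import numpy as np

image = cv2.imread("coconut.jpg")                      # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)                          # histogram equalisation (pre-processing)
gray = cv2.medianBlur(gray, 5)                         # suppress noise before the transform

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 50,
                           param1=100, param2=30, minRadius=20, maxRadius=120)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(image, (x, y), r, (0, 255, 0), 2)   # mark each detected circle
cv2.imwrite("coconut_detected.jpg", image)
```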
Publication: Performance analysis of multi-level thresholding for microaneurysm detection (2022-09-01)
Choong K.H.
Diabetic retinopathy (DR), one of the complications of diabetes, is the leading cause of blindness among people aged 20 to 74. Fortunately, 90% of these cases of blindness due to DR could be prevented by early detection and treatment through regular manual screening by qualified physicians. However, DR screening is tedious, subjective, time-consuming, and sometimes prone to misclassification. To improve diagnostic performance in terms of accuracy and time, many automated screening systems based on image processing have been developed. However, the accuracy and consistency of these systems are largely unaddressed, and manual screening remains the preferred option. The main contribution of this paper is to analyse the accuracy and consistency of microaneurysm (MA) detection via image processing, focusing on Otsu's multi-level thresholding as it has been shown to work very well in many applications. The analysis was based on Monte Carlo statistical analysis using synthetic retinal images under variation of all DR stages and of retinal and image parameters: the intensity difference between MAs and blood vessels (BVs), MA size, and measurement noise. Then, the conditions, in terms of obtainable retinal and image parameters, that guarantee accurate and consistent MA detection via image processing were extracted. Finally, the validity of these conditions was verified using real retinal images. The results showed that MA detection via image processing is guaranteed to be accurate and consistent when the intensity difference between MAs and BVs is at least 50% and the sizes of MAs are from 5 to 20 pixels, depending on the measurement noise. These conditions serve as an important guideline for MA detection in DR.
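Otsu's multi-level thresholding, whose accuracy and consistency are analysed above, is available in scikit-image. The sketch below applies it to the green channel of a fundus image; the file name and the number of classes are assumptions.

```python
# Multi-level Otsu thresholding of a fundus image's green channel, producing
# a three-class label map. Parameter choices are illustrative assumptions.
import numpy as np
from skimage import io
from skimage.filters import threshold_multiotsu

fundus = io.imread("retina.png")                    # hypothetical fundus image
green = fundus[..., 1]                               # green channel: best vessel/MA contrast
thresholds = threshold_multiotsu(green, classes=3)  # two thresholds -> three classes
regions = np.digitize(green, bins=thresholds)       # per-pixel class labels 0, 1, 2
```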