Amiza Amir
Preferred name: Amiza Amir
Official Name: Amiza, Amir
Main Affiliation:
Scopus Author ID: 36170326400
Researcher ID: EKV-8568-2022
Now showing 1 - 3 of 3
Publication: Image classification for snake species using machine learning techniques (2017-01-01)
This paper investigates the accuracy of five state-of-the-art machine learning techniques for the image-based snake species identification problem: decision tree J48, nearest neighbor, k-nearest neighbors (k-NN), backpropagation neural network, and naive Bayes. Conventionally, snake species identification is conducted manually by observing characteristics such as head shape, body pattern, body color, and eye shape. Images of 22 snake species found in Malaysia were collected into a database, the Snakes of Perlis Corpus. An intelligent approach is then proposed to identify a snake species automatically from a given image, which is useful for content retrieval purposes: a species can be predicted whenever a snake image is supplied as input. Our experiments show that the backpropagation neural network and nearest neighbor are highly accurate on this problem, achieving greater than 87% accuracy with the CEDD descriptor.
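The nearest-neighbor classification described in this abstract can be sketched in a few lines: descriptors extracted from images (such as CEDD vectors) are compared by distance, and the majority label among the closest training examples is returned. The descriptor values and species labels below are purely illustrative, not drawn from the Snakes of Perlis Corpus:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=3):
    """Predict a label by majority vote among the k nearest training vectors.

    `train` is a list of (descriptor_vector, species_label) pairs.
    """
    neighbours = sorted(train, key=lambda item: euclidean(item[0], query))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Toy descriptors standing in for CEDD feature vectors (hypothetical values).
train = [
    ([0.9, 0.1, 0.0], "Python reticulatus"),
    ([0.8, 0.2, 0.1], "Python reticulatus"),
    ([0.1, 0.9, 0.7], "Naja kaouthia"),
    ([0.2, 0.8, 0.9], "Naja kaouthia"),
]
prediction = knn_predict(train, [0.85, 0.15, 0.05], k=3)  # -> "Python reticulatus"
```

With k=1 this reduces to the plain nearest-neighbor classifier the paper also evaluates; larger k trades sensitivity to noise against blurring of class boundaries.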
Publication: Evaluating Tree-based Ensemble Strategies for Imbalanced Network Attack Classification (2024-01-01)
Soon H.F.; Nishizaki H.
With the continual evolution of cybersecurity threats, the development of effective intrusion detection systems is increasingly crucial and challenging. This study tackles these challenges by exploring imbalanced multiclass classification, a common situation in network intrusion datasets mirroring real-world scenarios. The paper aims to empirically assess the performance of diverse classification algorithms in managing imbalanced class distributions. Experiments were conducted using the UNSW-NB15 network intrusion detection benchmark dataset, comprising ten highly imbalanced classes. The evaluation includes basic, traditional algorithms, such as the Decision Tree, K-Nearest Neighbor, and Gaussian Naive Bayes, as well as advanced ensemble methods such as Gradient Boosted Decision Trees (GraBoost) and AdaBoost. Our findings reveal that the Decision Tree surpassed the Multi-Layer Perceptron, K-Nearest Neighbor, and Naive Bayes in overall F1-score. Furthermore, thorough evaluations of nine tree-based ensemble algorithms were performed, showcasing their varying efficacy. Bagging, Random Forest, ExtraTrees, and XGBoost achieved the highest F1-scores. In the individual class analysis, however, XGBoost demonstrated exceptional performance relative to the other algorithms, achieving the highest F1-scores in eight of the ten classes in the dataset. These results establish XGBoost as a predominant method for handling imbalanced multiclass classification, with Bagging being the closest feasible alternative, as it achieves accuracy and F1-scores nearly identical to XGBoost's.
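The per-class F1 evaluation this study relies on can be sketched directly: each class is scored one-vs-rest from true-positive, false-positive, and false-negative counts, which is what makes rare classes visible despite a dominant majority class. The labels and counts below are illustrative stand-ins, not UNSW-NB15 data:

```python
def f1_per_class(y_true, y_pred):
    """Per-class F1 from true and predicted labels (one-vs-rest counts)."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
    return scores

# A skewed toy distribution: "Normal" dominates and "Worms" is rare,
# mimicking the kind of imbalance found in intrusion datasets.
y_true = ["Normal"] * 8 + ["Worms"] * 2
y_pred = ["Normal"] * 8 + ["Normal", "Worms"]
scores = f1_per_class(y_true, y_pred)
```

Note how a single missed "Worms" instance drags that class's F1 down to 2/3 while overall accuracy stays at 90%, which is why the paper reports per-class F1 rather than accuracy alone.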
Publication: Analysis of the effectiveness of Metaheuristic methods on Bayesian optimization in the classification of visual field defects (MDPI, 2023)
Masyitah Abu; Fumiyo Fukumoto; Yoshimi Suzuki; Azhany Yaakub
Bayesian optimization (BO) is commonly used to optimize the hyperparameters of transfer learning models and can improve model performance significantly. In BO, acquisition functions direct the exploration of the hyperparameter space during optimization. However, the computational cost of evaluating the acquisition function and updating the surrogate model can become prohibitively expensive as dimensionality increases, making the global optimum harder to reach, particularly in image classification tasks. This study therefore investigates and analyses the effect of incorporating metaheuristic methods into BO to improve the performance of acquisition functions in transfer learning. By incorporating four metaheuristic methods, namely Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC) Optimization, Harris Hawks Optimization, and Sailfish Optimization (SFO), the performance of the Expected Improvement (EI) acquisition function was observed in VGGNet models for visual field defect multi-class classification. Beyond EI, comparative observations were also conducted using other acquisition functions, such as Probability of Improvement (PI), Upper Confidence Bound (UCB), and Lower Confidence Bound (LCB). The analysis demonstrates that SFO significantly enhanced BO, increasing mean accuracy by 9.6% for VGG-16 and 27.54% for VGG-19. As a result, the best validation accuracies obtained for VGG-16 and VGG-19 were 98.6% and 98.34%, respectively.
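As a rough illustration of how a population-based metaheuristic such as PSO can search an acquisition surface in place of exhaustive evaluation, here is a minimal one-dimensional PSO sketch. The quadratic objective is a hypothetical stand-in for a real acquisition function (in actual BO it would be evaluated on the surrogate model), and all coefficients are illustrative defaults, not the paper's settings:

```python
import random

def pso_maximize(f, bounds, n_particles=20, iters=60, seed=0):
    """Minimal 1-D Particle Swarm Optimization sketch (maximization).

    `f` stands in for an acquisition function such as Expected Improvement.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best = pos[:]                 # each particle's best-seen position
    gbest = max(pos, key=f)       # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # Inertia plus attraction toward personal and global bests.
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (best[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if f(pos[i]) > f(best[i]):
                best[i] = pos[i]
            if f(pos[i]) > f(gbest):
                gbest = pos[i]
    return gbest

# Toy acquisition surface with a single peak at x = 2 (hypothetical).
peak = pso_maximize(lambda x: -(x - 2.0) ** 2, (-5.0, 5.0))
```

The swarm converges on the peak without any gradient information, which is the property that makes such methods attractive when the acquisition landscape is expensive or non-smooth; SFO, ABC, and Harris Hawks Optimization follow the same population-search pattern with different update rules.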