Researchers are occasionally required to work with small-sample-size (SSS) data. SSS data tends to undertrain a machine learning algorithm, leaving it with too few instances to generalize from. In extreme SSS cases, a low instance-to-feature ratio, that is, an excessively large number of features relative to the number of instances, causes the classification algorithm to overfit. This research proposed two approaches aimed at SSS classification problems: the Random Subspace Oracle (RSO), a hybridization of the Random Subspace Method (RSM) and Random Linear Oracle (RLO) ensembles, and DropELE, a novel implementation of the dropout approach within an extreme learning machine ensemble. According to the non-parametric Wilcoxon signed-ranks test at a significance level of 0.05, the experimental findings showed that the RSO ensemble outperformed a single decision tree and a linear discriminant classifier. Against other single classifiers, however, the RSO ensemble's performance was not optimal. A subsequent experiment compared RSO with established ensemble methods; the results showed that, with decision trees as the base classifiers, the RSO model performs comparably to the other ensemble approaches.

The DropELE algorithm was proposed to prevent the overfitting frequently encountered in SSS classification problems by reducing the complexity of each base classifier in the ensemble. The research showed that a higher dropout ratio and a properly defined instance-to-feature ratio can greatly improve the performance of the proposed algorithm: experimentally, testing accuracy increased significantly, from 88.9% to 98%, as the dropout ratio was raised from 0.1 to 0.9. The number of hidden neurons and the ensemble size also have a significant impact.
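As a rough illustration of the dropout-with-ELM-ensemble idea, the sketch below trains an ensemble of minimal extreme learning machines in which each base learner sees only a random subset of the input features, with the fraction removed controlled by the dropout ratio. This is an assumption-laden sketch, not the authors' implementation: the class names, the tanh activation, the feature-level interpretation of dropout, and the soft-vote combination are all illustrative choices.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer, least-squares output weights."""

    def __init__(self, n_hidden, rng):
        self.n_hidden = n_hidden
        self.rng = rng

    def fit(self, X, y):
        # Input weights and biases stay random and untrained.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # Output weights solved in closed form via the pseudoinverse.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def decision(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta


class DropELESketch:
    """Illustrative ensemble of ELMs, each trained on a dropout-masked feature subset."""

    def __init__(self, n_estimators=15, n_hidden=20, dropout=0.3, seed=0):
        self.n_estimators = n_estimators
        self.n_hidden = n_hidden
        self.dropout = dropout  # fraction of features dropped per base learner
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Binary sketch: y is expected in {-1, +1}.
        n_features = X.shape[1]
        n_keep = max(1, round((1.0 - self.dropout) * n_features))
        self.members = []
        for _ in range(self.n_estimators):
            mask = self.rng.choice(n_features, size=n_keep, replace=False)
            elm = ELM(self.n_hidden, self.rng).fit(X[:, mask], y)
            self.members.append((mask, elm))
        return self

    def predict(self, X):
        # Soft vote: average the real-valued outputs, then take the sign.
        scores = sum(elm.decision(X[:, mask]) for mask, elm in self.members)
        return np.sign(scores)


# Toy usage on a linearly separable binary problem (illustrative data only).
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 10))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
model = DropELESketch().fit(X, y)
acc = float(np.mean(model.predict(X) == y))
```

Because each base ELM loses a random slice of the feature space, individual members are deliberately weakened, which is the complexity-reduction mechanism the text attributes to DropELE; the ensemble vote then recovers accuracy.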
In this study, the DropELE algorithm showed that increasing the number of hidden neurons improves classification performance up to a threshold; beyond it, classification accuracy begins to decline. A similar observation holds for the ensemble size: performance can be enhanced by adding base classifiers to the ensemble pool, although the improvement gradually diminishes as more are added. Four real-world medical datasets were used to assess the performance of the two proposed algorithms. The DropELE method showed competitive classification accuracy, performing well on two of the datasets and demonstrating comparable or superior diversity to AdaBoost on the other two. The RSO model, while not the top performer, delivered satisfactory results compared to AdaBoost and generally surpassed the RSM, RLO, and Bagging ensembles.
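The RSO construction discussed above can likewise be sketched: each ensemble member draws a random feature subspace (the RSM part) and splits the training data with a random hyperplane, the perpendicular bisector of two randomly chosen points (the RLO part), training one sub-classifier per side. A nearest-centroid classifier stands in here for the decision-tree base learner used in the research; all names, parameters, and the binary-only majority vote are illustrative assumptions.

```python
import numpy as np

class NearestCentroid:
    """Simple stand-in base classifier (the research uses decision trees)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.classes])
        return self

    def predict(self, X):
        d = ((X[:, None, :] - self.centroids[None, :, :]) ** 2).sum(axis=2)
        return self.classes[np.argmin(d, axis=1)]


class RSOSketch:
    """Illustrative Random Subspace Oracle: RSM subspaces plus a random linear oracle."""

    def __init__(self, n_estimators=15, subspace_ratio=0.5, seed=0):
        self.n_estimators = n_estimators
        self.subspace_ratio = subspace_ratio
        self.rng = np.random.default_rng(seed)

    def _fit_member(self, X, y):
        n_features = X.shape[1]
        k = max(1, round(self.subspace_ratio * n_features))
        mask = self.rng.choice(n_features, size=k, replace=False)
        Xs = X[:, mask]
        # Random linear oracle: perpendicular bisector of two random points.
        i, j = self.rng.choice(len(Xs), size=2, replace=False)
        w = Xs[i] - Xs[j]
        t = w @ (Xs[i] + Xs[j]) / 2.0
        side = Xs @ w > t
        subs = []
        for s in (side, ~side):
            if not s.any():  # degenerate split: fall back to all data
                s = np.ones(len(Xs), dtype=bool)
            subs.append(NearestCentroid().fit(Xs[s], y[s]))
        return mask, w, t, subs

    def fit(self, X, y):
        self.members = [self._fit_member(X, y) for _ in range(self.n_estimators)]
        return self

    def predict(self, X):
        votes = np.zeros((self.n_estimators, len(X)), dtype=int)
        for m, (mask, w, t, subs) in enumerate(self.members):
            Xs = X[:, mask]
            side = Xs @ w > t  # the oracle routes each point to a sub-classifier
            pred = np.empty(len(Xs), dtype=int)
            pred[side] = subs[0].predict(Xs[side])
            pred[~side] = subs[1].predict(Xs[~side])
            votes[m] = pred
        # Majority vote (binary 0/1 labels in this sketch).
        return (votes.mean(axis=0) > 0.5).astype(int)


# Toy usage on a linearly separable binary problem (illustrative data only).
rng = np.random.default_rng(2)
X = rng.normal(size=(120, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
rso = RSOSketch().fit(X, y)
acc = float((rso.predict(X) == y).mean())
```

The oracle's role is to partition each subspace so that every member trains two locally specialized sub-classifiers, which is the source of the extra diversity that the oracle-based ensembles provide over plain RSM.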