Ensemble learning is a multiple-classifier machine learning approach that combines collections of statistical classifiers to build a classifier more accurate than any of its individual members. Bagging, boosting, and voting are the basic examples of ensemble learning. In this study, a novel boosting technique is proposed that targets some of the shortcomings of AdaBoost, a well-known boosting algorithm. The proposed system offers an elegant way of boosting a set of classifiers successively to form a classifier that is better than each of the ensembled classifiers. The AdaBoost algorithm employs a greedy search over the hypothesis space to find a "good" suboptimal solution. The proposed system, on the other hand, replaces this greedy search with an evolutionary search based on genetic algorithms. Empirical results show that classification with boosted evolutionary computing outperforms classical AdaBoost in equivalent experimental environments.
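The contrast drawn above, between AdaBoost's greedy stage-wise selection of weak learners and a population-based evolutionary search, can be illustrated with a minimal sketch. The toy data, the stump pool, and all genetic-algorithm parameters (`pop_size`, `gens`, mutation scale) below are illustrative assumptions, not the paper's actual method: a genetic algorithm evolves weight vectors over a fixed pool of threshold stumps, with training accuracy of the weighted vote as the fitness.

```python
import random

# Toy 1-D dataset with alternating class regions, chosen so that no
# single threshold stump is perfect but a weighted stump ensemble can be.
X = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
y = [1, 1, -1, -1, 1, 1, -1, -1]

# Pool of weak hypotheses: h_t(x) = +1 if x > t else -1
THRESHOLDS = [0.25, 0.5, 0.75]

def stump(t):
    return lambda x: 1 if x > t else -1

POOL = [stump(t) for t in THRESHOLDS]

def vote(weights, x):
    """Weighted-majority vote of the whole stump pool."""
    score = sum(w * h(x) for w, h in zip(weights, POOL))
    return 1 if score >= 0 else -1

def accuracy(weights):
    return sum(vote(weights, xi) == yi for xi, yi in zip(X, y)) / len(X)

def ga_search(pop_size=30, gens=40, seed=0):
    """Evolve ensemble weight vectors; fitness = training accuracy.

    This stands in for AdaBoost's greedy stage-wise pick of one weak
    learner per round: instead, a whole population of candidate
    ensemble weightings is searched globally.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in POOL] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=accuracy, reverse=True)
        elite = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(POOL))         # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(len(POOL))] += rng.gauss(0, 0.3)  # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=accuracy)

best = ga_search()
print("GA ensemble training accuracy:", accuracy(best))
```

On this toy set the best single stump attains only 0.75 accuracy, while a suitable weight vector (e.g. `(-1, 1, -1)`) classifies every point correctly; reaching such combinations without a stage-wise greedy commitment is the kind of advantage the evolutionary search is meant to exploit.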