IEEE Access, vol. 14, pp. 309-325, 2026 (SCI-Expanded, Scopus)
The use of machine learning-assisted optimization methods in antenna design has been increasing. Although neural networks (NNs) and Gaussian process regression (GPR) are widely used, their scalability to higher dimensions poses several challenges, such as the need for excessive data, extensive hyper-parameter tuning, and long training times. In contrast, gradient-boosted decision trees (GBDTs) exhibit superior performance with limited training data, along with faster training and more efficient hyper-parameter tuning. Motivated by these advantages, we introduce a GBDT-assisted optimization (GBDTO) algorithm tailored for high-dimensional problems. Beginning with an initial sample set, GBDTO builds a GBDT model and sequentially samples the input parameter space while searching for an optimal objective value. Compared to Bayesian optimization (BO) and NN-assisted optimization (ONN), GBDTO achieves faster convergence and superior objective values, as demonstrated through benchmarks using the Black-Box Optimization Benchmarking test suite and several antenna designs. Numerical experiments across 480 instances of 24 twelve-dimensional functions demonstrate 13% and 31% improvements in mean rank count over BO and ONN, respectively. Moreover, high-dimensional antenna design examples indicate more than 50% faster convergence for a given optimization target and a 7–54% improvement in the objective value for a fixed number of iterations, compared to BO and ONN. In addition to its optimization efficiency, GBDTO offers inherent and efficient feature importance analysis, which is extremely useful for user guidance.
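The surrogate-assisted loop described in the abstract (initial sample set, GBDT model, sequential sampling toward an optimum) can be sketched as follows. This is a minimal illustration of the general GBDT-surrogate idea, not the authors' GBDTO algorithm: the toy sphere objective, the candidate-sampling strategy, and all hyper-parameter settings are assumptions for demonstration only.

```python
# Minimal sketch of a GBDT-surrogate optimization loop (illustrative only;
# the objective, sampling scheme, and settings are hypothetical choices,
# not the GBDTO algorithm from the paper).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
dim = 12                      # a 12-dimensional design space, as in the benchmarks
lo, hi = -5.0, 5.0

def objective(x):
    # Toy stand-in for an expensive antenna simulation: the sphere function.
    return float(np.sum(x ** 2))

# 1) Evaluate an initial sample set.
X = rng.uniform(lo, hi, size=(20, dim))
y = np.array([objective(x) for x in X])

for _ in range(30):
    # 2) Fit a GBDT surrogate to all points evaluated so far.
    model = GradientBoostingRegressor(n_estimators=100).fit(X, y)
    # 3) Propose random candidates and keep the surrogate's best prediction.
    cand = rng.uniform(lo, hi, size=(500, dim))
    best = cand[np.argmin(model.predict(cand))]
    # 4) Evaluate the true objective there and augment the training set.
    X = np.vstack([X, best])
    y = np.append(y, objective(best))

print(min(y))  # best objective value found so far
# model.feature_importances_ provides the built-in feature importance
# analysis mentioned in the abstract, at no extra modeling cost.
```

Note that, unlike GPR-based Bayesian optimization, the plain GBDT surrogate above gives no predictive uncertainty; the random candidate pool is a crude substitute for a principled acquisition function and is used here only to keep the sketch short.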