Neural Computing and Applications, 2024 (SCI-Expanded)
Neural networks (NNs) have proven to be useful surrogate models for the design and optimization of high-frequency structures, including antennas. However, black-box NNs are known to suffer from scalability and accuracy problems as the dimension of the problem increases. This study proposes knowledge-based regularization methods, referred to as derivative, spectral, and magnitude regularization, to address these issues. The proposed methods exploit the functional properties of S-parameters to improve accuracy and prevent unphysical predictions. The NNs are trained and tested on a data set of 5000 samples generated by Latin Hypercube Sampling and simulated in Ansys HFSS. Goodness of fit is evaluated using the Relative Squared Error (RSE). Derivative and spectral regularization reduce the RSE loss from 0.052 to 0.046 and 0.043, respectively. When combined with magnitude regularization, reductions of up to 17% in loss and 88% in passivity violations are achieved, at the expense of a 37% increase in training time. Moreover, 25% less data is required to maintain a loss similar to that of the reference NN.
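The abstract does not give the exact formulations; as a rough illustration, the sketch below shows one plausible way to compute an RSE metric and a passivity-style magnitude penalty (one that punishes predicted |S| values above unity) in NumPy. The function names and the quadratic hinge form of the penalty are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def relative_squared_error(y_true, y_pred):
    # RSE: squared error normalized by the variance of the target
    # (a common definition; the paper's exact form may differ).
    num = np.sum((y_true - y_pred) ** 2)
    den = np.sum((y_true - np.mean(y_true)) ** 2)
    return num / den

def magnitude_penalty(s_mag):
    # Hypothetical magnitude-regularization term: a quadratic hinge on
    # |S| > 1, i.e. on passivity-violating (unphysical) predictions.
    return np.sum(np.maximum(s_mag - 1.0, 0.0) ** 2)

# Toy frequency sweep with a slightly biased surrogate prediction.
f = np.linspace(1.0, 10.0, 100)          # GHz, illustrative
s_true = 0.8 * np.abs(np.sin(f))         # "true" |S21|, always <= 0.8
s_pred = s_true + 0.25                   # biased prediction, may exceed 1

rse = relative_squared_error(s_true, s_pred)
pen = magnitude_penalty(s_pred)
total_loss = rse + 0.1 * pen             # penalty weight is illustrative
```

In a knowledge-based training setup, a term like `magnitude_penalty` would be added to the data-fit loss so the optimizer is steered away from non-passive responses, rather than filtering them out after training.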