Winter Simulation Conference (WSC), Nevada, United States of America, 3-6 December 2017, pp. 1300-1311
There are often concerns about the reliability of simulation results due to improper design of experiments, limited support for the execution and analysis of experiments, and the lack of integrated computational frameworks for model learning through simulation experiments. Such issues result in flawed analysis as well as misdirected human and computational effort. We put forward a methodological basis that aims to (1) explore the utility of viewing models as adaptive agents that mediate among domain theories, data, requirements, principles, and analogies; (2) underline the role of cognitive assistance in model discovery, experimentation, and evidence evaluation, so as to differentiate between competing models and to attain a balance between model exploration and exploitation; and (3) examine strategies for the explanatory justification of model assumptions via cognitive models that explicate coherence judgments.
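As a concrete illustration of the exploration/exploitation balance mentioned in aim (2), the sketch below uses a UCB1-style bandit to allocate a fixed budget of simulation experiments among competing candidate models. This is an illustrative assumption, not the paper's prescribed method: the number of candidate models, the evidence_score() stub, and its hidden quality values are hypothetical placeholders for running an experiment with a model and scoring the resulting evidence.

```python
# Illustrative sketch only: balancing exploration of alternative candidate
# models with exploitation of the model whose experiments have produced the
# strongest evidence so far, using the UCB1 selection rule.

import math
import random


def evidence_score(model_id: int) -> float:
    """Hypothetical stand-in for running one simulation experiment with a
    candidate model and scoring the resulting evidence (higher is better)."""
    hidden_quality = [0.3, 0.5, 0.7][model_id]   # assumed ground truth
    return random.gauss(hidden_quality, 0.1)     # noisy observation


def select_model(counts, means, t):
    """UCB1 rule: pick the model with the best optimism-adjusted mean."""
    return max(range(len(counts)),
               key=lambda m: means[m] + math.sqrt(2 * math.log(t) / counts[m]))


def run(budget: int = 300, n_models: int = 3):
    counts = [0] * n_models    # experiments run per candidate model
    means = [0.0] * n_models   # running mean evidence per candidate model

    # Explore: try each candidate model once before applying the UCB rule.
    for m in range(n_models):
        counts[m], means[m] = 1, evidence_score(m)

    # Thereafter, each step trades off exploring under-tested models
    # against exploiting the currently best-supported one.
    for t in range(n_models + 1, budget + 1):
        m = select_model(counts, means, t)
        r = evidence_score(m)
        counts[m] += 1
        means[m] += (r - means[m]) / counts[m]   # incremental mean update

    return counts, means


if __name__ == "__main__":
    counts, means = run()
    for m, (c, mu) in enumerate(zip(counts, means)):
        print(f"model {m}: {c:3d} experiments, mean evidence {mu:.3f}")
```

Under these assumptions, most of the experiment budget ends up on the best-supported candidate model while weaker alternatives still receive occasional exploratory runs, which is the kind of allocation behavior the abstract's second aim refers to.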