Transparency in AI


AI and Society, 2023 (ESCI)

  • Publication Type: Article
  • Publication Date: 2023
  • DOI Number: 10.1007/s00146-023-01786-y
  • Journal Name: AI and Society
  • Journal Indexes: Emerging Sources Citation Index (ESCI), Scopus, Academic Search Premier, PASCAL, Applied Science & Technology Source, Biotechnology Research Abstracts, Compendex, Computer & Applied Sciences, INSPEC, Library, Information Science & Technology Abstracts (LISTA)
  • Keywords: Black box approach, Distributed compositionality, Explainable AI, Neural semantic parsers
  • Middle East Technical University Affiliated: Yes


A central challenge in contemporary artificial intelligence is to make complex connectionist systems, which comprise millions of parameters, more comprehensible, justifiable, and rationally grounded. Two prevailing methodologies address this complexity. The first combines symbolic methods with connectionist ones, yielding a hybrid system that organizes the vast parameter space within a small set of formal, symbolic rules. The second remains purely connectionist: instead of making the system internally transparent, it constructs an external, transparent proxy system whose task is to explain the main system's decisions by approximating its outcomes. Taking natural language processing as our analytical lens, this paper examines both methodologies: the hybrid approach is exemplified by compositional vector semantics, while the purely connectionist approach is developed from neural semantic parsers. We argue for the purely connectionist approach on two grounds: its inherent flexibility, and the pivotal separation it enforces between the explanatory apparatus and the operational core, which makes such artificial intelligence systems more reminiscent of human cognition.
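The hybrid route can be illustrated with a toy sketch of compositional vector semantics in the adjective-as-matrix style: nouns are vectors, an adjective is a matrix, and phrase meaning is built by matrix-vector multiplication. All numbers below are illustrative stand-ins, not trained embeddings, and the word choices are hypothetical.

```python
import math

# Toy compositional vector semantics (illustrative values only,
# not trained embeddings). Nouns are vectors; an adjective is a
# matrix mapping a noun vector to the adjective-noun phrase vector.

dog = [0.9, 0.1, 0.3]
cat = [0.8, 0.2, 0.4]

# Hypothetical matrix for the adjective "black".
black = [
    [1.0, 0.0, 0.2],
    [0.1, 0.9, 0.0],
    [0.0, 0.3, 1.1],
]

def compose(adj, noun):
    """Adjective-noun composition as matrix-vector multiplication."""
    return [sum(a * n for a, n in zip(row, noun)) for row in adj]

def cosine(u, v):
    """Cosine similarity between two phrase vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

black_dog = compose(black, dog)
black_cat = compose(black, cat)
print(round(cosine(black_dog, black_cat), 3))
```

The transparency of the hybrid approach lives in exactly this structure: each composition step is a formal rule one can inspect, rather than a pattern buried in millions of weights.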
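The purely connectionist route, in which a transparent proxy approximates an opaque system's decisions, can be sketched in miniature: probe a black-box classifier with sampled inputs, then fit a simple, human-readable rule to its outputs. The "black box" here is a hypothetical stand-in function, not a real neural network, and the one-threshold surrogate is a deliberate simplification.

```python
import random

# Proxy-system sketch: an opaque classifier is approximated by a
# transparent surrogate whose behavior is a human-readable rule.
# black_box is a hypothetical stand-in, not a trained network.

def black_box(x):
    """Opaque model: we pretend its internals are inaccessible and
    only observe its input-output behavior."""
    return int(0.7 * x[0] + 0.3 * x[1] > 0.5)

# Probe the black box with sampled inputs.
random.seed(0)
samples = [(random.random(), random.random()) for _ in range(500)]
labels = [black_box(x) for x in samples]

def fit_threshold(samples, labels):
    """Fit a transparent surrogate: a single threshold on x[0],
    chosen to agree with the black box as often as possible."""
    best_t, best_acc = 0.0, 0.0
    for i in range(101):
        t = i / 100
        acc = sum(int(x[0] > t) == y
                  for x, y in zip(samples, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

threshold, fidelity = fit_threshold(samples, labels)
print(f"surrogate rule: predict 1 iff x0 > {threshold:.2f} "
      f"(fidelity {fidelity:.0%})")
```

The surrogate is a separate system from the one doing the work, which is the paper's pivotal separation: the explanatory apparatus only approximates the operational core, so its fidelity is always less than perfect, much as verbal self-explanations approximate human cognition.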