A linear approximation for training Recurrent Random Neural Networks


Halici U., Karaoz E.

13th International Symposium on Computer and Information Sciences (ISCIS 98), Belek, Antalya, Türkiye, 26 - 28 October 1998, vol. 53, pp. 149-156

  • Publication Type: Conference Paper / Full-Text Paper
  • Volume: 53
  • City of Publication: Belek, Antalya
  • Country of Publication: Türkiye
  • Pages: pp. 149-156
  • Middle East Technical University Affiliated: Yes

Abstract

In this paper, a linear approximation is proposed for Gelenbe's learning algorithm, which was developed for training Recurrent Random Neural Networks (RRNN). Gelenbe's learning algorithm performs gradient descent on a quadratic error function, and its main computational effort lies in obtaining the inverse of an n-by-n matrix. Here, this inverse is approximated by a linear term, and the efficiency of the approximated algorithm is examined when the RRNN is trained as an autoassociative memory.
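To illustrate the kind of approximation the abstract describes, the following is a minimal sketch, not the paper's actual derivation: a standard way to replace an n-by-n matrix inverse with a linear term is a first-order Neumann series truncation, (I - W)^{-1} ≈ I + W, valid when the spectral radius of W is small. The matrix name W, its scale, and the Neumann truncation itself are assumptions for illustration; the specific matrix in Gelenbe's algorithm comes from the network's stationary equations.

```python
import numpy as np

def approx_inverse(W: np.ndarray) -> np.ndarray:
    """Linear (first-order Neumann) approximation of (I - W)^{-1}.

    Exact expansion: (I - W)^{-1} = I + W + W^2 + ...  (spectral radius < 1).
    Truncating after the linear term avoids the O(n^3) inversion entirely.
    """
    n = W.shape[0]
    return np.eye(n) + W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5
    # Small positive entries keep the spectral radius well below 1,
    # so the linear truncation stays close to the exact inverse.
    W = 0.05 * rng.random((n, n))
    exact = np.linalg.inv(np.eye(n) - W)
    approx = approx_inverse(W)
    err = np.max(np.abs(exact - approx))
    print(f"max abs error of linear approximation: {err:.2e}")
```

The trade-off sketched here mirrors the one the paper studies: the truncation replaces an O(n^3) inversion with an O(n^2) matrix addition, at the cost of an error that grows with the magnitude of W's entries.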