ChartFormer: A Large Vision Language Model for Converting Chart Images into Tactile Accessible SVGs


Moured O., Alzalabny S., Osman A., Schwarz T., Müller K., Stiefelhagen R.

19th International Conference on Computers Helping People with Special Needs, ICCHP 2024, Linz, Austria, 8 - 12 July 2024, vol. 14750 LNCS, pp. 299-305, (Full Text Paper)

  • Publication Type: Conference Paper / Full Text Paper
  • Volume: 14750 LNCS
  • DOI: 10.1007/978-3-031-62846-7_36
  • City of Publication: Linz
  • Country of Publication: Austria
  • Pages: pp. 299-305
  • Keywords: Chart Analysis, Tactile Charts, Vision-Language Models
  • Middle East Technical University Affiliated: Yes

Abstract

Visualizations, such as charts, are crucial for interpreting complex data. However, they are often provided as raster images, which are not compatible with assistive technologies for people with blindness and visual impairments, such as embossed papers or tactile displays. At the same time, creating accessible vector graphics requires a skilled sighted person and is time-intensive. In this work, we leverage advancements in the field of chart analysis to generate tactile charts in an end-to-end manner. Our three key contributions are as follows: (1) introducing the ChartFormer model, trained to convert raster chart images into tactile-accessible SVGs, (2) training this model on Chart2Tactile, a synthetic chart dataset we created following accessibility standards, and (3) evaluating the effectiveness of our SVGs through a pilot user study with a refreshable two-dimensional tactile display. Our work is publicly available at https://github.com/nsothman/ChartFormer.