Generating in-between images from multiple views of a scene is a crucial task in both computer vision and computer graphics. Photorealistic rendering, 3DTV, and robot navigation are among the many applications that benefit from arbitrary view synthesis, provided it is achieved in real time. GPUs attain high computational throughput by processing arrays of data in parallel, which makes them well suited to real-time computer vision applications. This paper proposes an arbitrary view rendering algorithm that uses two high-resolution color cameras together with a single low-resolution time-of-flight depth camera, and exploits GPUs to reach real-time processing rates. The presented ideas are examined in an experimental framework, and the results indicate that the content-production and display stages of a free-viewpoint system can be realized in real time using only low-cost commodity computing devices.