Web pages are typically decorated with various kinds of visual elements that help sighted people complete their tasks. Unfortunately, these elements do not help people who access web pages in constrained environments, such as visually disabled users or users of small-screen devices. In our previous work, we showed that tracking the eye movements of sighted users provides a good understanding of how people use these visual elements. We also showed that the experience of people in constrained environments can be improved by reengineering web pages based on these elements. However, in order to reengineer web pages based on eyetracking, we first need to aggregate, analyse and understand how the eyetracking data of a group of users can be combined to create a common scanpath (that is, a common eye movement sequence) in terms of visual elements. This paper presents an algorithm that aims to achieve this. The algorithm was developed iteratively and evaluated experimentally with an eyetracking study. The study shows that the proposed algorithm is able to identify patterns in eyetracking scanpaths and that it is fairly scalable. It also shows that the algorithm can be improved by considering different techniques for pre-processing the data, by addressing the drawbacks of the hierarchical structure, and by taking into account the underlying cognitive processes.
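To make the idea of deriving a common scanpath concrete, the sketch below shows one widely used approach from scanpath analysis: representing each user's scanpath as a sequence of visual-element (area-of-interest) labels and hierarchically reducing the group to a shared sequence via pairwise longest common subsequence. This is an illustrative sketch under those assumptions, not necessarily the algorithm proposed in this paper; the element labels are hypothetical.

```python
from functools import reduce

def lcs(a, b):
    """Longest common subsequence of two scanpaths (lists of AoI labels)."""
    m, n = len(a), len(b)
    # dp[i][j] holds an LCS of a[:i] and b[:j]
    dp = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + [a[i - 1]]
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1], key=len)
    return dp[m][n]

def common_scanpath(scanpaths):
    """Hierarchically reduce a group of scanpaths to one common scanpath
    by folding the pairwise LCS over the group."""
    return reduce(lcs, scanpaths)

# Three users' fixation sequences over visual elements (hypothetical labels)
paths = [
    ["Header", "Menu", "Content", "Footer"],
    ["Header", "Menu", "Advert", "Content", "Footer"],
    ["Menu", "Content", "Footer"],
]
print(common_scanpath(paths))  # ['Menu', 'Content', 'Footer']
```

Note that this pairwise reduction is order-dependent and can discard elements shared by most, but not all, users; this is one of the drawbacks of a hierarchical structure mentioned in the abstract.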