The application of classification and prediction algorithms in healthcare can facilitate the detection of specific vital signs, which can be used to treat or prevent disease. In this study, a new framework based on deep learning architectures is introduced for human activity recognition. Our proposed framework uses biosensors, electrocardiography (ECG) sensors, inertial sensors, and small single-board computers to collect and analyse sensory information. ECG and inertial sensor data are converted into images using novel preprocessing techniques. We use convolutional neural networks (CNN) with sensor fusion, random forest, and long short-term memory (LSTM) combined with gated recurrent units (GRU). The proposed approaches are evaluated against well-known models, namely transfer learning with MobileNet, a combined CNN+MobileNet model, and a support vector machine, in terms of accuracy. Moreover, the effect of the Null class, which is commonly present in popular health-related datasets, is also investigated. The results show that LSTM with GRU, random forest, and CNN with sensor fusion achieved the highest accuracies of 99%, 98%, and 98%, respectively. Since edge computing on sensors with relatively limited processing power and storage capacity has recently become quite common, a comparison is also provided to demonstrate the efficiency and edge computability of the architectures proposed in this study. The number of parameters, model sizes, and training and testing times are evaluated and compared. The random forest algorithm achieves the best training and testing times, while the LSTM-GRU model has the smallest model size.