Federated learning (FL) is a communication-efficient and privacy-preserving technique for the collaborative training of machine learning models on vast amounts of data produced and stored locally by distributed users. This paper investigates unbiased FL methods that achieve convergence comparable to state-of-the-art methods under various constraints, such as an error-prone channel or intermittent energy availability. To this end, we propose FL algorithms that jointly design unbiased user scheduling and gradient weighting according to each user's distinct energy and channel profile. In addition, we exploit the age of information (AoI), a prevalent metric that quantifies the staleness of gradient updates at the parameter server, together with adaptive momentum attenuation, to improve accuracy and accelerate convergence under non-homogeneous data distributions across participating users. We study the effect of AoI and momentum on fair FL with heterogeneous users on various datasets and demonstrate the performance through experiments in several settings.
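The core idea of unbiased scheduling with gradient weighting can be illustrated by a minimal sketch. Here each user is scheduled independently with some probability, and a scheduled user's gradient is inverse-probability weighted so that the aggregate is an unbiased estimate of the full-participation average. The function name, the per-user probabilities, and the scalar gradients are illustrative assumptions, not the paper's exact scheme, which derives the schedule from energy and channel profiles.

```python
import random


def unbiased_round(gradients, probs, rng=random):
    """One FL aggregation round (illustrative sketch).

    User i is scheduled with probability probs[i]; a scheduled
    gradient is scaled by 1 / probs[i] (inverse-probability
    weighting), so the expectation of the returned aggregate
    equals the plain average over all users.
    """
    n = len(gradients)
    total = 0.0
    for g, p in zip(gradients, probs):
        if rng.random() < p:  # user participates this round
            total += g / p    # reweight to remove scheduling bias
    return total / n
```

Averaging the output of `unbiased_round` over many rounds converges to the average gradient over all users, regardless of how unevenly the scheduling probabilities are chosen; this is exactly the unbiasedness property that lets such methods match the convergence of full-participation FL in expectation.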