Objective: Explainable Artificial Intelligence (XAI) offers transparent, trustworthy decision support, yet its implementation in disability contexts remains limited. This scoping review aims to map and evaluate XAI tools developed for individuals with disabilities and to identify thematic patterns that can inform the design of inclusive rehabilitation technologies.

Methods: A systematic search of the literature from January 2018 to June 2024 was conducted across SCOPUS, ACM Digital Library, IEEE Xplore, ProQuest and Google Scholar, guided by Arksey and O'Malley's framework and the PRISMA-ScR guidelines. From 1184 records, 26 peer-reviewed studies involving end-user evaluation were selected. Braun and Clarke's six-phase thematic analysis was used to classify tools by explanation modality and design principle.

Impact: Findings reveal a strong concentration on neurological conditions, such as Alzheimer's disease, autism spectrum disorder and Parkinson's disease, with limited focus on orthopaedic, sensory and spinal impairments. SHAP was the most common explanation model, followed by LIME, LRP-B and Grad-CAM. Accessibility goals centred on clinical transparency, user comprehension, sensory/cognitive adaptation and trust in low-resource settings. Thematic analysis identified three overarching dimensions: modelling techniques; decision-making and trust; and diverse application contexts. Expanding XAI to underrepresented impairments and embedding multimodal, user-centred explanations into rehabilitation workflows, through participatory design, ethical oversight and standardised evaluation, can enhance autonomy, improve personalisation and support more effective, equitable care.