Exploring Explainable AI Techniques for Text Classification in Healthcare: A Scoping Review
Abstract
Text classification plays an essential role in the medical domain by organizing and categorizing vast amounts of textual data through machine learning (ML) and deep learning (DL). The adoption of Artificial Intelligence (AI) technologies in healthcare has raised concerns about the interpretability of AI models, which are often perceived as "black boxes." Explainable AI (XAI) techniques aim to mitigate this issue by elucidating the decision-making processes of AI models. In this paper, we present a scoping review exploring the application of different XAI techniques in medical text classification, identifying two main types: model-specific and model-agnostic methods. Despite some positive feedback from developers, formal evaluations of these techniques with medical end users remain limited. The review highlights the need for further research in XAI to enhance trust and transparency in AI-driven decision-making processes in healthcare.
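As a concrete illustration of the model-agnostic category mentioned above, the sketch below applies LIME (a widely used post-hoc explainer for text classifiers) to a toy classifier. This is not drawn from the reviewed studies: the corpus, labels, and model choice are invented for illustration, and any classifier exposing prediction probabilities could be substituted.

```python
# Hypothetical illustration of a model-agnostic XAI technique (LIME)
# applied to a toy text classifier; the data and labels are fabricated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy corpus standing in for clinical notes (entirely invented examples).
texts = [
    "patient reports chest pain and shortness of breath",
    "routine follow-up, no acute complaints",
    "severe headache with photophobia and nausea",
    "annual physical exam, patient in good health",
]
labels = [1, 0, 1, 0]  # 1 = urgent, 0 = non-urgent (hypothetical labels)

# Any model that exposes predict_proba can be explained the same way,
# which is what makes the technique model-agnostic.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# LIME perturbs the input text and fits a local surrogate model to
# estimate each word's contribution to the predicted class.
explainer = LimeTextExplainer(class_names=["non-urgent", "urgent"])
explanation = explainer.explain_instance(
    "patient complains of chest pain radiating to left arm",
    pipeline.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # [(word, weight), ...] local explanation
```

A model-specific method, by contrast, would rely on internals of the classifier itself (for example, attention weights or gradients in a neural network) rather than on perturbing inputs around a single prediction.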
Domains
Artificial Intelligence [cs.AI]