AI Explainability
Introduction
An author team is seeking unpublished manuscripts or raw data that compare AI systems with vs. without explanations, or compare different types of XAI
INTEREST CATEGORY: INNOVATION AND TECH
POSTING TYPE: Dialog
Posted by: Xuequn (Alex) Wang
Dear Colleagues,
We are conducting a comprehensive meta-analysis to examine the impact of AI explainability (XAI) on various user responses (e.g., perceptual, attitudinal, and behavioral).
To address potential publication bias and ensure a robust synthesis of the evidence, we are seeking unpublished manuscripts, working papers, conference papers, or raw data that compare AI systems with vs. without explanations, or compare different types of XAI.
To be included, studies must report:
- Independent Variable: AI explainability (presence vs. absence) or different XAI formats (e.g., feature importance, counterfactuals, visual vs. textual).
- Dependent Variables: User responses, including but not limited to:
- Perceptual: Perceived transparency, understanding, or trust.
- Attitudinal: Satisfaction, brand attitude, or perceived fairness.
- Behavioral: Adoption intention, reliance/compliance, or task performance.
- Study Context: Open to all application fields (e.g., marketing, healthcare, finance, e-commerce) and task types.
- Required Statistics: Correlation coefficients (r), or sufficient data to calculate effect sizes (e.g., Means/SDs with sample sizes, t-values, or F-values). Studies with non-significant results are highly encouraged.
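For contributors unsure whether their reported statistics suffice, the conversions involved are standard. Below is a minimal sketch (in Python, using only the standard library) of how a t-value or group means/SDs can be converted to a correlation-scale effect size; the function names are illustrative, not part of the authors' analysis pipeline.

```python
import math

def t_to_r(t, df):
    """Convert an independent-samples t-value and its degrees of
    freedom to a Pearson r effect size: r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t**2 / (t**2 + df))

def means_to_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d from two group means, SDs, and sample sizes,
    using the pooled standard deviation."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def d_to_r(d, n1, n2):
    """Convert Cohen's d to r, with the correction factor
    a = (n1 + n2)^2 / (n1 * n2) for (possibly) unequal groups."""
    a = (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d**2 + a)

# Example: a study reporting t(100) = 2.0, or means/SDs for two
# groups of 50, yields a small positive r either way.
r_from_t = t_to_r(2.0, 100)
d = means_to_d(5.2, 1.0, 50, 4.8, 1.1, 50)
r_from_d = d_to_r(d, 50, 50)
```

Any of these inputs (r directly, t with df, or means/SDs with n) is enough for the meta-analysis; reporting them for non-significant results is just as valuable.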
Confidentiality & Acknowledgment:
All unpublished data will be kept strictly confidential and used only for this meta-analysis. If your data/paper is included, we will formally acknowledge your contribution in the final publication.
If you have any relevant work, please share the manuscript or data with us by March 4, 2026.
Please send your materials or any inquiries to Dr. Alex Wang at xuequnwang@unm.edu.
Thank you for your time and for contributing to the advancement of research in this field.