Insider threats in healthcare, in which authorized users deliberately or accidentally expose or steal electronic health record (EHR) data, are a high-risk concern: the attacker already holds valid credentials, clinical context, and workflow permissions, so abnormal behaviour blends into legitimate care delivery and is hard to detect. This paper proposes a principled Explainable AI (XAI) architecture that detects such insider threats while producing audit-ready, causally grounded explanations usable by compliance teams and investigators. The approach is hybrid, combining clinical-workflow-aware feature engineering, inherently interpretable model families (e.g., rule lists and generalized additive models), a causal layer that answers counterfactual questions about clinical legitimacy, and an evidence-packaging subsystem that produces tamper-evident investigation bundles. We evaluate the design along three axes: detection utility under severe class imbalance and distributional shift; explanation fidelity and causal defensibility; and human auditor effectiveness (triage accuracy, time, and trust). Simulation and clinician-in-the-loop experiments show that causal and counterfactual explanations reduce false positives caused by valid-but-infrequent clinical episodes and materially speed up auditor triage without loss of recall. Our contributions are (1) a reference XAI architecture for healthcare insider-threat detection; (2) a justification framework that maps alerts to compliance evidence levels; and (3) an evaluation methodology that combines adversarial stress testing with human-factors measurement.
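To make the hybrid design concrete, the minimal Python sketch below illustrates three of its components working together: an additive (GAM-style) scorer whose alerts decompose into per-feature evidence, a counterfactual check of clinical legitimacy, and a hash-chained, tamper-evident evidence bundle. All feature names, weights, and thresholds here are illustrative assumptions for exposition, not the fitted models or schema of the system described in the paper.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

# Hypothetical workflow features for one EHR access event.
@dataclass
class AccessEvent:
    user_id: str
    patient_id: str
    on_care_team: bool    # is the user on the patient's current care team?
    records_viewed: int   # records touched in this session
    after_hours: bool     # access outside the user's usual shift
    dept_match: bool      # user's department matches the patient's ward

# A toy GAM-style additive scorer: each feature contributes an independent,
# human-readable term, so every alert decomposes into per-feature evidence.
# A real system would fit these terms from labelled or weakly labelled data.
TERMS = {
    "on_care_team":   lambda e: -2.0 if e.on_care_team else 1.5,
    "records_viewed": lambda e: 0.1 * max(0, e.records_viewed - 5),
    "after_hours":    lambda e: 1.0 if e.after_hours else 0.0,
    "dept_match":     lambda e: 0.0 if e.dept_match else 0.8,
}

def score(event: AccessEvent) -> tuple[float, dict]:
    contributions = {name: term(event) for name, term in TERMS.items()}
    return sum(contributions.values()), contributions

def counterfactual_legitimacy(event: AccessEvent) -> dict:
    """Counterfactual query: would the alert vanish if the access had a
    plausible clinical justification (user on the patient's care team)?"""
    actual, _ = score(event)
    cf_event = AccessEvent(**{**asdict(event), "on_care_team": True})
    cf_score, _ = score(cf_event)
    return {
        "actual": actual,
        "counterfactual": cf_score,
        # 1.0 is an illustrative alert threshold.
        "explained_by_care_relationship": cf_score < 1.0 <= actual,
    }

def evidence_bundle(event: AccessEvent, prev_hash: str = "0" * 64) -> dict:
    """Package the event, its score decomposition, and the counterfactual
    check into a hash-chained record: each bundle commits to its predecessor,
    so later tampering with the audit trail is detectable."""
    total, contributions = score(event)
    payload = {
        "event": asdict(event),
        "score": total,
        "contributions": contributions,
        "counterfactual": counterfactual_legitimacy(event),
        "prev_hash": prev_hash,
    }
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {**payload, "hash": digest}

if __name__ == "__main__":
    suspicious = AccessEvent("u42", "p7", on_care_team=False,
                             records_viewed=40, after_hours=True,
                             dept_match=False)
    print(json.dumps(evidence_bundle(suspicious), indent=2))
```

The design choice this sketch highlights is that the same per-term decomposition serves both detection and defence: the contributions that raise the score are the evidence an auditor reviews, and the counterfactual result states in causal terms whether a legitimate care relationship would have explained the access.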



