News

CALL FOR PAPERS JANUARY 2026

IJSAR is launching its next issue, Volume 07, Issue 01, January 2026. Open Access; Peer-Reviewed Journal; Fast Publication. If you have any questions or comments, please email: editor@scienceijsar.com

IMPACT FACTOR: 6.673

Submission last date: 20th January 2026

Explainable AI for detecting insider threats in healthcare systems

Author: Nnennaya Ngwanma Halliday and Fidelis Alu

Page No: 11175-11197

Healthcare insider threats are a high-risk concern: authorized users who purposefully or accidentally expose or steal electronic health record (EHR) data already hold credentials, clinical context, and workflow permissions, so their abnormal behaviour is hard to distinguish from legitimate care delivery. This paper proposes a principled Explainable AI (XAI) architecture that detects such insider threats while producing audit-ready, causally oriented explanations usable by compliance teams and investigators. Our approach is hybrid, combining clinical-workflow-aware feature engineering; inherently interpretable model families (e.g., rule lists and GAMs); a causal layer that poses counterfactual questions about the clinical legitimacy of flagged behaviour; and an evidence-packaging subsystem that produces tamper-evident investigation bundles. We evaluate the design along three dimensions: detection utility under severe class imbalance and distributional shift; explanation fidelity and causal defensibility; and human-auditor effectiveness (triage accuracy, time, and trust). Simulation and clinician-in-the-loop experiments show that causal and counterfactual explanations reduce false positives arising from valid-but-infrequent clinical episodes and materially increase auditor triage speed without loss of recall. The contributions are (1) an XAI architecture for healthcare insider-threat detection; (2) a justification framework that maps alerts to compliance evidence levels; and (3) an evaluation framework incorporating adversarial stress tests and human-factors measurements.
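As a rough illustration of one component named in the abstract, the "tamper-evident investigation bundles" could be realized with a hash chain, where each alert record is linked to the hash of its predecessor so any later modification invalidates every subsequent link. This is a hypothetical sketch of that general technique, not the authors' implementation; all function and field names below are illustrative.

```python
import hashlib
import json

GENESIS = "0" * 64  # fixed starting link for the chain

def bundle_evidence(records):
    """Chain alert records with SHA-256 so tampering with any
    entry breaks verification of all entries after it.
    (Illustrative sketch; field names are assumptions.)"""
    bundle = []
    prev_hash = GENESIS
    for record in records:
        payload = json.dumps(record, sort_keys=True)
        link_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        bundle.append({"record": record, "prev": prev_hash, "hash": link_hash})
        prev_hash = link_hash
    return bundle

def verify_bundle(bundle):
    """Recompute the chain; return False if any entry was altered."""
    prev_hash = GENESIS
    for entry in bundle:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = expected
    return True
```

For example, a bundle built from two alert records verifies as intact, but editing any record in place afterwards makes `verify_bundle` return `False`, which is the property an investigator or compliance team would rely on.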
