Network Security in AI-based healthcare systems

Bourair Al-Attar

Abstract

With the rapid integration of artificial intelligence (AI) into healthcare, where it improves diagnostics, treatment personalization, and predictive analytics, securing patient data and ensuring system integrity have become key challenges. This research examines the network security concerns specific to AI-based healthcare systems, aiming to identify the main vulnerabilities and assess viable protection strategies. Through a combination of systematic review and experimental validation, we tested several machine learning models, including convolutional neural networks (CNNs), support vector machines (SVMs), and random forests, against adversarial attacks that undermine model accuracy and data privacy. Results showed that adversarial attacks can considerably impair model reliability, with accuracy drops of up to 32% in CNN models under attack. However, adopting defensive strategies such as adversarial training and defensive distillation substantially improved model resilience, with post-defense accuracy recovering by 15-25%. These results underline the need for strong network security policies tailored to AI healthcare applications to guarantee both data protection and operational reliability. Our work offers practical insights into adapting AI network security measures within healthcare and identifies avenues for legislative updates and ongoing research to safeguard future AI-driven health advances.
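To make the attack-and-defense idea concrete, the following is a minimal, self-contained sketch, not the paper's experimental setup: it substitutes a hypothetical two-class toy dataset and a plain logistic-regression model for the CNN/SVM/random-forest models studied in the paper, and uses the Fast Gradient Sign Method (FGSM) as one representative adversarial attack, with adversarial training (refitting on a mix of clean and perturbed inputs) as the defense.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-class toy data: two Gaussian blobs in 2-D, standing in
# for the medical feature vectors used in the paper's experiments.
n = 400
X = np.vstack([rng.normal(-1.5, 1.0, (n // 2, 2)),
               rng.normal(+1.5, 1.0, (n // 2, 2))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])  # labels in {-1, +1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs=200, lr=0.1):
    """Fit logistic regression by gradient descent; returns weights w."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w)
        # Gradient of the average logistic loss with respect to w.
        grad = -(X.T @ (y * sigmoid(-margins))) / len(y)
        w -= lr * grad
    return w

def fgsm(X, y, w, eps):
    """FGSM attack: perturb each input in the sign direction of the
    loss gradient, which pushes it toward misclassification."""
    margins = y * (X @ w)
    grad_x = -(y * sigmoid(-margins))[:, None] * w  # d(loss)/dx per sample
    return X + eps * np.sign(grad_x)

def accuracy(X, y, w):
    return np.mean(np.sign(X @ w) == y)

# Undefended model: attack degrades accuracy on the perturbed inputs.
w = train(X, y)
clean_acc = accuracy(X, y, w)
attacked_acc = accuracy(fgsm(X, y, w, eps=0.5), y, w)

# Adversarial training: refit on clean plus FGSM-perturbed examples.
X_mix = np.vstack([X, fgsm(X, y, w, eps=0.5)])
y_mix = np.hstack([y, y])
w_robust = train(X_mix, y_mix)
robust_clean_acc = accuracy(X, y, w_robust)

print(f"clean: {clean_acc:.3f}  attacked: {attacked_acc:.3f}  "
      f"robust model on clean data: {robust_clean_acc:.3f}")
```

On this toy linear model the accuracy gap is small compared with the 32% CNN degradation reported in the abstract, but the mechanism is the same: the attack measurably lowers accuracy, and the adversarially trained model keeps its performance on clean data while being fit to withstand the perturbation.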


How to Cite

Network Security in AI-based healthcare systems (Bourair Al-Attar, Trans.). (2023). Babylonian Journal of Networking, 2023, 112-124. https://doi.org/10.58496/BJN/2023/015