Issue: Volume 1, Issue 2, June 2026
Pages: 78-86
Received: 27 February 2026
Accepted: 11 March 2026
Published: 16 April 2026
DOI: 10.11648/j.sdai.20260102.13
Abstract: This research article proposes comprehensive requirements for securing artificial intelligence systems, focusing on large language models (LLMs) in organizational settings. It addresses risks such as unauthorized access, data leakage, and service instability, and introduces "veracity" as a fourth property alongside the classic CIA triad (confidentiality, integrity, availability). The paper advocates a two-contour access model: an Open Contour (OC) for public LLMs and an Internal Contour (IC) for corporate and individual models, separated by a gateway that filters all cross-contour interactions. A mandatory Request Control Module (RCM) monitors every user-LLM exchange, enforcing limits on request frequency, size, and content to block transfers of sensitive data. Secure training mandates dataset cleaning, depersonalization, and documentation, approved by security leads.

Key security requirements: corporate LLMs reside in the IC behind firewalls and access controls, while public models are restricted to the OC. Network and access: encrypted channels, user authentication, and logging for audits. Availability: overload protection via RCM limits and resource reservations.

Threat analysis: drawing on the OWASP Top 10 for LLMs (v1.1), the paper covers prompt injection (LLM01), insecure output handling (LLM02, leading to XSS/SSRF), data poisoning (LLM03), excess requests (LLM04/06), model theft (LLM10), and information disclosure. The dual-contour setup mitigates many of these threats through filtering and separation, but offers limited protection against sophisticated attacks or human error. The paper recommends DevSecOps pipelines that integrate security across the plan, develop, build, test, deploy, and monitor stages (e.g., SAST/DAST, SCA). Tables detail threat coverage, architectural strengths (e.g., centralized monitoring), and weaknesses (e.g., public model opacity).

In conclusion, these practical guidelines enhance LLM resilience in enterprises, align with the NIST AI RMF and ENISA practices, and call for further work on automated vulnerability-detection tools.
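The Request Control Module's three limits (request frequency, request size, and content filtering) can be illustrated with a minimal gateway check. This is a sketch under stated assumptions, not the paper's implementation: the class name, thresholds, and sensitive-data patterns below are all hypothetical.

```python
import re
import time
from collections import defaultdict, deque


class RequestControlModule:
    """Illustrative sketch of an RCM gatekeeping user-to-LLM requests.

    Enforces the three limits described in the abstract: request
    frequency, request size, and sensitive-content filtering.
    All thresholds and patterns are hypothetical examples.
    """

    # Hypothetical patterns for sensitive data (card numbers, API keys).
    SENSITIVE_PATTERNS = [
        re.compile(r"\b\d{16}\b"),              # bare 16-digit card number
        re.compile(r"(?i)api[_-]?key\s*[:=]"),  # API-key assignment
    ]

    def __init__(self, max_requests_per_minute=30, max_request_bytes=4096):
        self.max_requests = max_requests_per_minute
        self.max_bytes = max_request_bytes
        self._history = defaultdict(deque)  # user_id -> request timestamps

    def check(self, user_id, text, now=None):
        """Return (allowed, reason); deny on any violated limit."""
        now = time.monotonic() if now is None else now
        window = self._history[user_id]
        # Drop timestamps that fell out of the 60-second sliding window.
        while window and now - window[0] > 60:
            window.popleft()
        if len(window) >= self.max_requests:
            return False, "rate limit exceeded"
        if len(text.encode("utf-8")) > self.max_bytes:
            return False, "request too large"
        for pattern in self.SENSITIVE_PATTERNS:
            if pattern.search(text):
                return False, "sensitive content blocked"
        window.append(now)
        return True, "allowed"
```

In the paper's architecture such a check would sit at the gateway between contours, so every exchange is logged and filtered centrally before reaching either a public or a corporate model.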