AI agents: Security concerns
This page describes potential security concerns related to AI agents and the strategies available to mitigate them.
Security concerns
Unauthorized database access
Concern: Unauthorized access to databases can lead to data breaches.
- Mitigation: Read-only access
  The LLM has no direct access to the database. It can only ask the agent, via query tools, to query the database on its behalf, and the agent applies read-only operations only.
- Mitigation: DBA control
  Control over the database is governed by certificates. Only users whose certificates grant them a database administrator role or higher can create and manage agents. The DBA retains full control over connections to the AI model (through connection strings), the agent configuration, and the queries that the agent is allowed to run.
- Mitigation: Agent scope
  An AI agent is created for a specific database and has no access to other databases on the server, ensuring database-level isolation.
Data compromise during transit
Concern: Data may be compromised during transit.
- Mitigation: Secure TLS (Transport Layer Security) communication
  All data is transferred over HTTPS between the client, the agent, the database, and the AI model, ensuring that it is encrypted in transit.
Untraceable malicious or unexpected actions
Concern: Inability to trace malicious or unexpected actions related to agents.
- Mitigation: Audit logging
  RavenDB admin logs track the creation, modification, and deletion of AI agents, as well as agent interactions with the database.
  Example of an audit log entry recorded when an agent was deleted:

    Starting to process record 16 (current 15) for aiAgent_useHandleToRunChat_1.
    Type: DeleteAiAgentCommand.
    Cluster database change type: RecordChanged
    Date 2025-09-23 22:29:45.0391
    Level DEBUG
    Thread ID 58
    Resource aiAgent_useHandleToRunChat_1
    Logger Raven.Server.Documents.DocumentDatabase
AI model data memorization
Concern: Sensitive data might inadvertently be memorized and reproduced by the AI model.
- Mitigation: Free selection of AI model
  RavenDB does not enforce the use of specific providers or AI models, but gives you free choice of the services that best suit your needs and security requirements.
  When using the service of your choice, it is your responsibility to define safe queries and expose only the data you intend to share with the AI model.
- Mitigation: Agent parameters
  You can use agent parameters to limit the scope of the defined query and, consequently, the dataset transferred to the AI model.
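For instance, a query tool's query can be parameterized so that only one customer's documents are ever retrieved and forwarded to the model. A minimal sketch (the collection, fields, and `$company` parameter name are illustrative assumptions, not taken from a real configuration):

```csharp
// Illustrative RQL for a query tool: $company is an agent parameter
// supplied per conversation, so the agent can only retrieve - and the
// AI model can only see - documents belonging to that one company.
string queryToolRql = @"
    from Orders
    where Company = $company
    select OrderedAt, ShipTo.City";
```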
Validation or injection attacks via user input
Concern: Validation or injection attacks crafted through malicious user input.
- Mitigation: Query scope
  The agent queries only a limited subset of the stored data, restricting an attacker's access to the rest of the data and to data belonging to other users.
- Mitigation: Read-only access
  Query tools can apply only read-only RQL queries, preventing attackers from modifying any data.
LLM exposure in multi-agent setups
Concern: Sensitive parameters and untrusted model-generated values can flow through a multi-agent hierarchy.
When an agent is composed with sub-agents, two specific risks arise in the data flow:
- Parameter visibility along the chain
  A parameter passed at conversation start flows by name to any sub-agent that declares it, and is exposed to every LLM in the chain unless explicitly hidden.
- Model-fabricated parameter values
  When a sub-agent needs a parameter with no inherited value, the parent's LLM is allowed by default to generate one.
  This is unsafe for parameters that must come from a trusted source - for example, a user identifier that scopes queries to the caller's own data, a session token, or an account number.
  The parent's LLM could pick a plausible-looking value belonging to a different caller, and the sub-agent would then run its scoped queries against that caller's data.
- Mitigation: Hide values from the LLM at the configuration level
  Set SendToModel = false on AiAgentParameter for parameters that must not reach any LLM.
  The value remains available to queries, action tools, and sub-agents, but is never included in prompts.
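As a sketch, assuming the agent configuration holds its parameters as AiAgentParameter objects with an object-initializer style (the Name property and the "userId" parameter are assumptions for illustration):

```csharp
// Hypothetical agent-level parameter definition: "userId" scopes the
// agent's queries but must never appear in any prompt sent to an LLM.
var userIdParameter = new AiAgentParameter
{
    Name = "userId",     // assumed property name, illustrative parameter
    SendToModel = false  // value stays server-side, still usable by queries
};
```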
- Mitigation: Hide values from the LLM at the conversation level
  Pass new AiConversationParameterOptions { SendToModel = false } to AddParameter when creating a conversation.
  This hides the value from every LLM in the hierarchy for the duration of the conversation, in addition to any restriction set by the agent configuration. Visibility is the AND of both levels: a value reaches the model only if both levels allow it.
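A minimal sketch of the conversation-level form; the conversation object, the callerId variable, and the exact AddParameter signature are assumptions, while AiConversationParameterOptions and SendToModel are the documented names:

```csharp
// Hypothetical conversation setup: callerId comes from a trusted
// server-side source, and this option keeps it out of every prompt
// in the hierarchy for the whole conversation.
conversation.AddParameter("userId", callerId,
    new AiConversationParameterOptions { SendToModel = false });
```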
- Mitigation: Forbid model-generated values for trusted parameters
  Mark the sub-agent's parameter with AiAgentParameterPolicy.ForbidModelGeneration.
  The parent's LLM is then blocked from generating a value for that parameter; a value must be inherited from a parent-side parameter of the same name or provided at conversation start, otherwise invoking the sub-agent fails with MissingAiAgentParameterException.
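A sketch of marking a trusted parameter; the property name Policy and the "userId" parameter are assumptions, while the AiAgentParameterPolicy.ForbidModelGeneration value is the documented name:

```csharp
// Hypothetical sub-agent parameter: the parent's LLM may never invent
// this value - it must be inherited or supplied at conversation start.
var trustedParameter = new AiAgentParameter
{
    Name = "userId",                                       // assumed
    Policy = AiAgentParameterPolicy.ForbidModelGeneration  // assumed property name
};
```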
A multi-agent hierarchy does not introduce new access-control concerns: sub-agents live in the same database as their parent, are created under the same administrative controls, and use the same connection strings, so all other mitigations on this page apply to sub-agents as well.
See Multi-agents for the full picture, including parameter propagation and conversation-document isolation.
Runaway iterations in multi-agent setups
Concern: A deep or cyclic sub-agent hierarchy can exhaust the per-message iteration limit before the user's request is satisfied.
RavenDB does not enforce a maximum hierarchy depth and does not detect cycles in sub-agent wiring.
Two or more agents can reference each other as sub-agents, and such a configuration is not rejected.
- Mitigation: MaxModelIterationsPerCall
  The root agent's MaxModelIterationsPerCall caps the number of model iterations per user message (sub-agent invocations are counted among them). This is the only server-side bound on a conversation's cost. Set this limit carefully if your hierarchy is deep or may contain cycles.
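For example, assuming the root agent's configuration object exposes this setting directly (the configuration variable and the chosen value are illustrative; only MaxModelIterationsPerCall is the documented name):

```csharp
// Hypothetical root-agent setting: even if two sub-agents reference
// each other and form a cycle, no single user message can trigger
// more than this many model iterations.
agentConfiguration.MaxModelIterationsPerCall = 8;
```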