Creating AI agents: API
To create an AI agent, a client defines its configuration, provides it with settings and tools, and registers the agent with the server.
Once the agent is created, the client can initiate or resume conversations, get LLM responses, and perform actions based on LLM insights.
This page provides a step-by-step guide to creating an AI agent and interacting with it using the Client API.
Creating a connection string
Your agent will need a connection string to connect to the LLM. Create a connection string using an AiConnectionString instance and the PutConnectionStringOperation operation.
You can also create a connection string using Studio; see here.
Choose a provider based on your needs.
Examples
- open-ai-cs
- azure-open-ai-cs
- google-cs
- ollama-cs
using (var store = new DocumentStore())
{
// Define the connection string to OpenAI
var connectionString = new AiConnectionString
{
// Connection string name & identifier
Name = "open-ai-cs",
// Connection type
ModelType = AiModelType.Chat,
// OpenAI connection settings
OpenAiSettings = new OpenAiSettings(
apiKey: "your-api-key",
endpoint: "https://api.openai.com/v1",
// LLM model for text generation
model: "gpt-5-mini")
};
// Deploy the connection string to the server
var operation = new PutConnectionStringOperation<AiConnectionString>(connectionString);
var putConnectionStringResult = store.Maintenance.Send(operation);
}

using (var store = new DocumentStore())
{
// Define the connection string to Azure OpenAI
var connectionString = new AiConnectionString
{
// Connection string name & identifier
Name = "azure-open-ai-cs",
// Connection type
ModelType = AiModelType.Chat,
// Azure OpenAI connection settings
AzureOpenAiSettings = new AzureOpenAiSettings(
apiKey: "your-api-key",
endpoint: "https://your-resource.openai.azure.com/",
// LLM model for text generation
model: "your-api-model",
// Azure deployment name
deploymentName: "your-deployment-name")
};
// Deploy the connection string to the server
var operation = new PutConnectionStringOperation<AiConnectionString>(connectionString);
var putConnectionStringResult = store.Maintenance.Send(operation);
}

using (var store = new DocumentStore())
{
// Define the connection string to Google AI
var connectionString = new AiConnectionString
{
// Connection string name & identifier
Name = "google-cs",
// Connection type
ModelType = AiModelType.Chat,
// Google AI connection settings
GoogleSettings = new GoogleSettings(
// LLM model for text generation
model: "gemini-3-flash-preview",
apiKey: "your-api-key")
};
// Deploy the connection string to the server
var operation = new PutConnectionStringOperation<AiConnectionString>(connectionString);
var putConnectionStringResult = store.Maintenance.Send(operation);
}

using (var store = new DocumentStore())
{
// Define the connection string to Ollama
var connectionString = new AiConnectionString
{
// Connection string name & identifier
Name = "ollama-cs",
// Connection type
ModelType = AiModelType.Chat,
// Ollama connection settings
OllamaSettings = new OllamaSettings(
// LLM Ollama model for text generation
model: "llama3.2",
// Local URL
uri: "http://localhost:11434/")
};
// Deploy the connection string to the server
var operation = new PutConnectionStringOperation<AiConnectionString>(connectionString);
var putConnectionStringResult = store.Maintenance.Send(operation);
}
Syntax
- open-ai-cs-syntax
- azure-open-ai-cs-syntax
- google-cs-syntax
- ollama-cs-syntax
public class AiConnectionString
{
public string Name { get; set; }
public AiModelType ModelType { get; set; }
public string Identifier { get; set; }
public OpenAiSettings OpenAiSettings { get; set; }
...
}
public class OpenAiSettings : OpenAiBaseSettings
{
public string ApiKey { get; set; }
public string Endpoint { get; set; }
public string Model { get; set; }
public int? Dimensions { get; set; }
public double? Temperature { get; set; }
public string OrganizationId { get; set; }
public string ProjectId { get; set; }
}

public class AiConnectionString
{
public string Name { get; set; }
public AiModelType ModelType { get; set; }
public string Identifier { get; set; }
public AzureOpenAiSettings AzureOpenAiSettings { get; set; }
...
}
public class AzureOpenAiSettings : OpenAiBaseSettings
{
public string ApiKey { get; set; }
public string Endpoint { get; set; }
public string Model { get; set; }
public string DeploymentName { get; set; }
public int? Dimensions { get; set; }
public double? Temperature { get; set; }
}

public class AiConnectionString
{
public string Name { get; set; }
public AiModelType ModelType { get; set; }
public string Identifier { get; set; }
public GoogleSettings GoogleSettings { get; set; }
...
}
public class GoogleSettings : OpenAiBaseSettings
{
public string ApiKey { get; set; }
public string Endpoint { get; set; }
public string Model { get; set; }
public int? Dimensions { get; set; }
public double? Temperature { get; set; }
public GoogleAIVersion? AiVersion { get; set; }
}
public enum GoogleAIVersion
{
V1,
V1_Beta
}

public class AiConnectionString
{
public string Name { get; set; }
public AiModelType ModelType { get; set; }
public string Identifier { get; set; }
public OllamaSettings OllamaSettings { get; set; }
...
}
public class OllamaSettings : AbstractAiSettings
{
public string Model { get; set; }
public string Uri { get; set; }
}
Defining an agent configuration
To create an AI agent you need to prepare an agent configuration and populate it with your settings and tools.
Start by creating a new AiAgentConfiguration instance.
While creating the instance, pass its constructor:
- The agent's Name
- The connection string you created
- A System prompt
The agent sends the system prompt you define here to the LLM to establish its basic characteristics, including its role, purpose, behavior, and the tools it can use.
- Example
// Start setting an agent configuration
var agent = new AiAgentConfiguration("reward-productive-employee", connectionString.Name,
@"You work for a human experience manager.
The manager uses your services to find which employee has made the largest profit and to suggest
a reward.
The manager provides you with the name of a country, or with the word ""everywhere"" to indicate
all countries.
Then you:
1. use a query tool to load all the orders sent to the selected country,
or a query tool to load all orders sent to all countries.
2. calculate which employee made the largest profit.
3. use a query tool to learn in what general area this employee lives.
4. find suitable vacations sites or other rewards based on the employee's residence area.
5. use an action tool to store in the database the employee's ID, profit, and your reward suggestions.
When you're done, return these details in your answer to the user as well.");
Once the initial agent configuration is created, we need to add a few additional elements to it, as shown below.
Set the agent ID
Use the Identifier property to provide the agent with a unique ID that the
system will recognize it by.
// Set agent ID
agent.Identifier = "reward-productive-employee";
Define a response object
Define a structured output response object that the LLM will populate with its response to the user.
To define the response object, you can use the SampleObject and/or the OutputSchema property.
- SampleObject is a straightforward sample of the response object that you expect the LLM to return.
It is usually simpler to define the response object this way.
- OutputSchema is a formal JSON schema that the LLM can understand.
Even when defining the response object as a SampleObject, RavenDB will translate the object to a JSON schema before sending it to the LLM. If you prefer, however, you can explicitly define it as a schema yourself.
- If you define both a sample object and a schema, the agent will send only the schema to the LLM.
- sample-object
- json-schema
// Set sample object
agent.SampleObject = "{" +
"\"suggestedReward\": \"your suggestions for a reward\", " +
"\"employeeId\": \"the ID of the employee that made the largest profit\", " +
"\"profit\": \"the profit the employee made\"" +
"}";
// Set output schema
agent.OutputSchema = "{" +
"\"name\": \"RHkxaWo5ZHhMM1RuVnIzZHhxZm9vM0c0UnYrL0JWbkhyRDVMd0tJa1g4Yz0\", " +
"\"strict\": true, " +
"\"schema\": {" +
"\"type\": \"object\", " +
"\"properties\": {" +
"\"employeeID\": {" +
"\"type\": \"string\", " +
"\"description\": \"the ID of the employee that made the largest profit\"" +
"}, " +
"\"profit\": {" +
"\"type\": \"string\", " +
"\"description\": \"the profit the employee made\"" +
"}, " +
"\"suggestedReward\": {" +
"\"type\": \"string\", " +
"\"description\": \"your suggestions for a reward\"" +
"}" +
"}, " +
"\"required\": [" +
"\"employeeID\", " +
"\"profit\", " +
"\"suggestedReward\"" +
"], " +
"\"additionalProperties\": false" +
"}" +
"}";
Add agent parameters
Agent parameters are placeholders that query tools can use when the agent queries the database on behalf of the LLM.
Values for agent parameters are provided by the client, or by a user through the client,
when a chat is started.
When the agent is requested to use a query tool that uses agent parameters, it replaces these parameters with the values provided by the user before running the query.
Using agent parameters allows the client to focus the queries and the entire interaction on its current needs.
In the example below, an agent parameter is used to determine what area of the world a query will handle.
To add an agent parameter, create an AiAgentParameter instance, initialize it with the parameter's name and description (explaining to the LLM what the parameter is for), and pass this instance to the agent.Parameters.Add method.
Example
// Set agent parameters
agent.Parameters.Add(new AiAgentParameter(
"country", "A specific country that orders were shipped to, " +
"or \"everywhere\" to look for orders shipped to all countries"));
Setting parameter values visibility to the LLM (Configuration level)
You can control whether to expose a parameter's value to the AI model, using the SendToModel property.
When set to false, the parameter value is hidden from the LLM but still available for query substitution.
This is useful for sensitive values like user IDs or tenant identifiers that should not be included in the model's context.
Example
// Set an agent parameter that is hidden from the LLM
agent.Parameters.Add(new AiAgentParameter(
"userId", "The current user's ID", sendToModel: false));
Note that the visibility of parameter values to the LLM can also be determined at the conversation level, as explained here.
See AiAgentParameter in the Syntax section.
Set maximum number of iterations
You can limit the number of times that the LLM is allowed to request the usage of
agent tools in response to a single user prompt. Use MaxModelIterationsPerCall to change this limit.
- Example
// Limit the number of times the LLM can request tools in response to a single user prompt
agent.MaxModelIterationsPerCall = 3;
Set chat trimming configuration
To summarize the conversation, create an AiAgentChatTrimmingConfiguration instance,
use it to configure your trimming strategy, and set the agent's ChatTrimming property
with the instance.
When creating the instance, pass its constructor a summarization strategy using an AiAgentSummarizationByTokens instance.
The original conversation, before it was summarized, can optionally be
kept in the @conversations-history collection.
To determine whether to keep the original messages and for how long, also pass the
AiAgentChatTrimmingConfiguration constructor an AiAgentHistoryConfiguration instance
with your settings.
- Example
// Set chat trimming configuration
AiAgentSummarizationByTokens summarization = new AiAgentSummarizationByTokens()
{
// When the number of tokens stored in the conversation exceeds this limit
// summarization of old messages will be triggered.
MaxTokensBeforeSummarization = 32768,
// The maximum number of tokens that the conversation is allowed to contain
// after summarization.
MaxTokensAfterSummarization = 1024
};
agent.ChatTrimming = new AiAgentChatTrimmingConfiguration(summarization);
See AiAgentChatTrimmingConfiguration, AiAgentSummarizationByTokens, and AiAgentHistoryConfiguration in the Syntax section.
Adding agent tools
You can enhance your agent with Query and Action tools, which allow the LLM to query your database and trigger client actions.
After defining agent tools and submitting them to the LLM, it is up to the LLM to decide if and when to use them.
Query tools
Query tools provide the LLM with the ability to retrieve data from the database.
A query tool includes a natural-language description that explains to the LLM what the tool is for, and an RQL query.
Passing values to query tools
- Query tools optionally include parameters, identified by a $ prefix.
Both the user and the LLM can pass values to these parameters.
- Passing values from the user
Users can pass values to queries through agent parameters.
- If agent parameters are defined in the agent configuration, the client has to provide values for them when initiating a conversation with the agent.
- The parameters can be included in query tools' RQL queries.
Before running a query, the agent will replace any agent parameter included in it with its value.
- Passing values from the LLM
The LLM can pass values to queries through a parameters schema.
- The parameters schema layout is defined as part of the query tool.
- When the LLM requests the agent to run a query, it will add parameter values to the request.
- You can define a parameters schema either as a sample object or a formal JSON schema.
If you define both, the LLM will pass parameter values only through the JSON schema.
- Before running a query, the agent will replace any parameter included in it with its value.
Example
- The first query tool will be used by the LLM when it needs to retrieve all the orders sent to any place in the world. (The system prompt instructs it to use this tool when the user enters "everywhere" when the conversation is started.)
- The second query tool will be used by the LLM when it needs to retrieve all the orders that were sent to a particular country, using the $country agent parameter.
- The third tool retrieves from the database the general location of an employee.
To do this it uses a $employeeId parameter, whose value is set by the LLM in its request to run this tool.
agent.Queries =
[
// Set a query tool that triggers the agent to retrieve all the orders sent everywhere
new AiAgentToolQuery
{
// Query tool name
Name = "retrieve-orders-sent-to-all-countries",
// Query tool description
Description = "a query tool that allows you to retrieve all orders sent to all countries.",
// Query tool RQL query
Query = "from Orders as O select O.Employee, O.Lines.Quantity",
// Sample parameters object for the query tool
// The LLM can use this object to pass parameters to the query tool
ParametersSampleObject = "{}"
},
// Set a query tool that triggers the agent to retrieve all the orders sent to a
// specific country
new AiAgentToolQuery
{
Name = "retrieve-orders-sent-to-a-specific-country",
Description = "a query tool that allows you to retrieve all orders sent " +
"to a specific country",
Query = "from Orders as O where O.ShipTo.Country == $country select O.Employee, " +
"O.Lines.Quantity",
ParametersSampleObject = "{}"
},
// Set a query tool that triggers the agent to retrieve the performer's
// residence region details (country, city, and region) from the database
new AiAgentToolQuery
{
Name = "retrieve-performer-living-region",
Description = "a query tool that allows you to retrieve an employee's country, " +
"city, and region, by the employee's ID",
Query = "from Employees as E where id() == $employeeId select E.Address.Country, " +
"E.Address.City, E.Address.Region",
ParametersSampleObject = "{" +
"\"employeeId\": \"embed the employee's ID here\"" +
"}"
}
];
Initial-context queries
- You can set a query tool as an initial-context query using its Options.AddToInitialContext property, to execute the query and provide the LLM with its results immediately when the agent is started.
- An initial-context query is not allowed to use LLM parameters, since the query runs before the conversation starts, earlier than the first communication with the LLM, and the LLM will have no opportunity to fill the parameters with values.
- An initial-context query is allowed to use agent parameters, whose values are provided by the user even before the query is executed.
- You can use the Options.AllowModelQueries property to enable or disable a query tool.
- When a query tool is enabled, the LLM can freely trigger its execution.
- When a query tool is disabled, the LLM cannot trigger its execution.
- If a query tool is set as an initial-context query, it will be executed when the conversation starts even if disabled using AllowModelQueries.
Example
Set a query tool that runs when the agent is started and retrieves all the orders sent everywhere.
new AiAgentToolQuery
{
Name = "retrieve-orders-sent-to-all-countries",
Description = "a query tool that allows you to retrieve all orders sent to all countries.",
Query = "from Orders as O select O.Employee, O.Lines.Quantity",
ParametersSampleObject = "{}",
Options = new AiAgentToolQueryOptions
{
// The LLM is allowed to trigger the execution of this query during the conversation
AllowModelQueries = true,
// The query will be executed when the conversation starts
// and its results will be added to the initial context
AddToInitialContext = true
}
}
Note: the two flags can be set independently of each other.
- Setting AddToInitialContext to true and AllowModelQueries to false will cause the query to be executed when the conversation starts, but the LLM will not be able to trigger its execution later in the conversation.
- Setting AddToInitialContext to true and AllowModelQueries to true will cause the query to be executed when the conversation starts, and the LLM will also be able to trigger its execution later in the conversation.
See AiAgentToolQuery and AiAgentToolQueryOptions in the Syntax section.
Action tools
Action tools allow the LLM to trigger the client to act (e.g., to modify or add a document).
An action tool includes a natural-language description that explains to the LLM what the tool is capable of, and a schema that the LLM will fill with details related to the requested action before sending it to the agent.
In the example below, the action tool requests the client to store an employee's details in the database. The LLM will provide the employee's ID and other details whenever it requests the agent to apply the tool.
When the client finishes performing the action, it is required to send the LLM a response that indicates how it went, e.g. "done".
Example
The following action tool sends to the client employee details that the tool needs to store in the database.
agent.Actions =
[
// Set an action tool that triggers the client to store the performer's details
new AiAgentToolAction
{
Name = "store-performer-details",
Description = "an action tool that allows you to store the ID of the employee that made " +
"the largest profit, the profit, and your suggestions for a reward, in the " +
"database.",
ParametersSampleObject = "{" +
"\"suggestedReward\": \"embed your suggestions for a reward here\", " +
"\"employeeId\": \"embed the employee’s ID here\", " +
"\"profit\": \"embed the employee’s profit here\"" +
"}"
}
];
See AiAgentToolAction in the Syntax section.
Creating the Agent
The agent configuration is ready, and we can now register the agent with the server using the CreateAgent method.
- Create a response object class that matches the response schema defined in your agent configuration.
- Call CreateAgent and pass it:
- The agent configuration
- A new instance of the response object class
Example
// Create the agent
// Pass it an object for its response
var createResult = await store.AI.CreateAgentAsync(agent, new Performer
{
suggestedReward = "your suggestions for a reward",
employeeId = "the ID of the employee that made the largest profit",
profit = "the profit the employee made"
});
// An object for the LLM response
public class Performer
{
public string suggestedReward;
public string employeeId;
public string profit;
}
See CreateAgent in the Syntax section.
Retrieving existing agent configurations
You can retrieve the configuration of an existing agent using GetAgent.
- Example
// Retrieve an existing agent configuration by its ID
var existingAgent = store.AI.GetAgent("reward-productive-employee");
You can also retrieve the configurations of all existing agents using GetAgents.
- Example
// Extract the agent configurations from the response into a new list
var existingAgentsList = store.AI.GetAgents();
var agents = existingAgentsList.AiAgents;
Deleting an agent
To delete an existing agent configuration, use DeleteAgent with the agent's ID.
- Example
// Delete an agent configuration by its ID
await store.AI.DeleteAgentAsync("reward-productive-employee");
See DeleteAgent in the Syntax section.
Managing conversations
Setting a conversation:
- Set a conversation using the store.AI.Conversation method.
Pass Conversation:
- The agent ID
- The conversation ID
The conversation ID that you provide when starting a conversation determines whether a new conversation will start or an existing conversation will be continued.
- Conversations are kept in the @conversations collection.
A conversation document's name starts with a prefix (such as Chats/) that can be set when the conversation is initiated.
- You can:
- Provide a full ID, including a prefix and the ID that follows it.
- Provide a prefix that ends with / or | to trigger automatic ID creation, similarly to the creation of automatic IDs for documents.
- If you pass the method the ID of an existing conversation (e.g. "Chats/0000000000000008883-A"), the conversation will be retrieved from storage and continued where you left off.
- If you provide just a prefix (e.g. "Chats/"), a new conversation will start.
- Values for agent parameters, if defined, in an AiConversationCreationOptions instance.
- Set the user prompt using the SetUserPrompt method.
The user prompt informs the agent of the user's requests and expectations for this chat.
- Use the value returned by the Conversation method to run the chat.
Example
// Create a conversation instance
// Initialize it with -
// The agent's ID,
// A prefix (Performers/) for conversations stored in the @Conversations collection,
// Agent parameters' values
var chat = store.AI.Conversation(
createResult.Identifier,
"Performers/",
new AiConversationCreationOptions().AddParameter("country", "France"));
Setting parameter values visibility to the LLM (Conversation level)
- As explained above, the visibility of a parameter's value to the LLM can be set in the agent configuration using the SendToModel property of AiAgentParameter.
You can also control the value's visibility to the LLM at the conversation level by passing sendToModel: false to AddParameter.
The final visibility is determined by both configuration-level and conversation-level settings: if either is false, the parameter is hidden from the model.
Example
var chat = store.AI.Conversation(
createResult.Identifier,
"Performers/",
new AiConversationCreationOptions()
.AddParameter("country", "France")
.AddParameter("userId", currentUserId, sendToModel: false));
See Conversation, SetUserPrompt, and AiConversationCreationOptions in the Syntax section.
Processing action-tool requests:
During the conversation, the LLM can request the agent to trigger action tools.
The agent will pass a requested action tool's name and parameters to the client, and it is then up to the client to process the request.
- The client can process an action-tool request using a handler or a receiver.
- If an action-tool request arrives and there is no registered handler or receiver to process it, the client can catch it using OnUnhandledAction and decide how to address it.
Action-tool Handlers
A handler is created for a specific action tool and registered with the server using the Handle method.
When the LLM triggers this action tool with an action request, the handler is invoked to process the request, returns a response to the LLM, and ends automatically.
Handlers are typically used for simple, immediate operations like storing a document in the database and returning a confirmation, performing a quick calculation and sending its results, and other scenarios where the response can be generated and returned in a single step.
- To create a handler, pass the Handle method:
- The action tool's name.
- An object to populate with the data sent with the action request.
Make sure that the object has the same structure as the one defined for the action tool's parameters schema.
- When an action request for this tool is received, the handler will be given:
- The populated object with the data sent with the action request.
- When you finish handling the requested action, return a response that will be sent by the agent back to the LLM.
Example
In this example, the action tool is requested to store an employee's details in the database.
// "store-performer-details" action tool handler
chat.Handle("store-performer-details", (Performer performer) =>
{
using (var session = store.OpenSession())
{
// store the values in the Performers collection in the database
session.Store(performer);
session.SaveChanges();
}
// return to the agent an indication that the action went well.
return "done";
});
// An object that represents the arguments provided by the LLM for this tool call
public class Performer
{
public string suggestedReward;
public string employeeId;
public string profit;
}
See Handle in the Syntax section.
Action-tool Receivers
A receiver is created for a specific action tool and registered with the server using the Receive method.
When the LLM triggers this action tool with an action request, the receiver is invoked to process the request, but unlike a handler, the receiver remains active until AddActionResponse is explicitly called to close the pending request and send a response to the LLM.
Receivers are typically used asynchronously for multi-step or delayed operations such as waiting for an external event or for user input before responding, performing long-running operations like batch processing or integration with an external system, and other use cases where the response cannot be generated immediately.
- To create a receiver, pass the Receive method:
- The action tool's name.
- An object to populate with the data sent with the action request.
Make sure that this object has the same structure as the one defined for the action tool's parameters schema.
- When an action request for this tool is received, the receiver will be given:
- An AiAgentActionRequest object containing the details of the action request.
- The populated object with the data sent with the action request.
- When you finish handling the requested action, call AddActionResponse. Pass it:
- The action tool's ID.
- The response to send back to the LLM.
Note that the response can be sent at any time, even after the receiver has finished executing, and from any context, not necessarily from within the receiver callback.
Example
In this example, a receiver gets a recommendation for rewards that can be given to a performant employee and processes it.
- Asynchronous
- Synchronous
chat.Receive("store-performer-details", async (AiAgentActionRequest request, Performer performer) =>
{
// Perform asynchronous work
using (var session = store.OpenAsyncSession())
{
await session.StoreAsync(performer);
await session.SaveChangesAsync();
}
// Example: Send a notification email asynchronously
await EmailService.SendNotificationAsync("manager@company.com", performer);
// Manually send the response to close the action
chat.AddActionResponse(request.ToolId, "done");
});

chat.Receive("store-performer-details", (AiAgentActionRequest request, Performer performer) =>
{
// Perform synchronous work
using (var session = store.OpenSession())
{
session.Store(performer);
session.SaveChanges();
}
// Add any processing logic here
// Manually send the response and close the action
chat.AddActionResponse(request.ToolId, "done");
});
See Receive, AddActionResponse, and AiAgentActionRequest in the Syntax section.
Catching action-tool calls with no callback
If the LLM invokes an action tool that has no registered Handle or Receive callback, an exception is thrown by default.
To handle this gracefully instead, subscribe to the OnUnhandledAction event on the
conversation instance.
The event provides the action request details, allowing you to respond to the LLM or log the unexpected invocation.
Example
// Handle action tools that have no registered handler or receiver
chat.OnUnhandledAction += async (args) =>
{
// Log the unexpected action
Console.WriteLine($"Unhandled action: {args.Action.Name}");
// Respond to the LLM so the conversation can continue
args.Sender.AddActionResponse(args.Action.ToolId, "action not supported");
};
See OnUnhandledAction and UnhandledActionEventArgs in the Syntax section.
Injecting artificial tool context:
You can inject an artificial tool call and its response into the conversation context before running it, using the AddArtificialActionWithResponse method.
This will make the LLM believe that a specific tool was already called and returned a specific result, without actually executing any tool.
The tool name does not need to be registered in the agent configuration.
This is useful, for example, when you want to pre-load the conversation with contextual information that would normally be retrieved by a tool, such as user preferences, permissions, or other data that is already available to the client and doesn't require a real tool call to be obtained.
Example
In this example, the client already knows the user's allergies and injects them as if a tool had fetched them, so the LLM can use this information immediately.
var chat = store.AI.Conversation(
"your-agent-id",
"chats/",
new AiConversationCreationOptions());
// Provide an artificial "GetUserAllergies" tool call and response
chat.AddArtificialActionWithResponse("GetUserAllergies", "Gluten, Lactose");
// The LLM will take the allergies into account when answering
chat.SetUserPrompt("Should I get regular cheese?");
var response = await chat.RunAsync<ModelAnswer>(CancellationToken.None);
// The response type matching the agent's output schema
public class ModelAnswer
{
public bool Recommend;
public string Reason;
}
See AddArtificialActionWithResponse in the Syntax section.
Conversation response:
The LLM response is returned by the agent to the client in an AiAnswer object, with an answer to the user prompt and the conversation status, indicating whether the conversation is complete or a further "turn" is required.
The AiAnswer<TAnswer> object contains:
- Answer - the LLM's typed response, deserialized to the requested type.
- Status - Done when complete, ActionRequired when the LLM needs tool responses before it can continue.
- Usage - token usage counters for this turn.
- Elapsed - total time taken to produce the answer.
See AiAnswer and AiConversationResult in the Syntax section.
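As a reference point, the sketch below shows how a client might inspect these properties after a turn. It assumes the chat instance and Performer class from the examples above; the exact members of Usage are not shown in this article, so only Status, Answer, and Elapsed are accessed here.

```csharp
// Sketch: inspecting the AiAnswer<Performer> returned by RunAsync.
// Assumes `chat` is an active conversation and `Performer` matches the agent's output schema.
var answer = await chat.RunAsync<Performer>(CancellationToken.None);

if (answer.Status == AiConversationResult.Done)
{
    // The typed response is deserialized into Answer
    Console.WriteLine($"Reward suggestion: {answer.Answer.suggestedReward}");
}
else if (answer.Status == AiConversationResult.ActionRequired)
{
    // The LLM is waiting for tool responses before it can complete this turn
}

// Elapsed (turn duration) and Usage (token counters) can be logged for monitoring
Console.WriteLine($"Turn completed in {answer.Elapsed}");
```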
Setting user prompt and running the conversation:
Set the user prompt using the SetUserPrompt method, and run the conversation using the
RunAsync method.
You can also use StreamAsync to stream the LLM's response as it is generated.
Learn how to do this in the Stream LLM responses section.
// Set the user prompt and run the conversation
chat.SetUserPrompt("send a few suggestions to reward the employee that made the largest profit");
var response = await chat.RunAsync<Performer>(CancellationToken.None);
if (response.Status == AiConversationResult.Done)
{
// The LLM successfully processed the user prompt and returned its response.
// The performer's ID, profit, and suggested rewards were stored in the Performers
// collection by the action tool, and are also returned in the final LLM response.
}
See RunAsync in the Syntax section.
See the full example below.
Stream LLM responses
You can have the agent stream the LLM's response to the client in real time as it is generated, using the StreamAsync method instead of RunAsync, which sends the whole response only once it is fully prepared.
Streaming the response allows the client to start processing it before it is complete, which can improve the application's responsiveness.
- Example
// A StringBuilder, used in this example to collect the streamed response
var reward = new StringBuilder();
// Using StreamAsync to collect the streamed response
// The response property to stream is in this case `suggestedReward`
var response = await chat.StreamAsync<Performer>(responseObj => responseObj.suggestedReward, str =>
{
// Callback invoked with the arrival of each incoming chunk of the processed property
reward.Append(str); // Add the incoming chunk to the StringBuilder instance
return Task.CompletedTask; // Return with an indication that the chunk was processed
}, CancellationToken.None);
if (response.Status == AiConversationResult.Done)
{
// Handle the full response when ready
// The streamed property was fully loaded and handled by the callback above.
// The remaining parts of the response (including other properties, if any exist)
// arrive when the whole response is ready and can be handled here.
}
See StreamAsync in the Syntax section.
Full example
The agent's user in this example is a human experience manager.
The agent helps the manager reward employees: using query tools, it searches
for orders sent to a certain country (or, when the user prompts it with
"everywhere", to all countries) and finds the employee that made the largest profit.
The agent then runs another query tool that takes the employee's ID (fetched
from the retrieved orders) and returns the employee's residence region,
and finds rewards suitable for the employee based on this region.
Finally, it uses an action tool to store the employee's ID, profit, and reward
suggestions in the Performers collection in the database, and returns the same
details in its final response as well.
public async Task createAndRunAiAgent_full()
{
var store = new DocumentStore();
// Define connection string to OpenAI
var connectionString = new AiConnectionString
{
Name = "open-ai-cs",
ModelType = AiModelType.Chat,
OpenAiSettings = new OpenAiSettings(
apiKey: "your-api-key",
endpoint: "https://api.openai.com/v1",
// LLM model for text generation
model: "gpt-4.1")
};
// Deploy connection string to server
var operation = new PutConnectionStringOperation<AiConnectionString>(connectionString);
var putConnectionStringResult = store.Maintenance.Send(operation);
// Start setting an agent configuration
var agent = new AiAgentConfiguration("reward-productive-employee", connectionString.Name,
@"You work for a human experience manager.
The manager uses your services to find which employee has made the largest profit and to suggest
a reward.
The manager provides you with the name of a country, or with the word ""everywhere"" to indicate
all countries.
Then you:
1. use a query tool to load all the orders sent to the selected country,
or a query tool to load all orders sent to all countries.
2. calculate which employee made the largest profit.
3. use a query tool to learn in what general area this employee lives.
4. find suitable vacations sites or other rewards based on the employee's residence area.
5. use an action tool to store in the database the employee's ID, profit, and your reward suggestions.
When you're done, return these details in your answer to the user as well.");
// Set agent ID
agent.Identifier = "reward-productive-employee";
// Define LLM response object
agent.SampleObject = "{" +
"\"EmployeeID\": \"embed the employee’s ID here\"," +
"\"Profit\": \"embed the profit made by the employee here\"," +
"\"SuggestedReward\": \"embed suggested rewards here\"" +
"}";
// Set agent parameters
agent.Parameters.Add(new AiAgentParameter(
"country", "A specific country that orders were shipped to, " +
"or \"everywhere\" to look for orders shipped to all countries"));
agent.Queries =
[
// Set a query tool to retrieve all orders sent everywhere
new AiAgentToolQuery
{
// Query tool name
Name = "retrieve-orders-sent-to-all-countries",
// Query tool description
Description = "a query tool that allows you to retrieve all orders sent to all countries.",
// Query tool RQL query
Query = "from Orders as O select O.Employee, O.Lines.Quantity",
// Sample parameters object
ParametersSampleObject = "{}"
},
// Set a query tool to retrieve all orders sent to a specific country
new AiAgentToolQuery
{
Name = "retrieve-orders-sent-to-a-specific-country",
Description =
"a query tool that allows you to retrieve all orders sent to a specific country",
Query =
"from Orders as O where O.ShipTo.Country == " +
"$country select O.Employee, O.Lines.Quantity",
ParametersSampleObject = "{}"
},
// Set a query tool to retrieve the performer's residence region details from the database
new AiAgentToolQuery
{
Name = "retrieve-performer-living-region",
Description =
"a query tool that allows you to retrieve an employee's country, city, and " +
"region, by the employee's ID",
Query = "from Employees as E where id() == $employeeId select E.Address.Country, " +
"E.Address.City, E.Address.Region",
ParametersSampleObject = "{" +
"\"employeeId\": \"embed the employee's ID here\"" +
"}"
}
];
agent.Actions =
[
// Set an action tool to store the performer's details
new AiAgentToolAction
{
Name = "store-performer-details",
Description =
"an action tool that allows you to store the ID of the employee that made " +
"the largest profit, the profit, and your suggestions for a reward, in the database.",
ParametersSampleObject = "{" +
"\"suggestedReward\": \"embed your suggestions for a reward here\", " +
"\"employeeId\": \"embed the employee’s ID here\", " +
"\"profit\": \"embed the employee’s profit here\"" +
"}"
}
];
// Set chat trimming configuration
AiAgentSummarizationByTokens summarization = new AiAgentSummarizationByTokens()
{
// Summarize old messages when the number of tokens stored in the conversation exceeds this limit
MaxTokensBeforeSummarization = 32768,
// Max number of tokens that the conversation is allowed to contain after summarization
MaxTokensAfterSummarization = 1024
};
agent.ChatTrimming = new AiAgentChatTrimmingConfiguration(summarization);
// Limit the number of times the LLM can request tools in response to a single user prompt
agent.MaxModelIterationsPerCall = 3;
var createResult = await store.AI.CreateAgentAsync(agent, new Performer
{
suggestedReward = "your suggestions for a reward",
employeeId = "the ID of the employee that made the largest profit",
profit = "the profit the employee made"
});
// Set chat ID, prefix, and agent parameters.
// (a specific country activates one query tool, "everywhere" activates another)
var chat = store.AI.Conversation(
createResult.Identifier,
"Performers/",
new AiConversationCreationOptions().AddParameter("country", "France"));
// Handle the action tool that the LLM uses to store the performer's details in the database
chat.Handle("store-performer-details", (Performer performer) =>
{
using (var session = store.OpenSession())
{
// store values in Performers collection in database
session.Store(performer);
session.SaveChanges();
}
return "done";
});
// Set user prompt and run chat
chat.SetUserPrompt("send a few suggestions to reward the employee that made the largest profit");
var response = await chat.RunAsync<Performer>(CancellationToken.None);
if (response.Status == AiConversationResult.Done)
{
// The LLM successfully processed the user prompt and returned its response.
// The performer's ID, profit, and suggested rewards were stored in the Performers
// collection by the action tool, and are also returned in the final LLM response.
}
}
Syntax
Methods
Agent management
- CreateAgent
- DeleteAgent
Creates or updates an AI agent configuration in the database.
// Async with sample object
public Task<AiAgentConfigurationResult> CreateAgentAsync<TSchema>(
AiAgentConfiguration configuration, TSchema sampleObject,
CancellationToken token = default)
// Async without sample object
public Task<AiAgentConfigurationResult> CreateAgentAsync(
AiAgentConfiguration configuration,
CancellationToken token = default)
// Sync with sample object
public AiAgentConfigurationResult CreateAgent<TSchema>(
AiAgentConfiguration configuration, TSchema sampleObject)
where TSchema : new()
// Sync without sample object
public AiAgentConfigurationResult CreateAgent(
AiAgentConfiguration configuration)
Usage:
var result = await store.AI.CreateAgentAsync(configuration, sampleObject);
| Parameter | Type | Description |
|---|---|---|
| configuration | AiAgentConfiguration | The agent configuration to create or update. |
| sampleObject | TSchema | An example response object whose structure defines the schema the LLM should follow. |
| token | CancellationToken | Optional cancellation token. |
| Return value | |
|---|---|
AiAgentConfigurationResult | Contains Identifier (string) — the unique ID of the created or updated agent. |
Deletes an AI agent configuration from the database.
// Async
public Task<AiAgentConfigurationResult> DeleteAgentAsync(
string agentId, CancellationToken token = default)
// Sync
public AiAgentConfigurationResult DeleteAgent(string agentId)
Usage:
var result = await store.AI.DeleteAgentAsync("reward-productive-employee");
| Parameter | Type | Description |
|---|---|---|
| agentId | string | The unique ID of the agent to delete. |
| Return value | |
|---|---|
AiAgentConfigurationResult | Contains Identifier (string) — the ID of the deleted agent. |
- GetAgent
- GetAgents
Retrieves the configuration of a specific AI agent by its ID.
// Async
public Task<AiAgentConfiguration> GetAgentAsync(
string agentId, CancellationToken token = default)
// Sync
public AiAgentConfiguration GetAgent(string agentId)
Usage:
AiAgentConfiguration agent = await store.AI.GetAgentAsync("reward-productive-employee");
| Parameter | Type | Description |
|---|---|---|
| agentId | string | The unique ID of the agent to retrieve. |
| Return value | |
|---|---|
AiAgentConfiguration | The agent configuration, or null if not found. |
Retrieves the configurations of all AI agents in the database.
// Async
public Task<GetAiAgentsResponse> GetAgentsAsync(CancellationToken token = default)
// Sync
public GetAiAgentsResponse GetAgents()
Usage:
GetAiAgentsResponse response = await store.AI.GetAgentsAsync();
List<AiAgentConfiguration> agents = response.AiAgents;
| Return value | |
|---|---|
GetAiAgentsResponse | Contains AiAgents (List<AiAgentConfiguration>) — the list of all agent configurations. |
Conversation execution
- Conversation
- SetUserPrompt
- RunAsync / Run
- StreamAsync
Opens a new or existing conversation with an AI agent.
public IAiConversationOperations Conversation(
string agentId, string conversationId,
AiConversationCreationOptions creationOptions,
string changeVector = null)
Usage:
var chat = store.AI.Conversation(
"reward-productive-employee",
"Performers/",
new AiConversationCreationOptions().AddParameter("country", "France"));
| Parameter | Type | Description |
|---|---|---|
| agentId | string | The unique ID of the agent. |
| conversationId | string | The conversation ID. Provide a prefix ending with / or | to create a new conversation with an auto-generated ID, or provide a full ID to resume an existing conversation. |
| creationOptions | AiConversationCreationOptions | Conversation creation options, including agent parameter values. |
| changeVector | string | Optional change vector for concurrency control. |
| Return value | |
|---|---|
IAiConversationOperations | The conversation operations interface for managing the conversation lifecycle. |
Sets the user prompt to send to the AI agent on the next conversation turn.
void SetUserPrompt(string userPrompt)
Usage:
chat.SetUserPrompt("find the employee that made the largest profit");
| Parameter | Type | Description |
|---|---|---|
| userPrompt | string | The text of the user's message. |
Executes one turn of the conversation: sends the current prompt, processes any required actions, and awaits the agent's reply.
// Async
Task<AiAnswer<TAnswer>> RunAsync<TAnswer>(CancellationToken token = default)
// Sync
AiAnswer<TAnswer> Run<TAnswer>()
Usage:
var response = await chat.RunAsync<Performer>(CancellationToken.None);
if (response.Status == AiConversationResult.Done)
{
// Final answer is available in response.Answer
}
| Parameter | Type | Description |
|---|---|---|
| token | CancellationToken | Optional cancellation token. |
| Return value | |
|---|---|
AiAnswer<TAnswer> | Contains the model's answer, conversation status, token usage, and elapsed time. |
Executes one turn of the conversation while streaming a specified response property in real time as the LLM generates it.
// Stream property selected via lambda expression
Task<AiAnswer<TAnswer>> StreamAsync<TAnswer>(
Expression<Func<TAnswer, string>> streamPropertyPath,
Func<string, Task> streamedChunksCallback,
CancellationToken token = default)
// Stream property selected by name
Task<AiAnswer<TAnswer>> StreamAsync<TAnswer>(
string streamPropertyPath,
Func<string, Task> streamedChunksCallback,
CancellationToken token = default)
Usage:
var response = await chat.StreamAsync<Performer>(
responseObj => responseObj.suggestedReward, str =>
{
reward.Append(str);
return Task.CompletedTask;
}, CancellationToken.None);
| Parameter | Type | Description |
|---|---|---|
| streamPropertyPath | Expression<Func<TAnswer, string>> or string | A lambda expression that selects the property to stream from the response object, or the name of that property. |
| streamedChunksCallback | Func<string, Task> | A callback invoked with each incoming chunk of the streamed property. |
| token | CancellationToken | Optional cancellation token. |
| Return value | |
|---|---|
Task<AiAnswer<TAnswer>> | The final conversation result and status after the streamed property and remaining properties are fully received. |
Action-tool processing
- Handle
- Receive
- AddActionResponse
- AddArtificialActionWithResponse
- OnUnhandledAction
Registers a handler for an action tool. When the LLM triggers this tool, the handler processes the request, and its return value is automatically sent back to the LLM as the action response.
// Async handler with typed result
void Handle<TArgs, TResult>(string actionName,
Func<TArgs, Task<TResult>> action,
AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel)
where TArgs : class where TResult : class
// Sync handler
void Handle<TArgs>(string actionName,
Func<TArgs, object> action,
AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel)
where TArgs : class
// Async handler with action request and typed result
void Handle<TArgs, TResult>(string actionName,
Func<AiAgentActionRequest, TArgs, Task<TResult>> action,
AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel)
where TArgs : class where TResult : class
// Sync handler with action request
void Handle<TArgs>(string actionName,
Func<AiAgentActionRequest, TArgs, object> action,
AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel)
where TArgs : class
Usage:
chat.Handle("store-performer-details", (Performer performer) =>
{
using (var session = store.OpenSession())
{
session.Store(performer);
session.SaveChanges();
}
return "done";
});
| Parameter | Type | Description |
|---|---|---|
| actionName | string | The name of the action tool to handle. |
| action | Func<TArgs, Task<TResult>>, Func<TArgs, object>, Func<AiAgentActionRequest, TArgs, Task<TResult>>, or Func<AiAgentActionRequest, TArgs, object> | The handler function that processes the action request and returns a response to the LLM. |
| aiHandleError | AiHandleErrorStrategy | Error handling strategy. SendErrorsToModel — send errors to the model as text. RaiseImmediately — throw exceptions to the caller. |
Registers a receiver for an action tool. Unlike a handler, the receiver remains active until AddActionResponse is explicitly called.
// Async receiver
void Receive<TArgs>(string actionName,
Func<AiAgentActionRequest, TArgs, Task> action,
AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel)
where TArgs : class
// Sync receiver
void Receive<TArgs>(string actionName,
Action<AiAgentActionRequest, TArgs> action,
AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel)
where TArgs : class
Usage:
chat.Receive("store-performer-details", (AiAgentActionRequest request, Performer performer) =>
{
using (var session = store.OpenSession())
{
session.Store(performer);
session.SaveChanges();
}
chat.AddActionResponse(request.ToolId, "done");
});
| Parameter | Type | Description |
|---|---|---|
| actionName | string | The name of the action tool to handle. |
| action | Func<AiAgentActionRequest, TArgs, Task> or Action<AiAgentActionRequest, TArgs> | The receiver function that processes the action request. Must call AddActionResponse when done. |
| aiHandleError | AiHandleErrorStrategy | Error handling strategy. SendErrorsToModel — send errors to the model as text. RaiseImmediately — throw exceptions to the caller. |
Closes a pending action request and sends a response back to the LLM.
// String response
void AddActionResponse(string toolId, string actionResponse)
// Typed response
void AddActionResponse<TResponse>(string toolId, TResponse actionResponse)
where TResponse : class
Usage:
chat.AddActionResponse(request.ToolId, "done");
| Parameter | Type | Description |
|---|---|---|
| toolId | string | The unique ID of the action request (from AiAgentActionRequest.ToolId). |
| actionResponse | string or TResponse | The response to send back to the LLM through the agent. |
Injects an artificial tool call and response into the conversation context, making the LLM believe it already executed a specific tool (either action or query) and received a specific result, without actually executing any tool.
// String response
void AddArtificialActionWithResponse(string toolId, string actionResponse)
// Typed response
void AddArtificialActionWithResponse<TResponse>(string toolId, TResponse actionResponse)
where TResponse : class
Usage:
chat.AddArtificialActionWithResponse("store-performer-details", "done");
| Parameter | Type | Description |
|---|---|---|
| toolId | string | The name of the tool for which to simulate a call. |
| actionResponse | string or TResponse | The response to inject as the result of the simulated action. |
An event raised when the model invokes an action tool that has no registered Handle or Receive callback. If no event handler is subscribed and an unhandled action is raised, an exception is thrown.
event Func<UnhandledActionEventArgs, Task> OnUnhandledAction
Usage:
chat.OnUnhandledAction += async (args) =>
{
// args.Action contains the unhandled action request details
// args.Sender is the IAiConversationOperations instance
args.Sender.AddActionResponse(args.Action.ToolId, "action not supported");
};
| Event args property | Type | Description |
|---|---|---|
| Sender | IAiConversationOperations | The conversation instance that raised the event. |
| Action | AiAgentActionRequest | The action request from the model that has no registered handler. |
| Token | CancellationToken | Cancellation token associated with the current turn. |
Classes
Agent configuration classes
- AiAgentConfiguration
- AiAgentParameter
Defines the configuration for an AI agent, including its prompt, tools, output schema, and settings.
class AiAgentConfiguration
{
string Identifier
string Name
string ConnectionStringName
string SystemPrompt
string SampleObject
string OutputSchema
List<AiAgentToolQuery> Queries
List<AiAgentToolAction> Actions
List<AiAgentParameter> Parameters
AiAgentChatTrimmingConfiguration ChatTrimming
int? MaxModelIterationsPerCall
bool Disabled
}
Constructor:
public AiAgentConfiguration(string name, string connectionStringName, string systemPrompt)
| Property | Type | Description |
|---|---|---|
| Identifier | string | A unique identifier for the AI agent configuration. |
| Name | string | The name of the AI agent configuration. |
| ConnectionStringName | string | The name of the connection string used to connect to the AI provider. |
| SystemPrompt | string | The prompt that guides the behavior and purpose of the AI agent. |
| SampleObject | string | A sample JSON object describing the expected LLM response layout. Translated to a JSON schema before being sent to the LLM. |
| OutputSchema | string | A formal JSON schema describing the expected LLM response structure. If both SampleObject and OutputSchema are defined, only the schema is used. |
| Queries | List<AiAgentToolQuery> | Query tools that the LLM can use (through the agent) to retrieve data from the database. |
| Actions | List<AiAgentToolAction> | Action tools that the LLM can use to trigger client-side actions. |
| Parameters | List<AiAgentParameter> | Agent parameters whose values are provided by the client when a conversation is started. |
| ChatTrimming | AiAgentChatTrimmingConfiguration | Configuration for summarizing the conversation to reduce token usage. |
| MaxModelIterationsPerCall | int? | Maximum number of times the LLM can request tool usage in response to a single user prompt. |
| Disabled | bool | Indicates whether the AI agent is disabled. |
A required input parameter used by an AI agent's tools.
class AiAgentParameter
{
string Name
string Description
bool? SendToModel
}
Constructors:
// Initialize with name and description
public AiAgentParameter(string name, string description)
// Initialize with name, description, and a flag to control model visibility
public AiAgentParameter(string name, string description, bool sendToModel)
| Property | Type | Description |
|---|---|---|
| Name | string | The parameter name as referenced by tools and queries. |
| Description | string | Human-readable description explaining what value the parameter expects. |
| SendToModel | bool? | Whether the parameter value is exposed to the AI model. true or null (default) — the parameter is exposed. false — the parameter is hidden from the model (useful for sensitive values like userId or tenant). |
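For example, a sketch of defining one parameter the model can see and one that is kept hidden from it (the parameter names here are illustrative):

```csharp
// Visible to the model (SendToModel defaults to exposed)
agent.Parameters.Add(new AiAgentParameter(
    "country", "A specific country that orders were shipped to"));

// Hidden from the model, but still available to query tools
agent.Parameters.Add(new AiAgentParameter(
    "userId", "The current user's document ID", sendToModel: false));
```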
- AiAgentChatTrimmingConfiguration
- AiAgentSummarizationByTokens
- AiAgentHistoryConfiguration
Configuration for reducing the size of the AI agent's chat history using a summarization strategy.
class AiAgentChatTrimmingConfiguration
{
AiAgentSummarizationByTokens Tokens
AiAgentHistoryConfiguration History
}
Constructor:
public AiAgentChatTrimmingConfiguration(
AiAgentSummarizationByTokens tokensConfig,
AiAgentHistoryConfiguration historyConfig = null)
| Property | Type | Description |
|---|---|---|
| Tokens | AiAgentSummarizationByTokens | Summarization settings that control how and when the chat history is summarized into a concise prompt. |
| History | AiAgentHistoryConfiguration | Optional. Configuration for persisting the original chat history when summarization occurs. If null, no history documents will be created. |
Configuration for AI agent conversation summarization based on token thresholds.
class AiAgentSummarizationByTokens
{
string SummarizationTaskBeginningPrompt
string SummarizationTaskEndPrompt
string ResultPrefix
long? MaxTokensBeforeSummarization
long? MaxTokensAfterSummarization
}
| Property | Type | Description |
|---|---|---|
| SummarizationTaskBeginningPrompt | string | Instruction text prepended to the conversation when requesting a summary. Sent with the system role. Customize it to influence the structure, tone, or depth of the generated summary. |
| SummarizationTaskEndPrompt | string | The user-role message that triggers the summarization process. Sent immediately after the system prompt. |
| ResultPrefix | string | Text prefix that appears before the generated summary of the previous conversation. |
| MaxTokensBeforeSummarization | long? | Maximum number of tokens allowed before summarization is triggered. |
| MaxTokensAfterSummarization | long? | Maximum number of tokens allowed in the generated summary. Default: 1024. |
Configuration for retention and expiration of AI agent chat history documents.
class AiAgentHistoryConfiguration
{
int? HistoryExpirationInSec
}
Constructors:
// Enables history with no expiration
public AiAgentHistoryConfiguration()
// Enables history with an expiration timespan
public AiAgentHistoryConfiguration(TimeSpan expiration)
| Property | Type | Description |
|---|---|---|
| HistoryExpirationInSec | int? | The timespan (in seconds) after which history documents expire and are eligible for removal. |
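The three classes above can be combined as follows. This sketch reuses the trimming thresholds from the full example, with an assumed 7-day history retention:

```csharp
// Summarize once the conversation exceeds 32K tokens,
// aiming for a summary of at most 1024 tokens
var summarization = new AiAgentSummarizationByTokens
{
    MaxTokensBeforeSummarization = 32768,
    MaxTokensAfterSummarization = 1024
};

// Persist the original chat history, expiring it after 7 days
var history = new AiAgentHistoryConfiguration(TimeSpan.FromDays(7));

agent.ChatTrimming = new AiAgentChatTrimmingConfiguration(summarization, history);
```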
Agent tool classes
- AiAgentToolQuery
- AiAgentToolQueryOptions
- AiAgentToolAction
A query tool that allows the AI agent to retrieve data from the database.
class AiAgentToolQuery
{
string Name
string Description
string Query
string ParametersSampleObject
string ParametersSchema
AiAgentToolQueryOptions Options
}
| Property | Type | Description |
|---|---|---|
| Name | string | The identifier used by the AI to reference this query tool. |
| Description | string | A description explaining to the AI when to invoke this query. |
| Query | string | The RQL query string to execute. |
| ParametersSampleObject | string | A sample JSON object representing the parameters the LLM can pass to this query. |
| ParametersSchema | string | A formal JSON schema for the parameters. If both a sample object and a schema are defined, only the schema is used. |
| Options | AiAgentToolQueryOptions | Execution options for the query tool (initial context, model access). |
Execution options for a query tool.
class AiAgentToolQueryOptions
{
bool? AllowModelQueries
bool? AddToInitialContext
}
| Property | Type | Description |
|---|---|---|
| AllowModelQueries | bool? | true — the LLM can trigger this query on demand. false — the LLM cannot trigger this query. null — server-side defaults apply. |
| AddToInitialContext | bool? | true — the query runs when the conversation starts and its results are added to the initial context. false — the query does not run at conversation start. null — server-side defaults apply. |
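A sketch of attaching these options to a query tool, so the query runs once at conversation start and is not available to the model on demand (the tool name and RQL query here are illustrative):

```csharp
var queryTool = new AiAgentToolQuery
{
    Name = "load-orders-overview",
    Description = "loads an overview of the orders in the database",
    Query = "from Orders as O select O.Employee, O.Lines.Quantity",
    ParametersSampleObject = "{}",
    Options = new AiAgentToolQueryOptions
    {
        // Run at conversation start and add the results to the initial context
        AddToInitialContext = true,
        // Do not let the model trigger this query again on demand
        AllowModelQueries = false
    }
};
```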
An action tool that allows the AI agent to trigger the client to perform an action.
class AiAgentToolAction
{
string Name
string Description
string ParametersSampleObject
string ParametersSchema
}
| Property | Type | Description |
|---|---|---|
| Name | string | The identifier used by the AI to reference this action tool. |
| Description | string | A description explaining to the AI what this action is capable of. |
| ParametersSampleObject | string | A sample JSON object representing the parameters the LLM will fill when requesting this action. |
| ParametersSchema | string | A formal JSON schema for the parameters. If both a sample object and a schema are defined, only the schema is used. |
Conversation and response classes
- AiConversationCreationOptions
- AiAgentActionRequest
- AiAnswer
- AiUsage
Options for creating or continuing an AI conversation, including agent parameter values and expiration.
class AiConversationCreationOptions
{
Dictionary<string, object> Parameters
int? ExpirationInSec
}
Constructors and methods:
// Create with no parameters
public AiConversationCreationOptions()
// Create with a set of parameter values
public AiConversationCreationOptions(Dictionary<string, object> parameters)
// Add a parameter value (fluent, returns this)
public AiConversationCreationOptions AddParameter(string name, object value)
// Add a parameter value with visibility control (fluent, returns this)
public AiConversationCreationOptions AddParameter(string name, object value, bool sendToModel)
| Property | Type | Description |
|---|---|---|
| Parameters | Dictionary<string, object> | Values for agent parameters defined in the agent configuration. |
| ExpirationInSec | int? | Optional. If the conversation is idle longer than this period (in seconds), it will be automatically deleted. |
| AddParameter parameter | Type | Description |
|---|---|---|
| name | string | The parameter name, matching a parameter defined in the agent configuration. |
| value | object | The value to assign to the parameter. |
| sendToModel | bool | Controls the visibility of this parameter value to the LLM for this conversation turn. The final visibility is determined by combining this value with the SendToModel setting on the matching AiAgentParameter in the agent configuration: if either is false, the parameter is hidden from the model. null (default) means not specified - the configuration-level value alone decides. |
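Putting it together, a sketch of creation options that set a visible parameter, a hidden parameter, and an idle-expiration period (the agent ID and values are illustrative):

```csharp
var options = new AiConversationCreationOptions()
    .AddParameter("country", "France")
    // Hidden from the model for this conversation
    .AddParameter("userId", "users/1-A", sendToModel: false);

// Delete the conversation if it stays idle for more than one hour
options.ExpirationInSec = 3600;

var chat = store.AI.Conversation("reward-productive-employee", "chats/", options);
```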
Contains the details of an action request sent by the LLM to the agent.
class AiAgentActionRequest
{
string Name
string ToolId
string Arguments
}
| Property | Type | Description |
|---|---|---|
| Name | string | The action tool name. |
| ToolId | string | The unique ID of this action request. |
| Arguments | string | The arguments provided by the LLM, as a JSON string. |
The typed answer returned from an AI conversation turn.
class AiAnswer<TAnswer>
{
TAnswer Answer
AiConversationResult Status
AiUsage Usage
TimeSpan Elapsed
}
| Property | Type | Description |
|---|---|---|
| Answer | TAnswer | The answer content produced by the AI, deserialized to the requested type. |
| Status | AiConversationResult | The conversation status: Done or ActionRequired. |
| Usage | AiUsage | Token usage reported by the model for generating this answer. |
| Elapsed | TimeSpan | Total time elapsed to produce the answer, measured from the server's request to the LLM until the response was received. |
Tracks token usage for AI operations.
class AiUsage
{
long PromptTokens
long CompletionTokens
long TotalTokens
long CachedTokens
long ReasoningTokens
}
| Property | Type | Description |
|---|---|---|
| PromptTokens | long | Total number of tokens used in prompts. |
| CompletionTokens | long | Total number of tokens produced by completions. |
| TotalTokens | long | Total number of tokens used (prompt + completion). |
| CachedTokens | long | Number of tokens served from cache, if available. |
| ReasoningTokens | long | The part of the completion tokens used for reasoning by the model. |
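A sketch of reading the usage counters after a finished turn:

```csharp
var response = await chat.RunAsync<Performer>(CancellationToken.None);

// Token accounting for this turn, as reported by the model
AiUsage usage = response.Usage;
Console.WriteLine(
    $"Prompt: {usage.PromptTokens}, completion: {usage.CompletionTokens} " +
    $"(reasoning: {usage.ReasoningTokens}), " +
    $"total: {usage.TotalTokens}, cached: {usage.CachedTokens}");
```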
- AiAgentConfigurationResult
- GetAiAgentsResponse
- UnhandledActionEventArgs
Result returned by the server for an AI agent configuration operation (create, update, or delete).
class AiAgentConfigurationResult
{
string Identifier
long RaftCommandIndex
}
| Property | Type | Description |
|---|---|---|
| Identifier | string | The AI agent configuration identifier. |
| RaftCommandIndex | long | Raft index of the command that performed the operation. |
The server response containing one or more AI agent configurations.
class GetAiAgentsResponse
{
List<AiAgentConfiguration> AiAgents
}
| Property | Type | Description |
|---|---|---|
| AiAgents | List<AiAgentConfiguration> | The list of returned AI agent configurations. |
Event arguments for the OnUnhandledAction event.
class UnhandledActionEventArgs
{
IAiConversationOperations Sender
AiAgentActionRequest Action
CancellationToken Token
}
| Property | Type | Description |
|---|---|---|
| Sender | IAiConversationOperations | The conversation instance that raised the event. |
| Action | AiAgentActionRequest | The action (tool call) requested by the AI model that has no registered handler. |
| Token | CancellationToken | Cancellation token associated with the current conversation turn. |
Enums
- AiConversationResult
- AiHandleErrorStrategy
Represents the outcome of a single conversation turn.
enum AiConversationResult
{
Done,
ActionRequired
}
| Value | Description |
|---|---|
| Done | The conversation has completed and a final answer is available. |
| ActionRequired | Further interaction is required, such as responding to tool requests. |
Specifies how errors thrown by tool handlers or receivers should be handled.
enum AiHandleErrorStrategy
{
SendErrorsToModel,
RaiseImmediately
}
| Value | Description |
|---|---|
| SendErrorsToModel | Convert the error to a textual message and send it back to the model as the tool response. |
| RaiseImmediately | Throw the exception immediately to the caller instead of sending it to the model. |