Creating AI agents: API

To create an AI agent, a client defines its configuration, provides it with settings and tools, and registers the agent with the server.

Once the agent is created, the client can initiate or resume conversations, get LLM responses, and perform actions based on LLM insights.

This page provides a step-by-step guide to creating an AI agent and interacting with it using the Client API.
Creating a connection string
Your agent will need a connection string to connect with the LLM. Create a connection string using an AiConnectionString instance and the PutConnectionStringOperation operation.
(You can also create a connection string using Studio, see here.)
You can use a local Ollama model if your main considerations are speed, cost, open-source licensing, or security, or you can use a remote OpenAI service for its additional resources and capabilities.
-
Example
- open-ai-cs
- ollama-cs
using (var store = new DocumentStore())
{
// Define the connection string to OpenAI
var connectionString = new AiConnectionString
{
// Connection string name & identifier
Name = "open-ai-cs",
// Connection type
ModelType = AiModelType.Chat,
// OpenAI connection settings
OpenAiSettings = new OpenAiSettings(
apiKey: "your-api-key",
endpoint: "https://api.openai.com/v1",
// LLM model for text generation
model: "gpt-4.1")
};
// Deploy the connection string to the server
var operation = new PutConnectionStringOperation<AiConnectionString>(connectionString);
var putConnectionStringResult = store.Maintenance.Send(operation);
}

using (var store = new DocumentStore())
{
// Define the connection string to Ollama
var connectionString = new AiConnectionString
{
// Connection string name & identifier
Name = "ollama-cs",
// Connection type
ModelType = AiModelType.Chat,
// Ollama connection settings
OllamaSettings = new OllamaSettings(
// LLM Ollama model for text generation
model: "llama3.2",
// local URL
uri: "http://localhost:11434/")
};
// Deploy the connection string to the server
var operation = new PutConnectionStringOperation<AiConnectionString>(connectionString);
var putConnectionStringResult = store.Maintenance.Send(operation);
}
Syntax
- open-ai-cs-syntax
- ollama-cs-syntax
public class AiConnectionString
{
public string Name { get; set; }
public AiModelType ModelType { get; set; }
public string Identifier { get; set; }
public OpenAiSettings OpenAiSettings { get; set; }
...
}
public class OpenAiSettings : AbstractAiSettings
{
public string ApiKey { get; set; }
public string Endpoint { get; set; }
public string Model { get; set; }
public int? Dimensions { get; set; }
public string OrganizationId { get; set; }
public string ProjectId { get; set; }
}

public class AiConnectionString
{
public string Name { get; set; }
public AiModelType ModelType { get; set; }
public string Identifier { get; set; }
public OllamaSettings OllamaSettings { get; set; }
...
}
public class OllamaSettings : AbstractAiSettings
{
public string Model { get; set; }
public string Uri { get; set; }
}
Defining an agent configuration
To create an AI agent, you need to prepare an agent configuration and populate it with your settings and tools.
Start by creating a new AiAgentConfiguration
instance.
While creating the instance, pass its constructor:
- The agent's Name
- The connection string you created
- A System prompt
The agent will send the system prompt you define here to the LLM to establish its basic characteristics, including its role, purpose, behavior, and the tools it can use.
-
Example
// Start setting an agent configuration
var agent = new AiAgentConfiguration("reward-productive-employee", connectionString.Name,
@"You work for a human experience manager.
The manager uses your services to find which employee has made the largest profit and to suggest
a reward.
The manager provides you with the name of a country, or with the word ""everywhere"" to indicate
all countries.
Then you:
1. use a query tool to load all the orders sent to the selected country,
or a query tool to load all orders sent to all countries.
2. calculate which employee made the largest profit.
3. use a query tool to learn in what general area this employee lives.
4. find suitable vacations sites or other rewards based on the employee's residence area.
5. use an action tool to store in the database the employee's ID, profit, and your reward suggestions.
When you're done, return these details in your answer to the user as well.");
AiAgentConfiguration constructor
public AiAgentConfiguration(string name, string connectionStringName, string systemPrompt);

AiAgentConfiguration class
public class AiAgentConfiguration
{
// A unique identifier given to the AI agent configuration
public string Identifier { get; set; }
// The name of the AI agent configuration
public string Name { get; set; }
// Connection string name
public string ConnectionStringName { get; set; }
// The system prompt that defines the role and purpose of the agent and the LLM
public string SystemPrompt { get; set; }
// An example object that sets the layout for the LLM's response to the user.
// The object is translated to a schema before it is sent to the LLM.
public string SampleObject { get; set; }
// A schema that sets the layout for the LLM's response to the user.
// If both a sample object and a schema are defined, only the schema is used.
public string OutputSchema { get; set; }
// A list of Query tools that the LLM can use (through the agent) to access the database
public List<AiAgentToolQuery> Queries { get; set; } = new List<AiAgentToolQuery>();
// A list of Action tools that the LLM can use to trigger the user to action
public List<AiAgentToolAction> Actions { get; set; } = new List<AiAgentToolAction>();
// Agent parameters whose value the client passes to the LLM each time a chat is started,
// for stricter control over queries initiated by the LLM and as a means for interaction
// between the client and the LLM.
public List<AiAgentParameter> Parameters { get; set; } = new List<AiAgentParameter>();
// The trimming configuration defines if and how the conversation is summarized,
// to minimize the amount of data passed to the LLM when a conversation is started.
public AiAgentChatTrimmingConfiguration ChatTrimming { get; set; } = new
AiAgentChatTrimmingConfiguration(new AiAgentSummarizationByTokens());
// Control over the number of times that the LLM is allowed to use agent tools to handle
// a user prompt.
public int? MaxModelIterationsPerCall { get; set; }
}
Once the initial agent configuration is created, we need to add a few additional elements to it.
Set the agent ID:
Use the Identifier property to give the agent a unique ID by which the system will recognize it.
// Set agent ID
agent.Identifier = "reward-productive-employee";
Define a response object:
Define a structured output response object that the LLM will populate with its response to the user.
To define the response object, you can use the SampleObject and/or the OutputSchema property.
- SampleObject is a straightforward sample of the response object that you expect the LLM to return.
  It is usually simpler to define the response object this way.
- OutputSchema is a formal JSON schema that the LLM can understand.
  Even when you define the response object as a SampleObject, RavenDB will translate the object to a JSON schema before sending it to the LLM. If you prefer, however, you can explicitly define it as a schema yourself.
- If you define both a sample object and a schema, the agent will send only the schema to the LLM.
- sample-object
- json-schema
// Set sample object
agent.SampleObject = "{" +
"\"suggestedReward\": \"your suggestions for a reward\", " +
"\"employeeId\": \"the ID of the employee that made the largest profit\", " +
"\"profit\": \"the profit the employee made\"" +
"}";
// Set output schema
agent.OutputSchema = "{" +
"\"name\": \"RHkxaWo5ZHhMM1RuVnIzZHhxZm9vM0c0UnYrL0JWbkhyRDVMd0tJa1g4Yz0\", " +
"\"strict\": true, " +
"\"schema\": {" +
"\"type\": \"object\", " +
"\"properties\": {" +
"\"employeeID\": {" +
"\"type\": \"string\", " +
"\"description\": \"the ID of the employee that made the largest profit\"" +
"}, " +
"\"profit\": {" +
"\"type\": \"string\", " +
"\"description\": \"the profit the employee made\"" +
"}, " +
"\"suggestedReward\": {" +
"\"type\": \"string\", " +
"\"description\": \"your suggestions for a reward\"" +
"}" +
"}, " +
"\"required\": [" +
"\"employeeID\", " +
"\"profit\", " +
"\"suggestedReward\"" +
"], " +
"\"additionalProperties\": false" +
"}" +
"}";
Add agent parameters:
Agent parameters can be used by query tools when the agent queries the database on behalf of the LLM.
Values for agent parameters are provided by the client, or by a user through the client, when a chat is started.
When the agent is requested to use a query tool that includes agent parameters, it replaces these parameters with the provided values before running the query.
Agent parameters allow the client to focus the queries, and the entire interaction, on its current needs.
In the example below, an agent parameter is used to determine what area of the world a query will handle.
To add an agent parameter, create an AiAgentParameter instance, initialize it with the parameter's name and a description (explaining to the LLM what the parameter is for), and pass this instance to the agent.Parameters.Add method.
-
Example
// Set agent parameters
agent.Parameters.Add(new AiAgentParameter(
"country", "A specific country that orders were shipped to, " +
    "or \"everywhere\" to look for orders shipped to all countries"));
AiAgentParameter definition
public AiAgentParameter(string name, string description);
Set maximum number of iterations:
You can limit the number of times that the LLM is allowed to request the usage of
agent tools in response to a single user prompt. Use MaxModelIterationsPerCall
to change this limit.
-
Example
// Limit the number of times the LLM can request tool usage in response to a single user prompt
agent.MaxModelIterationsPerCall = 3;
MaxModelIterationsPerCall definition
public int? MaxModelIterationsPerCall
Set chat trimming configuration:
To summarize the conversation, create an AiAgentChatTrimmingConfiguration
instance,
use it to configure your trimming strategy, and set the agent's ChatTrimming
property
with the instance.
When creating the instance, pass its constructor a summarization strategy using an AiAgentSummarizationByTokens class.
The original conversation, before it was summarized, can optionally be
kept in the @conversations-history
collection.
To determine whether to keep the original messages and for how long, also pass the
AiAgentChatTrimmingConfiguration
constructor an AiAgentHistoryConfiguration
instance
with your settings.
-
Example
// Set chat trimming configuration
AiAgentSummarizationByTokens summarization = new AiAgentSummarizationByTokens()
{
// When the number of tokens stored in the conversation exceeds this limit
// summarization of old messages will be triggered.
MaxTokensBeforeSummarization = 32768,
// The maximum number of tokens that the conversation is allowed to contain
// after summarization.
MaxTokensAfterSummarization = 1024
};
agent.ChatTrimming = new AiAgentChatTrimmingConfiguration(summarization);
Syntax
public class AiAgentSummarizationByTokens
{
// The maximum number of tokens allowed before summarization is triggered.
public long? MaxTokensBeforeSummarization { get; set; }
// The maximum number of tokens allowed in the generated summary.
public long? MaxTokensAfterSummarization { get; set; }
}
public class AiAgentHistoryConfiguration
{
// Enables history for AI agent conversations.
public AiAgentHistoryConfiguration()
// Enables history for AI agent conversations,
// with `expiration` determining the timespan after which history documents expire.
public AiAgentHistoryConfiguration(TimeSpan expiration)
// The time (in seconds) after which history documents expire.
public int? HistoryExpirationInSec { get; set; }
}
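If you also want to keep the original messages in the @conversations-history collection, the history configuration can be passed alongside the summarization strategy, as described above. A sketch, assuming a two-argument AiAgentChatTrimmingConfiguration constructor that accepts both settings:

```csharp
// Summarize long conversations, and retain the original (pre-summary)
// messages in @conversations-history for 7 days before they expire.
var summarization = new AiAgentSummarizationByTokens
{
    MaxTokensBeforeSummarization = 32768,
    MaxTokensAfterSummarization = 1024
};
var history = new AiAgentHistoryConfiguration(TimeSpan.FromDays(7));
agent.ChatTrimming = new AiAgentChatTrimmingConfiguration(summarization, history);
```

Omitting the history instance keeps only the summarized conversation.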
Adding agent tools
You can enhance your agent with Query and Action tools, which allow the LLM to query your database and trigger client actions.
After defining agent tools and submitting them to the LLM, it is up to the LLM to decide if and when to use them.
Query tools:
Query tools provide the LLM with the ability to retrieve data from the database.
A query tool includes a natural-language description that explains to the LLM what the tool is for, and an RQL query.
Passing values to query tools
- Query tools can optionally include parameters, identified by a $ prefix.
  Both the user and the LLM can pass values to these parameters.
- Passing values from the user:
  Users can pass values to queries through agent parameters.
  - If agent parameters are defined in the agent configuration, the client has to provide values for them when initiating a conversation with the agent.
  - The parameters can be included in query tools' RQL queries.
  - Before running a query, the agent will replace any agent parameter included in it with its value.
- Passing values from the LLM:
  The LLM can pass values to queries through a parameters schema.
  - The parameters schema layout is defined as part of the query tool.
  - When the LLM requests the agent to run a query, it will add parameter values to the request.
  - You can define a parameters schema either as a sample object or as a formal JSON schema.
    If you define both, the LLM will pass parameter values only through the JSON schema.
  - Before running a query, the agent will replace any parameter included in it with its value.
Example
- The first query tool will be used by the LLM when it needs to retrieve all the orders sent to any place in the world (the system prompt instructs it to use this tool when the user enters "everywhere" at the start of the conversation).
- The second query tool will be used by the LLM when it needs to retrieve all the orders that were sent to a particular country, using the $country agent parameter.
- The third tool retrieves from the database the general location of an employee.
  To do this it uses an $employeeId parameter, whose value is set by the LLM in its request to run this tool.

agent.Queries =
[
// Set a query tool that triggers the agent to retrieve all the orders sent everywhere
new AiAgentToolQuery
{
// Query tool name
Name = "retrieve-orders-sent-to-all-countries",
// Query tool description
Description = "a query tool that allows you to retrieve all orders sent to all countries.",
// Query tool RQL query
Query = "from Orders as O select O.Employee, O.Lines.Quantity",
// Sample parameters object for the query tool
// The LLM can use this object to pass parameters to the query tool
ParametersSampleObject = "{}"
},
// Set a query tool that triggers the agent to retrieve all the orders sent to a
// specific country
new AiAgentToolQuery
{
Name = "retrieve-orders-sent-to-a-specific-country",
Description = "a query tool that allows you to retrieve all orders sent " +
"to a specific country",
Query = "from Orders as O where O.ShipTo.Country == $country select O.Employee, " +
"O.Lines.Quantity",
ParametersSampleObject = "{}"
},
// Set a query tool that triggers the agent to retrieve the performer's
// residence region details (country, city, and region) from the database
new AiAgentToolQuery
{
Name = "retrieve-performer-living-region",
Description = "a query tool that allows you to retrieve an employee's country, " +
"city, and region, by the employee's ID",
Query = "from Employees as E where id() == $employeeId select E.Address.Country, " +
"E.Address.City, E.Address.Region",
ParametersSampleObject = "{" +
"\"employeeId\": \"embed the employee's ID here\"" +
"}"
}
];
Syntax
Query tools are defined in a list of AiAgentToolQuery classes.

public class AiAgentToolQuery
{
public string Name { get; set; }
public string Description { get; set; }
public string Query { get; set; }
public string ParametersSampleObject { get; set; }
public string ParametersSchema { get; set; }
}
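As noted above, a query tool can define its LLM parameters with a formal JSON schema (ParametersSchema) instead of a sample object. A hedged sketch of the third tool in that style; the exact schema envelope the server expects is an assumption, modeled on the OutputSchema example earlier on this page:

```csharp
// The "retrieve-performer-living-region" tool, defining its LLM
// parameter with a formal JSON schema instead of a sample object.
new AiAgentToolQuery
{
    Name = "retrieve-performer-living-region",
    Description = "a query tool that allows you to retrieve an employee's country, " +
                  "city, and region, by the employee's ID",
    Query = "from Employees as E where id() == $employeeId select E.Address.Country, " +
            "E.Address.City, E.Address.Region",
    ParametersSchema = "{" +
        "\"type\": \"object\", " +
        "\"properties\": {" +
            "\"employeeId\": {" +
                "\"type\": \"string\", " +
                "\"description\": \"embed the employee's ID here\"" +
            "}" +
        "}, " +
        "\"required\": [\"employeeId\"], " +
        "\"additionalProperties\": false" +
    "}"
}
```

If both ParametersSampleObject and ParametersSchema are set, the LLM passes parameter values only through the JSON schema.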
Initial-context queries
- You can set a query tool as an initial-context query using its Options.AddToInitialContext property, to execute the query and provide the LLM with its results immediately when the agent is started.
  - An initial-context query is not allowed to use LLM parameters, since the query runs before the conversation starts, earlier than the first communication with the LLM, and the LLM has no opportunity to fill the parameters with values.
  - An initial-context query is allowed to use agent parameters, whose values are provided by the user before the query is executed.
- You can use the Options.AllowModelQueries property to enable or disable a query tool.
  - When a query tool is enabled, the LLM can freely trigger its execution.
  - When a query tool is disabled, the LLM cannot trigger its execution.
  - If a query tool is set as an initial-context query, it will be executed when the conversation starts even if it is disabled using AllowModelQueries.
-
Example
Set a query tool that runs when the agent is started and retrieves all the orders sent everywhere.

new AiAgentToolQuery
{
    Name = "retrieve-orders-sent-to-all-countries",
    Description = "a query tool that allows you to retrieve all orders sent to all countries.",
    Query = "from Orders as O select O.Employee, O.Lines.Quantity",
    ParametersSampleObject = "{}",
    Options = new AiAgentToolQueryOptions
    {
        // The LLM is allowed to trigger the execution of this query during the conversation
        AllowModelQueries = true,
        // The query will be executed when the conversation starts
        // and its results will be added to the initial context
        AddToInitialContext = true
    }
}
Syntax
public class AiAgentToolQueryOptions : IDynamicJson
{
    public bool? AllowModelQueries { get; set; }
    public bool? AddToInitialContext { get; set; }
}

| Property | Type | Description |
|----------|------|-------------|
| AllowModelQueries | bool? | true: the LLM can trigger the execution of this query tool. false: the LLM cannot trigger the execution of this query tool. null: server-side defaults apply. |
| AddToInitialContext | bool? | true: the query will be executed when the conversation starts and its results added to the initial context. false: the query will not be executed when the conversation starts. null: server-side defaults apply. |

Note: the two flags can be set independently of each other.
- Setting AddToInitialContext to true and AllowModelQueries to false will cause the query to be executed when the conversation starts, but the LLM will not be able to trigger its execution later in the conversation.
- Setting AddToInitialContext to true and AllowModelQueries to true will cause the query to be executed when the conversation starts, and the LLM will also be able to trigger its execution later in the conversation.
Action tools:
Action tools allow the LLM to trigger the client to action (e.g., to modify or add a document).
An action tool includes a natural-language description that explains to the LLM what the tool is capable of, and a schema that the LLM will fill with details related to the requested action before sending it to the agent.
In the example below, the action tool requests the client to store an employee's details in the database. The LLM will provide the employee's ID and other details whenever it requests the agent to apply the tool.
When the client finishes performing the action, it is required to send the LLM a response that explains how it went, e.g. "done".
-
Example
The following action tool sends the client employee details that it needs to store in the database.

agent.Actions =
[
// Set an action tool that triggers the client to store the performer's details
new AiAgentToolAction
{
Name = "store-performer-details",
Description = "an action tool that allows you to store the ID of the employee that made " +
"the largest profit, the profit, and your suggestions for a reward, in the " +
"database.",
ParametersSampleObject = "{" +
"\"suggestedReward\": \"embed your suggestions for a reward here\", " +
"\"employeeId\": \"embed the employee's ID here\", " +
"\"profit\": \"embed the employee's profit here\"" +
"}"
}
];
Syntax
Action tools are defined in a list of AiAgentToolAction classes.

public class AiAgentToolAction
{
public string Name { get; set; }
public string Description { get; set; }
public string ParametersSampleObject { get; set; }
public string ParametersSchema { get; set; }
}
Creating the Agent
The agent configuration is ready, and we can now register the agent on the server using the CreateAgent method.
- Create a response object class that matches the response schema defined in your agent configuration.
- Call CreateAgent and pass it:
  - The agent configuration
  - A new instance of the response object class
-
Example
// Create the agent
// Pass it an object for its response
var createResult = await store.AI.CreateAgentAsync(agent, new Performer
{
suggestedReward = "your suggestions for a reward",
employeeId = "the ID of the employee that made the largest profit",
profit = "the profit the employee made"
});
// An object for the LLM response
public class Performer
{
public string suggestedReward;
public string employeeId;
public string profit;
}

CreateAgent overloads

// Asynchronously creates or updates an AI agent configuration on the database,
// with the given schema as an example for a response object
Task<AiAgentConfigurationResult> CreateAgentAsync<TSchema>(AiAgentConfiguration configuration, TSchema sampleObject, CancellationToken token = default)
// Creates or updates (synchronously) an AI agent configuration on the database
AiAgentConfigurationResult CreateAgent(AiAgentConfiguration configuration)
// Asynchronously creates or updates an AI agent configuration on the database
Task<AiAgentConfigurationResult> CreateAgentAsync(AiAgentConfiguration configuration, CancellationToken token = default)
// Creates or updates (synchronously) an AI agent configuration on the database,
// with the given schema as an example for a response object
AiAgentConfigurationResult CreateAgent<TSchema>(AiAgentConfiguration configuration, TSchema sampleObject) where TSchema : new()

| Property | Type | Description |
|----------|------|-------------|
| configuration | AiAgentConfiguration | The agent configuration |
| sampleObject | TSchema | Example response object |

| Return value | Description |
|--------------|-------------|
| AiAgentConfigurationResult | The result of the agent configuration creation or update, including the agent's ID. |
Retrieving existing agent configurations
You can retrieve the configuration of an existing agent using GetAgent.
- Example
// Retrieve an existing agent configuration by its ID
var existingAgent = store.AI.GetAgent("reward-productive-employee");
You can also retrieve the configurations of all existing agents using GetAgents.
-
Example
// Extract the agent configurations from the response into a new list
var existingAgentsList = store.AI.GetAgents();
var agents = existingAgentsList.AiAgents;

GetAgent and GetAgents overloads

// Synchronously retrieves the configuration of an AI agent by its ID
AiAgentConfiguration GetAgent(string agentId)
// Asynchronously retrieves the configuration of an AI agent by its ID
async Task<AiAgentConfiguration> GetAgentAsync(string agentId, CancellationToken token = default)
// Synchronously retrieves the configurations of all AI agents
GetAiAgentsResponse GetAgents()
// Asynchronously retrieves the configurations of all AI agents
Task<GetAiAgentsResponse> GetAgentsAsync(CancellationToken token = default)

| Property | Type | Description |
|----------|------|-------------|
| agentId | string | The unique ID of the agent you want to retrieve |

| Return value | Description |
|--------------|-------------|
| AiAgentConfiguration | The agent configuration |
| GetAiAgentsResponse | The response containing a list of all agent configurations |
GetAiAgentsResponse class

public class GetAiAgentsResponse
{
public List<AiAgentConfiguration> AiAgents { get; set; }
}
Managing conversations
Setting a conversation:
- Set a conversation using the store.AI.Conversation method.
  Pass Conversation:
  - The agent ID
  - The conversation ID
    The conversation ID that you provide when starting a conversation determines whether a new conversation will start or an existing conversation will be continued.
    - Conversations are kept in the @conversations collection.
      A conversation document's name starts with a prefix (such as Chats/) that can be set when the conversation is initiated.
    - You can:
      - Provide a full ID, including a prefix and the ID that follows it.
      - Provide a prefix that ends with / or | to trigger automatic ID creation, similarly to the creation of automatic IDs for documents.
    - If you pass the method the ID of an existing conversation (e.g. "Chats/0000000000000008883-A"), the conversation will be retrieved from storage and continued where you left off.
    - If you provide an empty prefix (e.g. "Chats/"), a new conversation will start.
  - Values for agent parameters, if defined, in an AiConversationCreationOptions instance.
- Set the user prompt using the SetUserPrompt method.
  The user prompt informs the agent of the user's requests and expectations for this chat.
- Use the value returned by the Conversation method to run the chat.

Example
// Create a conversation instance
// Initialize it with -
// The agent's ID,
// A prefix (Performers/) for conversations stored in the @conversations collection,
// Agent parameters' values
var chat = store.AI.Conversation(
createResult.Identifier,
"Performers/",
new AiConversationCreationOptions().AddParameter("country", "France"));

Conversation definition

public IAiConversationOperations Conversation(string agentId, string conversationId, AiConversationCreationOptions creationOptions, string changeVector = null)

| Property | Type | Description |
|----------|------|-------------|
| agentId | string | The agent's unique ID |
| conversationId | string | The conversation ID |
| creationOptions | AiConversationCreationOptions | Conversation creation options (see class definition below) |
| changeVector | string | Optional change vector for concurrency control |

| Return value | Description |
|--------------|-------------|
| IAiConversationOperations | The conversation operations interface for conversation management. Methods of this interface, like Run, StreamAsync, Handle, and others, allow you to send messages, receive responses, handle action tools, and manage various other aspects of the conversation lifecycle. |
SetUserPrompt definition
void SetUserPrompt(string userPrompt);
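Putting the pieces together, a single chat round might look like the sketch below. The RunAsync call is an assumption based on the Run/StreamAsync methods mentioned above; its exact name and signature may differ in your client version:

```csharp
// Tell the agent what the user wants for this round.
chat.SetUserPrompt("Who made the largest profit in France, and what reward do you suggest?");

// Run the chat. Handlers/receivers registered on `chat` are invoked
// if the LLM triggers action tools during this round.
// NOTE: RunAsync here is an assumed signature - verify against your client.
var response = await chat.RunAsync(CancellationToken.None);
```

The structured answer follows the response object (sample object or schema) defined in the agent configuration.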
AiConversationCreationOptions class
Use this class to set conversation creation options, including values for agent parameters and the conversation's expiration time if it remains idle.

// Conversation creation options, including agent parameters and idle expiration configuration
public class AiConversationCreationOptions
{
// Values for agent parameters defined in the agent configuration
// Used to provide context or input values at the start of the conversation
public Dictionary<string, object> Parameters { get; set; }
// Optional expiration time (in seconds)
// If the conversation is idle for longer than this, it will be automatically deleted
public int? ExpirationInSec { get; set; }
// Initializes a new conversation instance with no parameters
// Use when you want to configure conversation options incrementally
public AiConversationCreationOptions();
// Initializes a new conversation instance and passes it a set of parameter values
public AiConversationCreationOptions(Dictionary<string, object> parameters);
// Adds an agent parameter value for this conversation
// Returns the current instance to allow method chaining
public AiConversationCreationOptions AddParameter(string name, object value);
}
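A sketch of these options in use, combining chained AddParameter calls with an idle-expiration setting (the "region" parameter name is hypothetical; use the parameter names defined in your own agent configuration):

```csharp
// Provide agent parameter values and let an idle conversation
// expire after one hour. ("region" is a hypothetical parameter name.)
var options = new AiConversationCreationOptions()
    .AddParameter("country", "France")
    .AddParameter("region", "Europe");
options.ExpirationInSec = 3600;

var chat = store.AI.Conversation("reward-productive-employee", "Chats/", options);
```

AddParameter returns the current instance, so parameter values can be chained as shown.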
Processing action-tool requests:
During the conversation, the LLM can request the agent to trigger action tools.
The agent will pass a requested action tool's name and parameters to the client,
and it is then up to the client to process the request.
The client can process an action-tool request using a handler or a receiver.
Action-tool Handlers
A handler is created for a specific action tool and registered with the server using the Handle
method.
When the LLM triggers this action tool with an action request, the handler is invoked to process the request, returns a response to the LLM, and ends automatically.
Handlers are typically used for simple, immediate operations like storing a document in the database and returning a confirmation, performing a quick calculation and sending its results, and other scenarios where the response can be generated and returned in a single step.
- To create a handler, pass the Handle method:
  - The action tool's name.
  - An object to populate with the data sent with the action request.
    Make sure that the object has the same structure defined for the action tool's parameters schema.
- When an action request for this tool is received, the handler will be given:
  - The populated object with the data sent with the action request.
- When you finish handling the requested action, return a response that will be sent by the agent back to the LLM.
Example
In this example, the action tool is requested to store an employee's details in the database.

// "store-performer-details" action tool handler
chat.Handle("store-performer-details", (Performer performer) =>
{
using (var session = store.OpenSession())
{
// store the values in the Performers collection in the database
session.Store(performer);
session.SaveChanges();
}
// return to the agent an indication that the action went well.
return "done";
});
// An object that represents the arguments provided by the LLM for this tool call
public class Performer
{
public string suggestedReward;
public string employeeId;
public string profit;
}

Handle overloads

void Handle<TArgs>(string actionName, Func<TArgs, Task<object>> action, AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel)
void Handle<TArgs>(string actionName, Func<TArgs, object> action, AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel) where TArgs : class;
void Handle<TArgs>(string actionName, Func<AiAgentActionRequest, TArgs, Task<object>> action, AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel)
void Handle<TArgs>(string actionName, Func<AiAgentActionRequest, TArgs, object> action, AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel) where TArgs : class;

| Property | Type | Description |
|----------|------|-------------|
| actionName | string | The action tool name |
| action | Func<TArgs, Task<object>> or Func<TArgs, object> or Func<AiAgentActionRequest, TArgs, Task<object>> or Func<AiAgentActionRequest, TArgs, object> | The handler function that processes the action request and returns a response to the LLM |
| aiHandleError | AiHandleErrorStrategy | Error handling strategy. SendErrorsToModel - send errors to the model for handling. RaiseImmediately - throw error exceptions. |
Action-tool Receivers
A receiver is created for a specific action tool and registered with the server using the Receive
method.
When the LLM triggers this action tool with an action request, the receiver is invoked to process the request, but unlike a handler, the receiver remains active until AddActionResponse
is explicitly called to close the pending request and send a response to the LLM.
Receivers are typically used asynchronously for multi-step or delayed operations such as waiting for an external event or for user input before responding, performing long-running operations like batch processing or integration with an external system, and other use cases where the response cannot be generated immediately.
-
To create a receiver, pass the Receive method:
- The action tool's name.
- An object to populate with the data sent with the action request.
Make sure that this object has the same structure defined for the action tool's parameters schema.
-
When an action request for this tool is received, the receiver will be given:
- An AiAgentActionRequest object containing the details of the action request.
- The populated object with the data sent with the action request.
-
When you finish handling the requested action, call AddActionResponse. Pass it:
- The action tool's name.
- The response to send back to the LLM.
Note that the response can be sent at any time, even after the receiver has finished executing,
and from any context, not necessarily from within the receiver callback.
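For instance, a receiver can merely record the pending request and let another part of the application close it later. A minimal sketch, assuming the `chat` conversation and `Performer` class from the examples in this article; the `pendingApprovals` queue is a hypothetical application-level helper:

```csharp
using System.Collections.Concurrent;

// Hypothetical queue of action requests awaiting an external decision
var pendingApprovals = new ConcurrentQueue<(string ToolId, Performer Performer)>();

// The receiver only records the request; no response is sent yet,
// so the action remains pending
chat.Receive("store-performer-details", (AiAgentActionRequest request, Performer performer) =>
{
    pendingApprovals.Enqueue((request.ToolId, performer));
});

// Later, from any other context (e.g. after a manager approves the reward),
// close the pending request and send the response to the LLM
if (pendingApprovals.TryDequeue(out var pending))
{
    chat.AddActionResponse(pending.ToolId, "approved");
}
```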
-
Example
In this example, a receiver gets a recommendation for rewards that can be given to a high-performing employee and processes it.
- Asynchronous
- Synchronous
chat.Receive("store-performer-details", async (AiAgentActionRequest request, Performer performer) =>
{
// Perform asynchronous work
using (var session = store.OpenAsyncSession())
{
await session.StoreAsync(performer);
await session.SaveChangesAsync();
}
// Example: Send a notification email asynchronously
await EmailService.SendNotificationAsync("manager@company.com", performer);
// Manually send the response to close the action
chat.AddActionResponse(request.ToolId, "done");
});

chat.Receive("store-performer-details", (AiAgentActionRequest request, Performer performer) =>
{
// Perform synchronous work
using (var session = store.OpenSession())
{
session.Store(performer);
session.SaveChanges();
}
// Add any processing logic here
// Manually send the response and close the action
chat.AddActionResponse(request.ToolId, "done");
});
-
Receive overloads:
// Registers an Asynchronous receiver for an action tool
void Receive<TArgs>(string actionName, Func<AiAgentActionRequest, TArgs, Task> action, AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel)
// Registers a Synchronous receiver for an action tool
void Receive<TArgs>(string actionName, Action<AiAgentActionRequest, TArgs> action, AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel)
Parameters:
- actionName (string) - The action tool name.
- action (Func<AiAgentActionRequest, TArgs, Task> or Action<AiAgentActionRequest, TArgs>) - The receiver function that processes the action request.
- aiHandleError (AiHandleErrorStrategy) - Error handling strategy: SendErrorsToModel sends errors to the model for handling; RaiseImmediately throws error exceptions.
Conversation response:
The LLM response is returned by the agent to the client in an AiAnswer
object, with an answer to the user prompt and the conversation status, indicating whether the conversation is complete or a further "turn" is required.
AiAnswer syntax:
public class AiAnswer<TAnswer>
{
// The answer content produced by the AI
public TAnswer Answer;
// The status of the conversation
public AiConversationResult Status;
}
public enum AiConversationResult
{
// The conversation has completed and a final answer is available
Done,
// Further interaction is required, such as responding to tool requests
ActionRequired
}
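A minimal sketch of branching on the two statuses after running a conversation (the `Performer` response class and `chat` conversation are taken from the full example below):

```csharp
var answer = await chat.RunAsync<Performer>(CancellationToken.None);

switch (answer.Status)
{
    case AiConversationResult.Done:
        // The final answer is available
        Console.WriteLine(answer.Answer.suggestedReward);
        break;

    case AiConversationResult.ActionRequired:
        // A tool request is still pending - e.g. a receiver that has not yet
        // called AddActionResponse - so a further "turn" is required
        break;
}
```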
Setting user prompt and running the conversation:
Set the user prompt using the SetUserPrompt
method, and run the conversation using the
RunAsync
method.
You can also use StreamAsync
to stream the LLM's response as it is generated.
Learn how to do this in the Stream LLM responses section.
// Set the user prompt and run the conversation
chat.SetUserPrompt("send a few suggestions to reward the employee that made the largest profit");
var LLMResponse = await chat.RunAsync<Performer>(CancellationToken.None);
if (LLMResponse.Status == AiConversationResult.Done)
{
// The LLM successfully processed the user prompt and returned its response.
// The performer's ID, profit, and suggested rewards were stored in the Performers
// collection by the action tool, and are also returned in the final LLM response.
}
See the full example below.
Stream LLM responses
You can set the agent to stream the LLM's response to the client in real time as the LLM generates it, using the StreamAsync
method instead of RunAsync, which sends the whole response to the client only when it is fully prepared.
Streaming the response allows the client to start processing it before it is complete, which can improve the application's responsiveness.
-
Example
// A StringBuilder, used in this example to collect the streamed response
var reward = new StringBuilder();
// Using StreamAsync to collect the streamed response
// The response property to stream is in this case `suggestedReward`
var LLMResponse = await chat.StreamAsync<Performer>(responseObj => responseObj.suggestedReward, str =>
{
// Callback invoked with the arrival of each incoming chunk of the processed property
reward.Append(str); // Add the incoming chunk to the StringBuilder instance
return Task.CompletedTask; // Return with an indication that the chunk was processed
}, CancellationToken.None);
if (LLMResponse.Status == AiConversationResult.Done)
{
// Handle the full response when ready
// The streamed property was fully loaded and handled by the callback above,
// remaining parts of the response (including other properties if exist)
// will arrive when the whole response is ready and can be handled here.
}
-
StreamAsync overloads:
// The property to stream is indicated using a lambda expression
Task<AiAnswer<TAnswer>> StreamAsync<TAnswer>
(Expression<Func<TAnswer, string>> streamPropertyPath,
Func<string, Task> streamedChunksCallback, CancellationToken token = default);
// The property to stream is indicated as a string, using its name
Task<AiAnswer<TAnswer>> StreamAsync<TAnswer>
(string streamPropertyPath,
Func<string, Task> streamedChunksCallback, CancellationToken token = default);
Parameters:
- streamPropertyPath (Expression<Func<TAnswer, string>>) - A lambda expression that selects the property to stream from the response object.
- streamPropertyPath (string) - The name of the property in the response object to stream.
In both overloads:
- The selected property must be a simple string (not a JSON object or an array, for example).
- It is recommended that this be the first property defined in the response schema.
The LLM processes the properties in the order they are defined, so streaming the first property ensures that streaming to the user starts immediately even if the LLM takes time to process later properties.
- streamedChunksCallback (Func<string, Task>) - A callback function that is invoked with each incoming chunk of the streamed property.
- token (CancellationToken) - An optional token that can be used to cancel the streaming operation.
Return value:
- Task<AiAnswer<TAnswer>> - After streaming the specified property, the return value contains the final conversation result and status (e.g. "Done" or "ActionRequired").
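Using the string-based overload looks much the same as the lambda-based example above; a sketch, assuming a response schema whose first property is `suggestedReward` as in the full example below:

```csharp
using System.Text;

var reward = new StringBuilder();

// The property to stream is named by its string name in the response schema
var LLMResponse = await chat.StreamAsync<Performer>("suggestedReward", str =>
{
    reward.Append(str);        // collect each incoming chunk
    return Task.CompletedTask; // indicate the chunk was processed
}, CancellationToken.None);
```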
Full example
The agent's user in this example is a human experience manager.
Using query tools, the agent helps its user reward employees: it searches for orders sent to a specific country, or (if the user prompts it "everywhere") to all countries, and finds the employee who made the largest profit.
The agent then runs another query tool to find the employee's residence region by the employee's ID (fetched from the retrieved orders), and finds rewards suitable for the employee based on this region.
Finally, it uses an action tool to store the employee's ID, profit, and reward suggestions in the Performers collection in the database, and returns the same details in its final response as well.
public async Task createAndRunAiAgent_full()
{
var store = new DocumentStore();
// Define connection string to OpenAI
var connectionString = new AiConnectionString
{
Name = "open-ai-cs",
ModelType = AiModelType.Chat,
OpenAiSettings = new OpenAiSettings(
apiKey: "your-api-key",
endpoint: "https://api.openai.com/v1",
// LLM model for text generation
model: "gpt-4.1")
};
// Deploy connection string to server
var operation = new PutConnectionStringOperation<AiConnectionString>(connectionString);
var putConnectionStringResult = store.Maintenance.Send(operation);
using var session = store.OpenAsyncSession();
// Start setting an agent configuration
var agent = new AiAgentConfiguration("reward-productive-employee", connectionString.Name,
@"You work for a human experience manager.
The manager uses your services to find which employee has made the largest profit and to suggest
a reward.
The manager provides you with the name of a country, or with the word ""everywhere"" to indicate
all countries.
Then you:
1. use a query tool to load all the orders sent to the selected country,
or a query tool to load all orders sent to all countries.
2. calculate which employee made the largest profit.
3. use a query tool to learn in what general area this employee lives.
4. find suitable vacations sites or other rewards based on the employee's residence area.
5. use an action tool to store in the database the employee's ID, profit, and your reward suggestions.
When you're done, return these details in your answer to the user as well.");
// Set agent ID
agent.Identifier = "reward-productive-employee";
// Define LLM response object
agent.SampleObject = "{" +
"\"EmployeeID\": \"embed the employee’s ID here\"," +
"\"Profit\": \"embed the profit made by the employee here\"," +
"\"SuggestedReward\": \"embed suggested rewards here\"" +
"}";
// Set agent parameters
agent.Parameters.Add(new AiAgentParameter(
"country", "A specific country that orders were shipped to, " +
"or \"everywhere\" to look for orders shipped to all countries"));
agent.Queries =
[
// Set a query tool to retrieve all orders sent everywhere
new AiAgentToolQuery
{
// Query tool name
Name = "retrieve-orders-sent-to-all-countries",
// Query tool description
Description = "a query tool that allows you to retrieve all orders sent to all countries.",
// Query tool RQL query
Query = "from Orders as O select O.Employee, O.Lines.Quantity",
// Sample parameters object
ParametersSampleObject = "{}"
},
// Set a query tool to retrieve all orders sent to a specific country
new AiAgentToolQuery
{
Name = "retrieve-orders-sent-to-a-specific-country",
Description =
"a query tool that allows you to retrieve all orders sent to a specific country",
Query =
"from Orders as O where O.ShipTo.Country == " +
"$country select O.Employee, O.Lines.Quantity",
ParametersSampleObject = "{}"
},
// Set a query tool to retrieve the performer's residence region details from the database
new AiAgentToolQuery
{
Name = "retrieve-performer-living-region",
Description =
"a query tool that allows you to retrieve an employee's country, city, and " +
"region, by the employee's ID",
Query = "from Employees as E where id() == $employeeId select E.Address.Country, " +
"E.Address.City, E.Address.Region",
ParametersSampleObject = "{" +
"\"employeeId\": \"embed the employee's ID here\"" +
"}"
}
];
agent.Actions =
[
// Set an action tool to store the performer's details
new AiAgentToolAction
{
Name = "store-performer-details",
Description =
"an action tool that allows you to store the ID of the employee that made " +
"the largest profit, the profit, and your suggestions for a reward, in the database.",
ParametersSampleObject = "{" +
"\"suggestedReward\": \"embed your suggestions for a reward here\", " +
"\"employeeId\": \"embed the employee’s ID here\", " +
"\"profit\": \"embed the employee’s profit here\"" +
"}"
}
];
// Set chat trimming configuration
AiAgentSummarizationByTokens summarization = new AiAgentSummarizationByTokens()
{
// Summarize old messages when the number of tokens stored in the conversation exceeds this limit
MaxTokensBeforeSummarization = 32768,
// Max number of tokens that the conversation is allowed to contain after summarization
MaxTokensAfterSummarization = 1024
};
agent.ChatTrimming = new AiAgentChatTrimmingConfiguration(summarization);
// Limit the number of times the LLM can request tools in response to a single user prompt
agent.MaxModelIterationsPerCall = 3;
var createResult = await store.AI.CreateAgentAsync(agent, new Performer
{
suggestedReward = "your suggestions for a reward",
employeeId = "the ID of the employee that made the largest profit",
profit = "the profit the employee made"
});
// Set agent ID, conversation ID prefix, and agent parameters
// (a specific country activates one query tool, "everywhere" activates another)
var chat = store.AI.Conversation(
createResult.Identifier,
"Performers/",
new AiConversationCreationOptions().AddParameter("country", "France"));
// Handle the action tool that the LLM uses to store the performer's details in the database
chat.Handle("store-performer-details", (Performer performer) =>
{
using (var session1 = store.OpenSession())
{
// store values in Performers collection in database
session1.Store(performer);
session1.SaveChanges();
}
return "done";
});
// Set user prompt and run chat
chat.SetUserPrompt("send a few suggestions to reward the employee that made the largest profit");
var LLMResponse = await chat.RunAsync<Performer>(CancellationToken.None);
if (LLMResponse.Status == AiConversationResult.Done)
{
// The LLM successfully processed the user prompt and returned its response.
// The performer's ID, profit, and suggested rewards were stored in the Performers
// collection by the action tool, and are also returned in the final LLM response.
}
}