AI Agents Integration: Studio
Create AI Agent
To create an AI agent, open AI hub > AI Agents and click Add new agent.
- AI Hub
  Click to open the AI Hub view.
  Use this view to handle AI connection strings and tasks, and to view task statistics.
- AI Agents
  Click to open the AI Agents view.
  Use this view to list, configure, or remove your agents.
- Add new agent
  Click to add an AI agent. The Create AI Agent dialog will open, allowing you to define and test your agent.
  Use the buttons in the bottom bar to Cancel, Save, or Test your changes.
- Filter by name
  When multiple agents are created, you can filter them by a string you enter here.
- Defined agent
  After defining an agent, it is listed in this view, allowing you to run, edit, or remove the agent.
Configure basic settings
- Agent name
  Enter a name for the agent.
  E.g., CustomerSupportAgent
- Identifier
  Enter a unique identifier for the agent, or click Regenerate to create one automatically.
- Connection String
  Select an existing connection string that the agent will use to connect to your LLM of choice, or click Create a new AI connection string to define a new one.
  Your agent can use a local LLM like Ollama, or an external model like OpenAI.
- System prompt
  Enter a prompt that determines LLM characteristics like its role and purpose.
- Sample response object and Response JSON schema
  Define a response JSON object for the LLM reply, either as a sample object or as a formal schema.
  - The response object guides the LLM in composing its replies, and can ease their parsing by the client.
  - Defining a sample object is normally simpler.
  - Behind the scenes, RavenDB translates the sample object to a JSON schema before sending it to the LLM, but you can define the schema yourself if you prefer.
  - After defining a sample object, you can open the schema tab and click the View schema button to see the generated schema.
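As a rough illustration of the sample-object-to-schema translation described above, the sketch below derives a minimal JSON schema from a sample response object. This is an assumption-laden approximation: the sample object is made up, and the schema RavenDB actually generates may differ in detail.

```python
import json

def sample_to_schema(sample):
    """Derive a minimal JSON schema from a sample object.
    Illustrative only -- the schema RavenDB generates may differ."""
    if isinstance(sample, dict):
        return {
            "type": "object",
            "properties": {k: sample_to_schema(v) for k, v in sample.items()},
            "required": list(sample.keys()),
        }
    if isinstance(sample, list):
        # Infer the item schema from the first element, if any
        return {"type": "array", "items": sample_to_schema(sample[0]) if sample else {}}
    if isinstance(sample, bool):   # bool must be checked before int in Python
        return {"type": "boolean"}
    if isinstance(sample, int):
        return {"type": "integer"}
    if isinstance(sample, float):
        return {"type": "number"}
    return {"type": "string"}

# A hypothetical sample response object for a customer-support agent:
sample = {"Answer": "text", "RelatedOrderIds": ["orders/1-A"], "Confidence": 0.9}
print(json.dumps(sample_to_schema(sample), indent=2))
```

Either way, the schema that reaches the LLM constrains the shape of its replies, which is what makes them easy for the client to parse.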
Set agent parameters
Define agent parameters.
After defining an agent parameter, it can be included in the RQL queries of query tools.
Values for agent parameters are provided by the client when a conversation is started.
Read more about parameters.
- Add new parameter
  Click to add an agent parameter.
- Name
  Enter the agent parameter's name.
- Description
  Describe the parameter in plain language so the LLM can understand its purpose.
- Remove parameter
  Remove a defined parameter from the list.
Define agent tools
Define Query and Action agent tools.
- Query tools you define here can be freely used by the LLM.
- A query tool triggers the agent to retrieve data from the database and return it to the LLM.
- Action tools can trigger the client to perform actions such as removing a spam entry from a comments section or adding a comment to an article.
- The LLM has no direct access to the database or any other server resource; all queries and actions are performed through the agent.
- Find an AI agent usage flow chart here.
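The flow described above can be sketched as follows. This is an illustrative sketch, not RavenDB's implementation; the tool registries, the run_query callback, and the client object are all assumptions made for the example.

```python
# Illustrative sketch of the tool flow (not RavenDB's actual code).
# The LLM never touches the database: it can only ask the agent to run a
# named query tool, or ask the client to perform a named action tool.

def handle_llm_tool_call(tool_call, query_tools, action_tools, run_query, client):
    """Dispatch a tool request coming back from the LLM."""
    name, args = tool_call["name"], tool_call["arguments"]
    if name in query_tools:
        # Query tool: the agent runs the tool's RQL against the database
        # and returns the results to the LLM.
        return run_query(query_tools[name]["rql"], args)
    if name in action_tools:
        # Action tool: the agent forwards the request to the client, which
        # performs the action and reports the result back.
        return client.perform_action(name, args)
    raise ValueError(f"unknown tool: {name}")
```

The point of this mediation is containment: every database read and every client-side action passes through the agent, so the LLM only ever sees the tools you chose to expose.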
- Query tools
  Click to add a new query tool.
- Action tools
  Click to add a new action tool.
Add query tools:
- Add new query tool
  Click to add a new query tool.
- Remove
  Click to remove this tool.
- Expand/Collapse tool
  Click to expand or collapse the tool's details.
- Tool name
  Enter a name for the query tool.
- Description
  Write a description that explains to the LLM, in natural language, what the attached query can be used for.
  E.g., apply this query when you need to retrieve the details of all the companies that reside in a certain country
- Allow model queries
  Enable to allow the LLM to trigger the execution of this query tool.
  Disable to prevent the LLM from using this tool. When disabled, the LLM cannot trigger this tool, but if the tool is set as an initial-context query the agent will still execute it when it is started.
- Add to initial context
  Enable to set the query tool as an initial-context query.
  When enabled, the agent executes the query immediately when it starts a conversation with the LLM, without waiting for the LLM to invoke the tool, so that data relevant to the conversation is included in the initial context sent to the LLM.
  Disable to prevent the agent from executing the query on startup.
  An initial-context query is not allowed to use LLM parameters, since the LLM has no opportunity to fill the parameters with values before the query is executed.
  The query can use agent parameters, whose values are provided by the user when the conversation is started.
- Query
  Enter the query that the agent will run when the LLM requests it to use this tool.
- Sample parameters object and Parameters JSON schema
  Set a schema (as a sample object or a formal JSON schema) that allows the LLM to fill query parameters with values.
  Read more about query parameters.
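To make this concrete, here is a hypothetical query tool definition with a sample parameters object, plus a small check that the LLM supplied every declared parameter. All names are illustrative assumptions; RavenDB performs its own validation.

```python
# Hypothetical query tool (all names are illustrative, not a RavenDB API).
# The sample parameters object tells the LLM which values it must supply
# when it invokes the tool.
query_tool = {
    "name": "orders-by-country",
    "description": "apply this query when you need the orders shipped to a certain country",
    "rql": 'from "Orders" where ShipTo.Country == $country',
    "sample_parameters": {"country": "France"},
}

def validate_llm_arguments(tool, llm_args):
    """Check that the LLM supplied every parameter the sample object declares.
    Illustrative only -- RavenDB performs its own validation."""
    missing = set(tool["sample_parameters"]) - set(llm_args)
    if missing:
        raise ValueError(f"LLM omitted parameters: {sorted(missing)}")
    return llm_args

print(validate_llm_arguments(query_tool, {"country": "Brazil"}))
```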
Add action tools:
- Add new action tool
  Click to add a new action tool.
- Remove
  Click to remove this tool.
- Expand/Collapse tool
  Click to expand or collapse the tool's details.
- Tool name
  Enter a name for the action tool.
- Description
  Enter a description that explains to the LLM, in natural language, when this action tool should be applied.
  E.g., apply this action tool when you need to create a new summary document
- Sample parameters object and Parameters JSON schema
  Set a sample object or a JSON schema that the LLM can populate when it invokes the action tool.
  The agent will pass this information to the client to guide it through the action it is requested to perform.
  If you define both a sample parameters object and a schema, only the schema will be used.
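A hypothetical action tool round-trip might look like the sketch below: the LLM populates the parameters object, the agent forwards it, and the client performs the action and reports a result. Every name here is an assumption made for illustration; none of this is a RavenDB API.

```python
# Hypothetical action tool definition (illustrative names only):
action_tool = {
    "name": "create-summary-document",
    "description": "apply this action tool when you need to create a new summary document",
    "sample_parameters": {"Title": "Q3 summary", "Body": "text"},
}

# When the LLM invokes the tool, it populates the parameters object; the
# agent forwards it to the client, which performs the action and replies.
llm_request = {"tool": "create-summary-document",
               "arguments": {"Title": "Sales Q3", "Body": "Revenue grew 12%."}}

def client_perform(request):
    """Stand-in for the client-side handler that executes the action."""
    doc_id = "summaries/" + request["arguments"]["Title"].replace(" ", "-").lower()
    return {"status": "done", "documentId": doc_id}

print(client_perform(llm_request))
```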
Configure chat trimming
LLMs have no memory of prior interactions.
To allow a continuous conversation, each time the agent sends the LLM a new prompt or request, it also sends the entire conversation up to that point.
To minimize the size of these messages, you can set the agent to summarize conversations.
- Summarize chat
  Use this option to limit the size of the conversation history. If its size breaches this limit, the chat history will be summarized before it is sent to the LLM.
- Max tokens Before summarization
  If the conversation contains a total number of tokens larger than the limit you set here, the conversation will be summarized.
- Max tokens After summarization
  Set the maximum number of tokens that will be left in the conversation after it is summarized.
  Messages exceeding this limit will be removed, starting with the oldest.
- History
  - Enable history
    When history is enabled, the conversation sent to the LLM will be summarized, but a copy of the original conversation will be kept in a dedicated document in the @conversations-history collection.
  - Set history expiration
    When this option is enabled, conversations will be deleted from the @conversations-history collection once their age exceeds the period you set.
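The trimming rule described above can be sketched as follows. The token counter, summarizer, and threshold values are placeholders: in RavenDB the summarization itself is produced for the agent, and the limits are whatever you configure.

```python
def maybe_summarize(messages, count_tokens, summarize,
                    max_before=32_000, max_after=8_000):
    """Sketch of the trimming rule (thresholds are example values):
    if the whole conversation exceeds max_before tokens, summarize it,
    then drop the oldest messages until it fits within max_after tokens."""
    total = sum(count_tokens(m) for m in messages)
    if total <= max_before:
        return messages                      # under the limit: send as-is
    summarized = summarize(messages)         # compress the history
    while summarized and sum(count_tokens(m) for m in summarized) > max_after:
        summarized.pop(0)                    # remove oldest messages first
    return summarized
```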
Save and Run your agent
When you're done configuring your agent, save it using the Save button at the bottom.
You will find your agent in the main AI Agents view, where you can run or edit it.
- Start new chat
  Click to start your agent.
- Edit agent
  Click to edit the agent.
Start new chat:
Starting a new chat will open the chat window, where you can provide values for the parameters you defined for this agent and enter a user prompt that explains to the agent what you expect from this session.
- Conversation ID or prefix
  - Entering a prefix (e.g., Chats/) will start a new conversation, with the prefix preceding an automatically created conversation ID.
  - Entering the ID of a conversation that doesn't exist yet will also start a new conversation.
  - Entering the ID of an existing conversation will send the entire conversation to the LLM and allow you to continue where you left off.
- Set expiration
  Enable this option and set an expiration period to automatically delete conversations from the @Conversations collection when their age exceeds the set period.
- Agent parameters
  Enter a value for each parameter defined in the agent configuration.
  The LLM will embed these values in the RQL queries of query tools where you included agent parameters.
  E.g., if you enter France here as the value for Country, a query tool's RQL query
  from "Orders" where ShipTo.Country == $country
  will be executed as
  from "Orders" where ShipTo.Country == "France"
- User prompt
  Use the user prompt to explain to the agent, in natural language, what this session is about.
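The parameter substitution in the example above can be visualized with a short sketch. Note that this is only a visualization: in practice the values are passed to the query engine as named parameters rather than spliced into the RQL string.

```python
import json
import re

def visualize_rql(rql, agent_params):
    """Show what an RQL query looks like with its $parameters filled in.
    Purely illustrative -- real execution passes named parameters instead."""
    return re.sub(
        r"\$(\w+)",
        lambda m: json.dumps(agent_params[m.group(1).lower()]),
        rql,
    )

rql = 'from "Orders" where ShipTo.Country == $country'
print(visualize_rql(rql, {"country": "France"}))
# from "Orders" where ShipTo.Country == "France"
```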
Agent interaction:
Running the agent displays its components and their interactions.
Agent parameters and their values:
The system prompt set for the LLM and the user prompt:
The query tools and their activity:
You can view the raw data of the agent's activity in JSON form as well:
Action tool dialog:
If the agent runs action tools, you will be shown a dialog with the information the LLM provided when requesting the action, and a dialog inviting you to enter the results once you have performed it.
Agent results:
And finally, when the AI model finishes its work, you will be able to see its response. As with all other dialog boxes, you can expand the view to see the full content or minimize it to see it in context.
Test your agent
You can test your agent while creating or editing it, to examine its configuration and operability before you deploy it. The test interface resembles the one you see when you run your agent normally via Studio, but conversations are not kept in the @conversations or @conversations-history collections.
To test your agent, click Test at the bottom of the agent configuration view.
- New Chat
  Click to start a new chat.
- Close
  Click to return to the AI Agents configuration view.
- Enter parameter value
  Enter a value for each parameter defined in the agent configuration.
  The LLM will be able to replace these parameters with fixed values when it uses query or action tools in which they are embedded.
- Agent prompt
  Explain to the agent in natural language what this session is about.
- Send prompt
  Click to pass your parameter values and user prompt to the agent and run the test.
  You can keep sending prompts to the agent and receiving its replies in a continuous conversation.
Runtime view and Test results:
You will see the components that take part in the agent's run and be able to enter and send requested information for action tools. Each tool can be minimized to see it in context or expanded to view the data it carries.
When the LLM finishes processing, you will see its response.
You can expand the dialog or copy the content to see the response in detail.
{
"EmployeeID": "employees/1-A",
"EmployeeProfit": "1760",
"SuggestedRewards": "The employee lives in Redmond, WA, USA. For a special reward, consider a weekend getaway to the Pacific Northwest's scenic sites such as a stay at a luxury resort in Seattle or a relaxing wine tasting tour in Woodinville. Alternatively, you could offer gift cards for outdoor excursions in the Cascade Mountains or tickets to major cultural events in the Seattle area."
}