Tired of repeating context to the base AI model, manually pasting internal documents, and re-stating answer formats every time? An Agent bundles a Knowledge Base + Tool + Guardrail + System Prompt together to create a per-department-optimized AI assistant.

Example

“Show me this month’s sales status”
| State | Behavior | Result |
| --- | --- | --- |
| Base model | Guesses from general AI knowledge | “I cannot access sales data” |
| Agent (DB + KB connected) | Queries the sales DB + applies report format | Accurate sales data + tabular response |
Agent list

Agent Processing Pipeline

The agent receives a user question and generates a response through this pipeline. A guardrail validates the input, the Knowledge Base retrieves related documents, tools (API, DB) are invoked when needed, and the LLM produces the final response.
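The flow can be sketched in a few lines of Python. Every function below is an illustrative stub, not the platform's actual API:

```python
# Minimal sketch of the agent pipeline. All functions are stubs.

def guardrail_check(text: str) -> bool:
    # Stub guardrail: block anything containing a banned word.
    return "forbidden" not in text.lower()

def knowledge_base_search(question: str) -> list:
    # Stub retrieval: pretend the Knowledge Base returned one document.
    return [f"doc about: {question}"]

def call_tools(question: str) -> str:
    # Stub tool call: pretend a DB tool returned a result.
    return "tool result"

def llm_generate(question: str, docs: list, tool_result: str) -> str:
    # Stub LLM: a real implementation calls the configured base model.
    return f"Answer to {question!r} using {len(docs)} doc(s) and {tool_result}"

def run_agent(question: str) -> str:
    if not guardrail_check(question):                    # 1. validate input
        return "Request blocked by guardrail."
    docs = knowledge_base_search(question)               # 2. retrieve documents
    tool_result = call_tools(question)                   # 3. invoke tools (API, DB)
    answer = llm_generate(question, docs, tool_result)   # 4. compose final answer
    if not guardrail_check(answer):                      # 5. validate output
        return "Response blocked by guardrail."
    return answer
```

The two guardrail checks bracket the pipeline, which is why a single guardrail configuration covers both input and output validation.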

Agent vs. Base Model

| Aspect | Base Model | Agent |
| --- | --- | --- |
| Knowledge | Pre-training data only | Internal documents, DB integration |
| Tools | Built-in only | External APIs, MCP servers |
| Response style | Generic | Task-specific guidance applied |
| Security | None | Guardrails validate I/O |
| Consistency | Varies by prompt | Maintained via system prompt |
| Quality monitoring | Manual | Auto-evaluation tracking |
Agents don’t replace the base model — they add task context on top of it. Use the base model for simple general questions and agents for task-specific conversations.

Creating an Agent

1. Enter basic info

Click Workspace > Agents > “+ New Agent” and fill in basic info.
Agent basic info
| Field | Description | Example |
| --- | --- | --- |
| Name | Agent display name | “Marketing Assistant” |
| Description | What the agent does | “Marketing content creation and analysis support” |
| Profile image | Agent icon | Marketing-related image |
| Tags | Classification tags | marketing, content |
2. Pick the base model

Choose the AI model the agent will use. Pick from the model list registered by the admin.
3. Write the prompts

Define the agent’s role, persona, and response rules.
| Field | Description |
| --- | --- |
| Task Prompt | Defines the agent’s role, persona, restrictions, and concrete task instructions. Serves as the general system prompt. |
| Response Format Prompt | Specifies the response format and structure (markdown, tables, etc.). Kept separate from the task prompt so the format can be managed independently. |
Click the AI auto-generate button next to each prompt field — it analyzes the agent’s name, description, and connected resources to draft the prompt automatically.
During AI auto-generation, technical instructions (tool usage, SQL writing rules, etc.) are automatically excluded. The platform handles those — the prompt only contains role, persona, and restrictions.
Task prompt input
```
You are Cloocus's marketing assistant.

## Role
- Support marketing content creation
- Draft social media posts
- Analyze marketing data

## Response Rules
- Always respond in Korean
- Maintain a professional yet friendly tone
- Provide data-driven insights
- Comply with brand guidelines

## Restrictions
- Do not disparage competitors
- Do not use unverified statistics
```
The two prompts are used at different stages of agent execution.
| | Task Prompt | Response Format Prompt |
| --- | --- | --- |
| When applied | While the agent is using tools | When composing the final answer |
| Role | “What to do” (role, restrictions) | “How to answer” (markdown, tables, length) |
| Include | Role definition, behavior rules, restrictions | Output format, tone, structure |
| Don’t include | Output format specs | Role definition, behavior rules |
Separating them lets you change just the output format while keeping the role, or vice versa.
4. Configure prompt suggestions (optional)

Set conversation-starter suggestions shown when an agent is selected in chat.
| Option | Description |
| --- | --- |
| Default | Use system default suggestions |
| Custom | Set agent-specific suggestions |
Providing example questions matched to the agent’s purpose helps users start conversations quickly. Examples: “Summarize this month’s sales”, “Draft a social media post”
5. Connect Knowledge Bases

Attach documents the agent should reference.
  1. Click “+ Add” in the “Knowledge Base” section
  2. Select Knowledge Bases to connect (multiple supported)
Connected Knowledge Bases are searched via RAG and used in answers, with citations.
Agent Knowledge Base connections
6. Connect databases (optional)

Connect a Database (DbSphere) for natural-language data queries (NL-to-SQL).
  1. Click “+ Add” in the “Database” section
  2. Select databases (multiple supported)
Ask questions in natural language about the connected DB; the AI generates and runs SQL, returning results.
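Under the hood the flow looks roughly like this sketch, which uses an in-memory SQLite table as a stand-in for a connected database and hard-codes the SQL that the LLM would normally generate:

```python
import sqlite3

# Toy sales table standing in for a connected database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (month TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("2024-05", 1200), ("2024-05", 800), ("2024-04", 500)])

question = "Show me this month's sales status"
# In the real flow the LLM turns the question into SQL; hard-coded here.
generated_sql = "SELECT month, SUM(amount) AS total FROM sales GROUP BY month ORDER BY month"
rows = conn.execute(generated_sql).fetchall()
```

The agent then formats `rows` into the final answer (per the Response Format Prompt), so the user never sees raw SQL unless they ask for it.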
7. Connect glossaries (optional)

Connect a Glossary so the agent understands your organization’s business terminology.
  1. Click “+ Add” in the “Glossary” section
  2. Select glossaries (multiple supported)
Term definitions, synonyms, and context registered in the glossary are reflected in agent responses.
8. Connect tools (optional)

Connect tools for external system integration. In the “Tool Connections” section, choose MCP servers or OpenAPI servers.
| Tool Type | Description |
| --- | --- |
| OpenAPI server | Interact with external services via REST API |
| MCP server | Tool integration via Model Context Protocol |
If a connected tool’s description is empty, a warning banner appears at the top of the editor. The tool description is the LLM’s primary cue for “when to call this tool”, so a missing description leads to incorrect calls or unused tools. When the banner appears, fill in the description from the tool definition.
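To see why the description matters, compare two illustrative tool definitions. The field names below follow common OpenAPI/MCP conventions and are not necessarily this platform's exact format:

```python
# A well-described tool: the description tells the LLM both what the tool
# returns and when to call it.
good_tool = {
    "name": "get_monthly_sales",
    "description": (
        "Returns aggregated sales figures for a given month from the sales DB. "
        "Use when the user asks about sales amounts, targets, or trends."
    ),
    "parameters": {"month": "YYYY-MM string"},
}

# An empty description like this is what triggers the warning banner:
bad_tool = {"name": "get_monthly_sales", "description": ""}

def needs_description_warning(tool: dict) -> bool:
    # Without a description, the LLM has no cue for when to call the tool.
    return not tool.get("description", "").strip()
```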
9. Capability settings (optional)

Configure advanced features the agent can use. Each capability has 3 states.
| State | Description |
| --- | --- |
| Disabled | The capability is completely hidden in chat (default) |
| Default On | Auto-enabled at chat start; the user can turn it off |
| Default Off | Visible in chat, but the user must turn it on |

| Capability | Description |
| --- | --- |
| Web Search | Real-time web search for up-to-date info; configurable result count and domain filter |
| Image Generation | AI image generation engine integration; pick which connection to use |
| Code Interpreter | Run Python code for calculations and data analysis |
Agent capability settings
10. Response format (optional)

Constrain the agent’s response to a structured JSON format.
| Mode | Description |
| --- | --- |
| Chat | Default freeform text response |
| Structured | Structured response per JSON Schema (Structured Output) |
In Structured mode, define the response schema with the visual field builder or the raw JSON editor.
Response format settings
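As a sketch of what a Structured-mode schema might look like, here is an invented example expressed as a Python dict; the field names are illustrative, not required by the platform:

```python
import json

# Example schema: constrain answers to a summary plus per-region figures.
response_schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "total_sales": {"type": "number"},
        "by_region": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "region": {"type": "string"},
                    "amount": {"type": "number"},
                },
                "required": ["region", "amount"],
            },
        },
    },
    "required": ["summary", "total_sales"],
}

# In Structured mode the agent returns valid JSON matching the schema,
# so downstream code can parse it without scraping freeform text:
raw_answer = '{"summary": "Sales up 12%", "total_sales": 2000, "by_region": []}'
parsed = json.loads(raw_answer)
```

Structured mode is useful when the agent feeds another system (dashboards, spreadsheets, automations) rather than a human reader.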
11. Guardrail settings (optional)

Attach security guardrails to validate I/O.
  • Auto-detect and mask PII
  • Custom pattern filtering
  • Block prohibited words
  • LLM-based content validation
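As a sketch of what PII masking does, here is a minimal example. The two patterns (email and a Korean-style mobile number) are illustrative only; the platform's detection rules are configurable and more thorough:

```python
import re

# Illustrative PII patterns mapped to placeholder labels.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b01\d-\d{3,4}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    # Replace each detected entity with its placeholder label.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Because guardrails run on both input and output, masking like this protects PII whether it appears in the user's question or in retrieved documents.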
12. Auto-evaluation settings (optional)

Automatically monitor response quality.
| Setting | Description |
| --- | --- |
| Sampling rate | Share of responses to evaluate (1%–100%) |
| Evaluation type | Choose from retrieval quality, faithfulness, and response quality |
| Judge model | LLM to use for evaluation |

| Type | Description |
| --- | --- |
| Retrieval Quality | Relevance of documents retrieved from the Knowledge Base |
| Faithfulness | Whether the response is faithful to the retrieved content (no hallucination) |
| Response Quality | Overall quality, usefulness, and accuracy of the response |
Sampling-rate guidance:
| Situation | Recommended | Reason |
| --- | --- | --- |
| New agent (validation phase) | 50–100% | Need an initial quality picture |
| Stabilized agent | 5–10% | Save costs while monitoring |
| Critical-business agent | 20–30% | Continuous quality assurance needed |
Retrieval Quality and Faithfulness evaluations only run when there are Knowledge Base search results. For agents without a KB, choose Response Quality only.
Auto-evaluation settings
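The sampling rate works as a per-response probability. A quick sketch (the function name is illustrative; a seeded RNG makes the example deterministic):

```python
import random

# Each response is independently chosen for evaluation with probability
# equal to the configured sampling rate.
def should_evaluate(sampling_rate_pct: int, rng: random.Random) -> bool:
    return rng.random() * 100 < sampling_rate_pct

# With a 10% rate, roughly 1 in 10 responses gets evaluated:
rng = random.Random(42)
evaluated = sum(should_evaluate(10, rng) for _ in range(10_000))
```

This is why evaluation cost scales linearly with the rate: halving the rate roughly halves the number of judge-model calls.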
13. Access permissions

Set who can use the agent.
| Option | Description |
| --- | --- |
| Public | Available to all users |
| Private | Available only to you |
| Group/Organization | Available to specified groups or organizations |
Access control settings
14. Save

Click Save to create the agent.

Using Agents

Select in Chat

In the model selector dropdown at the top of the chat, pick an agent. Agents appear in the list alongside regular models.

Invoke with @

Call a specific agent in chat with @agent-name.
```
@MarketingAssistant Draft 5 social posts for this month's promotion
```

Agent Management

| Action | Description |
| --- | --- |
| Activate / Deactivate | Toggle on the agent card to enable/disable; inactive agents can’t be selected in chat |
| Edit | Modify settings via the edit button or “more” menu on the agent card |
| Clone | Quickly create a new agent by copying an existing one |
| Export / Import | Back up and migrate agent settings between environments via JSON |
| Delete | Permanently delete the agent (no recovery) |
Use export/import to move agents created in a development environment over to production.

Use Cases

HR Assistant (example)
Configuration:
  • Base model: GPT-4o-mini
  • Knowledge Base: HR policy, benefits guide
  • Task Prompt: HR specialist role
Conversation:
Q: How do I apply for annual leave?
A: Apply for annual leave with these steps:
1. Open the HR portal
2. Select the leave application menu
3. Enter leave type and date range
4. Request manager approval

[Source: HR Policy Article 15]

Best Practices

Prompt Writing

  1. Define the role clearly — “You are a content specialist on Cloocus’s marketing team”
  2. Provide concrete instructions — response language, length, citation rules, etc.
  3. Set restrictions — no competitor disparagement, no PII exposure, etc.

Knowledge Base Connection

  • Connect only relevant documents: Too many documents actually degrade retrieval accuracy
  • Keep documents up-to-date: Refresh stale information regularly
  • Write Knowledge Base descriptions: A detailed description for each Knowledge Base helps the agent pick the right one for a given question

Access Permissions

  • Principle of least privilege: Grant access only to those who need it
  • Manage by group/organization: More efficient than per-user assignment
  • Review periodically: Check permission settings on a regular cadence

FAQ

Q: How is an agent different from the base model?
A: Agents add Knowledge Bases, Tools, system prompts, and Guardrails to a base model, optimizing it for a specific task. Use the base model for general-purpose chat and agents for task-specialized chat.
Q: Can one agent connect to multiple Knowledge Bases or databases?
A: Yes — connect multiple Knowledge Bases and databases at once. The agent automatically picks the appropriate resource based on the question. Writing a detailed description for each Knowledge Base improves selection accuracy.
Q: A capability (web search, image generation, Code Interpreter) isn’t working in chat. What should I check?
A: Check the agent’s capability settings:
  • Disabled: The capability is completely hidden in chat
  • Default Off: User must turn it on in the chat input
  • Default On: Auto-enabled. If still not working, check admin settings (web search/image generation connections)
Code Interpreter requires both the agent setting and the user toggle in chat to be on.
Q: Can I monitor agent usage and quality?
A: Yes — view per-agent usage, token consumption, and auto-evaluation results in the monitoring dashboard.
Q: Can I move an agent to another environment?
A: Yes — use Export to download the JSON, then Import in the other environment. Note: connected Knowledge Bases, tools, and guardrails must be set up separately in the target environment.
Q: Can I hide an agent without deleting it?
A: Yes — click More menu > Hide on the agent card to hide it from the chat model selector. The agent isn’t deleted and can be unhidden.

Knowledge Base

Document-based knowledge stores you can attach to agents

Guardrails

Validate agent I/O for security

Tools

OpenAPI / MCP external service integration