Cloosphere supports a variety of AI models. Pick the right one for your goal and tune parameters in detail.

How to Select a Model

Choose a model from the dropdown in the chat header.
Chat header model selector
1. Open the dropdown
   Click the model selection area at the top of the chat.
2. Search and select a model
   Pick a model from the list, or type a name in the search box to filter.
3. Set as default (optional)
   Click “Set as default” below the model selector to use this model as the default for new chats.
Type @modelname in the input box to ask a specific model directly, regardless of the currently selected model.

Default Model

When you set a frequently used model as default, it’s selected automatically every time you start a new chat.
1. Pick a model
   Choose the model you want as default in the dropdown.
2. Save as default
   Click “Set as default” below the model selector. A “Default model updated” notification appears.
The default model is per-user. If multiple models are selected when you save the default, the multi-model selection is saved as default.

Multi-model Conversations

Get answers from multiple models for the same question. You can select up to 4 models simultaneously, including the first one.
Multi-model selection
1. Add a model
   Click the “+” button to the right of the first model selector.
2. Pick additional models
   Choose models to compare from the new dropdowns. Up to 4 models total, including the first.
3. Send the question
   When you send the message, all selected models generate responses simultaneously.
4. Compare and merge responses
   Compare the responses, and use Merge Responses to combine the best answers if needed.
Comparing multi-model responses
  • Quality comparison: Compare model performance with the same question
  • Cross-validation: Get multiple opinions before important decisions
  • Best-answer selection: Combine each model’s strengths into the best answer
  • Cost vs. performance: Compare results from low-cost and high-performance models
The multi-model feature can be restricted for regular users by admin settings. If you don’t have permission, the “+” button isn’t shown.

Tuning Model Parameters

The Chat Controls panel lets you tune model parameters per conversation.

Opening the Panel

| Environment | Path |
| --- | --- |
| Desktop | Top-right More (⋯) menu in the chat → select Overview or Artifacts |
| Mobile | Tap the slider icon on the right of the header, or open the More menu → Controls |
The desktop More (⋯) menu only appears after sending the first message. In a fresh chat, send a prompt once to start the conversation, then tune parameters. (Adjusted values apply to subsequent messages.)
Once the panel opens, switch to the Controls view in the left tabs to adjust the System Prompt and Advanced Params.
Chat Controls panel

System Prompt

Set a system prompt that applies to the entire conversation.
Example: "You are a Python expert. All code must include type hints
and follow PEP 8 style."

Key Parameters

Temperature

Controls response creativity (randomness).

| Value | Behavior | Use Case |
| --- | --- | --- |
| 0.0 ~ 0.3 | Deterministic, consistent answers | Code generation, fact-checking, data extraction |
| 0.4 ~ 0.7 | Balanced answers | General chat, summarization, translation |
| 0.8 ~ 2.0 | Creative, varied answers | Brainstorming, creative writing, idea generation |

Range: 0.0 ~ 2.0 (UI slider initial value: 0.8; this is the value shown when switching to custom and may differ from the model’s actual default)
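Conceptually, temperature rescales the model’s raw next-token scores before they are turned into probabilities. The sketch below is illustrative only (the logit values are made up), but it shows why low temperatures concentrate probability on the top token while high temperatures flatten the distribution:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits into sampling probabilities, scaled by temperature.

    Lower temperature sharpens the distribution (more deterministic output);
    higher temperature flattens it (more varied output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token scores

low = softmax_with_temperature(logits, 0.2)   # near-deterministic
high = softmax_with_temperature(logits, 2.0)  # much flatter

print(low[0] > high[0])  # True: low temperature concentrates mass on the top token
```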
Top P

Limits the probability range the model considers when picking the next token.
  • 0.1: Considers only the top 10% probability tokens — very focused
  • 0.9: Considers the top 90% — diverse expression
  • Generally used together with Temperature; we recommend tuning only one at a time.
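The cutoff behavior described above (nucleus sampling) can be sketched in a few lines; the token probabilities here are invented for illustration:

```python
def top_p_filter(probs, p):
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches p, then renormalize so they sum to 1.

    probs: list of (token, probability) pairs.
    """
    ranked = sorted(probs, key=lambda kv: kv[1], reverse=True)
    kept, cum = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        cum += prob
        if cum >= p:
            break
    total = sum(prob for _, prob in kept)
    return [(token, prob / total) for token, prob in kept]

probs = [("the", 0.5), ("a", 0.3), ("an", 0.15), ("this", 0.05)]

print(top_p_filter(probs, 0.1))       # very focused: only "the" survives
print(len(top_p_filter(probs, 0.9)))  # 3 tokens remain in the candidate pool
```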
Max Tokens

Limits the maximum length of the AI response in tokens. If unset, the model’s default max applies.
Setting it too low can cause responses to be cut off mid-stream.
Frequency Penalty

Discourages reuse of tokens that have already appeared; higher values reduce repetition.
Range: -2.0 ~ 2.0 (default: 0)
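A common way this kind of penalty is implemented (the OpenAI-style formula, shown here as an illustrative sketch with made-up logits) is to subtract the penalty multiplied by each token’s occurrence count so far:

```python
from collections import Counter

def apply_frequency_penalty(logits, generated_tokens, penalty):
    """Subtract penalty * count from each token's logit, based on how many
    times that token has already appeared in the generated output."""
    counts = Counter(generated_tokens)
    return {tok: logit - penalty * counts.get(tok, 0)
            for tok, logit in logits.items()}

logits = {"hello": 2.0, "world": 1.5}  # hypothetical next-token scores
adjusted = apply_frequency_penalty(logits, ["hello", "hello"], penalty=0.5)

print(adjusted)  # {'hello': 1.0, 'world': 1.5} — "hello" is now less likely
```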
Seed

Setting a fixed seed produces (nearly) identical responses for the same prompt. Useful when you need reproducible results.
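The idea is the same as seeding any pseudo-random generator; this sketch uses Python’s PRNG as a stand-in for the model’s sampler:

```python
import random

def sample_run(seed):
    """Stand-in for a model's sampler: the same seed yields the same draws."""
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(5)]

print(sample_run(42) == sample_run(42))  # True: identical seed, identical output
```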
Reasoning Effort

Applies only to reasoning models. Controls how much effort goes into reasoning.
  • Use values like low, medium, high.
  • Not supported by all models.
Stop Sequence

When the specified string appears, the model stops generating. Use it to control specific output formats.
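In effect, generation is truncated at the first occurrence of any stop string. A minimal sketch of that behavior (the sample text and stop marker are illustrative):

```python
def truncate_at_stop(text, stop_sequences):
    """Cut generated text at the earliest occurrence of any stop string."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

out = truncate_at_stop("Answer: 42\n###\nextra text", ["###"])
print(repr(out))  # 'Answer: 42\n' — everything after the stop marker is dropped
```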

Other Advanced Parameters

| Parameter | Description |
| --- | --- |
| Stream Chat Response | Real-time streaming response on/off |
| Function Calling | Tool calling mode (Default / Native) |
| Top K | Considers only the top K tokens for sampling |
| Min P | Considers only tokens with at least the minimum probability |
| Presence Penalty | Applies a flat penalty to already-seen tokens to encourage new topics (range: -2.0 ~ 2.0) |
| Repeat Penalty | Penalty on repeated tokens (Ollama-only, range: 0.0 ~ 2.0) |
| Repeat Last N | Number of recent tokens checked for repetition |
| Mirostat | Algorithm that auto-regulates response perplexity |
| Context Length (Ollama) | Context window size (in tokens) |
| Logit Bias | Direct adjustment of the appearance probability of specific tokens (-100 ~ 100) |
The Chat Controls panel itself is visible to all users, but the System Prompt and Advanced Params sections are only shown to admins or users with the chat-controls permission (permissions.chat.controls).
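Taken together, the key and advanced parameters correspond to fields in a request body like the sketch below, assuming an OpenAI-compatible chat completions convention. The model name, values, and exact parameter support here are assumptions for illustration, not confirmed Cloosphere behavior:

```python
# Hypothetical request body. Field names follow the common
# OpenAI-compatible convention; actual support varies by backend model.
payload = {
    "model": "example-model",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a Python expert."},
        {"role": "user", "content": "Write a hello-world script."},
    ],
    "temperature": 0.3,        # low value: deterministic, good for code
    "top_p": 0.9,              # nucleus sampling cutoff
    "max_tokens": 512,         # cap on response length in tokens
    "frequency_penalty": 0.0,  # no extra anti-repetition pressure
    "seed": 42,                # reproducible sampling where supported
    "stop": ["###"],           # stop generating at this string
    "stream": True,            # stream tokens as they are produced
}

print(sorted(payload))
```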

Model Capability Display

Agent models manage each capability (web search, image generation, code execution) individually.
| State | Description |
| --- | --- |
| on | Enabled by default |
| user | User can enable manually (default off) |
| off | Disabled (not shown in the input menu) |
For base models (those without base_model_id), all capabilities enabled by the admin in system settings are shown.