
Not all uploaded files appear here. Only files with classification results or guardrail blocks are shown in the log.
Document Sensitivity Classification
On file upload, an LLM-based classifier automatically determines each document's sensitivity.

| Classification | Description | Action |
|---|---|---|
| PUBLIC | Generally shareable documents | Allow |
| INTERNAL | Internal-use documents | Allow |
| CONFIDENTIAL | Sensitive — PII, financial info | Flag (warn) |
| RESTRICTED | Top secret, regulated | Block (reject upload) |
Guardrail Check Types
Multiple security checks apply automatically, depending on file type.

- Document Classification
- Macro Detection
- EXIF Metadata Removal
- NSFW Detection
Document Classification
LLM-based sensitivity classification that analyzes document text and assigns a grade from PUBLIC to RESTRICTED.
- Target: All files with extractable text (PDF, DOCX, TXT, etc.)
- Result: Classification grade + confidence + reason
- Action: CONFIDENTIAL → flag, RESTRICTED → block
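The grade-to-action mapping above can be sketched as a simple lookup; the function and table names here are illustrative, not the actual implementation:

```python
# Maps each sensitivity grade to the upload action described in the table.
ACTIONS = {
    "PUBLIC": "allow",
    "INTERNAL": "allow",
    "CONFIDENTIAL": "flag",   # allowed, but flagged in the file log
    "RESTRICTED": "block",    # upload rejected
}

def action_for(grade: str) -> str:
    """Return the upload action for a sensitivity grade."""
    return ACTIONS[grade.upper()]
```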
Viewing Logs
Filter Options
| Filter | Options | Description |
|---|---|---|
| Search | Text input | Search by filename, uploader name/email |
| Source | Chat / Knowledge / Project | Filter by upload path |
| Classification | PUBLIC / INTERNAL / CONFIDENTIAL / RESTRICTED | Filter by sensitivity grade |
| Status | Flagged / Blocked | Filter by processing result |
Table Columns
| Column | Description |
|---|---|
| Filename | Original filename + content type |
| Uploader | User name (email subtitle) |
| Source | Upload path (color badges: Chat=blue, Knowledge=purple, Project=green) |
| Classification | Sensitivity grade (color badges: PUBLIC=green, INTERNAL=blue, CONFIDENTIAL=yellow, RESTRICTED=red) |
| Upload time | Upload timestamp |
| Status | Blocked (red) or Flagged (yellow) badge |
Pagination
- 20 entries per page
- Total file count shown at top
- Page navigation controls when over 20
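With a fixed page size of 20, the number of pages follows directly from the total file count shown at the top; a minimal sketch:

```python
import math

PAGE_SIZE = 20  # entries per page in the file log table

def page_count(total_files: int) -> int:
    """Number of pages for a given total; always at least one page.

    Navigation controls only appear when the total exceeds PAGE_SIZE.
    """
    return max(1, math.ceil(total_files / PAGE_SIZE))
```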
Detail View
Click a table row to open the file's guardrail analysis in a modal. Items shown:

- File info: Filename, content type, size (KB)
- Uploader info: Name, email, upload time
- Source: Upload path badge
- Classification result:
  - Sensitivity grade + confidence (%)
  - Classification reason
  - Classification model used
  - Error message, if classification failed
- Guardrail details (when applicable):
  - Block reason and detail JSON
  - EXIF removal result
  - NSFW detection result
File Processing Flow
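The overall flow (per-file-type guardrail checks, then LLM sensitivity classification, with the result recorded in this log) can be sketched roughly as follows. All function names and the macro/keyword heuristics are illustrative stand-ins, not the real implementation:

```python
from typing import Optional

def has_macro(filename: str) -> bool:
    # Stub: the real check inspects Office documents for embedded macros.
    return filename.endswith((".docm", ".xlsm"))

def classify_sensitivity(text: str) -> str:
    # Stub standing in for the LLM-based classifier (PUBLIC .. RESTRICTED).
    return "CONFIDENTIAL" if "ssn" in text.lower() else "PUBLIC"

def process_upload(filename: str, text: Optional[str]) -> str:
    """Illustrative pipeline: returns 'allowed', 'flagged', or 'blocked'."""
    # 1. File-type-specific guardrails (macro detection, EXIF removal,
    #    NSFW detection) run first; a hard failure blocks the upload.
    if has_macro(filename):
        return "blocked"
    # 2. Files with extractable text go through sensitivity classification.
    if text is not None:
        grade = classify_sensitivity(text)
        if grade == "RESTRICTED":
            return "blocked"
        if grade == "CONFIDENTIAL":
            return "flagged"
    # 3. Everything else is allowed; flagged/blocked results appear in the log.
    return "allowed"
```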
Use Cases
Sensitive Document Monitoring
- Set the Classification filter to CONFIDENTIAL
- Review the flagged file list
- Confirm the classification reason and confidence in the detail modal
- Adjust policy in Guardrail Settings as needed
Per-Source Upload Pattern Analysis
- Use the Source filter to view upload status by Chat / Knowledge / Project
- If a source has many blocks, inspect that workflow
- Review Knowledge upload classification results to manage RAG data quality
Blocked File Investigation
- Set the Status filter to Blocked
- Confirm the uploader and block reason for each blocked file
- Review guardrail detail JSON in the detail modal
- Consider revising guardrail rules if false positives
Best Practices
- Periodic monitoring: Review file logs at least weekly to detect anomalous patterns
- Guardrail integration: When blocks/flags occur frequently, review Guardrail Settings
- False positive management: When CONFIDENTIAL/RESTRICTED classification confidence is low, consider changing the classification model
- Source analysis: Watch for patterns where sensitive files concentrate in a specific source
