Security
Your data is yours. Period.
We know the #1 concern with AI isn't whether it works — it's whether you can trust it with your business data. Here's exactly how we handle it.
Core principles
Four things we never compromise on.
Your data never trains AI models.
We use AI providers (Anthropic, OpenAI, Google) via enterprise API tiers that explicitly prohibit using customer data for model training. Your conversations, documents, and business data are never used to improve anyone's AI.
Your data stays isolated.
Each client gets a completely isolated agent environment. Your agent's memory, files, and configurations are separated from every other client. No shared context, no cross-contamination.
We don't see what we don't need to.
Your agent connects to your tools with scoped permissions — the minimum access needed to do its job. We don't request broad access and we don't store raw data beyond what the agent needs.
You can leave anytime — and take everything.
No lock-in. Cancel and we export your agent's memory and configuration. We delete our copy within 30 days.
AI providers
Does my data end up in AI training?
No. Here's how, provider by provider:
| Provider | Data policy | Our usage |
|---|---|---|
| Anthropic (Claude) | Enterprise API: zero data retention available, never trains on API data | Primary provider for reasoning and writing |
| OpenAI (GPT) | Enterprise API: data not used for training by default | Used for specific tasks where it performs best |
| Google (Gemini) | Enterprise API: customer data excluded from training | Used selectively |
We use enterprise API tiers exclusively. Your data passes through these providers for processing and is not stored, retained, or used for model training. We can provide links to each provider's enterprise data terms on request.
Technical details
Under the hood.
Infrastructure
Agents run on isolated, dedicated environments
Encryption
All data encrypted in transit (TLS 1.3) and at rest (AES-256)
Authentication
All tool integrations use OAuth 2.0 or API keys with minimal scopes
Access control
Only your designated team members interact with your agent
Backups
Encrypted backups with 30-day retention, stored separately
Monitoring
Automated health monitoring — no human access to your data for monitoring
Logging
Agent actions logged for debugging and transparency — available to you on request
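The minimal-scope principle above can be sketched as a simple permission check. This is an illustrative sketch, not our production code: the scope names, the granted set, and the `is_authorized` helper are assumptions made for the example.

```python
# Hypothetical sketch of scoped permissions: the agent holds only the
# OAuth scopes a client granted, and every action is validated against
# that grant before it runs. All names here are illustrative.

# Scopes the client granted during onboarding (read-only calendar,
# read/send mail on the agent's designated account).
GRANTED_SCOPES = {"calendar.read", "mail.read", "mail.send"}

# The minimal scope each action requires -- nothing broader is requested.
REQUIRED_SCOPE = {
    "read_calendar": "calendar.read",
    "send_email": "mail.send",
    "delete_mailbox": "mail.admin",  # admin scope is never requested or granted
}

def is_authorized(action: str) -> bool:
    """Allow an action only if its minimal scope was explicitly granted."""
    scope = REQUIRED_SCOPE.get(action)
    return scope is not None and scope in GRANTED_SCOPES

print(is_authorized("send_email"))      # True: mail.send was granted
print(is_authorized("delete_mailbox"))  # False: admin scope was never granted
```

The point of the pattern: an unauthorized action fails closed at the permission layer, before any tool call is made.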
Boundaries
What your agent can and can't do.
✓ Your agent CAN
- Read and send emails on its designated account
- Access calendars you've authorized
- Post messages in Slack/Teams channels you've added it to
- Read and update documents you've shared with it
- Remember context about your team, projects, and preferences
✗ Your agent CANNOT
- Access tools or accounts you haven't authorized
- Share information between different clients' agents
- Make purchases, sign contracts, or take legally binding actions
- Access personal devices, cameras, or location
- Override guardrails — even if asked to
Guardrails
Built-in safety, always on.
Action limits
Agents confirm before high-impact actions (external emails, modifying shared documents, scheduling with external parties)
Human escalation
Agents escalate to your team when unsure, rather than guess
No backend exposure
Clients interact with the agent, not our infrastructure
No admin override
Clients direct their agent but cannot modify system configurations
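The confirm-or-escalate flow above can be expressed as a small routing function. A minimal sketch under assumptions: the action names, the confidence threshold, and the `route` helper are hypothetical, chosen only to illustrate the logic.

```python
# Illustrative sketch of the guardrail flow: high-impact actions pause
# for human confirmation, low-confidence decisions escalate to the
# client's team, and everything else executes. Names are assumptions.

# Actions the page lists as high-impact (external emails, shared-doc
# edits, scheduling with external parties).
HIGH_IMPACT = {"send_external_email", "modify_shared_doc", "schedule_external"}

def route(action: str, confidence: float) -> str:
    """Decide how an agent action is handled before execution."""
    if action in HIGH_IMPACT:
        return "confirm_with_human"   # agent asks before acting
    if confidence < 0.8:              # illustrative threshold
        return "escalate_to_team"     # unsure -> ask, don't guess
    return "execute"

print(route("send_external_email", 0.99))   # confirm_with_human
print(route("update_internal_note", 0.55))  # escalate_to_team
print(route("update_internal_note", 0.95))  # execute
```

Note the ordering: high-impact actions always require confirmation, even when the agent is confident, which is what "always on" means in practice.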
Compliance
Certifications & roadmap.
We're pursuing formal security certifications and will update this page as each milestone is reached.
FAQ
Common questions.
Still have questions?
Ask us directly. We'll give you a straight answer, not a sales pitch.
millie@hellomillie.ai