Encryption at rest
Core documents, spreadsheet data, content blocks, file metadata, and other persisted application data are encrypted in storage rather than left as plain text.
Jetdraft is built for work that often includes customer data, internal process details, supplier information, pricing, legal language, and other sensitive context. Our security model starts with encrypted storage. It then gets stricter at the inference layer: provider-bound traffic is routed heavily through providers that publish stronger zero-data-retention and compliance commitments than the default posture many AI products ship with.
We lean heavily on providers like Fireworks and DeepInfra because they publicly document stronger no-training commitments and low- or zero-retention handling for the inference paths Jetdraft uses.
If you enable word masking, sensitive terms are replaced before provider-bound content is sent, so the original confidential terms never reach the model.
Encryption is applied at write time, not added as a layer after the fact. Document bodies, spreadsheet data, content blocks, and file metadata all go through encrypted storage paths.
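As a rough illustration of what encrypt-at-write means as a pattern (this is a minimal sketch, not Jetdraft's implementation; the class names are hypothetical, and the base64 "cipher" is a stand-in for a real algorithm such as AES-GCM purely to keep the sketch runnable):

```python
import base64
import json


class Base64Cipher:
    """Placeholder cipher, NOT real encryption. A production store would
    swap in authenticated encryption such as AES-GCM behind this interface."""

    def encrypt(self, plaintext: bytes) -> bytes:
        return base64.b64encode(plaintext)

    def decrypt(self, ciphertext: bytes) -> bytes:
        return base64.b64decode(ciphertext)


class EncryptedStore:
    """Encrypt-at-write: the write path transforms the payload before it
    ever reaches storage, so there is no plaintext copy to retrofit later."""

    def __init__(self, cipher):
        self._cipher = cipher
        self._rows = {}  # stands in for the persistence layer

    def write(self, doc_id: str, payload: dict) -> None:
        plaintext = json.dumps(payload).encode()
        # Encryption happens here, inside the write path itself.
        self._rows[doc_id] = self._cipher.encrypt(plaintext)

    def read(self, doc_id: str) -> dict:
        return json.loads(self._cipher.decrypt(self._rows[doc_id]))
```

The point of the sketch is the placement of the `encrypt` call: it sits inside `write`, so nothing downstream of it ever sees plaintext, rather than being a separate pass applied to data already at rest.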
Most AI products default to providers that retain request data and logs. Jetdraft routes through Fireworks and DeepInfra specifically because their published policies document materially stronger no-retention postures.
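Conceptually, routing on published retention posture amounts to an allowlist keyed on policy metadata. The sketch below is hypothetical (the registry, field names, and function are illustrative, not Jetdraft's actual routing code); the policy flags for the two named providers mirror their published commitments as described in this section:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Provider:
    name: str
    zero_retention: bool    # per the provider's published policy
    trains_on_inputs: bool  # per the provider's published policy


# Hypothetical registry for illustration.
REGISTRY = [
    Provider("fireworks", zero_retention=True, trains_on_inputs=False),
    Provider("deepinfra", zero_retention=True, trains_on_inputs=False),
    Provider("generic-default", zero_retention=False, trains_on_inputs=True),
]


def eligible_providers(registry: list[Provider]) -> list[Provider]:
    """Only providers with a published zero-retention, no-training
    posture are eligible to receive inference traffic."""
    return [p for p in registry if p.zero_retention and not p.trains_on_inputs]
```

Under this policy, the retain-by-default provider is simply never a candidate, regardless of cost or latency.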
Term masks replace sensitive words before content reaches the provider. The model receives the alias or REDACTED value — not the original.
Legal, financial, consulting, and operations teams often work with context that should not travel through a default-retention AI pipeline. Jetdraft's stack is built around that constraint.
If your policy prohibits sending data through the retained-by-default pipelines common in mainstream AI products, Jetdraft's routing posture is materially different.
Word masking is designed for exactly this case: the model can still do the job, but it does so on aliases or redacted replacements instead of the original confidential language.
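Mechanically, masking is a reversible substitution: confidential terms are swapped for aliases before the text leaves the application, and the aliases are swapped back in the model's response. A minimal sketch of the pattern (the function names and alias scheme are hypothetical, not Jetdraft's actual implementation):

```python
import re


def mask_terms(text: str, aliases: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Replace configured confidential terms with aliases before the text
    is sent to an inference provider. Returns the masked text plus the
    reverse map needed to restore terms in the model's response."""
    reverse = {alias: term for term, alias in aliases.items()}
    for term, alias in aliases.items():
        text = re.sub(re.escape(term), alias, text, flags=re.IGNORECASE)
    return text, reverse


def unmask(text: str, reverse: dict[str, str]) -> str:
    """Restore the original terms in provider output before showing it."""
    for alias, term in reverse.items():
        text = text.replace(alias, term)
    return text
```

For example, with `{"Acme Corp": "CLIENT_A"}` configured, the provider only ever sees `CLIENT_A`; the original name is restored locally after the response comes back.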
Fireworks publicly states zero data retention for open models, no training on prompts or API inputs without explicit opt-in, SOC 2 Type II certification, and HIPAA compliance.
DeepInfra publicly states that standard inference data is held in memory only for the duration of the request and then deleted, with no disk storage in normal prompt/output handling, plus SOC 2 / ISO 27001 alignment.
For masked workflows, configured confidential terms are replaced before provider-facing content is sent, which is the strongest zero-trust option Jetdraft offers short of never sending the content at all.