To run the service: process your prompts, route them to the model providers you select, store results, and write back to your tools.
To improve the product: analyze aggregate, anonymized usage signals — what works, what breaks, where people get stuck.
To support you: respond to messages, troubleshoot issues, send service announcements.
We don’t use your workspace content to train shared models. If you opt in to third-party model providers (OpenAI, Anthropic, and others), those calls are subject to each provider’s policy at the time of the call.