Sanitize PII before it hits your LLM — 5-line wrapper
Any app that forwards free-text user input to an LLM is leaking personal data into your provider's logs (and sometimes their training sets). Fix: run inputs through a PII sanitizer (e.g. Microsoft Presidio or guardrails-ai) BEFORE the LLM call. Replace names / emails / phones / addresses with placeholder tokens, then restore them in the response. A 5-line wrapper, a major compliance win.
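A minimal sketch of the tokenize-then-restore pattern, using regexes as a stand-in detector (a real setup would plug in Presidio or similar; the patterns and token format here are illustrative assumptions, not any library's API):

```python
import re

# Illustrative detectors only -- swap in a real PII engine in production.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def sanitize(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with placeholder tokens; return text and token map."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            text = text.replace(match, token, 1)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Swap placeholder tokens back to the original values (run on the LLM reply)."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text
```

Usage: `clean, tokens = sanitize(user_input)`, send `clean` to the LLM, then `restore(reply, tokens)` on the way back. The provider only ever sees `<EMAIL_0>`-style placeholders.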