| Data Type | Retention / Handling |
|---|---|
| API inputs/outputs | 30 days, then deleted |
| Claude Pro/Free conversations | Until YOU delete them |
| Feedback (thumbs up/down) | 5 years + may be used for training |
| Usage policy violations | 2-7 years |
| Device info, IP, usage patterns | Collected automatically |
People absolutely do run local models. The r/LocalLLaMA community has 300k+ members.
| Budget | Hardware | What It Runs |
|---|---|---|
| ~$200-400 | Raspberry Pi 5 / Old laptop | 7B models only, slow |
| ~$500 | Used RTX 3090 (24GB VRAM) + a host PC | 13B at full speed; 70B only heavily quantized/offloaded |
| ~$1,000 | Mac Mini M4 24GB | 7-13B comfortably; quantized 30B is tight |
| ~$1,500 | Mac Mini M4 Pro 48GB | Quantized 70B at decent speed |
| ~$2,000 | RTX 4090 (24GB VRAM) | Very fast up to ~30B quantized; 70B needs CPU offload |
| ~$3,000+ | Mac Studio 64-128GB | Quantized 70B+ at good speed, headroom for long contexts |
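Once the hardware is in place, the usual way to serve a model locally is a runtime such as Ollama or llama.cpp. A minimal sketch of querying a locally served model over Ollama's HTTP API, assuming `ollama serve` is already running on its default port and the example tag `llama3.1:8b` has been pulled (pick whatever tag your memory budget allows):

```python
import requests

# Chat with a locally served model via Ollama's HTTP API.
# Nothing here leaves localhost.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1:8b",  # swap in whatever tag fits your RAM/VRAM
        "messages": [
            {"role": "user", "content": "In one sentence, what is quantization?"}
        ],
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```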
| Model Size | RAM/VRAM Needed | Quality Level |
|---|---|---|
| 7B params | 8-16GB | Basic tasks, simple code |
| 13B params | 16-24GB | Decent general use |
| 30B params | 24-48GB | Good quality |
| 70B params | 48-64GB+ | Approaches Claude/GPT quality |
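The memory column follows from simple arithmetic: the weights take roughly (parameters × bits per weight ÷ 8) bytes, and the KV cache plus runtime add overhead on top. A back-of-the-envelope sketch; the 20% overhead factor is an assumption, not a measured constant:

```python
def approx_memory_gb(params_b: float, bits_per_weight: float = 4,
                     overhead: float = 1.2) -> float:
    """Rough memory to run a model: weight bytes plus ~20% runtime overhead."""
    weight_gb = params_b * bits_per_weight / 8  # billions of params -> GB
    return weight_gb * overhead

for size in (7, 13, 30, 70):
    print(f"{size:>2}B: ~{approx_memory_gb(size):.0f} GB at 4-bit, "
          f"~{approx_memory_gb(size, bits_per_weight=16):.0f} GB at FP16")
```

The ranges in the table roughly span the 4-bit and FP16 ends of that estimate (e.g. 70B is ~42 GB at 4-bit, which is why 48GB of memory is the practical floor).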
| Setup | Who Sees Your Files | Who Sees Your Prompts |
|---|---|---|
| Local Mac/PC | Just you | Nobody (fully private) |
| Cloud VPS + API | VPS provider | AI company (Anthropic/OpenAI) |
| Claude Pro web | N/A (no local file access) | Anthropic |
| OpenClaw + Cloud API | Just you (local) | AI company |
| OpenClaw + Local LLM | Just you | Nobody |
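In practice, the last two rows differ only in which endpoint the agent's API client points at. Ollama exposes an OpenAI-compatible endpoint at `/v1`, so the swap is one base URL; this is a sketch of the idea, not OpenClaw's actual configuration:

```python
from openai import OpenAI

# Local: prompts never leave this machine. Ollama ignores the API key,
# but the client requires a non-empty placeholder.
local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

reply = local.chat.completions.create(
    model="llama3.1:8b",
    messages=[{"role": "user", "content": "Hello from a fully local setup."}],
)
print(reply.choices[0].message.content)
```

Pointing the same client at a hosted base URL with a real key reproduces the "AI company sees your prompts" rows.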
- **For experimentation:** start with Claude Pro ($20/mo) or a cloud API
- **For privacy-sensitive work:** invest in a local setup (a Mac with 48GB+ unified memory or an RTX 4090)
- **Hybrid approach:** use local models for sensitive data and the cloud for complex tasks (see the routing sketch below)
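The hybrid bullet can be as simple as a router that screens each prompt before picking a backend. A sketch with a hypothetical sensitivity pattern and example model names, reusing both clients from the previous example:

```python
import re
from openai import OpenAI

local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
cloud = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical pattern -- tune to whatever counts as sensitive for you.
SENSITIVE = re.compile(r"password|ssn|salary|medical|api[_ ]?key", re.I)

def ask(prompt: str) -> str:
    """Route sensitive prompts to the local model, the rest to the cloud."""
    if SENSITIVE.search(prompt):
        client, model = local, "llama3.1:8b"
    else:
        client, model = cloud, "gpt-4o-mini"
    reply = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return reply.choices[0].message.content

print(ask("Draft an email about my salary negotiation."))  # stays local
```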
Generated by Wren | February 2026