Two options: batch-enrich via the API, or drop a monthly MMDB into your log pipeline and keep raw IPs out of your analyst's hands entirely.
Every hour, pull the last hour's IPs from your log ingest, chunk into 100-item batches, call /v1/batch, and join the results back onto the log line.
```python
# Hourly enrichment: batch-look-up this hour's unique IPs and upsert the results.
import os
from ipatlas import IPAtlas

c = IPAtlas(api_key=os.environ["IPATLAS_KEY"])

for chunk in chunked(unique_ips_this_hour, 100):  # chunked(): your 100-item batching helper
    r = c.lookup_batch(chunk)                     # one /v1/batch call per chunk
    for row in r["results"]:
        warehouse.upsert("ip_enrichment", row)    # join back onto the log lines by IP
```
Volume check: 1M unique IPs/day = 10,000 batch calls = 1M billable requests per day. On Developer ($19, 2M requests/mo) that burns half the monthly quota in a single day; sustained, it's roughly 30M requests/month, 15x the plan.
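The quota arithmetic, spelled out (plan numbers from the text above):

```python
# Back-of-envelope quota math for the hourly batch pattern.
unique_ips_per_day = 1_000_000
batch_size = 100

batch_calls_per_day = unique_ips_per_day // batch_size  # 10,000 HTTP calls/day
lookups_per_day = unique_ips_per_day                    # each IP counts as one request
lookups_per_month = lookups_per_day * 30                # ~30M requests/month

developer_quota = 2_000_000  # Developer plan, $19/mo
print(lookups_per_day / developer_quota)    # 0.5  -> half the monthly quota in one day
print(lookups_per_month / developer_quota)  # 15.0 -> 15x the plan over a month
```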
Developer+ subscribers get monthly MMDB snapshots. Ship them with your log shipper (Vector, Fluent Bit, Filebeat) and enrich inline. No outbound HTTP, no latency, no analyst ever touches the raw IP.
```toml
# Vector config — MMDB enrichment on every log event
[transforms.geo]
type = "geoip"
inputs = ["logs"]
database = "/etc/vector/ipatlas-2026-04.mmdb"
source = "ip"
target = "geo"
```
Filter on is_datacenter to keep real user trends clean.

GDPR, CCPA, and most state privacy laws treat IP addresses as personal data. The MMDB pattern lets you enrich once at the log shipper and strip the raw IP before the data reaches your warehouse. The country / ASN stays; the identifier doesn't.
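To actually drop the identifier, you can chain a remap transform after the geoip step so only the derived fields reach the sink. A sketch, assuming the `geo` transform above and an `ip` field on each event (transform names are illustrative):

```toml
# Strip the raw IP after enrichment — only derived geo/ASN fields leave the shipper
[transforms.strip_ip]
type = "remap"
inputs = ["geo"]
source = '''
del(.ip)  # remove the personal identifier before any sink sees the event
'''
```

Point your warehouse sink at `strip_ip` instead of `geo`, and the raw address never leaves the edge.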