Problem Statement
Large payloads (5–20 MB) are causing broker pressure. How would you redesign message flow?
Explanation
Keep events small: publish metadata and a pointer to object storage. Consumers fetch the blob when needed, with signed URLs and short TTLs. Add server-side compression and chunking for rare large items.
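The claim-check flow above can be sketched as follows. This is a minimal illustration, not production code: an in-memory dict stands in for object storage, and the names (`publish_claim_check`, `consume`, `OBJECT_STORE`) are hypothetical. In production the pointer would be a short-TTL signed URL rather than a raw key.

```python
import hashlib
import json

# Hypothetical in-memory stand-in for S3/GCS object storage.
OBJECT_STORE: dict[str, bytes] = {}

def publish_claim_check(event_id: str, event_type: str, payload: bytes) -> dict:
    """Store the large payload out of band; publish only a small pointer event."""
    key = f"events/{event_id}"
    OBJECT_STORE[key] = payload
    return {
        "id": event_id,
        "type": event_type,
        "s3_url": key,  # in production: a signed URL with a short TTL
        "checksum": hashlib.sha256(payload).hexdigest(),
    }

def consume(event: dict) -> bytes:
    """Fetch the blob lazily and verify integrity before processing."""
    blob = OBJECT_STORE[event["s3_url"]]
    if hashlib.sha256(blob).hexdigest() != event["checksum"]:
        raise ValueError("checksum mismatch: blob corrupted or replaced")
    return blob

# The broker carries only a small JSON event, never the multi-MB blob.
event = publish_claim_check("evt-1", "report.created", b"x" * 1024)
assert len(json.dumps(event)) < 300
assert consume(event) == b"x" * 1024
```

The checksum in the event lets consumers detect a blob that was overwritten or corrupted between publish and fetch, which matters once the payload's lifecycle is decoupled from the message's.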
Introduce quotas and reject oversize messages early, at the producer, before they reach the broker. For hot objects, cache near consumers to avoid repeated downloads, and tune consumer prefetch so in-flight messages do not balloon memory.
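The producer-side quota check might look like this sketch. The quota value and the `prepare` helper are illustrative assumptions, not part of any broker API; the idea is to fail fast, try compression as a cheap fallback, and route anything still oversize through the claim-check path.

```python
import gzip

MAX_MESSAGE_BYTES = 256 * 1024  # illustrative quota; tune per broker config

def prepare(message: bytes) -> bytes:
    """Reject oversize messages at the producer, trying compression first."""
    if len(message) <= MAX_MESSAGE_BYTES:
        return message
    compressed = gzip.compress(message)
    if len(compressed) <= MAX_MESSAGE_BYTES:
        return compressed  # consumer must detect and decompress (e.g. via a header)
    # Still too big: do not publish inline; use the object-storage pointer instead.
    raise ValueError(
        f"payload is {len(message)} B, exceeds quota even compressed; "
        "route via claim-check"
    )
```

Rejecting at the producer is cheaper than letting the broker enforce its limit: the error surfaces in the publishing service's logs, where the owning team can fix it.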
Code Solution
event: { id, type, s3_url, checksum }  // fetch & verify on consume