Problem Statement
How would you design a logging architecture for a large Kubernetes cluster to support debugging and audit trails?
Explanation
For a large Kubernetes environment, you typically centralise container logs with a logging agent (such as Fluentd or Filebeat) running as a DaemonSet on every node, forward the data to a log store (e.g., Elasticsearch), and provide dashboards (Kibana/Grafana) for querying and alerting. Enrich each record with metadata (pod, namespace, node) for traceability, and apply retention policies plus archival to cheaper storage to satisfy audit requirements. This design gives visibility into both real-time operations and historical incidents; a sketch of the node-level agent follows.
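As a minimal sketch of the collection tier, the manifest below deploys a node-level log agent as a DaemonSet (Fluent Bit is assumed here; Fluentd or Filebeat would follow the same pattern). The names, namespace, image tag, and Elasticsearch endpoint are illustrative assumptions, not a definitive setup.

```yaml
# Hypothetical DaemonSet for a per-node log agent (Fluent Bit assumed).
# Every node gets one agent pod that tails container logs under /var/log
# and forwards them to a central log store.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit              # assumed name
  namespace: logging            # assumed namespace
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      serviceAccountName: fluent-bit    # needs RBAC to read pod metadata from the API server
      tolerations:
        - operator: Exists              # collect from tainted/control-plane nodes too
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:2.2  # pin an exact version in practice
          env:
            - name: ES_HOST             # assumed Elasticsearch service endpoint
              value: elasticsearch.logging.svc.cluster.local
          volumeMounts:
            - name: varlog              # container stdout/stderr logs on the node
              mountPath: /var/log
              readOnly: true
            - name: config              # pipeline config: inputs, filters, outputs
              mountPath: /fluent-bit/etc/
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: config
          configMap:
            name: fluent-bit-config     # assumed ConfigMap holding the pipeline definition
```

In the referenced ConfigMap, Fluent Bit's kubernetes filter would enrich each record with pod, namespace, and node metadata before the Elasticsearch output ships it to the store; retention and archival for audit would then be handled on the storage side (for example, index lifecycle policies that roll old indices to object storage).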