Problem Statement
Explain Linux system performance tuning techniques. How do you identify bottlenecks and optimize CPU, memory, disk, and network performance?
Explanation
Identify bottlenecks: use monitoring tools to determine whether CPU, memory, disk I/O, or the network is constraining performance. top or htop shows per-process CPU and memory usage, vmstat shows system-wide statistics, iostat shows disk I/O, and sar provides historical data. A load average persistently above the number of CPU cores indicates a CPU bottleneck, heavy swap activity indicates memory pressure, and high iowait indicates a disk bottleneck. A quick first pass is shown below.
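For example, a first-pass diagnostic sequence might look like this (iostat and sar come from the sysstat package; exact output formats vary by distribution):
uptime            # load average versus the number of CPU cores
top               # per-process CPU and memory consumers
free -h           # memory and swap usage at a glance
vmstat 1 5        # run queue, swap in/out, and iowait, sampled every second
iostat -x 1 5     # per-device utilization, await, and queue depth
sar -u            # historical CPU utilization from today's sysstat data file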
CPU optimization: identify CPU-intensive processes with top, adjust process priority with nice/renice, pin processes to specific CPU cores with taskset (CPU affinity), disable unnecessary services to reduce CPU load, or upgrade to more or faster CPUs. Setting the cpufreq governor to performance keeps cores at their highest clock speed; the write echo performance > /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor has to be applied per core, as in the sketch below.
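A minimal sketch, run as root; PID 4242 stands in for a hypothetical CPU-hungry process, and cpufreq governors are only exposed on hardware or VMs that support frequency scaling:
renice -n 10 -p 4242        # lower the scheduling priority of a background job
taskset -cp 0,1 4242        # pin PID 4242 to CPU cores 0 and 1
# write the governor per core; a bare glob in the redirect target would be an ambiguous redirect
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance > "$g"; done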
Memory optimization: add more RAM if the system is constantly swapping; tune swappiness (echo 10 > /proc/sys/vm/swappiness reduces the kernel's tendency to swap, default 60); tune cache pressure (echo 50 > /proc/sys/vm/vfs_cache_pressure makes the kernel hold on to dentry and inode caches longer); identify memory leaks (a steadily growing RSS in top); disable unused services; and switch to lightweight alternatives. Writes to /proc/sys are lost at reboot, so persist them as shown below.
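A sketch of persisting the values above with sysctl (the drop-in file name 99-tuning.conf is an arbitrary choice):
sysctl -w vm.swappiness=10           # takes effect immediately, lost at reboot
sysctl -w vm.vfs_cache_pressure=50
printf 'vm.swappiness = 10\nvm.vfs_cache_pressure = 50\n' > /etc/sysctl.d/99-tuning.conf
sysctl --system                      # reload all sysctl configuration files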
Disk I/O optimization: use faster storage (SSD instead of HDD); tune the I/O scheduler (cat /sys/block/sda/queue/scheduler lists the available schedulers with the active one in brackets, and echo deadline > /sys/block/sda/queue/scheduler selects deadline, a reasonable choice for SSDs on older kernels); increase read-ahead with blockdev --setra 8192 /dev/sda; put different workloads on separate disks (logs on a different disk from the database); tune filesystem mount options (noatime reduces writes); and enable write caching (only with a UPS or battery-backed cache). See the sketch below.
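A sketch assuming the target device is /dev/sda; scheduler names depend on the kernel (older kernels offer noop/deadline/cfq, newer multi-queue kernels offer none/mq-deadline/bfq/kyber):
cat /sys/block/sda/queue/scheduler                  # active scheduler is shown in brackets
echo mq-deadline > /sys/block/sda/queue/scheduler   # or 'deadline' on older kernels
blockdev --setra 8192 /dev/sda                      # raise read-ahead to 8192 sectors (512-byte units)
blockdev --getra /dev/sda                           # verify the new read-ahead value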
Network optimization: tune TCP buffers (sysctl -w net.core.rmem_max=16777216 and sysctl -w net.core.wmem_max=16777216 raise the maximum receive and send buffer sizes); tune TCP settings (sysctl -w net.ipv4.tcp_tw_reuse=1 lets outgoing connections reuse sockets stuck in TIME_WAIT); use a faster network interface; and enable jumbo frames where the whole path supports them (historically ifconfig eth0 mtu 9000, now done with the ip command as shown below).
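A combined sketch (run as root; persist the sysctl values via /etc/sysctl.d/ as shown earlier, and confirm every switch and NIC on the path supports an MTU of 9000 before enabling jumbo frames):
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_tw_reuse=1
ip link set dev eth0 mtu 9000     # modern replacement for 'ifconfig eth0 mtu 9000'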
Filesystem tuning: choose an appropriate filesystem (XFS for large files, ext4 for general use); tune mount options (noatime and nodiratime reduce metadata writes, data=writeback favors performance, data=journal favors consistency); and adjust reserved blocks (tune2fs -m 1 /dev/sda1 reduces the reserved space from the default 5% to 1%). For example:
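# /etc/fstab entry with reduced metadata writes (UUID and mount point are placeholders)
# UUID=xxxx-xxxx  /data  ext4  defaults,noatime,nodiratime  0 2
tune2fs -m 1 /dev/sda1                                   # reserved blocks: 5% -> 1%
tune2fs -l /dev/sda1 | grep -i 'reserved block count'    # verify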
Kernel tuning: parameters live in /etc/sysctl.conf (or /etc/sysctl.d/) and under /proc/sys. Examples: net.ipv4.ip_local_port_range sets the ephemeral port range, net.core.somaxconn caps the listen backlog, and vm.dirty_ratio and vm.dirty_background_ratio control write-back caching. Apply changes with sysctl -p.
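Sample /etc/sysctl.conf entries (the values are illustrative starting points, not universal recommendations):
net.ipv4.ip_local_port_range = 1024 65535
net.core.somaxconn = 4096
vm.dirty_ratio = 20
vm.dirty_background_ratio = 10
# then apply with: sysctl -p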
Benchmarking: establish a baseline before tuning and benchmark again after every change. Common tools: sysbench (CPU, memory, I/O), iperf (network), fio (disk I/O), and stress-ng (stress testing). Document each change and its measured result. Understanding performance tuning lets you make the most of the system's CPU, memory, disk, and network resources.
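Example baseline runs (test parameters, file sizes, and the iperf3 server address 192.168.1.10 are illustrative):
sysbench cpu --cpu-max-prime=20000 run
sysbench memory run
fio --name=randread --rw=randread --bs=4k --size=1G --numjobs=4 --runtime=60 --group_reporting
iperf3 -c 192.168.1.10 -t 30           # requires an iperf3 server at the target address
stress-ng --cpu 4 --vm 2 --timeout 60s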