Problem Statement
How do you build a memory-efficient data pipeline with generators? Explain the pattern.
Explanation
Create a chain of small generator functions, each transforming or filtering items and yielding them to the next stage. Because each stage produces values only on demand, just a few items live in memory at any moment.
Compose the stages with for loops or yield from; since the consumer pulls items one at a time, that demand acts as natural back pressure controlling the flow. This pattern is ideal for log processing, streaming CSVs, and network streams where you cannot load everything into memory at once.
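As a minimal, self-contained sketch of composing stages with yield from (the stage names here are illustrative only, not part of the solution below):

def numbers():
    # Source stage: emit values lazily, one per request.
    yield from range(5)

def evens(items):
    # Filter stage: forward only even values.
    for n in items:
        if n % 2 == 0:
            yield n

def pipeline():
    # Compose the stages; yield from forwards each item on demand.
    yield from evens(numbers())

print(list(pipeline()))  # [0, 2, 4]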
Code Solution
def read_lines(f):
    # Strip and yield one line at a time; nothing is buffered.
    for line in f:
        yield line.strip()

def only_errors(lines):
    # Forward only lines that contain 'ERROR'.
    for s in lines:
        if 'ERROR' in s:
            yield s

with open('app.log') as f:
    for msg in only_errors(read_lines(f)):
        print(msg)
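Because each stage holds only the line it is currently handling, memory use stays flat no matter how large app.log grows, and adding another transformation is just a matter of wrapping one more generator around the chain.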