Problem Statement
Which option reduces memory usage when loading a large CSV file with pandas?
Explanation
Specifying an explicit dtype for each column and selecting only the needed columns with usecols cuts RAM usage, because pandas otherwise loads every column and infers wide default types such as int64, float64, and object. Parsing dates during the read (parse_dates) also helps when the date columns are known, since datetime64 values take far less memory than the object strings pandas would otherwise keep.
For files too large to fit in memory at once, pass chunksize to stream the data and process it in batches (see the sketch after the code solution).
Code Solution
import pandas as pd

# Read only the needed columns, each with an explicit narrow dtype
df = pd.read_csv('data.csv', usecols=['id', 'price'], dtype={'id': 'int32', 'price': 'float32'})
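If the file is still too large to hold in memory, the same read can be streamed with chunksize. The sketch below is a minimal example under stated assumptions: the column names 'id', 'price', and 'ts' and the per-chunk sum are illustrative placeholders for whatever columns and per-batch processing the data actually needs.

import pandas as pd

# Minimal sketch: stream the CSV in 100k-row chunks instead of loading it whole.
# 'ts' is an assumed date column, shown only to illustrate parse_dates.
total = 0.0
for chunk in pd.read_csv('data.csv',
                         usecols=['id', 'price', 'ts'],
                         dtype={'id': 'int32', 'price': 'float32'},
                         parse_dates=['ts'],
                         chunksize=100_000):
    # Process each chunk, then let it be garbage-collected before the next one
    total += chunk['price'].sum()
print(total)

Each iteration yields a DataFrame of at most 100,000 rows, so peak memory is bounded by the chunk size rather than the file size.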