Memory profiling in Python using memory_profiler

Last Updated : 18 Aug, 2025

When discussing Python performance, many developers think primarily about execution time: "how fast does the code run?" However, in real-world applications, memory usage is just as important. If a program consumes excessive RAM, it can slow down the entire system or even cause crashes.

This is where memory profiling becomes valuable. It helps track how much memory different parts of your code consume. In this article, the Python package memory-profiler is used to analyze the memory usage of functions step by step.

Step by Step Implementation

Step 1: Install Required Packages

We’ll need to install:

  • memory-profiler: To track memory usage
  • requests: To test memory profiling on a large text file

Run the following command in your terminal:

pip install memory-profiler requests

Step 2: Profiling Code

Now that everything is set up, create a file named word_extractor.py and add the following code:

Python
from memory_profiler import profile
import requests

class BaseExtractor:
    # Function to write words from a list into a file
    @profile  # This decorator will monitor memory usage
    def parse_list(self, array):
        with open('words.txt', 'w') as f:
            for word in array:
                f.write(word + "\n")

    # Function to fetch data from a URL and save it
    @profile
    def parse_url(self, url):
        response = requests.get(url).text
        with open('url.txt', 'w') as f:
            f.write(response)

Explanation:

  • The @profile decorator tells memory-profiler to monitor the memory usage of that function, line by line.
  • parse_list(): Writes words from a list to a file.
  • parse_url(): Downloads a huge word list from a URL and saves it to a file (this will consume much more memory).
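To see roughly what a profiling decorator like @profile does under the hood, here is a toy sketch (hypothetical, NOT memory_profiler's actual implementation) that wraps a function and reports its peak memory using the standard-library tracemalloc module:

```python
import tracemalloc
from functools import wraps

def mem_report(func):
    """Toy decorator: reports peak memory allocated while func runs.
    This is only an illustration of the idea, not memory_profiler's @profile."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        tracemalloc.start()
        try:
            result = func(*args, **kwargs)
        finally:
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
        print(f"{func.__name__}: peak {peak / 1024:.1f} KiB")
        return result
    return wrapper

@mem_report
def build_words(n):
    # Allocate a list of n strings to create measurable memory traffic
    return [f"word{i}" for i in range(n)]

words = build_words(10_000)
```

The real memory_profiler goes further: it samples the process's resident memory and attributes increments to individual source lines, which is why its report shows per-line statistics.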

Step 3: Driver Code

Our main code is ready. Let's write driver code that calls the class's functions. Create another file called run.py and add the following code:

Python
from word_extractor import BaseExtractor
if __name__ == "__main__":
    # URL with a huge word list
    url = 'https://raw.githubusercontent.com/dwyl/english-words/master/words.txt'
    
    # Small local word list
    array = ['one', 'two', 'three', 'four', 'five']
    
    # Initialize BaseExtractor object
    extractor = BaseExtractor()
    
    # Call both functions
    extractor.parse_url(url)   # This will use more memory
    extractor.parse_list(array)

Explanation:

  • extractor = BaseExtractor(): Creates an object of the BaseExtractor class.
  • extractor.parse_url(url): Extracts and processes words from given URL (large dataset, uses more memory).
  • extractor.parse_list(array): Extracts and processes words from small local list (uses less memory).

Step 4: Run the Profiler

To test the code, run run.py under the profiler with the following command:

python -m memory_profiler run.py

If everything runs successfully, you should see output like this:

[Image: Memory Profiler Stats — per-line memory usage report for parse_url() and parse_list()]

Here, you’ll notice:

  • parse_url() consumes much more memory (because it fetches a huge file from the internet).
  • parse_list() uses less memory (since it just writes a few words).

With this, memory profiling is complete and we can clearly see how much memory each function consumes.

Key points to remember

  • memory-profiler itself consumes memory, so don’t use it in production. Use it only while developing or debugging.
  • For production memory optimization, you’d use tools like tracemalloc or monitor memory at a system level.
  • Always remember to close files properly (use with open() which does it automatically).
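As a concrete illustration of the tracemalloc alternative mentioned above, here is a minimal sketch (the allocation sizes are arbitrary) that measures current and peak allocations and lists the top allocation sites, all with the standard library:

```python
import tracemalloc

tracemalloc.start()

# Allocate something sizable so there is traffic to measure
data = ["x" * 100 for _ in range(50_000)]

# current = bytes currently allocated, peak = high-water mark since start()
current, peak = tracemalloc.get_traced_memory()
print(f"current: {current / 1_048_576:.2f} MiB, peak: {peak / 1_048_576:.2f} MiB")

# Show the top three source lines by allocated memory
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)

tracemalloc.stop()
```

Because tracemalloc ships with Python and has modest overhead, it is a more production-friendly choice than memory-profiler for spot checks on a live service.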

Use Case of Memory Profiling

Memory profiling is not just for debugging; it's an essential part of optimizing real-world Python applications. Some common use cases include:

  • Detecting memory leaks in data pipelines: Helps find places where objects are not released properly.
  • Measuring memory impact of large dataset processing: Useful in data science and machine learning when working with huge files or arrays.
  • Comparing memory cost of different algorithms: Lets you choose the most memory-efficient implementation for your problem.
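The last point can be sketched with a small experiment. The helper below is hypothetical (written with the standard-library tracemalloc so it needs no extra packages); it compares the peak memory of a list comprehension against a generator expression computing the same sum:

```python
import tracemalloc

def measure_peak(fn):
    """Hypothetical helper: return peak bytes allocated while running fn."""
    tracemalloc.start()
    fn()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

N = 200_000

# Materializes every squared number in a list before summing
list_peak = measure_peak(lambda: sum([i * i for i in range(N)]))

# Yields one squared number at a time, so almost nothing is retained
gen_peak = measure_peak(lambda: sum(i * i for i in range(N)))

print(f"list comprehension peak: {list_peak / 1024:.0f} KiB")
print(f"generator expression peak: {gen_peak / 1024:.0f} KiB")
```

The generator version should show a dramatically lower peak, since it never holds all N intermediate values in memory at once.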
