Are you writing slow Python loops? Recalculating the same values over and over? If so, you're wasting time! But there’s a simple trick that can instantly speed up your code—without rewriting everything from scratch.
Let’s explore Python’s built-in memoization feature and see how it can make your code 14,000x faster!
The Problem: Slow Recursive Functions
Consider this basic Python project with a recursive function to compute the Fibonacci sequence:
```python
# Regular Fibonacci function
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```
This works, but for large values of `n` it's painfully slow. Why?

Every time `fib(n - 1)` and `fib(n - 2)` are called, they recalculate values that have already been computed. This results in an exponential number of function calls!

For example, calculating `fib(30)` triggers over 2.6 million recursive calls. That's a performance nightmare.
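You can count the explosion yourself. The `counted_fib` wrapper and global counter below are illustrative additions for this measurement, not part of the function above:

```python
# Count how many times the naive recursive function is invoked
call_count = 0

def counted_fib(n):
    global call_count
    call_count += 1
    if n <= 1:
        return n
    return counted_fib(n - 1) + counted_fib(n - 2)

counted_fib(30)
print(call_count)  # 2692537 calls just to compute fib(30)
```

Each call for `n >= 2` spawns two more, so the call count grows exponentially with `n`.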
How can we fix this?
The Forgotten Trick: @lru_cache
The solution is memoization—a technique that stores previously computed values so they don’t have to be recalculated.
And the best part? Python has a built-in way to do this with just one line.
Simply add `@lru_cache` from `functools`, and watch the magic happen:
```python
from functools import lru_cache

@lru_cache(maxsize=None)  # Automatically caches results
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```
Why This Works
- Python now remembers past results, avoiding redundant calculations.
- Drastically reduces execution time, especially for large `n`.
- Requires zero extra logic: just a single decorator!
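You can verify the caching yourself: functions wrapped by `lru_cache` expose a `cache_info()` method. The exact hit/miss counts shown assume a fresh cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(30)
# Only 31 distinct inputs (0..30) are ever computed; the rest are cache hits
print(fib.cache_info())  # CacheInfo(hits=28, misses=31, maxsize=None, currsize=31)
```

On Python 3.9+, `functools.cache` is an equivalent shorthand for `lru_cache(maxsize=None)`.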
Benchmarking: How Much Faster Is It?
Let’s compare the performance of the regular and cached Fibonacci functions using Python’s `timeit` module. Fire up your Python editor and follow along:
```python
import timeit
from functools import lru_cache

# Regular Fibonacci function
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

# Cached Fibonacci function
@lru_cache(maxsize=None)
def cached_fib(n):
    if n <= 1:
        return n
    return cached_fib(n - 1) + cached_fib(n - 2)

# Measure execution time
normal_time = timeit.timeit("fib(30)", globals={"fib": fib}, number=1)
cached_time = timeit.timeit("cached_fib(30)", globals={"cached_fib": cached_fib}, number=1)

# Prevent division by zero if the cached call is too fast to time
if cached_time > 0:
    speedup_factor = normal_time / cached_time
    print(f"Without lru_cache: {normal_time:.5f} seconds")
    print(f"With lru_cache: {cached_time:.10f} seconds")
    print(f"Speedup: ~{speedup_factor:.2f}x faster!")
else:
    print(f"Without lru_cache: {normal_time:.5f} seconds")
    print("With lru_cache: Too fast to measure!")
    print("Speedup: Easily 100000x+ faster!")
```
Results

```
Without lru_cache: 0.13654 seconds
With lru_cache: 0.0000092090 seconds
Speedup: ~14826.82x faster!
```
For `fib(30)`, `@lru_cache` makes the function more than 14,000x faster! In some cases, the execution time becomes so small it’s too fast to measure.
This optimization can make a huge difference when working with expensive computations in real-world applications.
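In long-running programs you usually want to cap memory use rather than cache forever. Here's a hypothetical sketch with a bounded cache; `expensive` is a stand-in for any pure, costly function:

```python
from functools import lru_cache

# maxsize=128 bounds the cache; the least-recently-used
# entries are evicted once the limit is reached
@lru_cache(maxsize=128)
def expensive(n):
    return sum(i * i for i in range(n))

expensive(10_000)  # computed
expensive(10_000)  # cache hit: returned instantly
print(expensive.cache_info())
```

Pick a `maxsize` based on how many distinct inputs you expect to see repeatedly; `maxsize=None` trades unbounded memory for a guaranteed hit on every repeat.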
Wrapping Up
Whenever you have a function with repeated calculations, just slap `@lru_cache` on it!
- Your code runs faster.
- Your CPU works less.
- And you look like a Python genius!
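One caveat before you slap it on everything: `lru_cache` stores results keyed by the arguments, so every argument must be hashable. A quick illustration:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def total(items):
    return sum(items)

print(total((1, 2, 3)))   # fine: tuples are hashable
try:
    total([1, 2, 3])      # lists are not hashable
except TypeError as e:
    print("unhashable:", e)
```

When you need to cache a function that takes a list, convert it to a tuple at the call site first.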
Have a favorite Python performance trick? Drop it in the comments below! And if you found this helpful, don’t forget to share it with your fellow coders.