BishopPhillips

Python Techniques - Decorators

1. Python Decorators.

Author: Jonathan Bishop

This article is an illustration of techniques in advanced Python coding and an exploration of decorators. A decorator in Python is like a wrapper for a function or method that adds extra functionality without modifying the function or method's original code. Decorators can be plain, chained, or parameterised to make a decorator factory (a generator of decorators). They are designated with "@name", where "name" is the name of the decorator. Think of it like this:

You have a plain coffee (a function), and you want to add whipped cream or caramel drizzle without changing the core drink. You wrap the coffee with an object that contains the addons. That’s a decorator.

An Example Decorator - Logging

Here's a basic example: a simple decorator that logs whenever a function is called:


def logger(func):
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@logger
def greet(name):
    print(f"Hello, {name}!")

greet("Fred")

 

Running this program without the "@logger" line will generate the following output:


Hello, Fred!

 

Running the program with the "@logger" line included will generate the following output:


Calling greet
Hello, Fred!

 

The "@logger" expression is a decorator invocation. It is syntactic sugar for:


logger(greet)("Fred")

 

The function "logger()" takes the original function "greet" and returns a new function "wrapper" that adds behaviour before and after the original call.

Now, in this case we have simply put the output to the console, so it is a bit silly, but if we had instead written it to a file or to stderr it becomes clear that this is a potentially useful trick in debugging:


import sys

def logger(func):
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}", file=sys.stderr)
        return func(*args, **kwargs)
    return wrapper

@logger
def greet(name):
    print(f"Hello, {name}!")

greet("Fred")

 

Now the greeting goes to stdout and the log notifications go to stderr.

Coding technique note:

As a slight aside if you were actually doing this for real, you might want to define a reusable eprint() function like this (so you don't have to keep setting the file argument):


import sys

def eprint(*args, **kwargs):
    print(*args, file=sys.stderr, **kwargs)

def logger(func):
    def wrapper(*args, **kwargs):
        eprint(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

 

Naming conventions versus Keywords

Although we call the wrapper function "wrapper" in these examples, this is just a convention, not a keyword. The wrapper function can be called anything you like:


def shout_factory(times):  # This is your decorator factory
    def decorate_me(func):  # This is the actual decorator
        def boom(*args, **kwargs):  # This is the wrapper function
            for _ in range(times):
                print("HEH!", end=" ")
            return func(*args, **kwargs)
        return boom
    return decorate_me

@shout_factory(3)
def greet(name):
    print(f"Hello, {name}!")

greet("Jonathan") # Outputs: HEH! HEH! HEH! Hello, Jonathan!

 

We will go into decorator factories later, but for now I just want you to note that the decorator factory, decorator and wrapper can have any name you like. That being said, using clear names like decorator and wrapper can make your code easier to read and understand, especially if you revisit it later or share it with someone else.

2. Why Decorate?

The standard use-cases for decoration are:

  • Logging
  • Timers (in code performance measurement)
  • Access control / permissioning
  • Memoization
  • Input validation / edit checks

Python comes with a number of built-in decorators which you may have used:

  1. @staticmethod
  2. @classmethod
  3. @property
  4. @functools.lru_cache

We will consider each of these in turn:

@staticmethod

Defines a method that doesn't take self or cls as the first argument. It is typically used when the method's logic is related to the class but doesn't need instance or class access; i.e. it is statically available regardless of the presence of a class instance.

For example:


class Math:
    @staticmethod
    def add(x, y):
        return x + y

Math.add(3, 4)  # ➜ 7 

 

Note that in this case it looks like a module access. We don't actually need to instantiate an instance of the Math class to be able to use "add". It essentially creates a utility function held inside a class definition for organisation purposes.

@classmethod

Defines a method that takes "cls" (but not "self") as its first argument, giving access to the class itself. It is used for class factory methods or anything that needs to modify shared class-level state.

For example:


class Book:
    default_title = "Untitled"

    def __init__(self, title):
        self.title = title

    @classmethod
    def create_default(cls):
        return cls(cls.default_title)

myBook = Book.create_default() 

 

Here we have defined a classmethod that is an alternative constructor, so we definitely want it to be available before any instance of the class is instantiated. The standard constructor (defined as "__init__()") takes a title as an argument, while this method invokes the standard constructor using default_title as the argument, so the title defaults to the class default. Obviously, there are also other ways to achieve this exact behaviour.

@property

Defines getter behaviour so an instance method can be accessed like a read-only attribute. Essentially cosmetic, it provides a cleaner interface for read-only behaviours.


class Circle:
    def __init__(self, radius):
        self.radius = radius

    @property
    def area(self):
        from math import pi
        return pi * self.radius ** 2

c = Circle(3)
print(c.area)  # ➜ 28.27..., accessed like an attribute!

 

It makes a method "feel" like a read only attribute while retaining the benefits of the functional model.

@functools.lru_cache

Part of the functools library, the lru_cache decorator adds memoization to a function. We will look at memoization and lru_cache in a little more detail later in this article, but the idea behind memoization is to capture or cache the results of executing a function so that if it is called again the cached value can be returned instead of recalculating the value - thus improving execution speeds.

3. Stacked & Nested Decorators

Decorators can be stacked to compose behaviors, just like function chaining or pipe() style transformations in Pandas. Stacked decorators are when you apply multiple decorators to a single function, and they’re applied from the bottom up, but executed top-down—like layers of an onion.

Here’s a simple example with two decorators: one that logs the function call, and another that times how long it takes.

Example: Logging + Timing


import time
import functools

def logger(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"[LOG] Calling {func.__name__}()")
        return func(*args, **kwargs)
    return wrapper

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        end = time.time()
        print(f"[TIMER] {func.__name__} took {end - start:.4f}s")
        return result
    return wrapper

# Stacking: applied bottom-up, run top-down
@logger
@timer
def compute():
    time.sleep(1)
    print("Finished computation")

compute() 

 

When run this produces an output similar to:


[LOG] Calling compute()
Finished computation
[TIMER] compute took 1.0002s 

 

Notice:

  1. @timer wraps the original compute
  2. @logger wraps the result of @timer(compute)
  3. So logger(timer(compute)) is what’s actually run

If you reverse the order of the decorators, the output changes too.

This example shows a few more hidden "gotchas" with decorators that we should explore:

  1. the use of @functools.wraps()
  2. nesting of decorators

Nesting decorators

In the example we have included another decorator "@functools.wraps()" within the decorator definition for both logger and timer. We will go into what this decorator actually does in a minute but first let's note that including a decorator inside a decorator effectively nests the decorator.

Stacking decorators is functionally equivalent to nesting them in code. The behaviour of a decorator used in the declaration of another decorator is the same as when it is used outside of a decorator. There is nothing really special about a decorator definition: it is just another function definition that uses closures to hold state. Just as with recursion, there is no theoretical limit to the number of nests, but there is always a practical one, because eventually you run out of stack space.
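
To make the equivalence concrete, here is the stacked form alongside its hand-written nested form (logger and tagger are trivial illustrative decorators, not library code):

```python
import functools

def logger(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"[LOG] Calling {func.__name__}()")
        return func(*args, **kwargs)
    return wrapper

def tagger(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"[TAG] entering {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

# Stacked form: applied bottom-up
@logger
@tagger
def compute():
    return 42

# Equivalent nested form, written out by hand
def compute_manual():
    return 42

compute_manual = logger(tagger(compute_manual))

print(compute())         # [LOG] line, [TAG] line, then 42
print(compute_manual())  # identical wrapping order and result
```

Both functions produce the same wrapping behaviour; the "@" syntax simply performs the nesting for you at definition time.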

Use of @functools.wraps()

This topic is kind-of REALLY important and ideally would have been the first topic in the article, but had we done that there wouldn't be the context to understand it.

When you create a decorator that wraps another function, you're actually replacing that original function with your own wrapper. The problem is, without @functools.wraps, the wrapper function steals the identity of the original:

  1. wrapper.__name__ becomes "wrapper" instead of "compute"
  2. wrapper.__doc__ no longer has the docstring of the original
  3. Tools like debuggers, introspection, help(), and testing frameworks might misbehave or lose context

If all you are doing is running the code, you won't notice the difference, but in the editor or debugger you very well might.

The @functools.wraps decorator copies the metadata (like __name__, __doc__, and __module__) from the original func to your wrapper function. This is the pattern:


import functools

def my_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("Wrapping!")
        return func(*args, **kwargs)
    return wrapper 

 

That one line makes your decorators much friendlier to debugging, documentation, and composability.

Next we will compare what happens with and without the wraps decorator:

Decorator With @wraps:

    
import functools

def log_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        """Wrapper function that logs calls"""
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_decorator
def say_hello():
    """Greet the user"""
    print("Hello!")

print(say_hello.__name__)   # 'say_hello'
print(say_hello.__doc__)    # 'Greet the user'        
    
 

With @wraps, all the metadata from say_hello is preserved, even though it's wrapped.

Decorator without @wraps:


def log_decorator(func):
    def wrapper(*args, **kwargs):
        """Wrapper function that logs calls"""
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_decorator
def say_hello():
    """Greet the user"""
    print("Hello!")

print(say_hello.__name__)   # 'wrapper'
print(say_hello.__doc__)    # 'Wrapper function that logs calls'

 

Without @wraps, Python thinks your original say_hello() is now called wrapper, and its docstring is lost. This messes with:

  1. Documentation tools
  2. Introspection (help() output)
  3. Debugging (stack traces become confusing)
  4. Testing frameworks and decorators like @patch

If you're wrapping functions, always use @functools.wraps(func). It's one line of protection that preserves your original function’s identity.

4. Parameterised Decorators - Decorator Factories

Every decorator has an implied argument: the function it is wrapping. When we talk about decorators with or without arguments we are referring to the presence of arguments other than this "invisible" function argument.

When a decorator has arguments—like @lru_cache(maxsize=128)—you’re actually dealing with a decorator factory. It’s a function that returns a decorator, which in turn wraps your function. For example:

Decorator Without Arguments:

Here, my_decorator is a function that takes func and returns a modified version of it:


@my_decorator
def func():
    ...

 

Decorator With Arguments:

Now my_decorator(arg1, arg2) runs first, returning a new decorator customized with arg1 and arg2. That returned decorator is then used to wrap func.


@my_decorator(arg1, arg2)
def func():
    ...

 

For example, this is a decorator that takes one argument and repeats the function call n times when called once:


def repeat(n_times):
    def decorator(func):
        def wrapper(*args, **kwargs):
            for _ in range(n_times):
                func(*args, **kwargs)
        return wrapper
    return decorator

@repeat(3)
def say_hello():
    print("Hello!")

say_hello()

 

The for-loop inside the wrapper causes the wrapped function to be called n times. Output:


Hello!
Hello!
Hello!

 

A Sensible Example: @lru_cache(maxsize)

We have already touched on the lru_cache decorator. It is one of the more useful ones as it memoises the function: it caches each result and, on subsequent calls, checks whether the result for that input is already in the cache. If it is, the cached value is returned instead of being recalculated. The argument sets the size of the cache.


from functools import lru_cache

@lru_cache(maxsize=128)  # <-- this returns a caching decorator
def my_fib(n):
    if n <= 1:
        return n
    return my_fib(n - 1) + my_fib(n - 2)


print(f"{my_fib(6)}")
print(f"{my_fib(4)}") 

 

In the example, the first call causes my_fib to be evaluated for every value of n up to 6, with each result stored in the cache, so the second call simply reuses the cached result for n = 4.

  1. lru_cache(maxsize=128) returns a decorator that adds caching of up to 128 entries.
  2. That decorator wraps my_fib() so repeated calls with the same input fetch from cache, saving computation.

A couple of notes about lru_cache:

  1. The cache is thread safe
  2. The cache uses a dictionary keyed on the call's argument pattern, so the arguments must be hashable. Keyword-argument order matters: calling with the same keyword arguments in a different order will create a new entry in the cache.
  3. Setting maxsize=None removes the upper bound on the cache size so it can grow indefinitely. The default size is 128.
  4. Setting typed=True makes the cache sensitive to the type of each argument as well as its value, so f(3) and f(3.0) are cached separately.
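
These behaviours can be inspected at runtime: a function wrapped by lru_cache gains cache_info() and cache_clear() methods, which make hits and misses visible:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def square(n):
    return n * n

square(3)
square(3)   # second call with the same argument is a cache hit
square(4)

info = square.cache_info()
print(info)  # CacheInfo(hits=1, misses=2, maxsize=128, currsize=2)

square.cache_clear()  # empty the cache entirely
print(square.cache_info().currsize)  # 0
```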

5. Decorator Factories Without Arguments

It's the "()" that indicates the decorator factory rather than the arguments themselves. @mydecorator() is a decorator factory, even if you’re not passing any arguments. It’s the difference between using a decorator and generating one dynamically. A factory returns a decorator that is then used to wrap a function, while a decorator just wraps the function.

For example - a simple decorator:


def mydecorator(func):
    def wrapper(*args, **kwargs):
        print("Running!")
        return func(*args, **kwargs)
    return wrapper

@mydecorator
def greet():
    print("Hello")

 

Here, mydecorator directly takes the function and returns a modified version. No parentheses needed because you're not calling it—you're passing the function into it.

Contrast with:


def mydecorator():
    def actual_decorator(func):
        def wrapper(*args, **kwargs):
            print("Running from factory!")
            return func(*args, **kwargs)
        return wrapper
    return actual_decorator

@mydecorator()  # <-- called here, returns the actual decorator
def greet():
    print("Hello")

 

Here we are calling mydecorator first, expecting it to return an actual decorator which is then used to wrap the function. Even if mydecorator() doesn’t take arguments, it still returns a decorator when called so it’s still a factory, just one without configurable behavior (yet).

We might use a decorator factory without arguments because we:

  1. Want to keep a consistent interface even when no customization is needed (future-proofing).
  2. Need setup logic that runs once, independent of the decorated function.
  3. Are toggling behavior based on config or environment internally.
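
As a sketch of point 2, the factory body is a natural place for setup that should run once at decoration time rather than on every call (stamped is a made-up example that captures a timestamp as its one-off setup):

```python
import time

def stamped():
    # This setup runs once, when the factory is called at decoration time
    created_at = time.time()

    def decorator(func):
        def wrapper(*args, **kwargs):
            # Every call sees the same captured timestamp
            return (created_at, func(*args, **kwargs))
        return wrapper
    return decorator

@stamped()
def greet(name):
    return f"Hello, {name}!"

ts1, msg1 = greet("Fred")
ts2, msg2 = greet("Ada")
print(msg1, msg2)
print(ts1 == ts2)  # True: the setup ran only once
```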

6. Hybrid Decorator Factories

A decorator factory can be designed to be called with or without the (), but the behaviour is different in each case and a little more setup is required in the code. These are hybrid decorators. They function both as decorators and decorator factories.

For example:


import functools

def log_call(_func=None, *, prefix=">>"):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            print(f"{prefix} Calling {func.__name__}")
            return func(*args, **kwargs)
        return wrapper

    # If used without parentheses: @log_call
    if callable(_func):
        return decorator(_func)

    # If used with parentheses: @log_call(...)
    return decorator

@log_call
def greet():
    print("Hello!")

@log_call(prefix=">>>")
def farewell():
    print("Goodbye!")

 

Note that the first invocation has no "()" and applies the decorator immediately, while the second invocation calls the factory. Earlier I noted that all decorator invocations actually have an argument (even when they look like they don't) because the first argument is the function itself. So the key to distinguishing between a decorator invocation and a decorator factory invocation is checking whether the first argument (_func) is callable:

  1. If it is, the decorator was used without parentheses.
  2. If not, it was called with arguments and should return a decorator

This pattern is used in real-world libraries like pytest.fixture and click.command.

7. Decorators As Closures

In another article on our website (www.bishopphillips.com) I go through closures. A decorator is essentially a kind of closure (a function that captures and holds state between calls). This characteristic can be utilised to create some useful behaviours.

Counting Decorator

Here is a decorator that tracks how many times a function has been called:


import functools

def count_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.call_count += 1
        print(f"[Call #{wrapper.call_count}] {func.__name__} was called")
        return func(*args, **kwargs)
    
    wrapper.call_count = 0
    return wrapper

@count_calls
def greet(name):
    print(f"Hello, {name}!")

#Usage: 

greet("Jonathan")
greet("Ada")
greet("Grace")

print(f"Greet call count: {greet.call_count}")

 

Which creates the output:


[Call #1] greet was called
Hello, Jonathan!
[Call #2] greet was called
Hello, Ada!
[Call #3] greet was called
Hello, Grace!
Greet call count: 3

 

The wrapper function attaches a counter, "wrapper.call_count", directly to the wrapper function object. With each call that counter is incremented, and it can be inspected by accessing the attribute on the function object without invoking it, which we do in the final print statement.

Anonymising Decorator

We can create a decorator to hide the original function's identity. For example:

 



def anonymize(func):
    def wrapper(*args, **kwargs):
        print("A mysterious function is running...")
        return func(*args, **kwargs)

    # Mangle the metadata
    wrapper.__name__ = "anonymous_function"
    wrapper.__doc__ = "This function prefers to remain nameless."
    wrapper.__module__ = "unknown.origin"
    return wrapper

@anonymize
def launch_protocol():
    """Launches the orbital transport sequence."""
    print("Liftoff initiated.")


#Usage
launch_protocol()
print(launch_protocol.__name__)     # anonymous_function
print(launch_protocol.__doc__)      # This function prefers to remain nameless.
print(launch_protocol.__module__)   # unknown.origin 

 

Just like modules and packages in Python, functions have special attributes like "__name__" and "__doc__". A decorator can be designed to deliberately modify these, as we have done above. This kind of manipulation might be desirable under the following scenarios:

  1. Anonymize functions in logs or outputs
  2. Standardize signatures of multiple functions to share tooling
  3. Temporarily override metadata during testing or benchmarking
  4. Decorators for dynamic function generation, e.g. templating APIs or building DSLs

8. Exploring Some Additional Use Cases For Decorators

So far, we have demonstrated decorators for:

  • Counting function calls
  • Logging
  • Timers
  • Metadata mangling
  • Function multiplying
  • Anonymizing
  • Memoizing (caching)

As the final part of this article I want to float some additional use cases for applying decorators.

1. Argument Mutators or Injectors

Assuming we design our code to rigid patterns that include certain standard keyword parameters, like "debug", that can be turned on or off to enable additional behaviours, we can use decorators to set this behaviour project-wide. You would probably want to create a package or module with such decorators and import them into your Python module. An argument modifier would:

  • Modify or inject keyword arguments at call time
  • Force "debug=True" or inject a context or request_id or override headers

And would look something like this:


def force_debug(func):
    def wrapper(*args, **kwargs):
        kwargs["debug"] = True
        return func(*args, **kwargs)
    return wrapper
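
Applied to a hypothetical function that honours a debug keyword (fetch here is a made-up example), the decorator overrides whatever the caller passed:

```python
def force_debug(func):
    def wrapper(*args, **kwargs):
        kwargs["debug"] = True  # inject/override the keyword at call time
        return func(*args, **kwargs)
    return wrapper

@force_debug
def fetch(url, debug=False):
    # 'fetch' is a hypothetical function that honours a debug flag
    return f"GET {url} (debug={debug})"

print(fetch("https://example.com"))               # debug forced on
print(fetch("https://example.com", debug=False))  # still forced on
```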

 

2. Retries + Backoff

  • Useful for flaky I/O or network operations
  • Common in API clients, file handlers, database retries

@retry(retries=3, delay=2)
def upload_file(...):
    ...
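
The @retry usage above is illustrative rather than a standard-library decorator; a minimal sketch of such a factory (with a simple fixed delay rather than exponential backoff) might look like this:

```python
import time
import functools

def retry(retries=3, delay=2):
    """Hypothetical retry factory: re-runs func up to `retries` times."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exc = None
            for attempt in range(1, retries + 1):
                try:
                    return func(*args, **kwargs)
                except Exception as exc:
                    last_exc = exc
                    if attempt < retries:
                        time.sleep(delay)  # simple fixed backoff
            raise last_exc  # all attempts failed
        return wrapper
    return decorator

@retry(retries=3, delay=0)  # delay=0 here just so the demo runs instantly
def flaky():
    flaky.attempts += 1
    if flaky.attempts < 3:
        raise ConnectionError("transient failure")
    return "ok"

flaky.attempts = 0
print(flaky())  # ok, after two silent retries
```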

 

3. Access Control / Permissions

  • Validate user roles, tokens, or security checks before proceeding
  • Great in web frameworks or microservices (even in MCP-like command gateways)

def require_admin(func):
    def wrapper(*args, **kwargs):
        if not current_user.is_admin:
            raise PermissionError("Not authorized")
        return func(*args, **kwargs)
    return wrapper 

 

4. Caching / Lazy Evaluation

  • Beyond @lru_cache, you could design scoped or time-based caches
  • Great for expensive computations or config hydration

@time_cache(ttl=60)  # custom TTL-based caching
def get_config_from_disk(): ...
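
time_cache is likewise hypothetical; a minimal sketch of such a TTL-based caching factory could be:

```python
import time
import functools

def time_cache(ttl=60):
    """Hypothetical TTL cache factory: results expire after `ttl` seconds."""
    def decorator(func):
        cache = {}  # maps the argument tuple -> (timestamp, result)

        @functools.wraps(func)
        def wrapper(*args):
            now = time.time()
            if args in cache:
                stamp, result = cache[args]
                if now - stamp < ttl:
                    return result  # still fresh, skip the real call
            result = func(*args)
            cache[args] = (now, result)
            return result
        return wrapper
    return decorator

calls = {"n": 0}

@time_cache(ttl=60)
def get_config():
    calls["n"] += 1          # count how often the real work runs
    return {"mode": "production"}

get_config()
get_config()                 # served from cache within the TTL
print(calls["n"])            # 1
```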

 

5. Profiling / Tracing / Telemetry

  • Automatically measure memory or line-by-line timing
  • Insert distributed tracing IDs

@trace_request("MCP_SUBSYSTEM")
def handle_message(msg): ...

 

6. Type Enforcement or Validation

  • Check or coerce argument types at runtime—helpful for plugin interfaces or fuzzing
  • Could even reject forbidden combinations
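
As a sketch, a hypothetical keyword-argument validator in this spirit (enforce_types and repeat_msg are made-up names) might look like:

```python
import functools

def enforce_types(**expected):
    """Hypothetical validator: checks keyword argument types at call time."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for name, typ in expected.items():
                if name in kwargs and not isinstance(kwargs[name], typ):
                    raise TypeError(f"{name} must be {typ.__name__}")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@enforce_types(count=int)
def repeat_msg(msg, count=1):
    return msg * count

print(repeat_msg("hi", count=3))   # hihihi
# repeat_msg("hi", count="3")      # would raise TypeError
```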

7. Concurrency Control

  • Limit how often something is called (rate limiting)
  • Synchronize access to a shared resource or wrap it in a thread lock

@rate_limit(calls=10, per=60)
def check_status(): ...
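
rate_limit above is hypothetical; a simpler concrete illustration of concurrency control is a decorator that wraps each call in a thread lock so only one thread runs the body at a time:

```python
import threading
import functools

def synchronized(func):
    """Sketch of a lock wrapper: serialises access to the decorated function."""
    lock = threading.Lock()  # one lock per decorated function, held in the closure

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        with lock:
            return func(*args, **kwargs)
    return wrapper

counter = {"value": 0}

@synchronized
def increment():
    counter["value"] += 1

threads = [threading.Thread(target=increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["value"])  # 10
```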

 

8. Mock/Stub Injection in Test Environments

  • Replace default behaviors in tests or simulation modes
  • Imagine using it to swap out real I/O with MCP-simulated transport

9. Argument Routing/Redirection

  • Switch communication streams mid-flight based on a kwarg
  • E.g., @route_comms('stdio') vs @route_comms('http') depending on target