Design patterns for reusable software
Hello and happy new year! 🥳🙌
I’ve made a video on how to implement the singleton pattern in Python using the LRU cache.
Thank you!
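For reference, here is a minimal sketch of the trick (not necessarily the exact code from the video; AppConfig is just an illustrative class): functools.lru_cache caches the factory's return value, so every call hands back the same instance.

from functools import lru_cache


class AppConfig:
    # Hypothetical example class; any expensive-to-build object works.
    pass


@lru_cache(maxsize=None)
def get_app_config() -> AppConfig:
    # The first call constructs the instance; lru_cache returns
    # the same cached object on every later call.
    return AppConfig()


assert get_app_config() is get_app_config()  # one shared instance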
Hi 👋
In this article we’ll talk about the Object Pool pattern in Golang.
The Object Pool pattern is a design pattern used when constructing objects is a costly operation; building an HTTPClient or DatabaseClient object, for example, can take some time.
With a pool of resources, a resource is requested from the pool when needed and returned when no longer needed, so it can be reused later.
Programs benefit from this pattern because once an object has been constructed, the next time you need it you just grab an existing instance instead of building one from scratch.
In Golang this pattern is easily implemented with sync.Pool. Given a Resource struct, to implement an object pool we pass the NewResource constructor function to the pool.
To track how many active instances we have of the Resource object, we use the counter variable.
var logger = log.Default()
var counter = 0

type Resource struct {
    id string
}

func NewResource() *Resource {
    logger.Printf("NewResource called")
    counter += 1
    return &Resource{id: fmt.Sprintf("Resource-%d", counter)}
}

func (r *Resource) doWork() {
    logger.Printf("%s doing work", r.id)
}
Let’s demo sync.Pool!
In the first demo we get a resource from the pool, do some work, and then put it back. Because this happens one step at a time, we end up with just one Resource instance.
func demo1() {
    println("demo1")
    theResourcePool := sync.Pool{New: func() any {
        return NewResource()
    }}
    for i := 0; i < 10; i++ {
        item := theResourcePool.Get().(*Resource)
        item.doWork()
        theResourcePool.Put(item)
    }
    println("done", counter)
}
Output
demo1
2022/08/17 22:38:59 NewResource called
2022/08/17 22:38:59 Resource-1 doing work
2022/08/17 22:38:59 Resource-1 doing work
2022/08/17 22:38:59 Resource-1 doing work
2022/08/17 22:38:59 Resource-1 doing work
2022/08/17 22:38:59 Resource-1 doing work
2022/08/17 22:38:59 Resource-1 doing work
2022/08/17 22:38:59 Resource-1 doing work
2022/08/17 22:38:59 Resource-1 doing work
2022/08/17 22:38:59 Resource-1 doing work
2022/08/17 22:38:59 Resource-1 doing work
done 1
Resource-1 is the only instance that does work.
In demo2 we spawn 10 goroutines that use the pool. Since all the goroutines start at roughly the same time and each needs a resource to doWork, the pool ends up creating 10 Resource instances.
func demo2() {
    println("demo2")
    wg := sync.WaitGroup{}
    theResourcePool := sync.Pool{New: func() any {
        return NewResource()
    }}
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            item := theResourcePool.Get().(*Resource)
            item.doWork()
            theResourcePool.Put(item)
        }()
    }
    wg.Wait()
    println("done", counter)
}
Output
demo2
2022/08/17 22:41:12 NewResource called
2022/08/17 22:41:12 NewResource called
2022/08/17 22:41:12 NewResource called
2022/08/17 22:41:12 Resource-3 doing work
2022/08/17 22:41:12 NewResource called
2022/08/17 22:41:12 Resource-4 doing work
2022/08/17 22:41:12 NewResource called
2022/08/17 22:41:12 Resource-5 doing work
2022/08/17 22:41:12 NewResource called
2022/08/17 22:41:12 Resource-6 doing work
2022/08/17 22:41:12 NewResource called
2022/08/17 22:41:12 Resource-7 doing work
2022/08/17 22:41:12 NewResource called
2022/08/17 22:41:12 Resource-8 doing work
2022/08/17 22:41:12 NewResource called
2022/08/17 22:41:12 NewResource called
2022/08/17 22:41:12 Resource-1 doing work
2022/08/17 22:41:12 Resource-2 doing work
2022/08/17 22:41:12 Resource-9 doing work
2022/08/17 22:41:12 Resource-10 doing work
done 10
demo3 does the same thing as demo2 but with some random sleeps in between, so some goroutines are faster and others are slower. The faster goroutines return their resource to the pool sooner, and the slower goroutines that start later reuse those resources instead of creating new ones.
func demo3() {
    println("demo3")
    wg := sync.WaitGroup{}
    theResourcePool := sync.Pool{New: func() any {
        return NewResource()
    }}
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            time.Sleep(time.Duration(rand.Intn(900)+100) * time.Millisecond)
            item := theResourcePool.Get().(*Resource)
            item.doWork()
            time.Sleep(time.Duration(rand.Intn(100)+100) * time.Millisecond)
            theResourcePool.Put(item)
        }()
    }
    wg.Wait()
    println("done", counter)
}
Output
demo3
2022/08/17 22:42:35 NewResource called
2022/08/17 22:42:35 Resource-1 doing work
2022/08/17 22:42:35 NewResource called
2022/08/17 22:42:35 Resource-2 doing work
2022/08/17 22:42:35 NewResource called
2022/08/17 22:42:35 Resource-3 doing work
2022/08/17 22:42:36 Resource-1 doing work
2022/08/17 22:42:36 Resource-2 doing work
2022/08/17 22:42:36 Resource-3 doing work
2022/08/17 22:42:36 Resource-1 doing work
2022/08/17 22:42:36 NewResource called
2022/08/17 22:42:36 Resource-4 doing work
2022/08/17 22:42:36 NewResource called
2022/08/17 22:42:36 Resource-5 doing work
2022/08/17 22:42:36 Resource-2 doing work
done 5
Only 5 Resource instances were created this time.
The Object Pool pattern is a great fit when constructing an object from scratch every time would be too slow and you would rather reuse existing instances.
In Go, sync.Pool implements the Object Pool pattern for us; we just need to give it a New function that returns a pointer.
Thanks for reading! 📚
The full code for the demos:

package main

import (
    "fmt"
    "log"
    "math/rand"
    "sync"
    "time"
)

var logger = log.Default()
var counter = 0

type Resource struct {
    id string
}

func NewResource() *Resource {
    logger.Printf("NewResource called")
    // note: in demo2/demo3 this runs from multiple goroutines, so a
    // production version would guard counter with sync/atomic
    counter += 1
    return &Resource{id: fmt.Sprintf("Resource-%d", counter)}
}

func (r *Resource) doWork() {
    logger.Printf("%s doing work", r.id)
}

func demo1() {
    println("demo1")
    theResourcePool := sync.Pool{New: func() any {
        return NewResource()
    }}
    for i := 0; i < 10; i++ {
        item := theResourcePool.Get().(*Resource)
        item.doWork()
        theResourcePool.Put(item)
    }
    println("done", counter)
}

func demo2() {
    println("demo2")
    wg := sync.WaitGroup{}
    theResourcePool := sync.Pool{New: func() any {
        return NewResource()
    }}
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            item := theResourcePool.Get().(*Resource)
            item.doWork()
            theResourcePool.Put(item)
        }()
    }
    wg.Wait()
    println("done", counter)
}

func demo3() {
    println("demo3")
    wg := sync.WaitGroup{}
    theResourcePool := sync.Pool{New: func() any {
        return NewResource()
    }}
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            time.Sleep(time.Duration(rand.Intn(900)+100) * time.Millisecond)
            item := theResourcePool.Get().(*Resource)
            item.doWork()
            time.Sleep(time.Duration(rand.Intn(100)+100) * time.Millisecond)
            theResourcePool.Put(item)
        }()
    }
    wg.Wait()
    println("done", counter)
}

func main() {
    demo1()
    //demo2()
    //demo3()
}
Hello,
In this short article I would like to talk about context managers. I personally consider that, at their core, they are just a form of decorator. If you don't know what a decorator is, check the Decorator pattern Wikipedia article.
Decorators can be used to implement cross-cutting concerns. Say we have componentA and we need logging and security. We could write the logging and security handling logic inside componentA, but some people consider that componentA should stay componentA, not become componentAthatAlsoKnowsAboutSecurityAndOtherStuff. Since it's not the component's responsibility to authorize requests or log calls to an external logging service, we can wrap componentA in a decorator that does just that.
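As a quick illustration, here's a minimal sketch of that idea (with_logging and component_a are hypothetical names, not from any project):

import functools


def with_logging(func):
    # Wrap func so every call is logged before and after execution.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__} with {args} {kwargs}")
        result = func(*args, **kwargs)
        print(f"{func.__name__} returned {result!r}")
        return result
    return wrapper


@with_logging
def component_a(x):
    # The component itself stays focused on its own job.
    return x * 2


component_a(21)

The logging happens around the call, not inside the component, which is the whole point of the pattern.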
A formal definition for cross-cutting concerns as taken from Wikipedia is the following:
In aspect-oriented software development, cross-cutting concerns are aspects of a program that affect other concerns. These concerns often cannot be cleanly decomposed from the rest of the system in both the design and implementation, and can result in either scattering (code duplication), tangling (significant dependencies between systems), or both.
Some examples of cross-cutting concerns include logging, caching, security, and error handling.
Since context managers are somewhat similar to decorators, you can use them to implement cross-cutting concerns. Let's explore.
In Python you can have two types of context managers: a function and a class. For a function to behave like a context manager it needs to be decorated with the @contextmanager decorator, and for a class to behave like a context manager it needs to implement __enter__ and __exit__.
Context managers can be called using the with statement. The following code snippet demonstrates two context managers:
from contextlib import contextmanager


@contextmanager
def simple_context_manager(function):
    try:
        print("calling function")
        yield function
    finally:
        print("function call has ended")


class SimpleContextManager:
    def __init__(self, cb):
        self.cb = cb

    def _intercept(self, *args, **kwargs):
        print(f"calling with {args} {kwargs}")
        return self.cb(*args, **kwargs)

    def __enter__(self):
        print("intercept start")
        return self._intercept

    def __exit__(self, exc_type, exc_val, exc_tb):
        print("intercept end")


def main():
    with simple_context_manager(print) as print_func:
        print_func("hi")

    with SimpleContextManager(print) as print_func:
        print_func("hi")
        print_func("hi", end="\n\n", sep=",")
        print_func("hi")


if __name__ == '__main__':
    main()
What is caching? In short:
Caching stores the result of an expensive computation in memory or on a persistent storage device so that the program doesn't have to repeat the computation.
We have the compute_fibonacci function, which is quite slow. A cached version has been implemented in the CachedComputeFibonacci class. Notice how the code takes some time to output the result for the first print(cached_compute_fibonacci(35)) call, but the second print is instant.
def compute_fibonacci(number):
    if number <= 1:
        return number
    return compute_fibonacci(number - 1) + compute_fibonacci(number - 2)


class CachedComputeFibonacci:
    def __init__(self):
        self._cache = {}

    def __call__(self, *args, **kwargs):
        number = args[0]
        if number in self._cache:
            return self._cache[number]
        result = compute_fibonacci(number)
        self._cache[number] = result
        return result

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        pass


def main():
    # Non cached
    print(compute_fibonacci(10))
    # Cached
    with CachedComputeFibonacci() as cached_compute_fibonacci:
        print(cached_compute_fibonacci(35))
        print(cached_compute_fibonacci(35))


if __name__ == '__main__':
    main()
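As a side note, the standard library can take care of the memoization for us; a minimal sketch using functools.lru_cache instead of the hand-rolled cache:

from functools import lru_cache


@lru_cache(maxsize=None)
def compute_fibonacci(number):
    # Each distinct argument is computed once and then served from the cache.
    if number <= 1:
        return number
    return compute_fibonacci(number - 1) + compute_fibonacci(number - 2)


print(compute_fibonacci(35))  # fast: intermediate results are cached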
Logging can be useful for debugging and auditing purposes.
def compute_fibonacci(number):
    if number <= 1:
        return number
    return compute_fibonacci(number - 1) + compute_fibonacci(number - 2)


class LoggedComputeFibonacci:
    def __call__(self, *args, **kwargs):
        print(f"calling compute_fibonacci with args={args} kwargs={kwargs}")
        result = compute_fibonacci(args[0])
        print(f"compute_fibonacci={result}")
        return result

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        pass


def main():
    # Logging
    with LoggedComputeFibonacci() as logged_compute_fibonacci:
        print(logged_compute_fibonacci(35))
        print(logged_compute_fibonacci(36))


if __name__ == '__main__':
    main()
If you find yourself duplicating the same try/except logic in multiple places of your code, perhaps you can extract it into a context manager for handling errors:
from contextlib import contextmanager


@contextmanager
def my_error_handler():
    try:
        yield
    except ZeroDivisionError:
        print("abort abort")


def main():
    # error handling
    with my_error_handler():
        print("0 / 0 =", 0 / 0)


if __name__ == '__main__':
    main()
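If all you need is to swallow a specific exception, the standard library already ships a context manager for exactly that: contextlib.suppress.

from contextlib import suppress

with suppress(ZeroDivisionError):
    print("0 / 0 =", 0 / 0)  # the error is silenced and execution continues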
The code is definitely cleaner this way, in my opinion.
Thanks for reading and I hope that you’ve learnt something!
Hello,
In this article we’re going to explore the Method Injection and Property Injection design patterns.
To demonstrate the patterns I'm going to add a new interface named Encoder to the printer.py file, along with concrete implementations for two encoders: Rot13Encoder and NullEncoder.
class Encoder(metaclass=abc.ABCMeta):
    def encode(self, message: Message) -> Message:
        raise NotImplementedError("encode must be implemented!")


class Rot13Encoder(Encoder):
    def encode(self, message: Message) -> Message:
        return Message(codecs.encode(str(message), 'rot_13'))


class NullEncoder(Encoder):
    def encode(self, message: Message) -> Message:
        return message
The Encoder will be used by the printer to encode messages before printing them.
The method injection pattern is used as an alternative to constructor injection when the dependency is optional or only used in one spot, so it wouldn't make sense to inject it through the constructor.
My console printer would look like this if I used this pattern:
class ConsolePrinter(Printer):
    def __init__(self, prefix: str):
        self._prefix = prefix

    def print(self, message: Message, encoder: Encoder):
        print(self._prefix, encoder.encode(message))
When application.py calls Printer.print, it passes the Encoder as a dependency.
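A hypothetical call site (assuming the Message class and the encoders defined above) would look like this:

printer = ConsolePrinter(prefix=">>")
printer.print(Message("hello"), Rot13Encoder())  # dependency passed per call
printer.print(Message("hello"), NullEncoder())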
The property injection pattern is mostly used in libraries; applications should avoid it. To use the property injection pattern I would modify the ConsolePrinter class like so:
class ConsolePrinter(Printer):
    def __init__(self, prefix: str):
        self._prefix = prefix
        self.encoder = NullEncoder()

    def print(self, message: Message):
        print(self._prefix, self.encoder.encode(message))
The class now has a property called encoder, which by default acts as a NullEncoder; if for some reason the user of the library needs to change it, they can do so by injecting the needed dependency through the property.
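A hypothetical usage sketch: printing goes through the default NullEncoder until the library user swaps the encoder through the property.

printer = ConsolePrinter(prefix=">>")
printer.print(Message("hello"))   # uses the default NullEncoder
printer.encoder = Rot13Encoder()  # inject a different dependency
printer.print(Message("hello"))   # now the message is ROT13-encoded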
The code for the Property Injection and Method Injection patterns is on my GitHub! 🙂
Thanks for reading!