github.com/pilotso11/lazywritercache
Go module, version 0.1.5
Repository: https://github.com/pilotso11/lazywritercache.git
Documentation: pkg.go.dev

# README

In-memory Go Lazy Writer Cache

A lazy writer cache is useful in situations where you don't want to block a low-latency operation with something expensive, such as a database write, but you still want the consistency of a single version of truth if you need to look the object up again. A write spun off in a goroutine satisfies the first condition by unblocking the operation, but it provides no mechanism to resolve the lookup.

A lazy writer cache has the following properties:

  • An in memory cache of objects
  • Store operations are fast; objects are marked as dirty for lazy writing
  • Read operations are also fast because they come from the in-memory cache of objects
  • A goroutine provides background services to write the cache to some durable store (Postgres etc.); a minimal sketch of this pattern follows the list
  • A separate goroutine provides cache pruning to the configured size
  • There is no guarantee of object persistence between lazy writes. If your system crashes in between, your data is lost. You must be comfortable with this tradeoff.
  • To keep writes fast, eviction is also a lazy operation so the size limit is a soft limit that can be breached. The lazy purge will do a good job of keeping memory usage under control as long as you don't create objects faster than it can purge them.
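
To make the mechanics concrete, here is a minimal, generic sketch of the lazy-writer pattern described above: an in-memory map guarded by a mutex, a dirty set, and a background goroutine that flushes dirty entries on a ticker. This is only an illustration of the pattern, not the package's implementation; names such as lazyCache, flushInterval and persist are invented for the example.

package main

import (
    "fmt"
    "sync"
    "time"
)

// lazyCache holds items in memory and remembers which keys are dirty.
type lazyCache struct {
    mu    sync.Mutex
    items map[string]string
    dirty map[string]bool
}

func newLazyCache(flushInterval time.Duration) *lazyCache {
    c := &lazyCache{items: map[string]string{}, dirty: map[string]bool{}}
    go func() { // background lazy writer
        for range time.Tick(flushInterval) {
            c.flush()
        }
    }()
    return c
}

// Save is fast: it only updates the map and marks the key dirty.
func (c *lazyCache) Save(key, value string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.items[key] = value
    c.dirty[key] = true
}

// Get reads from the in-memory map only.
func (c *lazyCache) Get(key string) (string, bool) {
    c.mu.Lock()
    defer c.mu.Unlock()
    v, ok := c.items[key]
    return v, ok
}

// flush persists dirty entries; the slow write happens off the hot path.
func (c *lazyCache) flush() {
    c.mu.Lock()
    pending := c.dirty
    c.dirty = map[string]bool{}
    snapshot := make(map[string]string, len(pending))
    for key := range pending {
        snapshot[key] = c.items[key]
    }
    c.mu.Unlock()
    for key, value := range snapshot {
        persist(key, value) // stand-in for a database write
    }
}

func persist(key, value string) { fmt.Println("persisted", key, "=", value) }

func main() {
    c := newLazyCache(100 * time.Millisecond)
    c.Save("abc-1", "first")
    if v, ok := c.Get("abc-1"); ok {
        fmt.Println("read back", v) // served from memory immediately
    }
    time.Sleep(250 * time.Millisecond) // give the lazy writer time to flush
}

Pruning to a size limit would be another background task in the same spirit; the real package handles that, plus hit/miss stats and transaction hooks.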

Dependencies

The basic lazywritercache has no dependencies outside of the core Go packages. The dependencies in go.mod are for the GORM example in /example.

A second, lock-free implementation, which performs slightly better under highly parallel read workloads, is in the lockfree sub-folder as lockfree.LazyWriterCacheLF and depends on xsync.MapOf.

Why not just use REDIS?

Good question. In fact, if you need to live in a distributed world, then REDIS is probably a good solution. But it's still much slower than an in-memory map: you have to make hops to a server, and so on. This in-memory cache is much faster than REDIS, roughly 100x. The benchmark results below (and in the tests) show performance nearly as good as a plain in-memory map. Keep in mind that the lazy writer's database time is still a practical limit on how fast you can update the cache. A network-based solution will have round-trip times measured in tens of microseconds at best, and a database write is another two orders of magnitude slower.

If you are really sensitive about non-blocking performance, you could conceivably put a lazy writer cache in front of REDIS with a fairly quick sync loop.
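
As a sketch of that idea, the CacheReaderWriter interface described later in this README could be implemented against REDIS, so the lazy writer syncs the cache to REDIS instead of a database. The code below assumes the github.com/redis/go-redis/v9 client and JSON serialisation; redisReaderWriter, the person type and the key scheme are all invented for the example and are not part of this package.

import (
    "context"
    "encoding/json"
    "errors"

    "github.com/redis/go-redis/v9"
)

// person is a stand-in for whatever Cacheable type you store.
type person struct {
    Name string `json:"name"`
    Age  int    `json:"age"`
}

// redisReaderWriter mirrors the shape of the CacheReaderWriter interface,
// persisting items to REDIS as JSON blobs keyed by name.
type redisReaderWriter struct {
    rdb *redis.Client
    ctx context.Context
}

func (r redisReaderWriter) Find(key string, _ any) (person, error) {
    raw, err := r.rdb.Get(r.ctx, "person:"+key).Result()
    if errors.Is(err, redis.Nil) {
        return person{}, errors.New("not found: " + key)
    }
    if err != nil {
        return person{}, err
    }
    var p person
    err = json.Unmarshal([]byte(raw), &p)
    return p, err
}

func (r redisReaderWriter) Save(item person, _ any) error {
    raw, err := json.Marshal(item)
    if err != nil {
        return err
    }
    // 0 expiry: keep the item until it is explicitly removed
    return r.rdb.Set(r.ctx, "person:"+item.Name, raw, 0).Err()
}

// The transaction and logging hooks can be no-ops for this sketch.
func (r redisReaderWriter) BeginTx() (tx any, err error) { return nil, nil }
func (r redisReaderWriter) CommitTx(_ any)               {}
func (r redisReaderWriter) Info(msg string)              {}
func (r redisReaderWriter) Warn(msg string)              {}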

Benchmark results for cache sizes of 20k and 100k items, measured on an Apple-silicon MacBook:

| Benchmark                | ns/op | b/op | allocs/op |
|--------------------------|-------|------|-----------|
| CacheWrite 20k           | 167   | 147  | 3         |
| CacheWrite 100k          | 168   | 146  | 4         |
| CacheRead 20k            | 48    | 0    | 0         |
| CacheRead 100k           | 54    | 1    | 0         |
| CacheRead 100k 5 threads | 677   | 4    | 0         |

| Benchmark (lock free)    | ns/op | b/op | allocs/op |
|--------------------------|-------|------|-----------|
| CacheWrite 20k           | 299   | 109  | 6         |
| CacheWrite 100k          | 270   | 113  | 7         |
| CacheRead 20k            | 39    | 0    | 0         |
| CacheRead 100k           | 52    | 1    | 0         |
| CacheRead 100k 5 threads | 637   | 4    | 0         |

How do I use it?

You need to be on at least go1.18, as the cache uses generics. If you've not tried out generics yet, now is a great time to do it.

go get github.com/pilotso11/lazywritercache

You will need to provide two things to the cache.

1. CacheReaderWriter

You will need a CacheReaderWriter implementation. This interface abstracts the cache away from any specific persistence mechanism. It has functions to look up cache items, store cache items and manage transactions. Any of these can be no-op operations if you don't need them. A GORM implementation is provided.

import "github.com/pilotso11/lazywritercache"

type CacheReaderWriter[K comparable, T Cacheable] interface {
	Find(key K, tx any) (T, error)
	Save(item T, tx any) error
	BeginTx() (tx any, err error)
	CommitTx(tx any)
	Info(msg string)
	Warn(msg string)
}
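
For a feel of how small an implementation can be, here is an illustrative do-nothing version. The package ships its own no-op reader-writer for testing; this sketch and the nullReaderWriter name are just an example, assuming the interface is exactly as shown above.

import (
    "errors"
    "log"

    "github.com/pilotso11/lazywritercache"
)

// nullReaderWriter satisfies CacheReaderWriter but never persists anything.
type nullReaderWriter[K comparable, T lazywritercache.Cacheable] struct{}

func (n nullReaderWriter[K, T]) Find(key K, tx any) (T, error) {
    var empty T
    return empty, errors.New("not found") // nothing is ever stored, so nothing is found
}

func (n nullReaderWriter[K, T]) Save(item T, tx any) error { return nil } // discard the write

func (n nullReaderWriter[K, T]) BeginTx() (tx any, err error) { return nil, nil }

func (n nullReaderWriter[K, T]) CommitTx(tx any) {}

func (n nullReaderWriter[K, T]) Info(msg string) { log.Println("INFO:", msg) }

func (n nullReaderWriter[K, T]) Warn(msg string) { log.Println("WARN:", msg) }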

2. A Cacheable structure

Cacheable structs must implement a function to return their key and a function to sync up their data after storage. If your database schema has dynamically allocated primary keys, you need this because you won't know the key at save time (I'm assuming you have a secondary unique key that you are using to look up the items).

type Cacheable interface {
	Key() any
	CopyKeyDataFrom(from Cacheable) Cacheable // This should copy in the DB-only ID fields. If gorm.Model is embedded, these are ID, CreatedAt, UpdatedAt and DeletedAt
}

For example, if you were using GORM and your type implements gorm.Model your copy function would look like:

func (data MyData) CopyKeyDataFrom(from lazywritercache.Cacheable) lazywritercache.Cacheable {
    fromData := from.(MyData)  
    
    data.ID = fromData.ID
    data.CreatedAt = fromData.CreatedAt
    data.UpdatedAt = fromData.UpdatedAt
    data.DeletedAt = fromData.DeletedAt
    
    return data
}
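
For completeness, the rest of such a type might look like the sketch below, assuming GORM and a unique Name column used as the cache key (MyData, its fields and the gorm tag are invented for the example):

import "gorm.io/gorm"

// MyData embeds gorm.Model, so the DB allocates ID/CreatedAt/UpdatedAt/DeletedAt,
// and uses a unique Name as its cache key.
type MyData struct {
    gorm.Model
    Name  string `gorm:"uniqueIndex"`
    Value string
}

// Key returns the secondary unique key used to look the item up in the cache.
func (data MyData) Key() any {
    return data.Name
}

Together with the CopyKeyDataFrom method above, this satisfies the Cacheable interface.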

Two sample CacheReaderWriter implementations are provided: a no-op implementation for unit testing, and a simple GORM implementation that looks up items by a key field and saves them back.

The test code illustrates the no-op implementation. The example folder has a sample that uses GORM and Postgres.

Assuming you have a CacheReaderWriter implementation, and you've created your Cacheable, then creating your cache is straightforward.

readerWriter := lazywritercache.NewGormCacheReaderWriter[string, Person](db, NewEmptyPerson)
cacheConfig := lazywritercache.NewDefaultConfig[string, Person](readerWriter)
cache := lazywritercache.NewLazyWriterCache[string, Person](cacheConfig)
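
In unit tests the same two constructors can be wired to a do-nothing reader-writer instead of GORM, for example the nullReaderWriter sketched earlier (that type is illustrative, not part of the package):

// Build a cache that never touches a database - useful in unit tests.
readerWriter := nullReaderWriter[string, Person]{}
cacheConfig := lazywritercache.NewDefaultConfig[string, Person](readerWriter)
cache := lazywritercache.NewLazyWriterCache[string, Person](cacheConfig)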

# Packages

MIT License. Copyright (c) 2023 Seth Osher. See the LICENSE file in the repository for the full text.

# Functions

NewGormCacheReaderWriter creates a GORM CacheReaderWriter; supply a new-item creator and a wrapper around db.Save() that first unwraps the Cacheable item to your type.
NewLazyWriterCache creates a new cache and starts up its lazy db writer ticker.

# Structs

EmptyCacheable - placeholder used as a return value if the cache can't find anything.
GormCacheReaderWriter is the GORM implementation of the CacheReaderWriter.
LazyWriterCache: this cache implementation assumes this process OWNS the database. There is no synchronisation on Save or any error handling if the DB is in an inconsistent state. To use this in a distributed mode, we'd need to replace it with something like REDIS that keeps a distributed cache for update, and then use a single writer to persist to the DB, with some clustering strategy.
