For cache persistence support, please use the @simp-lee/cache package instead.
SwiftCache is a streamlined, in-memory cache library for Go, inspired by the segmented caching concepts of freecache and bigcache. Optimized for single-machine use, it comes in at under 600 lines of code including comments, offering a robust yet concise caching solution. Influenced by go-cache, SwiftCache features a user-friendly interface for effortless integration.
A key feature is its thread-safe map[string]interface{} structure with support for entry expiration, eliminating the need for content serialization or network transmission. This allows for versatile object storage within the cache.
SwiftCache stands out with both Least Recently Used (LRU) and First In First Out (FIFO) eviction policies, giving developers control over data eviction according to their application's requirements. LRU prioritizes evicting least-accessed items, while FIFO removes items based on their addition order, catering to varied caching strategies.
SwiftCache has undergone extensive benchmarking against traditional Go maps and the widely used go-cache library, showcasing its superior performance in scenarios characterized by high concurrency and large data volumes. The following benchmarks highlight SwiftCache's efficiency, especially when managing millions of items and performing concurrent operations.
The benchmark source code can be found in cache_test.go in the repository.
goos: windows
goarch: amd64
pkg: github.com/simp-lee/swiftcache
cpu: 13th Gen Intel(R) Core(TM) i5-13500H
BenchmarkGetManyConcurrent-16 27213105 41.54 ns/op
--- BENCH: BenchmarkGetManyConcurrent-16
cache_test.go:932: Total items: 1000000, Each: 0, Hit Rate: 0.00%
cache_test.go:932: Total items: 1000000, Each: 6, Hit Rate: 96.00%
cache_test.go:932: Total items: 1000000, Each: 625, Hit Rate: 100.00%
cache_test.go:932: Total items: 1000000, Each: 62500, Hit Rate: 100.00%
cache_test.go:932: Total items: 1000000, Each: 1700819, Hit Rate: 100.00%
...
The following benchmark summary showcases SwiftCache's performance in different scenarios:
+--------------------------+--------------------+----------------+-----------------+----------+
| Benchmark Scenario       | Cache Solution     | Iterations     | Latency (ns/op) | Hit Rate |
+--------------------------+--------------------+----------------+-----------------+----------+
| GetManyConcurrent | SwiftCache FIFO | 27,213,105 | 41.54 | 100.00% |
| | SwiftCache LRU | 32,707,440 | 50.37 | 100.00% |
| | go-cache | 18,846,588 | 66.29 | 100.00% |
| | Map | 21,343,122 | 54.70 | 100.00% |
+--------------------------+--------------------+----------------+-----------------+----------+
| SetManyConcurrent | SwiftCache FIFO | 15,027,386 | 88.20 | - |
| | SwiftCache LRU | 14,252,052 | 85.15 | - |
| | go-cache | 3,881,776 | 357.3 | - |
| | Map | 5,064,476 | 283.7 | - |
+--------------------------+--------------------+----------------+-----------------+----------+
| SetAndGetManyConcurrent | SwiftCache FIFO | 10,798,873 | 95.76 | 100.00% |
| | SwiftCache LRU | 11,510,503 | 103.2 | 100.00% |
| | go-cache | 3,054,723 | 412.2 | 100.00% |
| | Map | 3,459,856 | 354.6 | 100.00% |
+--------------------------+--------------------+----------------+-----------------+----------+
GetManyConcurrent Test: SwiftCache demonstrates exceptional retrieval speeds, with FIFO mode slightly outperforming LRU due to its simpler eviction logic. Both modes outperform standard Go maps and go-cache, while consistently achieving hit rates up to 100%.
SetManyConcurrent Test: In write-heavy scenarios, SwiftCache maintains high performance, with both eviction modes performing comparably. This efficiency contrasts starkly with the higher latencies of go-cache and plain Go maps.
SetAndGetManyConcurrent Test: When testing combined set and get operations, SwiftCache's efficiency shines, offering the lowest operation times among the tested solutions and maintaining a 100% hit rate.
To install SwiftCache:

go get github.com/simp-lee/swiftcache
A quick start guide to using SwiftCache:
package main

import (
    "fmt"

    "github.com/simp-lee/swiftcache"
)

func main() {
    // Initialize the cache with default settings
    cache, _ := swiftcache.NewCache()

    // Set a value with the default expiration
    cache.Set("myKey", "myValue", swiftcache.DefaultExpiration)

    // Retrieve the value and check whether the key exists
    value, found := cache.Get("myKey")
    if found {
        fmt.Println("Found myKey:", value)
    }
}
To customize SwiftCache's behavior, such as segment count or eviction policy:
package main

import (
    "fmt"
    "hash/fnv"
    "time"

    "github.com/simp-lee/swiftcache"
)

func main() {
    cacheConfig := swiftcache.CacheConfig{
        SegmentCount:      1024,            // Customize the number of segments (must be a power of two)
        MaxCacheSize:      5000,            // Set the maximum cache size
        DefaultExpiration: 5 * time.Minute, // Default expiration time
        EvictionPolicy:    "LRU",           // Set the eviction policy
        HashFunc:          fnv.New32,       // Use the FNV-1 32-bit hash function
    }

    // Initialize the cache with custom settings
    cache, err := swiftcache.NewCache(cacheConfig)
    if err != nil {
        fmt.Printf("Cache initialization failed: %v\n", err)
        return
    }

    // Use the cache with the configured settings
    _ = cache
    // ...
}
SwiftCache utilizes a segmented storage mechanism to distribute cache data across multiple segments or "shards". This approach significantly reduces lock contention, a common bottleneck in traditional, single-lock cache systems. By partitioning the cache space, SwiftCache allows concurrent access to different segments, enhancing throughput and scalability. Each segment operates independently with its own lock, ensuring that operations on different segments do not block each other.
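To make the idea concrete, here is a minimal sketch of segmented storage with per-segment locks. The names (segment, shardedCache) and the externally supplied hash are hypothetical, illustrating the concept rather than SwiftCache's actual internals:

package sketch

import "sync"

type segment struct {
    mu    sync.RWMutex
    items map[string]interface{}
}

type shardedCache struct {
    segments []*segment
    mask     uint32 // segmentCount - 1; valid because segmentCount is a power of two
}

func newShardedCache(segmentCount int) *shardedCache {
    c := &shardedCache{
        segments: make([]*segment, segmentCount),
        mask:     uint32(segmentCount - 1),
    }
    for i := range c.segments {
        c.segments[i] = &segment{items: make(map[string]interface{})}
    }
    return c
}

// Get locks only the segment that owns the key, so operations on
// other segments proceed in parallel.
func (c *shardedCache) Get(hash uint32, key string) (interface{}, bool) {
    s := c.segments[hash&c.mask]
    s.mu.RLock()
    defer s.mu.RUnlock()
    v, ok := s.items[key]
    return v, ok
}

func (c *shardedCache) Set(hash uint32, key string, value interface{}) {
    s := c.segments[hash&c.mask]
    s.mu.Lock()
    defer s.mu.Unlock()
    s.items[key] = value
}

Because each segment has its own sync.RWMutex, a write to one segment never blocks reads or writes in any other segment.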
The performance benefits of SwiftCache over standard Go maps and the go-cache library stem from this segmented architecture and its tailored locking strategy. Go maps, while fast in single-threaded use, are not safe for concurrent writes and must be guarded by a single lock, which becomes a point of contention in concurrent scenarios. SwiftCache's design minimizes this contention, providing faster access and modification times in multi-threaded environments.
SwiftCache implements two eviction policies: Least Recently Used (LRU) and First In First Out (FIFO).
LRU: Items accessed least recently are evicted first. This policy is ideal for retaining frequently accessed data in the cache. Implementing LRU requires updating access order on each retrieval, which slightly affects performance due to the necessary lock operations for order maintenance.
FIFO: Items are evicted in the order they were added, without considering access patterns. FIFO simplifies the eviction process as it does not require updating order on access, leading to potentially higher performance for scenarios where access order is not critical.
The choice between FIFO and LRU affects the locking mechanism used. LRU's need for order updates requires more sophisticated locking strategies, whereas FIFO can often operate with simpler, more performance-optimized locking due to its straightforward eviction approach.
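As a rough sketch of this trade-off (hypothetical code over a single segment, not SwiftCache's implementation, assuming max >= 1), both policies can share a doubly linked list; the only difference is whether Get reorders it:

package sketch

import "container/list"

type entry struct {
    key   string
    value interface{}
}

type policyCache struct {
    order *list.List               // front = most recently used (LRU) or newest insert (FIFO)
    items map[string]*list.Element // key -> node in the order list
    max   int                      // assumed >= 1
    lru   bool                     // true for LRU, false for FIFO
}

func newPolicyCache(max int, lru bool) *policyCache {
    return &policyCache{
        order: list.New(),
        items: make(map[string]*list.Element),
        max:   max,
        lru:   lru,
    }
}

func (c *policyCache) Get(key string) (interface{}, bool) {
    el, ok := c.items[key]
    if !ok {
        return nil, false
    }
    if c.lru {
        // LRU reorders on every read; this extra write to the order list
        // is exactly the work (and locking) that FIFO avoids.
        c.order.MoveToFront(el)
    }
    return el.Value.(*entry).value, true
}

func (c *policyCache) Set(key string, value interface{}) {
    if el, ok := c.items[key]; ok {
        el.Value.(*entry).value = value
        if c.lru {
            c.order.MoveToFront(el) // an update counts as a use under LRU
        }
        return
    }
    if c.order.Len() >= c.max {
        // Evict from the back: the least recently used entry (LRU)
        // or the oldest insertion (FIFO).
        back := c.order.Back()
        c.order.Remove(back)
        delete(c.items, back.Value.(*entry).key)
    }
    c.items[key] = c.order.PushFront(&entry{key: key, value: value})
}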
Data eviction and expiration in SwiftCache are handled lazily. Instead of performing periodic sweeps to clean expired or evictable items, these operations are triggered during access attempts. This strategy ensures that the overhead of cleaning is spread out over time, preventing spikes in processing time that can occur with batch eviction or expiration processes. Lazy eviction contributes to a smoother performance profile, effectively "smoothing out" potential performance peaks.
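A minimal sketch of the lazy approach (illustrative only; locking and eviction-order bookkeeping omitted for brevity):

package sketch

import "time"

type item struct {
    value      interface{}
    expiration int64 // Unix nanoseconds; 0 means the entry never expires
}

type lazyCache struct {
    items map[string]item
}

// Get checks expiry only when a key is read: expired entries are removed
// on access instead of by a background sweeper, spreading cleanup cost
// across normal operations.
func (c *lazyCache) Get(key string) (interface{}, bool) {
    it, ok := c.items[key]
    if !ok {
        return nil, false
    }
    if it.expiration > 0 && time.Now().UnixNano() > it.expiration {
        delete(c.items, key)
        return nil, false
    }
    return it.value, true
}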
SwiftCache optimizes key allocation to segments using bitwise AND operations, leveraging the requirement for the number of segments to be powers of two (e.g., 2, 4, 8, 16, ... 512, 1024). This design allows SwiftCache to replace conventional modulo arithmetic with a faster bitwise operation. Bitwise AND is used because, for any number of segments that is a power of two, calculating hash & (numberOfSegments - 1) effectively performs a modulo operation but with improved performance. Although the gain from this optimization is relatively minor, it underscores SwiftCache's dedication to maximizing efficiency across all aspects of its architecture. This approach not only simplifies the internal logic for segment allocation but also contributes to the overall performance advantages of SwiftCache in high-concurrency scenarios.
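The equivalence is easy to check. A small, self-contained demonstration, using the same FNV-1 32-bit hash as the configuration example above:

package main

import (
    "fmt"
    "hash/fnv"
)

func main() {
    const segmentCount = 1024 // must be a power of two

    h := fnv.New32()
    h.Write([]byte("myKey"))
    hash := h.Sum32()

    // For any power-of-two n, hash & (n-1) yields the same result as hash % n,
    // but compiles to a single AND instruction instead of a division.
    fmt.Println(hash&(segmentCount-1) == hash%segmentCount) // prints: true
}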
In summary, SwiftCache's design principles—segmented storage, tailored eviction policies, lazy data management, and optimized segment allocation—work in concert to provide a high-performance caching solution suitable for a wide range of applications.
NewCache(options ...CacheConfig) (*Cache, error): Creates a new cache instance with optional configuration. The CacheConfig struct allows customization of segment count, maximum cache size, default expiration, hash function, and eviction policy.

Set(key string, value interface{}, ttl time.Duration): Adds a new item to the cache or updates an existing item's value and expiration time. ttl can be set to NoExpiration for items that should not expire.

Get(key string) (interface{}, bool): Retrieves an item from the cache. Returns the item and a boolean indicating whether the key was found.

Delete(key string): Removes an item from the cache by its key.

Flush(): Clears all items from the cache.
GetWithExpiration(key string) (interface{}, time.Time, bool): Similar to Get, but also returns the expiration time of the item if it exists.
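For example (assuming a cache created as in the quick start):

cache.Set("session", "abc123", 10*time.Minute)

value, expiration, found := cache.GetWithExpiration("session")
if found {
    fmt.Printf("Value: %v, Expires at: %v\n", value, expiration)
}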
ItemCount() int: Returns the total number of items currently in the cache, including those that may have expired but have not yet been cleaned up.
Items() map[string]interface{}: Returns a copy of all unexpired items in the cache as a map.
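For example, to take a snapshot of the cache contents:

fmt.Printf("Items in cache: %d\n", cache.ItemCount())

for key, value := range cache.Items() {
    fmt.Printf("%s = %v\n", key, value)
}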
Item(key string) (*Item, bool): Retrieves an item along with its metadata from the cache. It returns a pointer to the Item and a boolean indicating whether the key was found. The Item struct includes the value, expiration time, and other internal details. This method is particularly useful when you need more information about a cache item, such as its expiration time, in addition to the value itself.
cache.Set("userId", 12345, 5*time.Minute)
item, found := cache.Item("userId")
if found {
fmt.Printf("User ID: %v, Expires at: %v\n", item.Value, time.Unix(0, item.Expiration))
}
Increment(key string, n int64) error: Increments the value of a numerical item by n. Returns an error if the key does not exist, the item has expired, or the item's value is not a number.
Decrement(key string, n int64) error: Decrements the value of a numerical item by n. The same error conditions apply as with Increment.
cache.Set("counter", 10, swiftcache.DefaultExpiration)
err := cache.Increment("counter", 5)
if err == nil {
counter, _ := cache.Get("counter")
fmt.Printf("Counter after increment: %v\n", counter)
}
err = cache.Decrement("counter", 3)
if err == nil {
counter, _ := cache.Get("counter")
fmt.Printf("Counter after decrement: %v\n", counter)
}
OnEvicted(f func(string, interface{})): Sets a callback function that is called whenever an item is evicted from the cache. This can be due to expiration or when an item is manually deleted.
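For example, to observe evictions (assuming a cache created as in the quick start):

cache.OnEvicted(func(key string, value interface{}) {
    fmt.Printf("Evicted %s: %v\n", key, value)
})

cache.Set("temp", "data", swiftcache.DefaultExpiration)
cache.Delete("temp") // the callback fires with "temp" and "data"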
Special thanks to the creators of freecache and bigcache for their innovative caching mechanisms in Go, which significantly inspired the design of SwiftCache.
A heartfelt acknowledgment to go-cache and its contributors. The design of SwiftCache's API and a substantial part of our testing methodologies are influenced by the simplicity and effectiveness of go-cache. Its approach to caching solutions has been crucial in guiding the development of a user-friendly interface and robust functionality in SwiftCache.
We are thankful to the open-source community for the collaboration and knowledge-sharing that make projects like SwiftCache possible. It is through these community efforts that we can continue to build efficient and effective software solutions.