cache

package module v0.5.2
Published: Sep 21, 2025 License: MIT Imports: 2 Imported by: 0

README

Cache Library for Go

This library provides efficient caching solutions for various usage patterns, such as write-heavy or read-heavy scenarios, and advanced features like expiration handling, integer-specific operations, and easy-to-use locking.

Features

  • WriteHeavyCache: Optimized for frequent write operations.
  • ReadHeavyCache: Optimized for frequent read operations.
  • Expiration Support: Built-in expiration for cache entries in WriteHeavyCacheExpired and ReadHeavyCacheExpired.
  • SWR-Friendly Expiration: GetWithExpireStatus returns the value with an expired flag for stale-while-revalidate.
  • Integer-Specific Caches: Specialized caches (WriteHeavyCacheInteger and ReadHeavyCacheInteger) with increment operations.
  • RollingCache: A dynamically growing slice-based cache with efficient Append and Rotate operations, suitable for maintaining ordered sequences of elements.
  • Faster Singleflight: A custom implementation that is up to 2x faster than the official singleflight package. See detailed results in the benchmark directory.
  • LockManager: Simplified locking for managing concurrency in your applications.

Documentation

For full API documentation, visit pkg.go.dev.

Installation

Install the library using go get:

go get github.com/catatsuy/cache

Usage

WriteHeavyCache

The WriteHeavyCache uses a sync.Mutex to lock for both read and write operations. This makes it suitable for scenarios with frequent writes.

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewWriteHeavyCache[int, string]()

	c.Set(1, "apple")
	value, found := c.Get(1)

	if found {
		fmt.Println("Found:", value) // Output: Found: apple
	} else {
		fmt.Println("Not found")
	}

	// GetItems example
	items := c.GetItems()
	fmt.Println("Items:", items) // Output: Items: map[1:apple]

	// SetItems example
	c.SetItems(map[int]string{2: "banana", 3: "cherry"})
	fmt.Println("Items after SetItems:", c.GetItems()) // Output: Items after SetItems: map[2:banana 3:cherry]

	// Size example
	fmt.Println("Size:", c.Size()) // Output: Size: 2
}
ReadHeavyCache

The ReadHeavyCache uses a sync.RWMutex, allowing multiple readers while locking only for writes. This makes it ideal for read-heavy scenarios.

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewReadHeavyCache[int, string]()

	c.Set(1, "orange")
	value, found := c.Get(1)

	if found {
		fmt.Println("Found:", value) // Output: Found: orange
	} else {
		fmt.Println("Not found")
	}

	// GetItems example
	items := c.GetItems()
	fmt.Println("Items:", items) // Output: Items: map[1:orange]

	// SetItems example
	c.SetItems(map[int]string{2: "peach", 3: "plum"})
	fmt.Println("Items after SetItems:", c.GetItems()) // Output: Items after SetItems: map[2:peach 3:plum]

	// Size example
	fmt.Println("Size:", c.Size()) // Output: Size: 2
}
Expiration Support

The WriteHeavyCacheExpired and ReadHeavyCacheExpired caches provide expiration functionality, allowing you to specify a duration for each cache entry. Both expose GetWithExpireStatus to read a value together with its expiration status.

WriteHeavyCacheExpired Example
package main

import (
	"fmt"
	"time"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewWriteHeavyCacheExpired[int, string]()

	c.Set(1, "apple", 1*time.Second)
	value, _ := c.Get(1)
	fmt.Println("Before expiration:", value) // Output: Before expiration: apple

	time.Sleep(2 * time.Second)
	_, found := c.Get(1)
	fmt.Println("After expiration:", found) // Output: After expiration: false
}
ReadHeavyCacheExpired Example
package main

import (
	"fmt"
	"time"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewReadHeavyCacheExpired[int, string]()

	c.Set(1, "orange", 1*time.Second)
	value, _ := c.Get(1)
	fmt.Println("Before expiration:", value) // Output: Before expiration: orange

	time.Sleep(2 * time.Second)
	_, found := c.Get(1)
	fmt.Println("After expiration:", found) // Output: After expiration: false
}
Stale-While-Revalidate

To implement stale-while-revalidate, use GetWithExpireStatus, which returns the value even if it is expired, together with the expired flag. You can serve the stale value immediately and trigger a background refresh.

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/catatsuy/cache"
)

var c = cache.NewReadHeavyCacheExpired[string, string]()

func fetch(ctx context.Context, key string) string {
    // heavy fetch (placeholder)
    time.Sleep(200 * time.Millisecond)
    return "fresh:" + key
}

func getWithSWR(ctx context.Context, key string) string {
    if v, found, expired := c.GetWithExpireStatus(key); found {
        if expired {
            go func() {
                // Detach from the caller's cancellation so the background
                // refresh is not cut short when the request ends.
                fresh := fetch(context.WithoutCancel(ctx), key)
                c.Set(key, fresh, 5*time.Minute)
            }()
        }
        return v // serve stale-or-fresh immediately
    }
    // cache miss: fetch synchronously
    fresh := fetch(ctx, key)
    c.Set(key, fresh, 5*time.Minute)
    return fresh
}

func main() {
    fmt.Println(getWithSWR(context.Background(), "k"))
}
Integer-Specific Caches

For scenarios where increment operations are common, WriteHeavyCacheInteger and ReadHeavyCacheInteger are available.

WriteHeavyCacheInteger Example
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewWriteHeavyCacheInteger[int, int]()

	c.Set(1, 100)
	c.Incr(1, 10)

	value, _ := c.Get(1)
	fmt.Println("Incremented Value:", value) // Output: Incremented Value: 110
}
ReadHeavyCacheInteger Example
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewReadHeavyCacheInteger[int, int]()

	c.Set(1, 50)
	c.Incr(1, 5)

	value, _ := c.Get(1)
	fmt.Println("Incremented Value:", value) // Output: Incremented Value: 55
}
RollingCache

The RollingCache provides a simple, dynamically growing cache for ordered elements. It supports appending new items and rotating the cache, which resets its contents while returning the previous state.

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	// Create a RollingCache with an initial capacity of 10
	c := cache.NewRollingCache[int](10)
	c.Append(1)
	c.Append(2)
	c.Append(3)

	// Get the current items
	fmt.Println("Current items:", c.GetItems()) // Output: Current items: [1 2 3]

	// Check the size of the cache
	fmt.Println("Current size:", c.Size()) // Output: Current size: 3

	// Rotate the cache and get the rotated items
	rotated := c.Rotate()
	fmt.Println("Rotated items:", rotated) // Output: Rotated items: [1 2 3]

	// The cache should now be empty
	fmt.Println("Items after rotation:", c.GetItems()) // Output: Items after rotation: []
}
LockManager

The LockManager is designed for managing locks associated with unique keys.

package main

import (
	"fmt"
	"time"

	"github.com/catatsuy/cache"
)

func main() {
	lm := cache.NewLockManager[int]()

	lm.Lock(1)
	go func() {
		defer lm.GetAndLock(1).Unlock()
		fmt.Println("Goroutine locked and released")
	}()

	time.Sleep(1 * time.Second)
	lm.Unlock(1)
	fmt.Println("Main goroutine released lock")
	time.Sleep(100 * time.Millisecond) // give the goroutine time to print before main exits
}
SingleflightGroup

SingleflightGroup ensures that only one function call happens at a time for a given key. If multiple calls are made with the same key, only the first one runs, while others wait and receive the same result. This is useful when you want to avoid running the same operation multiple times simultaneously.

SingleflightGroup Example
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	sf := cache.NewSingleflightGroup[string]()

	// Simulate an expensive data-loading function
	loadData := func(key string) (string, error) {
		return fmt.Sprintf("Data for key %s", key), nil
	}

	// Use SingleflightGroup to ensure only one call runs for the same key at a time
	value, err := sf.Do("key", func() (string, error) {
		return loadData("key")
	})

	if err != nil {
		fmt.Println("Error:", err)
	} else {
		fmt.Println("Result:", value) // Output: Result: Data for key key
	}
	}
}
SingleflightGroup and Caching Example

This example demonstrates how to use SingleflightGroup with WriteHeavyCache to prevent duplicate data retrieval requests for the same key. When a key is requested multiple times simultaneously, SingleflightGroup ensures that only one retrieval function (HeavyGet) executes, while the other requests wait and receive the cached result once it completes.

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/catatsuy/cache"
)

// Global cache and singleflight group
var (
	c  = cache.NewWriteHeavyCache[int, int]()
	sf = cache.NewSingleflightGroup[int]()
)

// Get retrieves a value from the cache or loads it with HeavyGet if not cached.
// Singleflight ensures only one HeavyGet call per key at a time.
func Get(key int) int {
	// Attempt to retrieve the item from cache
	if value, found := c.Get(key); found {
		return value
	}

	// Use SingleflightGroup to prevent duplicate HeavyGet calls for the same key
	v, err := sf.Do(fmt.Sprintf("cacheGet_%d", key), func() (int, error) {
		// Load the data and store it in the cache
		value := HeavyGet(key)
		c.Set(key, value)
		return value, nil
	})
	if err != nil {
		panic(err)
	}

	return v
}

// HeavyGet simulates a time-consuming data retrieval operation.
// Here, it sleeps for 1 second and returns twice the input key as the result.
func HeavyGet(key int) int {
	log.Printf("call HeavyGet %d\n", key)
	time.Sleep(time.Second)
	return key * 2
}

func main() {
	// Simulate concurrent access to Get with keys 0 through 9, each key accessed 10 times
	for i := 0; i < 100; i++ {
		go func(i int) {
			Get(i % 10)
		}(i)
	}

	// Wait for all concurrent fetches to complete
	time.Sleep(2 * time.Second)

	// Print cached values for keys 0 through 9
	for i := 0; i < 10; i++ {
		log.Println(Get(i))
	}
}

This example shows how multiple simultaneous requests for the same key result in only a single call to HeavyGet, while other requests wait for the result. The results are then cached, preventing repeated retrievals for the same data.

Documentation


Types

type LockManager added in v0.0.3

type LockManager[K comparable] struct {
	// contains filtered or unexported fields
}

LockManager manages a set of mutexes identified by keys of type K. It is designed to provide fine-grained locking for operations on individual keys.

Example

ExampleLockManager provides an example usage of LockManager.

package main

import (
	"fmt"
	"sync"

	"github.com/catatsuy/cache"
)

func main() {
	// Create a new LockManager for integer keys
	lm := cache.NewLockManager[int]()
	var wg sync.WaitGroup
	firstDone := make(chan struct{})

	// Simulate concurrent access to the same key
	key := 1

	// Main goroutine does work first deterministically
	lm.Lock(key)
	fmt.Println("Locked")
	// Simulate some work
	fmt.Println("Doing work")
	lm.Unlock(key)
	fmt.Println("Unlocked")

	// First goroutine locks and performs some work
	wg.Go(func() {
		lm.Lock(key)
		fmt.Println("Goroutine 1: Locked")
		// Simulate some work
		fmt.Println("Goroutine 1: Doing work")
		lm.Unlock(key)
		fmt.Println("Goroutine 1: Unlocked")
		close(firstDone)
	})

	wg.Go(func() {
		<-firstDone // ensure goroutine 1 runs first
		defer lm.GetAndLock(key).Unlock()
		fmt.Println("Goroutine 2: Locked")
		// Simulate some work
		fmt.Println("Goroutine 2: Doing work")
		fmt.Println("Goroutine 2: Unlocked")
	})

	wg.Wait()
}
Output:

Locked
Doing work
Unlocked
Goroutine 1: Locked
Goroutine 1: Doing work
Goroutine 1: Unlocked
Goroutine 2: Locked
Goroutine 2: Doing work
Goroutine 2: Unlocked
Example (WithLockAndUnlock)

Example for LockManager with Lock and Unlock

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	lm := cache.NewLockManager[int]()

	lm.Lock(1)
	fmt.Println("Resource 1 is locked")

	fmt.Println("Resource 1 is being used")

	lm.Unlock(1)
	fmt.Println("Resource 1 is unlocked")
}
Output:

Resource 1 is locked
Resource 1 is being used
Resource 1 is unlocked

func NewLockManager added in v0.0.3

func NewLockManager[K comparable]() *LockManager[K]

NewLockManager creates a new instance of LockManager.

func (*LockManager[K]) GetAndLock added in v0.0.3

func (lm *LockManager[K]) GetAndLock(id K) *sync.Mutex

GetAndLock retrieves the mutex associated with the given key, locks it, and returns the locked mutex. This is useful for cases where you want to obtain and lock the mutex in a single line. For example, you can use it like this:

defer lm.GetAndLock(id).Unlock()

This pattern allows you to ensure the mutex is unlocked when the surrounding function exits.

Example

Example for LockManager with GetAndLock

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	var lm = cache.NewLockManager[int]()

	heavyOperation := func(id int) {
		defer lm.GetAndLock(id).Unlock()
		fmt.Printf("Starting heavy operation on resource %d\n", id)
		// simulate heavy work without slow test
		// time.Sleep(2 * time.Second)
		fmt.Printf("Completed heavy operation on resource %d\n", id)
	}

	heavyOperation(1)
	heavyOperation(2)
}
Output:

Starting heavy operation on resource 1
Completed heavy operation on resource 1
Starting heavy operation on resource 2
Completed heavy operation on resource 2

func (*LockManager[K]) Lock added in v0.0.3

func (lm *LockManager[K]) Lock(id K)

Lock locks the mutex associated with the given key.

func (*LockManager[K]) Unlock added in v0.0.3

func (lm *LockManager[K]) Unlock(id K)

Unlock unlocks the mutex associated with the given key.

type ReadHeavyCache

type ReadHeavyCache[K comparable, V any] struct {
	sync.RWMutex // ReadHeavyCache allows concurrent read access with RWMutex
	// contains filtered or unexported fields
}

ReadHeavyCache is a cache optimized for read-heavy operations. It uses an RWMutex to allow concurrent reads and synchronized writes.

Example

Example for ReadHeavyCache

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewReadHeavyCache[int, string]()

	c.Set(1, "orange")
	value, found := c.Get(1)

	if found {
		fmt.Println("Found:", value)
	} else {
		fmt.Println("Not found")
	}
}
Output:

Found: orange

func NewReadHeavyCache

func NewReadHeavyCache[K comparable, V any]() *ReadHeavyCache[K, V]

NewReadHeavyCache creates a new instance of ReadHeavyCache

func (*ReadHeavyCache[K, V]) Clear added in v0.0.2

func (c *ReadHeavyCache[K, V]) Clear()

Clear removes all items from ReadHeavyCache

func (*ReadHeavyCache[K, V]) Delete added in v0.2.0

func (c *ReadHeavyCache[K, V]) Delete(key K)

Delete removes a key from ReadHeavyCache.

func (*ReadHeavyCache[K, V]) Get

func (c *ReadHeavyCache[K, V]) Get(key K) (V, bool)

Get retrieves a value from ReadHeavyCache, using a read lock

func (*ReadHeavyCache[K, V]) GetItems added in v0.2.1

func (c *ReadHeavyCache[K, V]) GetItems() map[K]V

GetItems returns a direct reference to the internal map of cache items. WARNING: This method does not create a copy of the map. Concurrent modifications to the returned map may cause race conditions and undefined behavior. Use this method with caution in concurrent environments.

Example

Example for ReadHeavyCache GetItems

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewReadHeavyCache[int, string]()

	c.Set(1, "orange")
	c.Set(2, "lemon")

	items := c.GetItems()
	fmt.Println("Items:", items)
}
Output:

Items: map[1:orange 2:lemon]

func (*ReadHeavyCache[K, V]) Set

func (c *ReadHeavyCache[K, V]) Set(key K, value V)

Set sets a value in ReadHeavyCache, locking for the write operation

func (*ReadHeavyCache[K, V]) SetItems added in v0.2.1

func (c *ReadHeavyCache[K, V]) SetItems(items map[K]V)

SetItems replaces the internal map of cache items with the provided map. WARNING: This method does not copy the provided map. Ensure that no concurrent access is occurring while calling this method to avoid race conditions and undefined behavior.

Example

Example for ReadHeavyCache SetItems

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewReadHeavyCache[int, string]()

	c.SetItems(map[int]string{
		1: "peach",
		2: "plum",
	})

	items := c.GetItems()
	fmt.Println("Items after SetItems:", items)
}
Output:

Items after SetItems: map[1:peach 2:plum]

func (*ReadHeavyCache[K, V]) Size added in v0.2.1

func (c *ReadHeavyCache[K, V]) Size() int

Size returns the number of items currently in the cache.

Example

Example for ReadHeavyCache Size

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewReadHeavyCache[int, string]()

	c.Set(1, "orange")
	c.Set(2, "lemon")

	fmt.Println("Size:", c.Size())
}
Output:

Size: 2

type ReadHeavyCacheExpired added in v0.0.4

type ReadHeavyCacheExpired[K comparable, V any] struct {
	sync.RWMutex
	// contains filtered or unexported fields
}

ReadHeavyCacheExpired is a cache optimized for read-heavy operations with expiration support. It uses an RWMutex to allow concurrent reads and synchronized writes, storing values with expiration times.

Example

Example for ReadHeavyCacheExpired

package main

import (
	"fmt"
	"time"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewReadHeavyCacheExpired[int, string]()

	c.Set(1, "orange", 1*time.Second)

	if value, found := c.Get(1); found {
		fmt.Println("Found:", value)
	} else {
		fmt.Println("Not found")
	}

	// Expire immediately without waiting
	c.Set(1, "orange", -1*time.Second)
	if _, found := c.Get(1); !found {
		fmt.Println("Item has expired")
	}
}
Output:

Found: orange
Item has expired

func NewReadHeavyCacheExpired added in v0.0.4

func NewReadHeavyCacheExpired[K comparable, V any]() *ReadHeavyCacheExpired[K, V]

NewReadHeavyCacheExpired creates a new instance of ReadHeavyCacheExpired

func (*ReadHeavyCacheExpired[K, V]) Clear added in v0.2.0

func (c *ReadHeavyCacheExpired[K, V]) Clear()

Clear removes all items from ReadHeavyCacheExpired.

func (*ReadHeavyCacheExpired[K, V]) Delete added in v0.2.0

func (c *ReadHeavyCacheExpired[K, V]) Delete(key K)

Delete removes a key from ReadHeavyCacheExpired.

func (*ReadHeavyCacheExpired[K, V]) Get added in v0.0.4

func (c *ReadHeavyCacheExpired[K, V]) Get(key K) (V, bool)

Get retrieves a value from ReadHeavyCacheExpired; it reports false when the key is missing or the entry has expired

func (*ReadHeavyCacheExpired[K, V]) GetWithExpireStatus added in v0.5.1

func (c *ReadHeavyCacheExpired[K, V]) GetWithExpireStatus(key K) (V, bool, bool)

GetWithExpireStatus retrieves a value from ReadHeavyCacheExpired. It returns the value, whether it was found, and whether it is expired. When the item is expired, it still returns the stored value with expired=true. This is useful for implementing stale-while-revalidate behavior.

func (*ReadHeavyCacheExpired[K, V]) Set added in v0.0.4

func (c *ReadHeavyCacheExpired[K, V]) Set(key K, value V, duration time.Duration)

Set stores a value in ReadHeavyCacheExpired with the specified expiration duration

type ReadHeavyCacheInteger

type ReadHeavyCacheInteger[K comparable, V interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 | ~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr
}] struct {
	sync.RWMutex // ReadHeavyCacheInteger uses RWMutex for read-heavy scenarios
	// contains filtered or unexported fields
}

ReadHeavyCacheInteger is a cache optimized for read-heavy operations for integer-like types. It uses an RWMutex to allow concurrent reads and synchronized writes.

func NewReadHeavyCacheInteger

func NewReadHeavyCacheInteger[K comparable, V interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 | ~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr
}]() *ReadHeavyCacheInteger[K, V]

NewReadHeavyCacheInteger creates a new read-heavy cache for integer types

func (*ReadHeavyCacheInteger[K, V]) Clear added in v0.0.2

func (c *ReadHeavyCacheInteger[K, V]) Clear()

Clear removes all items from ReadHeavyCacheInteger.

func (*ReadHeavyCacheInteger[K, V]) Delete added in v0.2.0

func (c *ReadHeavyCacheInteger[K, V]) Delete(key K)

Delete removes a key from ReadHeavyCacheInteger.

func (*ReadHeavyCacheInteger[K, V]) Get

func (c *ReadHeavyCacheInteger[K, V]) Get(key K) (V, bool)

Get retrieves a value from ReadHeavyCacheInteger, using a read lock

func (*ReadHeavyCacheInteger[K, V]) GetItems added in v0.2.1

func (c *ReadHeavyCacheInteger[K, V]) GetItems() map[K]V

GetItems returns a direct reference to the internal map of cache items. WARNING: This method does not create a copy of the map. Concurrent modifications to the returned map may cause race conditions and undefined behavior. Use this method with caution in concurrent environments.

func (*ReadHeavyCacheInteger[K, V]) Incr

func (c *ReadHeavyCacheInteger[K, V]) Incr(key K, value V)

Incr increments a value in ReadHeavyCacheInteger, locking for the operation

func (*ReadHeavyCacheInteger[K, V]) Set

func (c *ReadHeavyCacheInteger[K, V]) Set(key K, value V)

Set sets a value in ReadHeavyCacheInteger, locking for the write operation

func (*ReadHeavyCacheInteger[K, V]) SetItems added in v0.2.1

func (c *ReadHeavyCacheInteger[K, V]) SetItems(items map[K]V)

SetItems replaces the internal map of cache items with the provided map. WARNING: This method does not copy the provided map. Ensure that no concurrent access is occurring while calling this method to avoid race conditions and undefined behavior.

func (*ReadHeavyCacheInteger[K, V]) Size added in v0.2.1

func (c *ReadHeavyCacheInteger[K, V]) Size() int

Size returns the number of items currently in the cache.

type RollingCache added in v0.3.0

type RollingCache[V any] struct {
	sync.Mutex
	// contains filtered or unexported fields
}

RollingCache is a thread-safe cache that uses a slice for storing elements. It supports Append and Rotate operations, and maintains an initial length for reset.

Example

Example for RollingCache Append and GetItems

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewRollingCache[int](10)

	// Append values
	c.Append(1)
	c.Append(2)
	c.Append(3)

	// Get the items
	items := c.GetItems()
	fmt.Println("Items:", items)
}
Output:

Items: [1 2 3]
Example (DynamicGrowth)

Example for RollingCache with dynamic growth

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewRollingCache[int](3)

	// Append more values than the initial length
	c.Append(1)
	c.Append(2)
	c.Append(3)
	c.Append(4) // This grows the slice beyond the initial length

	items := c.GetItems()
	fmt.Println("Items after appending more:", items)
}
Output:

Items after appending more: [1 2 3 4]

func NewRollingCache added in v0.3.0

func NewRollingCache[V any](length int) *RollingCache[V]

NewRollingCache creates a new RollingCache with the specified initial length.

func (*RollingCache[V]) Append added in v0.3.0

func (c *RollingCache[V]) Append(value V)

Append adds a value to the cache. The slice grows dynamically.

func (*RollingCache[V]) GetItems added in v0.3.0

func (c *RollingCache[V]) GetItems() []V

GetItems returns a copy of the current slice.

func (*RollingCache[V]) Rotate added in v0.3.0

func (c *RollingCache[V]) Rotate() []V

Rotate returns the current slice and replaces it with a fresh, empty slice preallocated to the initial capacity.

Example

Example for RollingCache Rotate

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewRollingCache[int](10)

	// Append values
	c.Append(1)
	c.Append(2)
	c.Append(3)

	// Rotate the cache
	rotated := c.Rotate()
	fmt.Println("Rotated items:", rotated)

	// Cache should now be empty
	fmt.Println("Items after rotation:", c.GetItems())
}
Output:

Rotated items: [1 2 3]
Items after rotation: []

func (*RollingCache[V]) Size added in v0.3.0

func (c *RollingCache[V]) Size() int

Size returns the number of elements currently in the cache.

Example

Example for RollingCache Size

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewRollingCache[int](10)

	// Initially empty
	fmt.Println("Size initially:", c.Size())

	// Append values
	c.Append(1)
	c.Append(2)
	fmt.Println("Size after appending:", c.Size())

	// Append another value
	c.Append(3)
	fmt.Println("Size after appending more:", c.Size())

}
Output:

Size initially: 0
Size after appending: 2
Size after appending more: 3

type SingleflightGroup added in v0.1.0

type SingleflightGroup[V any] struct {
	// contains filtered or unexported fields
}

SingleflightGroup manages single concurrent requests per key, ensuring that only one execution of a function occurs for a given key at a time.

This implementation is simplified compared to the official singleflight package and lacks advanced error handling and other features, such as:

  • Panic and runtime.Goexit handling: This implementation does not handle cases where the function panics or terminates abnormally. In the official implementation, errors from panic and Goexit are handled to prevent blocked goroutines.
  • Shared result indicator: The official singleflight implementation includes a boolean return value to indicate if the result was shared among multiple callers. This implementation does not include this feature.
  • Immediate synchronous cleanup: In this implementation, the completed result is removed from the map asynchronously. In the official implementation, cleanup is handled synchronously within the doCall function to ensure immediate memory release.
Example
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	sf := cache.NewSingleflightGroup[string]()

	v, err := sf.Do("example_key", func() (string, error) {
		return "result", nil
	})
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Println("Value:", v)
}
Output:

Value: result

func NewSingleflightGroup added in v0.1.0

func NewSingleflightGroup[V any]() *SingleflightGroup[V]

NewSingleflightGroup creates a new instance of SingleflightGroup, initialized with an empty map to store calls by key.

func (*SingleflightGroup[V]) Do added in v0.1.0

func (sf *SingleflightGroup[V]) Do(key string, fn func() (V, error)) (V, error)

Do ensures that for a given key, only one execution of fn occurs at a time. If a call for the key is already in progress, other calls wait for its completion and return the same result. Once complete, the result is stored and used for subsequent calls until it's removed from the map.

Unlike the official singleflight, this function does not provide:

  • Panic and Goexit handling
  • Shared result flag to indicate if the result was reused for multiple callers

type WriteHeavyCache

type WriteHeavyCache[K comparable, V any] struct {
	sync.Mutex // WriteHeavyCache uses Mutex for all operations
	// contains filtered or unexported fields
}

WriteHeavyCache is a cache optimized for write-heavy operations. It uses a Mutex to synchronize access to the cache items.

Example

Example for WriteHeavyCache

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewWriteHeavyCache[int, string]()

	c.Set(1, "apple")
	value, found := c.Get(1)

	if found {
		fmt.Println("Found:", value)
	} else {
		fmt.Println("Not found")
	}
}
Output:

Found: apple

func NewWriteHeavyCache

func NewWriteHeavyCache[K comparable, V any]() *WriteHeavyCache[K, V]

NewWriteHeavyCache creates a new instance of WriteHeavyCache

func (*WriteHeavyCache[K, V]) Clear added in v0.0.2

func (c *WriteHeavyCache[K, V]) Clear()

Clear removes all items from WriteHeavyCache

func (*WriteHeavyCache[K, V]) Delete added in v0.2.0

func (c *WriteHeavyCache[K, V]) Delete(key K)

Delete removes a key from WriteHeavyCache.

func (*WriteHeavyCache[K, V]) Get

func (c *WriteHeavyCache[K, V]) Get(key K) (V, bool)

Get retrieves a value from WriteHeavyCache, locking for read as well

func (*WriteHeavyCache[K, V]) GetItems added in v0.2.1

func (c *WriteHeavyCache[K, V]) GetItems() map[K]V

GetItems returns a direct reference to the internal map of cache items. WARNING: This method does not create a copy of the map. Concurrent modifications to the returned map may cause race conditions and undefined behavior. Use this method with caution in concurrent environments.

Example

Example for WriteHeavyCache GetItems

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewWriteHeavyCache[int, string]()

	c.Set(1, "apple")
	c.Set(2, "banana")

	items := c.GetItems()
	fmt.Println("Items:", items)
}
Output:

Items: map[1:apple 2:banana]

func (*WriteHeavyCache[K, V]) Set

func (c *WriteHeavyCache[K, V]) Set(key K, value V)

Set sets a value in WriteHeavyCache, locking for the write operation

func (*WriteHeavyCache[K, V]) SetItems added in v0.2.1

func (c *WriteHeavyCache[K, V]) SetItems(items map[K]V)

SetItems replaces the internal map of cache items with the provided map. WARNING: This method does not copy the provided map. Ensure that no concurrent access is occurring while calling this method to avoid race conditions and undefined behavior.

Example

Example for WriteHeavyCache SetItems

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewWriteHeavyCache[int, string]()

	c.SetItems(map[int]string{
		1: "grape",
		2: "cherry",
	})

	items := c.GetItems()
	fmt.Println("Items after SetItems:", items)
}
Output:

Items after SetItems: map[1:grape 2:cherry]

func (*WriteHeavyCache[K, V]) Size added in v0.2.1

func (c *WriteHeavyCache[K, V]) Size() int

Size returns the number of items currently in the cache.

Example

Example for WriteHeavyCache Size

package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewWriteHeavyCache[int, string]()

	c.Set(1, "apple")
	c.Set(2, "banana")

	fmt.Println("Size:", c.Size())
}
Output:

Size: 2

type WriteHeavyCacheExpired added in v0.0.4

type WriteHeavyCacheExpired[K comparable, V any] struct {
	sync.Mutex
	// contains filtered or unexported fields
}

WriteHeavyCacheExpired is a cache optimized for write-heavy operations with expiration support. It uses a Mutex to synchronize access and stores values with expiration times.

Example

Example for WriteHeavyCacheExpired

package main

import (
	"fmt"
	"time"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewWriteHeavyCacheExpired[int, string]()

	c.Set(1, "apple", 1*time.Second)

	if value, found := c.Get(1); found {
		fmt.Println("Found:", value)
	} else {
		fmt.Println("Not found")
	}

	// Expire immediately without waiting
	c.Set(1, "apple", -1*time.Second)
	if _, found := c.Get(1); !found {
		fmt.Println("Item has expired")
	}
}
Output:

Found: apple
Item has expired

func NewWriteHeavyCacheExpired added in v0.0.4

func NewWriteHeavyCacheExpired[K comparable, V any]() *WriteHeavyCacheExpired[K, V]

NewWriteHeavyCacheExpired creates a new instance of WriteHeavyCacheExpired

func (*WriteHeavyCacheExpired[K, V]) Clear added in v0.2.0

func (c *WriteHeavyCacheExpired[K, V]) Clear()

Clear removes all items from WriteHeavyCacheExpired.

func (*WriteHeavyCacheExpired[K, V]) Delete added in v0.2.0

func (c *WriteHeavyCacheExpired[K, V]) Delete(key K)

Delete removes a key from WriteHeavyCacheExpired.

func (*WriteHeavyCacheExpired[K, V]) Get added in v0.0.4

func (c *WriteHeavyCacheExpired[K, V]) Get(key K) (V, bool)

Get retrieves a value from WriteHeavyCacheExpired; it reports false when the key is missing or the entry has expired

func (*WriteHeavyCacheExpired[K, V]) GetWithExpireStatus added in v0.5.1

func (c *WriteHeavyCacheExpired[K, V]) GetWithExpireStatus(key K) (V, bool, bool)

GetWithExpireStatus retrieves a value from WriteHeavyCacheExpired. It returns the value, whether it was found, and whether it is expired. When the item is expired, it still returns the stored value with expired=true. This is useful for implementing stale-while-revalidate behavior.

func (*WriteHeavyCacheExpired[K, V]) Set added in v0.0.4

func (c *WriteHeavyCacheExpired[K, V]) Set(key K, value V, duration time.Duration)

Set stores a value in WriteHeavyCacheExpired with the specified expiration duration

type WriteHeavyCacheInteger

type WriteHeavyCacheInteger[K comparable, V interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 | ~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr
}] struct {
	sync.Mutex // WriteHeavyCacheInteger uses Mutex for write-heavy scenarios
	// contains filtered or unexported fields
}

WriteHeavyCacheInteger is a cache optimized for write-heavy operations for integer-like types. It uses a Mutex to synchronize access to the cache items.

func NewWriteHeavyCacheInteger

func NewWriteHeavyCacheInteger[K comparable, V interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 | ~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr
}]() *WriteHeavyCacheInteger[K, V]

NewWriteHeavyCacheInteger creates a new write-heavy cache for integer types

func (*WriteHeavyCacheInteger[K, V]) Clear added in v0.0.2

func (c *WriteHeavyCacheInteger[K, V]) Clear()

Clear removes all items from WriteHeavyCacheInteger.

func (*WriteHeavyCacheInteger[K, V]) Delete added in v0.2.0

func (c *WriteHeavyCacheInteger[K, V]) Delete(key K)

Delete removes a key from WriteHeavyCacheInteger.

func (*WriteHeavyCacheInteger[K, V]) Get

func (c *WriteHeavyCacheInteger[K, V]) Get(key K) (V, bool)

Get retrieves a value from WriteHeavyCacheInteger, locking for read as well

func (*WriteHeavyCacheInteger[K, V]) GetItems added in v0.2.1

func (c *WriteHeavyCacheInteger[K, V]) GetItems() map[K]V

GetItems returns a direct reference to the internal map of cache items. WARNING: This method does not create a copy of the map. Concurrent modifications to the returned map may cause race conditions and undefined behavior. Use this method with caution in concurrent environments.

func (*WriteHeavyCacheInteger[K, V]) Incr

func (c *WriteHeavyCacheInteger[K, V]) Incr(key K, value V)

Incr increments a value in WriteHeavyCacheInteger, locking for the operation

func (*WriteHeavyCacheInteger[K, V]) Set

func (c *WriteHeavyCacheInteger[K, V]) Set(key K, value V)

Set sets a value in WriteHeavyCacheInteger, locking for the write operation

func (*WriteHeavyCacheInteger[K, V]) SetItems added in v0.2.1

func (c *WriteHeavyCacheInteger[K, V]) SetItems(items map[K]V)

SetItems replaces the internal map of cache items with the provided map. WARNING: This method does not copy the provided map. Ensure that no concurrent access is occurring while calling this method to avoid race conditions and undefined behavior.

func (*WriteHeavyCacheInteger[K, V]) Size added in v0.2.1

func (c *WriteHeavyCacheInteger[K, V]) Size() int

Size returns the number of items currently in the cache.
