Documentation
Index ¶
- type LockManager
- type ReadHeavyCache
- func (c *ReadHeavyCache[K, V]) Clear()
- func (c *ReadHeavyCache[K, V]) Delete(key K)
- func (c *ReadHeavyCache[K, V]) Get(key K) (V, bool)
- func (c *ReadHeavyCache[K, V]) GetItems() map[K]V
- func (c *ReadHeavyCache[K, V]) Set(key K, value V)
- func (c *ReadHeavyCache[K, V]) SetItems(items map[K]V)
- func (c *ReadHeavyCache[K, V]) Size() int
- type ReadHeavyCacheExpired
- func (c *ReadHeavyCacheExpired[K, V]) Clear()
- func (c *ReadHeavyCacheExpired[K, V]) Delete(key K)
- func (c *ReadHeavyCacheExpired[K, V]) Get(key K) (V, bool)
- func (c *ReadHeavyCacheExpired[K, V]) GetWithExpireStatus(key K) (V, bool, bool)
- func (c *ReadHeavyCacheExpired[K, V]) Set(key K, value V, duration time.Duration)
- type ReadHeavyCacheInteger
- func (c *ReadHeavyCacheInteger[K, V]) Clear()
- func (c *ReadHeavyCacheInteger[K, V]) Delete(key K)
- func (c *ReadHeavyCacheInteger[K, V]) Get(key K) (V, bool)
- func (c *ReadHeavyCacheInteger[K, V]) GetItems() map[K]V
- func (c *ReadHeavyCacheInteger[K, V]) Incr(key K, value V)
- func (c *ReadHeavyCacheInteger[K, V]) Set(key K, value V)
- func (c *ReadHeavyCacheInteger[K, V]) SetItems(items map[K]V)
- func (c *ReadHeavyCacheInteger[K, V]) Size() int
- type RollingCache
- type SingleflightGroup
- type WriteHeavyCache
- func (c *WriteHeavyCache[K, V]) Clear()
- func (c *WriteHeavyCache[K, V]) Delete(key K)
- func (c *WriteHeavyCache[K, V]) Get(key K) (V, bool)
- func (c *WriteHeavyCache[K, V]) GetItems() map[K]V
- func (c *WriteHeavyCache[K, V]) Set(key K, value V)
- func (c *WriteHeavyCache[K, V]) SetItems(items map[K]V)
- func (c *WriteHeavyCache[K, V]) Size() int
- type WriteHeavyCacheExpired
- func (c *WriteHeavyCacheExpired[K, V]) Clear()
- func (c *WriteHeavyCacheExpired[K, V]) Delete(key K)
- func (c *WriteHeavyCacheExpired[K, V]) Get(key K) (V, bool)
- func (c *WriteHeavyCacheExpired[K, V]) GetWithExpireStatus(key K) (V, bool, bool)
- func (c *WriteHeavyCacheExpired[K, V]) Set(key K, value V, duration time.Duration)
- type WriteHeavyCacheInteger
- func (c *WriteHeavyCacheInteger[K, V]) Clear()
- func (c *WriteHeavyCacheInteger[K, V]) Delete(key K)
- func (c *WriteHeavyCacheInteger[K, V]) Get(key K) (V, bool)
- func (c *WriteHeavyCacheInteger[K, V]) GetItems() map[K]V
- func (c *WriteHeavyCacheInteger[K, V]) Incr(key K, value V)
- func (c *WriteHeavyCacheInteger[K, V]) Set(key K, value V)
- func (c *WriteHeavyCacheInteger[K, V]) SetItems(items map[K]V)
- func (c *WriteHeavyCacheInteger[K, V]) Size() int
Examples ¶
- LockManager
- LockManager (WithLockAndUnlock)
- LockManager.GetAndLock
- ReadHeavyCache
- ReadHeavyCache.GetItems
- ReadHeavyCache.SetItems
- ReadHeavyCache.Size
- ReadHeavyCacheExpired
- RollingCache
- RollingCache (DynamicGrowth)
- RollingCache.Rotate
- RollingCache.Size
- SingleflightGroup
- WriteHeavyCache
- WriteHeavyCache.GetItems
- WriteHeavyCache.SetItems
- WriteHeavyCache.Size
- WriteHeavyCacheExpired
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type LockManager ¶ added in v0.0.3
type LockManager[K comparable] struct {
	// contains filtered or unexported fields
}
LockManager manages a set of mutexes identified by keys of type K. It is designed to provide fine-grained locking for operations on individual keys.
Example ¶
ExampleLockManager provides an example usage of LockManager.
package main

import (
	"fmt"
	"sync"

	"github.com/catatsuy/cache"
)

func main() {
	// Create a new LockManager for integer keys
	lm := cache.NewLockManager[int]()

	var wg sync.WaitGroup
	firstDone := make(chan struct{})

	// Simulate concurrent access to the same key
	key := 1

	// Main goroutine does work first deterministically
	lm.Lock(key)
	fmt.Println("Locked")

	// Simulate some work
	fmt.Println("Doing work")

	lm.Unlock(key)
	fmt.Println("Unlocked")

	// First goroutine locks and performs some work
	wg.Go(func() {
		lm.Lock(key)
		fmt.Println("Goroutine 1: Locked")

		// Simulate some work
		fmt.Println("Goroutine 1: Doing work")

		lm.Unlock(key)
		fmt.Println("Goroutine 1: Unlocked")
		close(firstDone)
	})

	wg.Go(func() {
		<-firstDone // ensure goroutine 1 runs first
		defer lm.GetAndLock(key).Unlock()
		fmt.Println("Goroutine 2: Locked")

		// Simulate some work
		fmt.Println("Goroutine 2: Doing work")
		fmt.Println("Goroutine 2: Unlocked")
	})

	wg.Wait()
}

Output:

Locked
Doing work
Unlocked
Goroutine 1: Locked
Goroutine 1: Doing work
Goroutine 1: Unlocked
Goroutine 2: Locked
Goroutine 2: Doing work
Goroutine 2: Unlocked
Example (WithLockAndUnlock) ¶
Example for LockManager with Lock and Unlock
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	lm := cache.NewLockManager[int]()

	lm.Lock(1)
	fmt.Println("Resource 1 is locked")
	fmt.Println("Resource 1 is being used")
	lm.Unlock(1)
	fmt.Println("Resource 1 is unlocked")
}

Output:

Resource 1 is locked
Resource 1 is being used
Resource 1 is unlocked
func NewLockManager ¶ added in v0.0.3
func NewLockManager[K comparable]() *LockManager[K]
NewLockManager creates a new instance of LockManager.
func (*LockManager[K]) GetAndLock ¶ added in v0.0.3
func (lm *LockManager[K]) GetAndLock(id K) *sync.Mutex
GetAndLock retrieves the mutex associated with the given key, locks it, and returns the locked mutex. This is useful for cases where you want to obtain and lock the mutex in a single line. For example, you can use it like this:
defer lm.GetAndLock(id).Unlock()
This pattern allows you to ensure the mutex is unlocked when the surrounding function exits.
Example ¶
Example for LockManager with GetAndLock
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	var lm = cache.NewLockManager[int]()

	heavyOperation := func(id int) {
		defer lm.GetAndLock(id).Unlock()
		fmt.Printf("Starting heavy operation on resource %d\n", id)
		// simulate heavy work without slowing down the test
		// time.Sleep(2 * time.Second)
		fmt.Printf("Completed heavy operation on resource %d\n", id)
	}

	heavyOperation(1)
	heavyOperation(2)
}

Output:

Starting heavy operation on resource 1
Completed heavy operation on resource 1
Starting heavy operation on resource 2
Completed heavy operation on resource 2
func (*LockManager[K]) Lock ¶ added in v0.0.3
func (lm *LockManager[K]) Lock(id K)
Lock locks the mutex associated with the given key.
func (*LockManager[K]) Unlock ¶ added in v0.0.3
func (lm *LockManager[K]) Unlock(id K)
Unlock unlocks the mutex associated with the given key.
type ReadHeavyCache ¶
type ReadHeavyCache[K comparable, V any] struct {
	sync.RWMutex // ReadHeavyCache allows concurrent read access with RWMutex
	// contains filtered or unexported fields
}
ReadHeavyCache is a cache optimized for read-heavy operations. It uses an RWMutex to allow concurrent reads and synchronized writes.
Example ¶
Example for ReadHeavyCache
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewReadHeavyCache[int, string]()

	c.Set(1, "orange")
	value, found := c.Get(1)
	if found {
		fmt.Println("Found:", value)
	} else {
		fmt.Println("Not found")
	}
}

Output:

Found: orange
func NewReadHeavyCache ¶
func NewReadHeavyCache[K comparable, V any]() *ReadHeavyCache[K, V]
NewReadHeavyCache creates a new instance of ReadHeavyCache
func (*ReadHeavyCache[K, V]) Clear ¶ added in v0.0.2
func (c *ReadHeavyCache[K, V]) Clear()
Clear removes all items from ReadHeavyCache
func (*ReadHeavyCache[K, V]) Delete ¶ added in v0.2.0
func (c *ReadHeavyCache[K, V]) Delete(key K)
Delete removes a key from ReadHeavyCache.
func (*ReadHeavyCache[K, V]) Get ¶
func (c *ReadHeavyCache[K, V]) Get(key K) (V, bool)
Get retrieves a value from ReadHeavyCache, using a read lock
func (*ReadHeavyCache[K, V]) GetItems ¶ added in v0.2.1
func (c *ReadHeavyCache[K, V]) GetItems() map[K]V
GetItems returns a direct reference to the internal map of cache items. WARNING: This method does not create a copy of the map. Concurrent modifications to the returned map may cause race conditions and undefined behavior. Use this method with caution in concurrent environments.
Example ¶
Example for ReadHeavyCache GetItems
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewReadHeavyCache[int, string]()

	c.Set(1, "orange")
	c.Set(2, "lemon")

	items := c.GetItems()
	fmt.Println("Items:", items)
}

Output:

Items: map[1:orange 2:lemon]
func (*ReadHeavyCache[K, V]) Set ¶
func (c *ReadHeavyCache[K, V]) Set(key K, value V)
Set sets a value in ReadHeavyCache, locking for the write operation
func (*ReadHeavyCache[K, V]) SetItems ¶ added in v0.2.1
func (c *ReadHeavyCache[K, V]) SetItems(items map[K]V)
SetItems replaces the internal map of cache items with the provided map. WARNING: This method does not copy the provided map. Ensure that no concurrent access is occurring while calling this method to avoid race conditions and undefined behavior.
Example ¶
Example for ReadHeavyCache SetItems
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewReadHeavyCache[int, string]()

	c.SetItems(map[int]string{
		1: "peach",
		2: "plum",
	})

	items := c.GetItems()
	fmt.Println("Items after SetItems:", items)
}

Output:

Items after SetItems: map[1:peach 2:plum]
func (*ReadHeavyCache[K, V]) Size ¶ added in v0.2.1
func (c *ReadHeavyCache[K, V]) Size() int
Size returns the number of items currently in the cache.
Example ¶
Example for ReadHeavyCache Size
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewReadHeavyCache[int, string]()

	c.Set(1, "orange")
	c.Set(2, "lemon")

	fmt.Println("Size:", c.Size())
}

Output:

Size: 2
type ReadHeavyCacheExpired ¶ added in v0.0.4
type ReadHeavyCacheExpired[K comparable, V any] struct {
	sync.RWMutex
	// contains filtered or unexported fields
}
ReadHeavyCacheExpired is a cache optimized for read-heavy operations with expiration support. It uses an RWMutex to allow concurrent reads and synchronized writes, storing values with expiration times.
Example ¶
Example for ReadHeavyCacheExpired
package main

import (
	"fmt"
	"time"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewReadHeavyCacheExpired[int, string]()

	c.Set(1, "orange", 1*time.Second)

	if value, found := c.Get(1); found {
		fmt.Println("Found:", value)
	} else {
		fmt.Println("Not found")
	}

	// Expire immediately without waiting
	c.Set(1, "orange", -1*time.Second)

	if _, found := c.Get(1); !found {
		fmt.Println("Item has expired")
	}
}

Output:

Found: orange
Item has expired
func NewReadHeavyCacheExpired ¶ added in v0.0.4
func NewReadHeavyCacheExpired[K comparable, V any]() *ReadHeavyCacheExpired[K, V]
NewReadHeavyCacheExpired creates a new instance of ReadHeavyCacheExpired
func (*ReadHeavyCacheExpired[K, V]) Clear ¶ added in v0.2.0
func (c *ReadHeavyCacheExpired[K, V]) Clear()
Clear removes all items from ReadHeavyCacheExpired.
func (*ReadHeavyCacheExpired[K, V]) Delete ¶ added in v0.2.0
func (c *ReadHeavyCacheExpired[K, V]) Delete(key K)
Delete removes a key from ReadHeavyCacheExpired.
func (*ReadHeavyCacheExpired[K, V]) Get ¶ added in v0.0.4
func (c *ReadHeavyCacheExpired[K, V]) Get(key K) (V, bool)
Get retrieves a value from ReadHeavyCacheExpired, returning it only if it has not expired.
func (*ReadHeavyCacheExpired[K, V]) GetWithExpireStatus ¶ added in v0.5.1
func (c *ReadHeavyCacheExpired[K, V]) GetWithExpireStatus(key K) (V, bool, bool)
GetWithExpireStatus retrieves a value from ReadHeavyCacheExpired. It returns the value, whether it was found, and whether it is expired. When the item is expired, it still returns the stored value with expired=true. This is useful for implementing stale-while-revalidate behavior.
func (*ReadHeavyCacheExpired[K, V]) Set ¶ added in v0.0.4
func (c *ReadHeavyCacheExpired[K, V]) Set(key K, value V, duration time.Duration)
Set stores a value in ReadHeavyCacheExpired with the specified expiration duration.
type ReadHeavyCacheInteger ¶
type ReadHeavyCacheInteger[K comparable, V interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 |
		~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr
}] struct {
	sync.RWMutex // ReadHeavyCacheInteger uses RWMutex for read-heavy scenarios
	// contains filtered or unexported fields
}
ReadHeavyCacheInteger is a cache optimized for read-heavy operations for integer-like types. It uses an RWMutex to allow concurrent reads and synchronized writes.
func NewReadHeavyCacheInteger ¶
func NewReadHeavyCacheInteger[K comparable, V interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 |
		~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr
}]() *ReadHeavyCacheInteger[K, V]
NewReadHeavyCacheInteger creates a new read-heavy cache for integer types
func (*ReadHeavyCacheInteger[K, V]) Clear ¶ added in v0.0.2
func (c *ReadHeavyCacheInteger[K, V]) Clear()
Clear removes all items from ReadHeavyCacheInteger.
func (*ReadHeavyCacheInteger[K, V]) Delete ¶ added in v0.2.0
func (c *ReadHeavyCacheInteger[K, V]) Delete(key K)
Delete removes a key from ReadHeavyCacheInteger.
func (*ReadHeavyCacheInteger[K, V]) Get ¶
func (c *ReadHeavyCacheInteger[K, V]) Get(key K) (V, bool)
Get retrieves a value from ReadHeavyCacheInteger, using a read lock
func (*ReadHeavyCacheInteger[K, V]) GetItems ¶ added in v0.2.1
func (c *ReadHeavyCacheInteger[K, V]) GetItems() map[K]V
GetItems returns a direct reference to the internal map of cache items. WARNING: This method does not create a copy of the map. Concurrent modifications to the returned map may cause race conditions and undefined behavior. Use this method with caution in concurrent environments.
func (*ReadHeavyCacheInteger[K, V]) Incr ¶
func (c *ReadHeavyCacheInteger[K, V]) Incr(key K, value V)
Incr increments a value in ReadHeavyCacheInteger, locking for the operation
func (*ReadHeavyCacheInteger[K, V]) Set ¶
func (c *ReadHeavyCacheInteger[K, V]) Set(key K, value V)
Set sets a value in ReadHeavyCacheInteger, locking for the write operation
func (*ReadHeavyCacheInteger[K, V]) SetItems ¶ added in v0.2.1
func (c *ReadHeavyCacheInteger[K, V]) SetItems(items map[K]V)
SetItems replaces the internal map of cache items with the provided map. WARNING: This method does not copy the provided map. Ensure that no concurrent access is occurring while calling this method to avoid race conditions and undefined behavior.
func (*ReadHeavyCacheInteger[K, V]) Size ¶ added in v0.2.1
func (c *ReadHeavyCacheInteger[K, V]) Size() int
Size returns the number of items currently in the cache.
type RollingCache ¶ added in v0.3.0
RollingCache is a thread-safe cache that uses a slice for storing elements. It supports Append and Rotate operations, and maintains an initial length for reset.
Example ¶
Example for RollingCache Append and GetItems
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewRollingCache[int](10)

	// Append values
	c.Append(1)
	c.Append(2)
	c.Append(3)

	// Get the items
	items := c.GetItems()
	fmt.Println("Items:", items)
}

Output:

Items: [1 2 3]
Example (DynamicGrowth) ¶
Example for RollingCache with dynamic growth
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewRollingCache[int](3)

	// Append more values than the initial length
	c.Append(1)
	c.Append(2)
	c.Append(3)
	c.Append(4) // This grows the slice beyond the initial length

	items := c.GetItems()
	fmt.Println("Items after appending more:", items)
}

Output:

Items after appending more: [1 2 3 4]
func NewRollingCache ¶ added in v0.3.0
func NewRollingCache[V any](length int) *RollingCache[V]
NewRollingCache creates a new RollingCache with the specified initial length.
func (*RollingCache[V]) Append ¶ added in v0.3.0
func (c *RollingCache[V]) Append(value V)
Append adds a value to the cache. The slice grows dynamically.
func (*RollingCache[V]) GetItems ¶ added in v0.3.0
func (c *RollingCache[V]) GetItems() []V
GetItems returns a copy of the current slice.
func (*RollingCache[V]) Rotate ¶ added in v0.3.0
func (c *RollingCache[V]) Rotate() []V
Rotate returns the current slice and replaces it with an empty slice of the initial length.
Example ¶
Example for RollingCache Rotate
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewRollingCache[int](10)

	// Append values
	c.Append(1)
	c.Append(2)
	c.Append(3)

	// Rotate the cache
	rotated := c.Rotate()
	fmt.Println("Rotated items:", rotated)

	// Cache should now be empty
	fmt.Println("Items after rotation:", c.GetItems())
}

Output:

Rotated items: [1 2 3]
Items after rotation: []
func (*RollingCache[V]) Size ¶ added in v0.3.0
func (c *RollingCache[V]) Size() int
Size returns the number of elements currently in the cache.
Example ¶
Example for RollingCache Size
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewRollingCache[int](10)

	// Initially empty
	fmt.Println("Size initially:", c.Size())

	// Append values
	c.Append(1)
	c.Append(2)
	fmt.Println("Size after appending:", c.Size())

	// Append another value
	c.Append(3)
	fmt.Println("Size after appending more:", c.Size())
}

Output:

Size initially: 0
Size after appending: 2
Size after appending more: 3
type SingleflightGroup ¶ added in v0.1.0
type SingleflightGroup[V any] struct {
	// contains filtered or unexported fields
}
SingleflightGroup manages single concurrent requests per key, ensuring that only one execution of a function occurs for a given key at a time.
This implementation is simplified compared to the official singleflight package and lacks advanced error handling and other features, such as:
- Panic and runtime.Goexit handling: This implementation does not handle cases where the function panics or terminates abnormally. In the official implementation, errors from panic and Goexit are handled to prevent blocked goroutines.
- Shared result indicator: The official singleflight implementation includes a boolean return value to indicate if the result was shared among multiple callers. This implementation does not include this feature.
- Immediate synchronous cleanup: In this implementation, the completed result is removed from the map asynchronously. In the official implementation, cleanup is handled synchronously within the doCall function to ensure immediate memory release.
Example ¶
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	sf := cache.NewSingleflightGroup[string]()

	v, err := sf.Do("example_key", func() (string, error) {
		return "result", nil
	})
	if err != nil {
		fmt.Println("Error:", err)
		return
	}

	fmt.Println("Value:", v)
}

Output:

Value: result
func NewSingleflightGroup ¶ added in v0.1.0
func NewSingleflightGroup[V any]() *SingleflightGroup[V]
NewSingleflightGroup creates a new instance of SingleflightGroup, initialized with an empty map to store calls by key.
func (*SingleflightGroup[V]) Do ¶ added in v0.1.0
func (sf *SingleflightGroup[V]) Do(key string, fn func() (V, error)) (V, error)
Do ensures that for a given key, only one execution of fn occurs at a time. If a call for the key is already in progress, other calls wait for its completion and return the same result. Once complete, the result is stored and used for subsequent calls until it's removed from the map.
Unlike the official singleflight, this function does not provide:
- Panic and Goexit handling
- Shared result flag to indicate if the result was reused for multiple callers
type WriteHeavyCache ¶
type WriteHeavyCache[K comparable, V any] struct {
	sync.Mutex // WriteHeavyCache uses Mutex for all operations
	// contains filtered or unexported fields
}
WriteHeavyCache is a cache optimized for write-heavy operations. It uses a Mutex to synchronize access to the cache items.
Example ¶
Example for WriteHeavyCache
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewWriteHeavyCache[int, string]()

	c.Set(1, "apple")
	value, found := c.Get(1)
	if found {
		fmt.Println("Found:", value)
	} else {
		fmt.Println("Not found")
	}
}

Output:

Found: apple
func NewWriteHeavyCache ¶
func NewWriteHeavyCache[K comparable, V any]() *WriteHeavyCache[K, V]
NewWriteHeavyCache creates a new instance of WriteHeavyCache
func (*WriteHeavyCache[K, V]) Clear ¶ added in v0.0.2
func (c *WriteHeavyCache[K, V]) Clear()
Clear removes all items from WriteHeavyCache
func (*WriteHeavyCache[K, V]) Delete ¶ added in v0.2.0
func (c *WriteHeavyCache[K, V]) Delete(key K)
Delete removes a key from WriteHeavyCache.
func (*WriteHeavyCache[K, V]) Get ¶
func (c *WriteHeavyCache[K, V]) Get(key K) (V, bool)
Get retrieves a value from WriteHeavyCache, locking for read as well
func (*WriteHeavyCache[K, V]) GetItems ¶ added in v0.2.1
func (c *WriteHeavyCache[K, V]) GetItems() map[K]V
GetItems returns a direct reference to the internal map of cache items. WARNING: This method does not create a copy of the map. Concurrent modifications to the returned map may cause race conditions and undefined behavior. Use this method with caution in concurrent environments.
Example ¶
Example for WriteHeavyCache GetItems
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewWriteHeavyCache[int, string]()

	c.Set(1, "apple")
	c.Set(2, "banana")

	items := c.GetItems()
	fmt.Println("Items:", items)
}

Output:

Items: map[1:apple 2:banana]
func (*WriteHeavyCache[K, V]) Set ¶
func (c *WriteHeavyCache[K, V]) Set(key K, value V)
Set sets a value in WriteHeavyCache, locking for the write operation
func (*WriteHeavyCache[K, V]) SetItems ¶ added in v0.2.1
func (c *WriteHeavyCache[K, V]) SetItems(items map[K]V)
SetItems replaces the internal map of cache items with the provided map. WARNING: This method does not copy the provided map. Ensure that no concurrent access is occurring while calling this method to avoid race conditions and undefined behavior.
Example ¶
Example for WriteHeavyCache SetItems
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewWriteHeavyCache[int, string]()

	c.SetItems(map[int]string{
		1: "grape",
		2: "cherry",
	})

	items := c.GetItems()
	fmt.Println("Items after SetItems:", items)
}

Output:

Items after SetItems: map[1:grape 2:cherry]
func (*WriteHeavyCache[K, V]) Size ¶ added in v0.2.1
func (c *WriteHeavyCache[K, V]) Size() int
Size returns the number of items currently in the cache.
Example ¶
Example for WriteHeavyCache Size
package main

import (
	"fmt"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewWriteHeavyCache[int, string]()

	c.Set(1, "apple")
	c.Set(2, "banana")

	fmt.Println("Size:", c.Size())
}

Output:

Size: 2
type WriteHeavyCacheExpired ¶ added in v0.0.4
type WriteHeavyCacheExpired[K comparable, V any] struct {
	sync.Mutex
	// contains filtered or unexported fields
}
WriteHeavyCacheExpired is a cache optimized for write-heavy operations with expiration support. It uses a Mutex to synchronize access and stores values with expiration times.
Example ¶
Example for WriteHeavyCacheExpired
package main

import (
	"fmt"
	"time"

	"github.com/catatsuy/cache"
)

func main() {
	c := cache.NewWriteHeavyCacheExpired[int, string]()

	c.Set(1, "apple", 1*time.Second)

	if value, found := c.Get(1); found {
		fmt.Println("Found:", value)
	} else {
		fmt.Println("Not found")
	}

	// Expire immediately without waiting
	c.Set(1, "apple", -1*time.Second)

	if _, found := c.Get(1); !found {
		fmt.Println("Item has expired")
	}
}

Output:

Found: apple
Item has expired
func NewWriteHeavyCacheExpired ¶ added in v0.0.4
func NewWriteHeavyCacheExpired[K comparable, V any]() *WriteHeavyCacheExpired[K, V]
NewWriteHeavyCacheExpired creates a new instance of WriteHeavyCacheExpired
func (*WriteHeavyCacheExpired[K, V]) Clear ¶ added in v0.2.0
func (c *WriteHeavyCacheExpired[K, V]) Clear()
Clear removes all items from WriteHeavyCacheExpired.
func (*WriteHeavyCacheExpired[K, V]) Delete ¶ added in v0.2.0
func (c *WriteHeavyCacheExpired[K, V]) Delete(key K)
Delete removes a key from WriteHeavyCacheExpired.
func (*WriteHeavyCacheExpired[K, V]) Get ¶ added in v0.0.4
func (c *WriteHeavyCacheExpired[K, V]) Get(key K) (V, bool)
Get retrieves a value from WriteHeavyCacheExpired, returning it only if it has not expired.
func (*WriteHeavyCacheExpired[K, V]) GetWithExpireStatus ¶ added in v0.5.1
func (c *WriteHeavyCacheExpired[K, V]) GetWithExpireStatus(key K) (V, bool, bool)
GetWithExpireStatus retrieves a value from WriteHeavyCacheExpired. It returns the value, whether it was found, and whether it is expired. When the item is expired, it still returns the stored value with expired=true. This is useful for implementing stale-while-revalidate behavior.
func (*WriteHeavyCacheExpired[K, V]) Set ¶ added in v0.0.4
func (c *WriteHeavyCacheExpired[K, V]) Set(key K, value V, duration time.Duration)
Set stores a value in WriteHeavyCacheExpired with the specified expiration duration.
type WriteHeavyCacheInteger ¶
type WriteHeavyCacheInteger[K comparable, V interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 |
		~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr
}] struct {
	sync.Mutex // WriteHeavyCacheInteger uses Mutex for write-heavy scenarios
	// contains filtered or unexported fields
}
WriteHeavyCacheInteger is a cache optimized for write-heavy operations for integer-like types. It uses a Mutex to synchronize access to the cache items.
func NewWriteHeavyCacheInteger ¶
func NewWriteHeavyCacheInteger[K comparable, V interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 |
		~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr
}]() *WriteHeavyCacheInteger[K, V]
NewWriteHeavyCacheInteger creates a new write-heavy cache for integer types
func (*WriteHeavyCacheInteger[K, V]) Clear ¶ added in v0.0.2
func (c *WriteHeavyCacheInteger[K, V]) Clear()
Clear removes all items from WriteHeavyCacheInteger.
func (*WriteHeavyCacheInteger[K, V]) Delete ¶ added in v0.2.0
func (c *WriteHeavyCacheInteger[K, V]) Delete(key K)
Delete removes a key from WriteHeavyCacheInteger.
func (*WriteHeavyCacheInteger[K, V]) Get ¶
func (c *WriteHeavyCacheInteger[K, V]) Get(key K) (V, bool)
Get retrieves a value from WriteHeavyCacheInteger, locking for read as well
func (*WriteHeavyCacheInteger[K, V]) GetItems ¶ added in v0.2.1
func (c *WriteHeavyCacheInteger[K, V]) GetItems() map[K]V
GetItems returns a direct reference to the internal map of cache items. WARNING: This method does not create a copy of the map. Concurrent modifications to the returned map may cause race conditions and undefined behavior. Use this method with caution in concurrent environments.
func (*WriteHeavyCacheInteger[K, V]) Incr ¶
func (c *WriteHeavyCacheInteger[K, V]) Incr(key K, value V)
Incr increments a value in WriteHeavyCacheInteger, locking for the operation
func (*WriteHeavyCacheInteger[K, V]) Set ¶
func (c *WriteHeavyCacheInteger[K, V]) Set(key K, value V)
Set sets a value in WriteHeavyCacheInteger, locking for the write operation
func (*WriteHeavyCacheInteger[K, V]) SetItems ¶ added in v0.2.1
func (c *WriteHeavyCacheInteger[K, V]) SetItems(items map[K]V)
SetItems replaces the internal map of cache items with the provided map. WARNING: This method does not copy the provided map. Ensure that no concurrent access is occurring while calling this method to avoid race conditions and undefined behavior.
func (*WriteHeavyCacheInteger[K, V]) Size ¶ added in v0.2.1
func (c *WriteHeavyCacheInteger[K, V]) Size() int
Size returns the number of items currently in the cache.