Documentation ¶
Overview ¶
Package redlock provides Redis-backed distributed lock implementations.
This package offers two complementary lock types for different deployment scenarios and emphasizes strict safety through fencing tokens and consensus.
For in-depth usage examples, idiom references, and integration guides, see the package README.md.
Core Components ¶
- Lock: A distributed lock backed by a single Redis instance. It supports automatic retries, configurable backoff (via Waiter), and atomic operations using Lua scripts.
- DistributedLock: An implementation of the Redlock algorithm for environments requiring high availability. It acquires locks across multiple independent Redis instances and uses quorum-based consensus (N/2 + 1) to determine ownership.
- Waiter: An interface for controlling retry behavior and backoff strategies during lock acquisition. Built-in implementations include JitterWait and ExponentialWait.
Trade-offs ¶
- Performance vs Safety: Lock provides faster acquisition via a single network hop but sacrifices availability if the Redis node fails. DistributedLock guarantees safety and availability during node failures at the cost of higher latency.
- Auto-renewal constraints: Background watchdogs (Watch, WatchWithInterval, WatchDog) rely exclusively on context cancellation for termination. They will NOT halt automatically if the underlying lock is lost or TTL extension fails.
Fencing Tokens ¶
Fencing tokens are UUID strings generated upon successful lock acquisition. These tokens must be passed to release or extend operations to prevent race conditions and unsafe lock hand-offs. They can also be used by downstream systems to detect stale locks.
Configuration ¶
Both lock types use functional options for configuration:
- Lock: WithWaiter (configuring JitterWaitOption or ExponentialWaitOption).
- DistributedLock: WithClockDriftFactor, WithClockDriftBuffer, WithReleaseTimeout, and WithDistWaiter.
Sentinel Errors ¶
The package exports sentinel errors for reliable error checking with errors.Is:
- ErrLockAlreadyHeld: The requested lock is currently owned by another client.
- ErrLockNotHeld: Attempting to extend or release an unowned lock.
- ErrMaxRetryExceeded: Lock acquisition failed after exhausting all retry attempts.
- ErrValidityExpired: The distributed lock was acquired, but its validity duration was completely consumed by clock drift or acquisition latency.
References ¶
- Redis Redlock Algorithm: https://redis.io/docs/latest/develop/clients/patterns/distributed-locks/
- Martin Kleppmann's Analysis: https://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html
Index ¶
- Variables
- func Watch(ctx context.Context, locker Locker, key, fencing string, ttl time.Duration)
- func WatchWithInterval(ctx context.Context, locker Locker, key, fencing string, ...)
- type DistributedLock
- func (dl *DistributedLock) Acquire(ctx context.Context, key string, ttl time.Duration) (fencing string, err error)
- func (dl *DistributedLock) AcquireOrExtend(ctx context.Context, key, fencing string, ttl time.Duration) (err error)
- func (dl *DistributedLock) AcquireWithFencing(ctx context.Context, key, fencing string, ttl time.Duration) error
- func (dl *DistributedLock) Extend(ctx context.Context, key, fencing string, ttl time.Duration) error
- func (dl *DistributedLock) Release(ctx context.Context, key, fencing string) error
- func (dl *DistributedLock) ReleaseWithCount(ctx context.Context, key, fencing string) (ReleaseStatus, error)
- func (dl *DistributedLock) TryAcquire(ctx context.Context, key string, ttl time.Duration) (fencing string, err error)
- func (dl *DistributedLock) TryAcquireWithFencing(ctx context.Context, key, fencing string, ttl time.Duration) (err error)
- func (dl *DistributedLock) TryExtend(ctx context.Context, key, fencing string, ttl time.Duration) error
- type DistributedLockOption
- func WithClockDriftBuffer(buffer time.Duration) DistributedLockOption
- func WithClockDriftFactor(factor float64) DistributedLockOption
- func WithDistMaxJitterDuration(maxJitter time.Duration) DistributedLockOption (deprecated)
- func WithDistMaxRetry(maxRetry int) DistributedLockOption (deprecated)
- func WithDistMinRetryDelay(minDelay time.Duration) DistributedLockOption (deprecated)
- func WithDistWaiter(waiter Waiter) DistributedLockOption
- func WithReleaseTimeout(timeout time.Duration) DistributedLockOption
- type ExponentialWait
- type ExponentialWaitOption
- type JitterWait
- type JitterWaitOption
- type Lock
- func (dl *Lock) Acquire(ctx context.Context, key string, ttl time.Duration) (fencing string, err error)
- func (dl *Lock) AcquireOrExtend(ctx context.Context, key, fencing string, ttl time.Duration) error
- func (dl *Lock) AcquireWithFencing(ctx context.Context, key, fencing string, ttl time.Duration) error
- func (dl *Lock) Extend(ctx context.Context, key, fencing string, ttl time.Duration) error
- func (dl *Lock) Release(ctx context.Context, key, fencing string) error
- func (dl *Lock) TryAcquire(ctx context.Context, key string, ttl time.Duration) (fencing string, err error)
- func (dl *Lock) TryAcquireWithFencing(ctx context.Context, key, fencing string, ttl time.Duration) error
- func (dl *Lock) TryExtend(ctx context.Context, key, fencing string, ttl time.Duration) error
- type LockOption
- type Locker
- type ReleaseStatus
- type WaitInfo
- type Waiter
- type WatchDog
- type WatchDogCallback
- type WatchDogOption
- func WithCallbacks(cbCtx context.Context, callbacks ...WatchDogCallback) WatchDogOption (deprecated)
- func WithErrorCallbacks(cbCtx context.Context, callbacks ...WatchDogCallback) WatchDogOption
- func WithExtensionCallbacks(cbCtx context.Context, callbacks ...WatchDogCallback) WatchDogOption
- func WithItem(key, fencing string, ttl, interval time.Duration) WatchDogOption
- func WithItems(items ...*WatchItem) WatchDogOption
- type WatchItem
- Bugs
Constants ¶
This section is empty.
Variables ¶
var (
	// ErrLockAlreadyHeld is returned when a requested lock is currently owned by another client.
	ErrLockAlreadyHeld = errors.New("lock already held")
	// ErrLockNotHeld is returned when attempting to extend or release an unowned lock.
	ErrLockNotHeld = errors.New("lock not held")
	// ErrMaxRetryExceeded is returned when a lock acquisition fails after exhausting all retry attempts.
	ErrMaxRetryExceeded = errors.New("max retry exceeded")
	// ErrValidityExpired is returned when a distributed lock is acquired, but the validity duration
	// is entirely consumed by clock drift or acquisition latency across instances.
	ErrValidityExpired = errors.New("lock validity expired")
)
Sentinel errors returned by lock operations.
Functions ¶
func Watch ¶ added in v0.5.0
func Watch(ctx context.Context, locker Locker, key, fencing string, ttl time.Duration)
Watch spawns a background goroutine to periodically prolong a lock's TTL. It is intended for operations of unknown duration and extends the lock at an interval of half the TTL. The watchdog terminates only when the context is cancelled.
Use WatchDog for advanced error handling, logging, or premature termination.
WARNING(trviph): Do not pass context.Background() without a cancellation mechanism (e.g., context.WithCancel), otherwise the watchdog goroutine will leak and never terminate.
BUG(trviph): The Watch function relies on TryExtend. When using a DistributedLock, it suffers from the partial extension issue on quorum failure. If DistributedLock.TryExtend fails to achieve quorum, the successfully extended instances remain locked until TTL expires.
This bug will probably never be fixed. Do not use Watch with DistributedLock if you are not comfortable with this uncertainty.
func WatchWithInterval ¶ added in v1.1.0
func WatchWithInterval(ctx context.Context, locker Locker, key, fencing string, ttl, interval time.Duration)
WatchWithInterval spawns a background goroutine to periodically prolong a lock's TTL using a custom interval between extension attempts. The watchdog terminates only when the context is cancelled.
Use WatchDog for advanced error handling, logging, or premature termination.
WARNING(trviph): Do not pass context.Background() without a cancellation mechanism (e.g., context.WithCancel), otherwise the watchdog goroutine will leak and never terminate.
BUG(trviph): The WatchWithInterval function relies on TryExtend. When using a DistributedLock, it suffers from the partial extension issue on quorum failure. If DistributedLock.TryExtend fails to achieve quorum, the successfully extended instances remain locked until TTL expires.
This bug will probably never be fixed. Do not use WatchWithInterval with DistributedLock if you are not comfortable with this uncertainty.
Types ¶
type DistributedLock ¶
type DistributedLock struct {
// contains filtered or unexported fields
}
DistributedLock implements the Redlock algorithm for distributed locking across multiple independent Redis instances. It provides stronger guarantees than a single-instance lock by requiring a quorum (N/2 + 1) of instances to agree on lock ownership.
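The quorum arithmetic is worth making concrete; it also shows why the constructor docs recommend an odd instance count (an even count raises the quorum without raising fault tolerance). This helper is a sketch, not an exported function of the package.

```go
package main

import "fmt"

// quorum returns the majority threshold (N/2 + 1) used by the Redlock
// consensus: the minimum number of instances that must grant the lock.
func quorum(n int) int { return n/2 + 1 }

func main() {
	for _, n := range []int{3, 4, 5, 7} {
		// n - quorum(n) is how many instance failures the lock survives.
		fmt.Printf("%d instances -> quorum %d, tolerates %d failures\n",
			n, quorum(n), n-quorum(n))
	}
}
```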
BUG(trviph): The Extend and TryExtend methods do not release partially-extended locks on quorum failure. Unlike Acquire/TryAcquire/AcquireOrExtend, if Extend/TryExtend fails to achieve quorum, the successfully extended instances remain locked until TTL expires.
This bug will probably never be fixed. Do not use Extend/TryExtend if you are not comfortable with this uncertainty.
func NewDistributedLock ¶ added in v0.4.0
func NewDistributedLock(locks []*Lock, opts ...DistributedLockOption) *DistributedLock
NewDistributedLock creates a new DistributedLock with the given Lock instances. Each Lock should be connected to an independent Redis instance. For optimal fault tolerance, use an odd number of instances (e.g., 3, 5, or 7). By default, the lock retries indefinitely with 300ms max jitter (using `JitterWait`).
func (*DistributedLock) Acquire ¶
func (dl *DistributedLock) Acquire(ctx context.Context, key string, ttl time.Duration) (fencing string, err error)
Acquire attempts to claim the lock across all Redis instances concurrently. It generates a unique fencing token and requires a quorum (N/2 + 1) to succeed. If quorum is not reached, any acquired locks are automatically released.
The lock is validated post-acquisition to ensure the elapsed time plus clock drift does not exceed the TTL. It retries automatically based on the Waiter configuration.
Returns the fencing token on success, or an error if quorum fails, clock drift expires the lock, or the context is cancelled.
func (*DistributedLock) AcquireOrExtend ¶
func (dl *DistributedLock) AcquireOrExtend(ctx context.Context, key, fencing string, ttl time.Duration) (err error)
AcquireOrExtend acquires a new lock or extends an existing one if the fencing token matches. It executes concurrently across all Redis instances and requires a quorum (N/2 + 1) to succeed.
WARNING(trviph): On failure (e.g., quorum not achieved or clock drift expired), this method automatically releases ALL locks across ALL instances, including any locks that were already held before the call. To safely extend without risking release on failure, use DistributedLock.Extend.
func (*DistributedLock) AcquireWithFencing ¶
func (dl *DistributedLock) AcquireWithFencing(ctx context.Context, key, fencing string, ttl time.Duration) error
AcquireWithFencing attempts to claim the lock using the provided fencing token across all Redis instances concurrently. It requires a quorum (N/2 + 1) to succeed. If quorum is not reached, any acquired locks are automatically released.
The lock is validated post-acquisition to ensure the elapsed time plus clock drift does not exceed the TTL. It retries automatically based on the Waiter configuration.
Returns an error if quorum fails, clock drift expires the lock, or the context is cancelled.
func (*DistributedLock) Extend ¶ added in v0.4.0
func (dl *DistributedLock) Extend(ctx context.Context, key, fencing string, ttl time.Duration) error
Extend prolongs the TTL of an existing lock concurrently across all Redis instances. It requires a quorum (N/2 + 1) to succeed, and validates the lock against clock drift. The operation retries automatically based on the Waiter configuration.
Returns an error if quorum fails, clock drift expires the lock, or the context is cancelled.
func (*DistributedLock) Release ¶
func (dl *DistributedLock) Release(ctx context.Context, key, fencing string) error
Release removes the lock from all Redis instances using the provided fencing token.
An error return implies clean-up failed on one or more nodes, though the lock may already be effectively released from a quorum perspective. Use DistributedLock.ReleaseWithCount for granular release status.
func (*DistributedLock) ReleaseWithCount ¶ added in v1.4.0
func (dl *DistributedLock) ReleaseWithCount(ctx context.Context, key, fencing string) (ReleaseStatus, error)
ReleaseWithCount removes the lock from all Redis instances and returns detailed execution metrics, including whether a quorum was successfully reached during release. It returns an aggregated error if any instance fails to release.
func (*DistributedLock) TryAcquire ¶ added in v0.4.0
func (dl *DistributedLock) TryAcquire(ctx context.Context, key string, ttl time.Duration) (fencing string, err error)
TryAcquire generates a unique fencing token and attempts to claim the lock exactly once across all Redis instances. It requires a quorum (N/2 + 1) to succeed. If quorum is not reached, any acquired locks are automatically released.
Returns the fencing token on success, or ErrLockAlreadyHeld if quorum cannot be achieved.
func (*DistributedLock) TryAcquireWithFencing ¶ added in v0.4.0
func (dl *DistributedLock) TryAcquireWithFencing(ctx context.Context, key, fencing string, ttl time.Duration) (err error)
TryAcquireWithFencing attempts to claim the lock exactly once using the provided fencing token across all Redis instances. It requires a quorum (N/2 + 1) to succeed. If quorum is not reached, any acquired locks are automatically released.
Returns ErrLockAlreadyHeld if quorum cannot be achieved.
func (*DistributedLock) TryExtend ¶ added in v0.4.0
func (dl *DistributedLock) TryExtend(ctx context.Context, key, fencing string, ttl time.Duration) error
TryExtend attempts to prolong the TTL of an existing lock exactly once across all Redis instances. It requires a quorum (N/2 + 1) to succeed. Returns ErrLockNotHeld if quorum cannot be achieved.
type DistributedLockOption ¶ added in v0.4.0
type DistributedLockOption func(*DistributedLock)
func WithClockDriftBuffer ¶ added in v1.0.0
func WithClockDriftBuffer(buffer time.Duration) DistributedLockOption
WithClockDriftBuffer sets a fixed buffer duration to subtract from lock validity. This accounts for network round-trip variance and other timing uncertainties. The default value is 2ms. Combined with the clock drift factor, the validity is calculated as: TTL - elapsed - (TTL * driftFactor) - driftBuffer.
func WithClockDriftFactor ¶ added in v0.4.0
func WithClockDriftFactor(factor float64) DistributedLockOption
WithClockDriftFactor sets the clock drift factor for the distributed lock. The default value is 0.01 (1%), as recommended by the Redlock paper. This factor is used to account for clock drift between Redis instances when calculating lock validity time.
func WithDistMaxJitterDuration ¶ added in v0.4.0 (deprecated)
func WithDistMaxJitterDuration(maxJitter time.Duration) DistributedLockOption
Deprecated: Use WithDistWaiter instead. The Waiter interface provides a more robust retry configuration.
WithDistMaxJitterDuration sets the maximum jitter duration for retry delays. The actual jitter is a random value between 0 and this duration. Default is 300ms.
func WithDistMaxRetry ¶ added in v0.4.0 (deprecated)
func WithDistMaxRetry(maxRetry int) DistributedLockOption
Deprecated: Use WithDistWaiter instead. The Waiter interface provides a more robust retry configuration.
WithDistMaxRetry sets the maximum number of retries for Acquire/AcquireWithFencing/Extend. Set to -1 for infinite retries (default). These methods retry by calling the corresponding Try* method until success, context cancellation, or max retries exceeded.
func WithDistMinRetryDelay ¶ added in v0.4.0 (deprecated)
func WithDistMinRetryDelay(minDelay time.Duration) DistributedLockOption
Deprecated: Use WithDistWaiter instead. The Waiter interface provides a more robust retry configuration.
WithDistMinRetryDelay sets the minimum delay between retries. The actual delay is minRetryDelay + random jitter. Default is 0.
func WithDistWaiter ¶ added in v1.3.0
func WithDistWaiter(waiter Waiter) DistributedLockOption
WithDistWaiter sets the Waiter for the distributed lock. The default is a JitterWait with the DefaultJitterWait configuration.
func WithReleaseTimeout ¶ added in v0.4.0
func WithReleaseTimeout(timeout time.Duration) DistributedLockOption
WithReleaseTimeout sets the timeout for cleanup release operations. When a lock acquisition fails, partially acquired locks are released using this timeout. Default is 5 seconds.
type ExponentialWait ¶ added in v1.3.0
type ExponentialWait struct {
// contains filtered or unexported fields
}
ExponentialWait implements a retry strategy using exponential backoff. The delay increases exponentially with each subsequent attempt:
Delay = MinDelay * (Factor ^ (Attempts - 1))
This strategy is ideal for environments where prolonged outages require aggressively expanding wait times to reduce load on the Redis cluster.
func DefaultExponentialWait ¶ added in v1.3.0
func DefaultExponentialWait() *ExponentialWait
DefaultExponentialWait returns the default configuration: infinite retries, 100ms minDelay, 1min maxDelay, factor 2.0.
func NewExponentialWait ¶ added in v1.3.0
func NewExponentialWait(opts ...ExponentialWaitOption) *ExponentialWait
NewExponentialWait creates a new ExponentialWait with the given options.
type ExponentialWaitOption ¶ added in v1.3.0
type ExponentialWaitOption func(*ExponentialWait)
ExponentialWaitOption defines a function that configures an ExponentialWait instance.
func WithExpFactor ¶ added in v1.3.0
func WithExpFactor(factor float64) ExponentialWaitOption
WithExpFactor sets the multiplier for each retry.
func WithExpMaxDelay ¶ added in v1.3.0
func WithExpMaxDelay(maxDelay time.Duration) ExponentialWaitOption
WithExpMaxDelay sets the maximum delay cap.
func WithExpMaxIteration ¶ added in v1.3.0
func WithExpMaxIteration(maxIteration int) ExponentialWaitOption
WithExpMaxIteration sets the maximum number of retry attempts.
func WithExpMinDelay ¶ added in v1.3.0
func WithExpMinDelay(minDelay time.Duration) ExponentialWaitOption
WithExpMinDelay sets the initial delay duration.
type JitterWait ¶ added in v1.3.0
type JitterWait struct {
// contains filtered or unexported fields
}
JitterWait implements a retry strategy using a constant base delay plus random jitter. The actual delay for each retry is calculated as:
Delay = MinDelay + random(0, MaxJitterDuration)
This strategy helps prevent thundering herd problems by spreading out concurrent retry attempts across multiple clients.
func DefaultJitterWait ¶ added in v1.3.0
func DefaultJitterWait() *JitterWait
DefaultJitterWait returns the default retry configuration: infinite retries (MaxIteration: -1), 0 MinDelay, and 300ms MaxJitterDuration.
func NewJitterWait ¶ added in v1.3.0
func NewJitterWait(opts ...JitterWaitOption) *JitterWait
NewJitterWait creates a new JitterWait with the given options. It initializes the waiter with DefaultJitterWait() and then applies the provided options.
func (*JitterWait) NextDelay ¶ added in v1.3.0
func (jr *JitterWait) NextDelay(retries int) time.Duration
NextDelay calculates the duration for the upcoming retry attempt. The retries parameter indicates the number of attempts made so far (e.g., 1 for the first retry).
func (*JitterWait) Wait ¶ added in v1.3.0
func (jr *JitterWait) Wait(ctx context.Context, times int) <-chan WaitInfo
Wait implements the Waiter interface using a jittered delay strategy.
Behavior:
- Initial attempt (times == 0): Returns immediately.
- Retry limit exceeded (MaxIteration >= 0 and times > MaxIteration): Returns ErrMaxRetryExceeded.
- Otherwise: Waits for MinDelay + random(0, MaxJitterDuration).
If the context is cancelled while waiting, it returns immediately with the context error.
type JitterWaitOption ¶ added in v1.3.0
type JitterWaitOption func(*JitterWait)
JitterWaitOption defines a function that configures a JitterWait instance.
func WithJitterMaxIteration ¶ added in v1.3.0
func WithJitterMaxIteration(maxIteration int) JitterWaitOption
WithJitterMaxIteration sets the maximum number of retry attempts. A value of -1 indicates infinite retries.
func WithJitterMinDelay ¶ added in v1.3.0
func WithJitterMinDelay(minDelay time.Duration) JitterWaitOption
WithJitterMinDelay sets the minimum duration to wait before retrying.
func WithMaxJitterDuration ¶ added in v1.3.0
func WithMaxJitterDuration(maxJitterDuration time.Duration) JitterWaitOption
WithMaxJitterDuration sets the maximum random duration added to the delay.
type Lock ¶ added in v0.2.0
type Lock struct {
// contains filtered or unexported fields
}
Lock provides a distributed lock backed by a single Redis instance. It supports automatic retries with configurable backoff (via Waiter), atomic operations using Lua scripts, and internally generated fencing tokens to guarantee safe lock ownership.
func NewLock ¶ added in v0.2.0
func NewLock(rcli redisClient, opts ...LockOption) *Lock
NewLock creates a new Lock backed by the given Redis client. By default, the lock retries indefinitely with 300ms max jitter.
func (*Lock) Acquire ¶ added in v0.2.0
func (dl *Lock) Acquire(ctx context.Context, key string, ttl time.Duration) (fencing string, err error)
Acquire generates a UUID fencing token and attempts to acquire the lock. It returns the generated fencing token on success, or an error if the context is cancelled, the retry limit is exceeded, or token generation fails.
func (*Lock) AcquireOrExtend ¶ added in v0.2.0
func (dl *Lock) AcquireOrExtend(ctx context.Context, key, fencing string, ttl time.Duration) error
AcquireOrExtend prolongs the lock TTL if the current owner matches the provided fencing token. If the lock does not exist, it behaves identically to AcquireWithFencing and attempts to claim it.
func (*Lock) AcquireWithFencing ¶ added in v0.2.0
func (dl *Lock) AcquireWithFencing(ctx context.Context, key, fencing string, ttl time.Duration) error
AcquireWithFencing attempts to acquire the lock using the provided fencing token. It retries according to the underlying Waiter configuration.
func (*Lock) Extend ¶ added in v0.4.0
func (dl *Lock) Extend(ctx context.Context, key, fencing string, ttl time.Duration) error
Extend prolongs the TTL of an existing lock if the fencing token matches the current owner. Unlike AcquireOrExtend, it does not attempt to claim an unowned lock. It returns ErrLockNotHeld if the lock does not exist or the token is incorrect.
func (*Lock) Release ¶ added in v0.2.0
func (dl *Lock) Release(ctx context.Context, key, fencing string) error
Release atomically deletes the lock using a Lua script if the fencing token matches. An error is returned only if the underlying script execution fails entirely.
func (*Lock) TryAcquire ¶ added in v0.3.0
func (dl *Lock) TryAcquire(ctx context.Context, key string, ttl time.Duration) (fencing string, err error)
TryAcquire generates a UUID fencing token and attempts to acquire the lock exactly once. It returns ErrLockAlreadyHeld if the resource is currently locked by another client.
func (*Lock) TryAcquireWithFencing ¶ added in v0.4.0
func (dl *Lock) TryAcquireWithFencing(ctx context.Context, key, fencing string, ttl time.Duration) error
TryAcquireWithFencing attempts to acquire the lock exactly once using the provided fencing token. It returns ErrLockAlreadyHeld if the resource is currently locked.
type LockOption ¶ added in v0.4.0
type LockOption func(*Lock)
LockOption configures a Lock instance.
func WithJitterDuration ¶ added in v0.4.0 (deprecated)
func WithJitterDuration(jitter time.Duration) LockOption
Deprecated: Use WithWaiter instead. The Waiter interface provides a more robust retry configuration.
WithJitterDuration sets the maximum jitter duration for retry backoff. When a lock acquisition fails (due to the lock being held or network errors), the retry delay will include a random jitter between 0 and this duration. Default is 300 milliseconds.
func WithMaxRetry ¶ added in v0.4.0 (deprecated)
func WithMaxRetry(maxRetry int) LockOption
Deprecated: Use WithWaiter instead. The Waiter interface provides a more robust retry configuration.
WithMaxRetry sets the maximum number of retry attempts for lock acquisition. Set to a negative value (default) to retry indefinitely until context cancellation. Set to 0 to disable retries (equivalent to TryAcquire behavior).
func WithMinRetryDelay ¶ added in v0.4.0 (deprecated)
func WithMinRetryDelay(minDelay time.Duration) LockOption
Deprecated: Use WithWaiter instead. The Waiter interface provides a more robust retry configuration.
WithMinRetryDelay sets the minimum delay between retry attempts. The actual delay will be this value plus a random jitter (see WithJitterDuration). Default is 0 (only jitter delay).
func WithWaiter ¶ added in v1.3.0
func WithWaiter(waiter Waiter) LockOption
WithWaiter sets the Waiter for the lock. The default is a JitterWait with the DefaultJitterWait configuration.
type Locker ¶ added in v0.4.0
type Locker interface {
// Acquire generates a random fencing token, then attempts to acquire the lock.
// It retries according to the underlying Waiter configuration until success, context cancellation,
// or the retry limit is reached.
Acquire(ctx context.Context, key string, ttl time.Duration) (fencing string, err error)
// TryAcquire attempts to acquire the lock exactly once without retrying.
// It returns ErrLockAlreadyHeld if the lock is currently held by another client.
TryAcquire(ctx context.Context, key string, ttl time.Duration) (fencing string, err error)
// AcquireWithFencing attempts to acquire the lock using the provided fencing token.
// It retries according to the underlying Waiter configuration.
AcquireWithFencing(ctx context.Context, key, fencing string, ttl time.Duration) error
// TryAcquireWithFencing attempts to acquire the lock exactly once using the provided fencing token.
TryAcquireWithFencing(ctx context.Context, key, fencing string, ttl time.Duration) error
// AcquireOrExtend extends the lock if the provided fencing token matches the current owner.
// If the lock does not exist, it behaves identically to AcquireWithFencing.
AcquireOrExtend(ctx context.Context, key, fencing string, ttl time.Duration) error
// Extend prolongs the TTL of an existing lock if the fencing token matches.
// Unlike AcquireOrExtend, this will not acquire the lock if it is currently unowned.
Extend(ctx context.Context, key, fencing string, ttl time.Duration) error
// TryExtend attempts to extend the TTL exactly once without retrying.
// It returns ErrLockNotHeld if the lock does not exist or the fencing token does not match.
TryExtend(ctx context.Context, key, fencing string, ttl time.Duration) error
// Release atomically deletes the lock if the caller's fencing token matches the current owner.
// For DistributedLock, returning an error implies the release failed on at least one instance,
// which may require manual inspection via the joined error.
Release(ctx context.Context, key, fencing string) error
}
Locker defines the common interface implemented by both Lock and DistributedLock.
type ReleaseStatus ¶ added in v1.4.0
ReleaseStatus contains the result of a release operation.
type Waiter ¶ added in v1.3.0
type Waiter interface {
// Wait returns a channel that receives a WaitInfo struct after the appropriate retry delay.
//
// The ctx parameter monitors for cancellation. If cancelled, Wait returns immediately
// with a WaitInfo containing the context error.
// The times parameter represents the current attempt number (0-indexed). 0 indicates
// the initial attempt and should generally return immediately without delay.
//
// The returned channel should be buffered (e.g., make(chan WaitInfo, 1)) to guarantee
// the sender does not block indefinitely if the receiver stops listening.
Wait(ctx context.Context, times int) <-chan WaitInfo
}
Waiter controls the retry behavior and backoff strategies for lock acquisitions. Implementations dictate the delay duration between retry attempts and the termination condition.
type WatchDog ¶ added in v1.2.0
type WatchDog struct {
// contains filtered or unexported fields
}
WatchDog is responsible for automatically extending the TTL of locks.
BUG(trviph): The WatchDog works similarly to Watch/WatchWithInterval and relies on TryExtend. When using a DistributedLock, it suffers from the partial extension issue on quorum failure. If DistributedLock.TryExtend fails to achieve quorum, the successfully extended instances remain locked until TTL expires.
This bug will probably never be fixed. Do not use WatchDog with DistributedLock if you are not comfortable with this uncertainty.
func NewWatchDog ¶ added in v1.2.0
func NewWatchDog(lock Locker, opts ...WatchDogOption) *WatchDog
NewWatchDog creates a new WatchDog instance with the corresponding lock provider and options. For simpler usage, where you don't care about error handling, see Watch and WatchWithInterval.
func (*WatchDog) Run ¶ added in v1.2.0
Run starts the watchdog loop to continuously monitor and prolong locks. It executes indefinitely until the provided context is cancelled.
WARNING(trviph): Do not pass context.Background() without a cancellation mechanism (e.g., context.WithCancel), otherwise the watchdog goroutine will leak and never terminate.
type WatchDogCallback ¶ added in v1.2.0
WatchDogCallback defines a function executed when an event occurs during the watchdog loop. In the event of context cancellation, the context error is passed as the final error.
type WatchDogOption ¶ added in v1.2.0
type WatchDogOption func(*WatchDog)
WatchDogOption configures the WatchDog.
func WithCallbacks ¶ added in v1.2.0 (deprecated)
func WithCallbacks(cbCtx context.Context, callbacks ...WatchDogCallback) WatchDogOption
Deprecated: Use WithErrorCallbacks instead. This function was renamed to clarify that it only handles error callbacks. For successful lock extensions, use WithExtensionCallbacks.
WithCallbacks overrides the context for error callbacks and appends new callbacks.
func WithErrorCallbacks ¶ added in v1.3.0
func WithErrorCallbacks(cbCtx context.Context, callbacks ...WatchDogCallback) WatchDogOption
WithErrorCallbacks overrides the context for error callbacks and appends new callbacks.
func WithExtensionCallbacks ¶ added in v1.3.0
func WithExtensionCallbacks(cbCtx context.Context, callbacks ...WatchDogCallback) WatchDogOption
WithExtensionCallbacks overrides the context for extension callbacks and appends new callbacks.
func WithItem ¶ added in v1.2.0
func WithItem(key, fencing string, ttl, interval time.Duration) WatchDogOption
WithItem appends a single lock item to be watched. If interval is less than or equal to 0, it defaults to ttl/2.
func WithItems ¶ added in v1.2.0
func WithItems(items ...*WatchItem) WatchDogOption
WithItems appends multiple lock items to be watched. If an item's interval is less than or equal to 0, it defaults to ttl/2.
Notes ¶
Bugs ¶
The Extend and TryExtend methods do not release partially-extended locks on quorum failure. Unlike Acquire/TryAcquire/AcquireOrExtend, if Extend/TryExtend fails to achieve quorum, the successfully extended instances remain locked until TTL expires.
This bug will probably never be fixed. Do not use Extend/TryExtend if you are not comfortable with this uncertainty.
The WatchDog works similarly to Watch/WatchWithInterval and relies on TryExtend. When using a DistributedLock, it suffers from the partial extension issue on quorum failure. If DistributedLock.TryExtend fails to achieve quorum, the successfully extended instances remain locked until TTL expires.
This bug will probably never be fixed. Do not use WatchDog with DistributedLock if you are not comfortable with this uncertainty.
The Watch function relies on TryExtend. When using a DistributedLock, it suffers from the partial extension issue on quorum failure. If DistributedLock.TryExtend fails to achieve quorum, the successfully extended instances remain locked until TTL expires.
This bug will probably never be fixed. Do not use Watch with DistributedLock if you are not comfortable with this uncertainty.
The WatchWithInterval function relies on TryExtend. When using a DistributedLock, it suffers from the partial extension issue on quorum failure. If DistributedLock.TryExtend fails to achieve quorum, the successfully extended instances remain locked until TTL expires.
This bug will probably never be fixed. Do not use WatchWithInterval with DistributedLock if you are not comfortable with this uncertainty.