Documentation ¶
Index ¶
Constants ¶
const (
	RetryMax       = 5
	RecordAgeLimit = -1 * 30 * time.Minute
)
Variables ¶
This section is empty.
Functions ¶
func RunJob ¶
func RunJob(j Job)
Run a job after its delay. This blocks the calling goroutine. If you do not want to block, use `RunJobAsync`.
func RunJobAsync ¶
func RunJobAsync(j Job)
Run a Job with a delay, but in the background, so it does _not_ block the calling goroutine.
Types ¶
type AsyncDestroyJob ¶
type AsyncDestroyJob struct {
	Headers     []kafka.Header `json:"headers"`
	Tenant      int64          `json:"tenant"`
	WaitSeconds int            `json:"wait_seconds"`
	Model       string         `json:"model"`
	Id          int64          `json:"id"`
}
func (AsyncDestroyJob) Arguments ¶
func (ad AsyncDestroyJob) Arguments() map[string]interface{}
func (AsyncDestroyJob) Delay ¶
func (ad AsyncDestroyJob) Delay() time.Duration
func (AsyncDestroyJob) Name ¶
func (ad AsyncDestroyJob) Name() string
func (AsyncDestroyJob) Run ¶
func (ad AsyncDestroyJob) Run() error
func (AsyncDestroyJob) ToJSON ¶
func (ad AsyncDestroyJob) ToJSON() []byte
type BackgroundWorkerHealthChecker ¶
type BackgroundWorkerHealthChecker struct {
// contains filtered or unexported fields
}
BackgroundWorkerHealthChecker is a struct that holds the health checker's time and a channel through which that time is shared between goroutines.
type Job ¶
type Job interface {
	// how long to wait until performing (just do a sleep)
	Delay() time.Duration
	// any arguments for said job
	Arguments() map[string]interface{}
	// pretty name to print
	Name() string
	// run the job, using any args on the struct
	Run() error
	// serialize the job into JSON
	ToJSON() []byte
}
Interface to codify jobs against, consisting of five methods:
1. Delay: how long to wait before executing. Mostly returns 0 for jobs that don't need to be delayed, otherwise a set amount of time.
2. Arguments: any arguments for the job, for pretty logging.
3. Name: a pretty name, for logging.
4. Run: perform the job's actual work.
5. ToJSON: serialize the job into a byte array for sending off to redis.
type JobRequest ¶
func (JobRequest) MarshalBinary ¶
func (jr JobRequest) MarshalBinary() (data []byte, err error)
implementing the binary marshaler/unmarshaler interfaces for redis encoding/decoding.
func (*JobRequest) Parse ¶
func (jr *JobRequest) Parse() error
func (*JobRequest) UnmarshalBinary ¶
func (jr *JobRequest) UnmarshalBinary(data []byte) error
type RetryCreateJob ¶
type RetryCreateJob struct{}
func (*RetryCreateJob) Arguments ¶
func (r *RetryCreateJob) Arguments() map[string]interface{}
func (*RetryCreateJob) Delay ¶
func (r *RetryCreateJob) Delay() time.Duration
implementing the interface — though these functions aren't really needed, since this is a scheduled job.
func (*RetryCreateJob) Name ¶
func (r *RetryCreateJob) Name() string
func (*RetryCreateJob) Run ¶
func (r *RetryCreateJob) Run() error
run the job, using any args on the struct
func (*RetryCreateJob) ToJSON ¶
func (r *RetryCreateJob) ToJSON() []byte
type ScheduledJob ¶
ScheduledJob is a struct which stores any jobs we want run on a consistent schedule.
It has 2 fields:
- Interval: how often to run the job
- Job: self-explanatory.
type SuperkeyDestroyJob ¶
type SuperkeyDestroyJob struct {
	Headers  []kafka.Header `json:"headers"`
	Identity string         `json:"identity"`
	Tenant   int64          `json:"tenant"`
	Model    string         `json:"model"`
	Id       int64          `json:"id"`
}
func (SuperkeyDestroyJob) Arguments ¶
func (sk SuperkeyDestroyJob) Arguments() map[string]interface{}
func (SuperkeyDestroyJob) Delay ¶
func (sk SuperkeyDestroyJob) Delay() time.Duration
func (SuperkeyDestroyJob) Name ¶
func (sk SuperkeyDestroyJob) Name() string
func (SuperkeyDestroyJob) Run ¶
func (sk SuperkeyDestroyJob) Run() error
func (SuperkeyDestroyJob) ToJSON ¶
func (sk SuperkeyDestroyJob) ToJSON() []byte