Documentation ¶
Index ¶
- Constants
- type AlarmEvent
- type AnalysisState
- type Monitor
- func (m *Monitor) AbstractClasses() map[string]string
- func (m *Monitor) AddAlarmWatcher() chan *AlarmEvent
- func (m *Monitor) AddWatcher(cameraID int64) chan *AnalysisState
- func (m *Monitor) AddWatcherAllCameras() chan *AnalysisState
- func (m *Monitor) AllClasses() []string
- func (m *Monitor) ClassToIdx(cls string) int
- func (m *Monitor) Close()
- func (m *Monitor) InjectTestCamera()
- func (m *Monitor) InjectTestFrame(cameraIndex int, imgID int64, pts time.Time, img *accel.YUVImage)
- func (m *Monitor) LatestFrame(cameraID int64) (*cimg.Image, *nn.DetectionResult, *AnalysisState, error)
- func (m *Monitor) NNQueueLength() int
- func (m *Monitor) NumNNThreadsActive() int
- func (m *Monitor) RemoveAlarmWatcher(ch chan *AlarmEvent)
- func (m *Monitor) RemoveWatcher(cameraID int64, ch chan *AnalysisState)
- func (m *Monitor) RemoveWatcherAllCameras(ch chan *AnalysisState)
- func (m *Monitor) SetCameras(cameras []*camera.Camera)
- func (m *Monitor) UnrecognizedClassIdx() int
- type MonitorOptions
- type NNThreadState
- type ResizeQuality
- type TimeAndPosition
- type TrackedObject
Constants ¶
const WatcherChannelSize = 100
SYNC-WATCHER-CHANNEL-SIZE
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AlarmEvent ¶
Sent when the alarm is triggered
type AnalysisState ¶
type AnalysisState struct {
    CameraID int64               `json:"cameraID"`
    Input    *nn.DetectionResult `json:"input"`
    Objects  []TrackedObject     `json:"objects"`
}
Result of post-process analysis on the Object Detection neural network output SYNC-ANALYSIS-STATE
type Monitor ¶
Monitor runs our neural networks on the camera streams
We process camera frames in phases:

1. Read frames from cameras (frameReader)
2. Process frames with a neural network (nnThread)
3. Analyze results from the neural network (analyzer)
We connect these phases with channels.
func NewMonitor ¶
func NewMonitor(logger logs.Log, options *MonitorOptions) (*Monitor, error)
Create a new monitor
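As a rough sketch of this lifecycle, the snippet below creates a monitor, attaches cameras, and watches analysis results. The import paths and package name are assumptions about the module layout, and the logger and camera list are supplied by the caller:

package example

import (
    "fmt"

    "github.com/cyclopcam/cyclops/pkg/camera"     // assumed import path
    "github.com/cyclopcam/cyclops/server/monitor" // assumed import path
    "github.com/cyclopcam/logs"                   // assumed import path
)

// runMonitor is an illustrative helper; the logger and cameras are created elsewhere.
func runMonitor(logger logs.Log, cameras []*camera.Camera) error {
    m, err := monitor.NewMonitor(logger, monitor.DefaultMonitorOptions())
    if err != nil {
        return err
    }
    defer m.Close()

    // Start reading frames from the cameras and feeding them through the NN pipeline.
    m.SetCameras(cameras)

    // Receive post-process analysis results from every camera.
    ch := m.AddWatcherAllCameras()
    defer m.RemoveWatcherAllCameras(ch)

    for i := 0; i < 10; i++ {
        state := <-ch
        fmt.Printf("camera %v: %v tracked objects\n", state.CameraID, len(state.Objects))
    }
    return nil
}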
func (*Monitor) AbstractClasses ¶
func (m *Monitor) AbstractClasses() map[string]string
Returns the map of concrete -> abstract NN classes
func (*Monitor) AddAlarmWatcher ¶
func (m *Monitor) AddAlarmWatcher() chan *AlarmEvent
Add a new agent that is going to watch for alarm triggers
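A sketch of consuming alarm events, reusing the packages assumed in the NewMonitor sketch above. The AlarmEvent fields are not documented on this page, so the example only reports that an event arrived:

// watchAlarms reports alarm triggers until 'done' is closed.
func watchAlarms(m *monitor.Monitor, done <-chan struct{}) {
    ch := m.AddAlarmWatcher()
    defer m.RemoveAlarmWatcher(ch)
    for {
        select {
        case <-ch:
            fmt.Println("alarm triggered")
        case <-done:
            return
        }
    }
}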
func (*Monitor) AddWatcher ¶
func (m *Monitor) AddWatcher(cameraID int64) chan *AnalysisState
Register to receive detection results for a specific camera.
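A sketch of watching a single camera and unregistering when done (same assumed packages as above; the camera ID would come from your own configuration):

// watchCamera prints tracked objects for one camera, for a fixed number of results.
func watchCamera(m *monitor.Monitor, cameraID int64) {
    ch := m.AddWatcher(cameraID)
    defer m.RemoveWatcher(cameraID, ch)
    for i := 0; i < 100; i++ {
        state := <-ch
        for _, obj := range state.Objects {
            fmt.Printf("object %v, class %v, genuine %v\n", obj.ID, obj.Class, obj.Genuine)
        }
    }
}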
func (*Monitor) AddWatcherAllCameras ¶
func (m *Monitor) AddWatcherAllCameras() chan *AnalysisState
Add a watcher that is interested in all camera activity
func (*Monitor) AllClasses ¶
func (m *Monitor) AllClasses() []string
Return the list of all classes that the NN detects
func (*Monitor) ClassToIdx ¶
func (m *Monitor) ClassToIdx(cls string) int
Returns the index of 'cls', or the special index of the "class unrecognized" class if 'cls' is not recognized
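func (*Monitor) Close ¶
func (m *Monitor) Close()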
func (*Monitor) InjectTestCamera ¶
func (m *Monitor) InjectTestCamera()
Create a fake camera for unit tests to reference
func (*Monitor) InjectTestFrame ¶
func (m *Monitor) InjectTestFrame(cameraIndex int, imgID int64, pts time.Time, img *accel.YUVImage)
Inject a frame for NN analysis, for use by unit tests
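A sketch of how a unit test might drive the monitor with EnableFrameReader set to false (see MonitorOptions below). It additionally assumes the standard time package and this project's accel package; the *accel.YUVImage is built by the test itself, and treating the first injected test camera as camera index 0 is an assumption:

// feedTestFrames injects synthetic frames for analysis.
func feedTestFrames(m *monitor.Monitor, img *accel.YUVImage) {
    m.InjectTestCamera() // assumption: this fake camera is camera index 0
    base := time.Now()
    for i := 0; i < 5; i++ {
        pts := base.Add(time.Duration(i) * 100 * time.Millisecond)
        m.InjectTestFrame(0, int64(i+1), pts, img)
    }
}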
func (*Monitor) LatestFrame ¶
func (m *Monitor) LatestFrame(cameraID int64) (*cimg.Image, *nn.DetectionResult, *AnalysisState, error)
Return the most recent frame and detection result for a camera
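A sketch of polling the most recent frame and analysis for a camera. The returned *cimg.Image and *nn.DetectionResult are left untouched, since their APIs are outside this page:

// printLatest reports how many objects are currently tracked on a camera.
func printLatest(m *monitor.Monitor, cameraID int64) {
    img, detections, analysis, err := m.LatestFrame(cameraID)
    if err != nil {
        fmt.Printf("LatestFrame for camera %v: %v\n", cameraID, err)
        return
    }
    _ = img        // *cimg.Image: the most recent decoded frame
    _ = detections // *nn.DetectionResult: the raw NN output for that frame
    fmt.Printf("camera %v: %v tracked objects\n", cameraID, len(analysis.Objects))
}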
func (*Monitor) NNQueueLength ¶
func (m *Monitor) NNQueueLength() int
Returns the number of items in the NN queue
func (*Monitor) NumNNThreadsActive ¶
func (m *Monitor) NumNNThreadsActive() int
Returns the number of NN threads that are busy with work
func (*Monitor) RemoveAlarmWatcher ¶
func (m *Monitor) RemoveAlarmWatcher(ch chan *AlarmEvent)
Unregister an alarm watcher
func (*Monitor) RemoveWatcher ¶
func (m *Monitor) RemoveWatcher(cameraID int64, ch chan *AnalysisState)
Unregister from detection results for a specific camera
func (*Monitor) RemoveWatcherAllCameras ¶
func (m *Monitor) RemoveWatcherAllCameras(ch chan *AnalysisState)
Unregister from detection results of all cameras
func (*Monitor) SetCameras ¶
func (m *Monitor) SetCameras(cameras []*camera.Camera)
Set cameras and start monitoring
func (*Monitor) UnrecognizedClassIdx ¶
func (m *Monitor) UnrecognizedClassIdx() int
Returns the class index of the special "class unrecognized" class
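A sketch combining the class helpers. The first function is a direct use of ClassToIdx and UnrecognizedClassIdx; the second assumes that a TrackedObject.Class index maps into the slice returned by AllClasses, which is an assumption rather than something documented here:

// knowsClass reports whether the NN recognizes a class name.
func knowsClass(m *monitor.Monitor, cls string) bool {
    return m.ClassToIdx(cls) != m.UnrecognizedClassIdx()
}

// className maps a class index back to a name (assumes indices line up with AllClasses).
func className(m *monitor.Monitor, idx int) string {
    all := m.AllClasses()
    if idx >= 0 && idx < len(all) {
        return all[idx]
    }
    return "unrecognized"
}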
type MonitorOptions ¶
type MonitorOptions struct {
    // EnableFrameReader is allowed to be false for unit tests, so that the tests can feed the monitor
    // frames directly, without having the monitor pull frames from the cameras.
    EnableFrameReader bool
    // ModelNameLQ is the low quality NN model name, such as "yolov8m"
    ModelNameLQ string
    // ModelNameHQ is the high quality NN model name, such as "yolov8l"
    ModelNameHQ string
    // ModelsDir is the directory where we store NN models
    ModelsDir string
    // If not zero, override the default number of NN threads
    NNThreads int
    // Run an additional high quality model, which is used to confirm the detection of a new object.
    // If EnableDualModel is true, then ModelWidth and ModelHeight are ignored.
    EnableDualModel bool
    // If specified along with ModelHeight, this is the desired size of the neural network resolution.
    // This was created for unit tests, where we'd test different resolutions.
    // Ignored if EnableDualModel is true.
    ModelWidth int
    // See ModelWidth for details. Either both ModelWidth and ModelHeight must be zero, or both must be non-zero.
    ModelHeight int
    // Emit verbose log messages about object tracking
    DebugTracking bool
    // Force batch size to 1 for all neural network models. Used by unit tests to validate frame by frame.
    ForceBatchSizeOne bool
}
func DefaultMonitorOptions ¶
func DefaultMonitorOptions() *MonitorOptions
DefaultMonitorOptions returns a new MonitorOptions object with default values
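A sketch of starting from the defaults and overriding a few fields before construction. The models directory is an illustrative path, and the model names are the examples given in the field comments above:

// buildMonitor configures and constructs a Monitor.
func buildMonitor(logger logs.Log) (*monitor.Monitor, error) {
    opts := monitor.DefaultMonitorOptions()
    opts.ModelsDir = "/var/lib/cyclops/models" // illustrative path
    opts.ModelNameLQ = "yolov8m"               // example name from the field comment
    opts.ModelNameHQ = "yolov8l"               // example name from the field comment
    opts.EnableDualModel = true                // ModelWidth/ModelHeight are ignored in this mode
    opts.DebugTracking = true
    return monitor.NewMonitor(logger, opts)
}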
type NNThreadState ¶
type NNThreadState int
const (
    NNThreadStateIdle NNThreadState = iota
    NNThreadStateRunning
)
type ResizeQuality ¶
type ResizeQuality int
const (
    ResizeQualityLow ResizeQuality = iota
    ResizeQualityHigh
)
type TimeAndPosition ¶
type TimeAndPosition struct {
    Time       time.Time `json:"-"`
    Box        nn.Rect   `json:"box"`
    Confidence float32   `json:"confidence"`
}
SYNC-TIME-AND-POSITION
type TrackedObject ¶
type TrackedObject struct {
    ID    uint32 `json:"id"`
    Class int    `json:"class"`
    // The class confidence margin, which is the difference between the confidence of the most likely class
    // and the confidence of the second most likely class.
    // I was initially hoping that this could assist in rejecting false positives, but it doesn't.
    // Training the correct NN is the only way to truly address false positives.
    ConfidenceMargin float32 `json:"confidenceMargin"`
    // Number of frames for which we have considered this object genuine.
    // If Genuine = 0, then we don't yet consider it genuine.
    // If Genuine = 1, then this is the first time we consider it genuine.
    // If Genuine > 1, then we've considered it genuine for this many frames.
    Genuine int `json:"genuine"`
    // If Genuine = 1, then Frames contains all the historical frames that we know about.
    // In all other cases, Frames contains only the single most recent frame.
    // Frames is never empty.
    Frames []TimeAndPosition `json:"frames"`
}
An object that was detected by the Object Detector, and is now being tracked by a post-process SYNC-TRACKED-OBJECT
func (*TrackedObject) LastFrame ¶
func (t *TrackedObject) LastFrame() TimeAndPosition
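A sketch of consuming tracked objects from an AnalysisState, skipping objects that are not yet genuine and reporting the most recent position via LastFrame:

// reportObjects prints confirmed objects and their latest position.
func reportObjects(state *monitor.AnalysisState) {
    for i := range state.Objects {
        obj := &state.Objects[i]
        if obj.Genuine == 0 {
            continue // not yet considered genuine
        }
        last := obj.LastFrame()
        fmt.Printf("object %v (class %v) at %v, confidence %.2f\n",
            obj.ID, obj.Class, last.Box, last.Confidence)
    }
}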