dastard

package module
v0.3.2

Warning: this package is not in the latest version of its module.
Published: Feb 13, 2024 License: NIST-PD-fallback Imports: 35 Imported by: 0

README

Dastard

A data acquisition program for NIST transition-edge sensor (TES) microcalorimeters. Designed to replace the earlier programs ndfb_server and matter (see their bitbucket repository).

Installation

Requires Go version 1.16 or higher (released February 2021) because gonum requires it. Dastard is tested automatically on versions 1.16 and LATEST (as of February 2023, Go version 1.20 is the most recent).

We recommend always using the Makefile to build Dastard. That's not typical Go usage, but the Makefile contains a simple trick: it runs go build with linker arguments that set the values of two global variables. This way, the git hash and build date of the current version are known inside Dastard itself. Hooray! The lesson: always build with one of the following:

# To build locally and run that copy
make build && ./dastard

or

# To install as $(go env GOPATH)/bin/dastard, which generally means $HOME/go/bin/dastard
make install   # also implies make build
dastard
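For the curious, the linker trick amounts to something like the following standalone sketch (the variable names here are illustrative; the real Makefile sets the fields of this package's Build variable):

```go
package main

import "fmt"

// These defaults are overwritten at build time via the linker, e.g.:
//   go build -ldflags "-X main.githash=$(git rev-parse HEAD) -X main.buildDate=$(date -u +%F)"
var (
	githash   = "no git hash computed"
	buildDate = "no build date computed"
)

func main() {
	// Without the -ldflags step, the fallback values appear here.
	fmt.Printf("dastard built from %s on %s\n", githash, buildDate)
}
```

A plain `go build` leaves the fallback strings in place, which is exactly why the Makefile is preferred.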

There was temporarily a problem with Go 1.20 and TDM systems. If you are using TDM systems with the Lancero device, beware that we had trouble making the device read correctly with Go 1.20 (as of April 20, 2023). This turned out, surprisingly, to be the result of a bug in Go 1.20. We believe this problem was fixed in May 2023 (Dastard v0.2.19) with a workaround, but we leave this note here in case you notice problems we didn't. For data sources other than Lancero, Go 1.20+ is certainly fine.

Ubuntu 22.04, 20.04, 18.04, and (maybe?) 16.04

One successful installation of the dependencies looked like this. Before pasting the following, be sure to run some simple command with sudo first; otherwise, the password-entry step will interfere with your multi-line paste.

# Dastard dependencies
sudo apt-get -y update
sudo apt-get install -y libsodium-dev libzmq3-dev git gcc pkg-config

# Install go
sudo add-apt-repository -y ppa:longsleep/golang-backports
sudo apt-get -y update
sudo apt-get -y install golang-go
go version

# If you need lancero-TDM, install Go 1.19, then ensure that version
# go-1.19 is the preferred version in /usr/bin, as follows.
# If the above reported version 1.19, or if you don't need lancero-TDM, skip what follows.
# If the above reported version 1.18 or lower, change the instances of "1.19" below accordingly.
sudo apt-get -y install golang-1.19-go
pushd /usr/bin
sudo ln -sf ../lib/go-1.19/bin/go go
sudo ln -sf ../lib/go-1.19/bin/gofmt gofmt
popd


# Install Dastard as a development copy
# (These lines should have an equivalent "go install", perhaps "go install github.com/usnistgov/dastard@latest"??)
DASTARD_DEV_PATH=$(go env GOPATH)/src/github.com/usnistgov
mkdir -p $DASTARD_DEV_PATH
cd $DASTARD_DEV_PATH
git clone https://github.com/usnistgov/dastard
cd dastard

# Build, then try to run the local copy
# Using the Makefile is preferred, because it's the only way we're aware of to get the git hash and build date
# embedded in the built binary file. Run the new binary and check its version output.
make build && ./dastard --version

# Check whether the GOPATH is in your bash path. If not, update ~/.bashrc to make it so.
# This will fix the current terminal AND all that run ~/.bashrc in the future, but not
# any other existing terminals (until you do "source ~/.bashrc" in them).
source update-path.sh

# Now that PATH includes go's bin, try to install and run the installed copy
make install
dastard --version

Be sure to add ~/go/bin to your PATH environment variable, replacing ~/go with your specific result from running go env GOPATH.

The following instructions seem to be out of date and apply only to Ubuntu 16.04 LTS, but in case they are useful to you, here are the old instructions.

sudo add-apt-repository -y 'deb http://download.opensuse.org/repositories/network:/messaging:/zeromq:/git-stable/xUbuntu_16.04/ ./'
cd ~/Downloads
wget http://download.opensuse.org/repositories/network:/messaging:/zeromq:/git-stable/xUbuntu_16.04/Release.key
sudo apt-key add - < Release.key
sudo apt-get -y update
sudo apt-get install -y libsodium-dev libczmq-dev git
Gentoo Linux

The following seemed to work on a NASA balloon flight computer in March 2023:

USE="static-libs -systemd" emerge --ask net-libs/zeromq dev-libs/libsodium
emerge --ask dev-vcs/git dev-lang/go
mkdir -p $(go env GOPATH)/bin/

After this, follow the Ubuntu instructions starting with "# Install Dastard as a development copy". Make sure that the update-path.sh script worked and properly updated your PATH variable. That script hasn't been tested in Gentoo, as far as I know.

MacOS dependencies (Homebrew or MacPorts)

As appropriate, use one of

brew install go zmq libsodium pkg-config

or

sudo port install go zmq libsodium-dev pkg-config
Also install these companion programs
And boost your UDP receive buffer

This applies only to users who need µMUX, or any data source that eventually sends UDP packets to Dastard. The standard Linux receive buffer for network sockets is only 208 kB, which is not nearly sufficient and leads to dropped packets. We recommend increasing the buffer size to 64 MB. In Linux, that buffer-size parameter is called net.core.rmem_max; in Mac OS X it seems to be called net.inet.udp.recvspace. For one-time testing, increase the buffer with:

sudo sysctl -w net.core.rmem_max=67108864

If that works as intended, you can make the configuration permanent (i.e., it will persist after rebooting) by adding the following line to /etc/sysctl.conf

net.core.rmem_max=67108864

That's only for Linux machines. OS X doesn't seem to have such a file, but might instead require a startup script, possibly in /Library/StartupItems/.

Purpose

DASTARD is the Data Acquisition System for Triggering, Analyzing, and Recording Data. It is to be used with TES microcalorimeter arrays and the NIST Time-Division Multiplexed (TDM) readout or--in the future--the Microwave Multiplexed readout.

Assuming that the TDM readout system is properly configured first (beyond the scope of this project), one can use Dastard to:

  1. read the high-rate data stream;
  2. search it for x-ray pulse triggers;
  3. generate "triggered records" of a fixed length;
  4. store these records to disk in the LJH data format;
  5. "publish" the full records to a socket for other programs to plot or use as they see fit; and
  6. compute important derived quantities from these records.
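As a rough illustration of steps 1–3, an edge trigger producing fixed-length records can be sketched like this (a toy version; Dastard's real triggering is far more configurable):

```go
package main

import "fmt"

// record is a fixed-length cut of the stream around one trigger.
type record struct {
	trigIndex int
	samples   []int
}

// triggerRecords scans for samples that cross threshold on a rising edge and
// cuts a record of nsamp samples with npre pre-trigger samples.
func triggerRecords(stream []int, threshold, npre, nsamp int) []record {
	var recs []record
	for i := npre; i+nsamp-npre <= len(stream); i++ {
		if stream[i] > threshold && stream[i-1] <= threshold {
			recs = append(recs, record{trigIndex: i, samples: stream[i-npre : i-npre+nsamp]})
		}
	}
	return recs
}

func main() {
	stream := []int{0, 0, 1, 0, 9, 8, 3, 0, 0, 0}
	recs := triggerRecords(stream, 5, 2, 5)
	fmt.Println(len(recs), recs[0].trigIndex) // one trigger, at sample 4
}
```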
Goals, or, why did we need Dastard?

Some reasons to replace the earlier data acquisition programs with Dastard in Go:

  1. To permit simple operation of more than one data source of the same type under one program (e.g., multiple uMUX boxes or multiple PCI-express interfaces to the TDM crate). With previous DAQ programs, multiple PCI-e cards required multiple invocations of the data server, which were difficult to coordinate.
  2. To employ multi-threading to parallelize over multiple cores, and do so in a language with natural concurrency constructs. (This was attempted in Matter, but it never worked because of the algorithm used there to generate group triggering.)
  3. To separate the GUI from the "back-end" code. This keeps the core functions from having to share time with the GUI event loop, and it allows for more natural use of "external commanding", such as we require at synchrotron beamlines.
  4. To break the rigid assumption that channels fall into error/feedback pairs. For microwave MUX systems, this will be quite useful.
  5. To use code testing from the outset and ensure code correctness.
  6. To start with a clean slate, so that we can more easily add new features, such as
    • Generalize the group trigger (each channel can receive triggers from its own list of sources). This will help with cross-talk studies and with the Kaon project HEATES.
    • Allow writing of group-triggered ("secondary") pulses to different files.
    • Generate "variable-length records" for certain high-rate analyses. (This is still hypothetical.)
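The generalized group trigger above can be pictured as a per-receiver list of source channels, as in this toy sketch (names hypothetical):

```go
package main

import "fmt"

// secondaries returns the channels that should record a secondary pulse when
// channel `primary` triggers, given each receiver's own list of sources.
func secondaries(connections map[int][]int, primary int) []int {
	var rx []int
	for receiver, sources := range connections {
		for _, s := range sources {
			if s == primary {
				rx = append(rx, receiver)
				break
			}
		}
	}
	return rx
}

func main() {
	conns := map[int][]int{
		3: {1, 2}, // channel 3 records whenever channel 1 or 2 triggers
		7: {1},    // channel 7 records only on channel 1's triggers
	}
	fmt.Println(secondaries(conns, 2))
}
```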

Some goals for DASTARD, in addition to these reasons:

  • Scale up to operate even with future, larger arrays.
  • No longer require C++ expertise to work on DAQ software, and to use only Python for the GUI portions.

Project structure

DASTARD is a back-end program written in Go. It handles the "server" aspects of ndfb_server, as well as the triggering and data-recording duties of matter. It is complemented by two GUI-based projects: a control GUI and a data plotter.

  • dastardcommander is a Python-Qt5 GUI to control Dastard.
  • microscope is a Qt-based plotting GUI to plot microcalorimeter pulse records, discrete Fourier transforms, and related data. It is written in C++ and contains code from matter.

We envision future control clients other than Dastard-commander. They should enable commanding from, for example, the beamline control system of a synchrotron.

Further reading:

  • DASTARD.md: info about Dastard internals (very technical!)
  • BINARY_FORMATS.md: info about the binary data packets sent by Dastard (very technical!)
Configuration

DASTARD caches its configuration via the Viper "complete configuration solution for Go applications". You can find your user file at $HOME/.dastard/config.yaml. It's human-readable, though you should normally never need to read or edit it. If you have configuration problems and cannot find another way out, it is okay to delete this file (or hide it temporarily in, for example, /tmp/). If you want some kind of global defaults that you want to persist even if that file is deleted, it is possible to create a global /etc/dastard/config.yaml file with a subset of the values you want to set by default.

But again, this is expert usage! You should not normally need to touch or look at the configuration file. It's there for internal use to allow settings to persist from one run of Dastard to another.

Contributors and acknowledgments
  • Joe Fowler (@joefowler or joe.fowler@nist). Sept 2017-present
  • Galen O'Neil (@ggggggggg or oneilg@nist). May 2018-present

For all email addresses, replace "@nist" with "@nist.gov".

Many key concepts in Dastard were adapted from the programs ndfb_server, xcaldaq_client, and matter, written by Dastard's authors and by contributors from both NIST Boulder Laboratories and NASA Goddard Space Flight Center.

Documentation

Index

Constants

This section is empty.

Variables

View Source
var Build = BuildInfo{
	Version: "0.3.2",
	Githash: "no git hash computed",
	Date:    "no build date computed",
}

Build is a global holding compile-time information about the build

View Source
var DastardStartTime time.Time

DastardStartTime is a global holding the time init() was run

View Source
var ProblemLogger *log.Logger

ProblemLogger will log warning messages to a file

View Source
var PubRecordsChan chan []*DataRecord

PubRecordsChan is used to enable multiple different DataPublishers to publish on the same zmq pub socket

View Source
var PubSummariesChan chan []*DataRecord

PubSummariesChan is used to enable multiple different DataPublishers to publish on the same zmq pub socket

View Source
var UpdateLogger *log.Logger

UpdateLogger will log client updates to a file

Functions

func CoreLoop

func CoreLoop(ds DataSource, queuedRequests chan func())

CoreLoop has the DataSource produce data until graceful stop. This will be a long-running goroutine, as long as a source is active.
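The queuedRequests argument reflects a common Go pattern: clients send closures over a channel, and the core loop executes them in the source's own goroutine so they cannot race with data processing. A minimal standalone sketch:

```go
package main

import "fmt"

func main() {
	queuedRequests := make(chan func(), 4)
	nsamples := 1024

	// A client queues a settings change instead of mutating state directly.
	queuedRequests <- func() { nsamples = 2048 }
	close(queuedRequests)

	// The core loop drains and runs requests between data-processing steps,
	// so all mutations happen on one goroutine.
	for req := range queuedRequests {
		req()
	}
	fmt.Println("record length now", nsamples)
}
```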

func RunClientUpdater

func RunClientUpdater(statusport int, abort <-chan struct{})

RunClientUpdater forwards any message from its input channel to the ZMQ publisher socket to publish any information that clients need to know.

func RunRPCServer

func RunRPCServer(portrpc int, block bool)

RunRPCServer sets up and runs a permanent JSON-RPC server. If `block`, it will block until Ctrl-C and gracefully shut down. (The intention is that block=true in normal operation, but false for certain tests.)

func Start

func Start(ds DataSource, queuedRequests chan func(), Npresamp int, Nsamples int) error

Start will start the given DataSource, including sampling its data to determine the number of channels. Steps:

  1. Sample: a method implemented per source that determines the # of channels and other internal facts that we need to know.
  2. PrepareChannels: an AnySource method (but overridden by certain other sources). Set up the channel numbering and naming system.
  3. PrepareRun: an AnySource method to do the actions that any source needs before starting the actual acquisition phase.
  4. StartRun: a per-source method to begin data acquisition, if relevant.
  5. Loop over calls to ds.blockingRead(), a per-source method that waits for data.

When done with the loop, close all channels to DataStreamProcessor objects.
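The sequencing of steps 1–4 can be condensed into a standalone sketch against a pared-down interface (the real DataSource has many more methods, and step 5 is the caller's read loop):

```go
package main

import "fmt"

// source is a pared-down stand-in for the DataSource interface.
type source interface {
	Sample() error
	PrepareChannels() error
	PrepareRun(npre, nsamp int) error
	StartRun() error
}

// start runs the four setup steps in order, stopping at the first error.
func start(ds source, npre, nsamp int) error {
	steps := []func() error{
		ds.Sample,          // 1. determine # of channels
		ds.PrepareChannels, // 2. channel naming/numbering
		func() error { return ds.PrepareRun(npre, nsamp) }, // 3. generic pre-run setup
		ds.StartRun, // 4. begin acquisition; the caller then loops on blockingRead
	}
	for _, step := range steps {
		if err := step(); err != nil {
			return err
		}
	}
	return nil
}

// fakeSource records the order in which its methods are called.
type fakeSource struct{ steps []string }

func (f *fakeSource) Sample() error          { f.steps = append(f.steps, "Sample"); return nil }
func (f *fakeSource) PrepareChannels() error { f.steps = append(f.steps, "PrepareChannels"); return nil }
func (f *fakeSource) PrepareRun(npre, nsamp int) error {
	f.steps = append(f.steps, "PrepareRun")
	return nil
}
func (f *fakeSource) StartRun() error { f.steps = append(f.steps, "StartRun"); return nil }

func main() {
	f := &fakeSource{}
	if err := start(f, 256, 1024); err != nil {
		panic(err)
	}
	fmt.Println(f.steps)
}
```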

Types

type AbacoBuffersType

type AbacoBuffersType struct {
	// contains filtered or unexported fields
}

AbacoBuffersType is an internal message type used to allow a goroutine to read from the Abaco card and put data on a buffered channel

type AbacoGroup

type AbacoGroup struct {
	FrameTimingCorrepondence
	// contains filtered or unexported fields
}

AbacoGroup represents a channel group, a set of consecutively numbered channels.

func NewAbacoGroup

func NewAbacoGroup(index GroupIndex, opt AbacoUnwrapOptions) *AbacoGroup

NewAbacoGroup creates an AbacoGroup given the specified GroupIndex.

type AbacoRing

type AbacoRing struct {
	// contains filtered or unexported fields
}

AbacoRing represents a single shared-memory ring buffer that stores an Abaco card's data. Beware of a possible future: multiple cards could pack data into the same ring.

func NewAbacoRing

func NewAbacoRing(ringnum int) (dev *AbacoRing, err error)

NewAbacoRing creates a new AbacoRing and opens the underlying shared memory for reading.

func (*AbacoRing) ReadAllPackets

func (device *AbacoRing) ReadAllPackets() ([]*packets.Packet, error)

ReadAllPackets returns an array of *packet.Packet, as read from the device's RingBuffer.

type AbacoSource

type AbacoSource struct {
	// These items have to do with shared memory ring buffers
	Nrings int // number of available ring buffers

	AnySource
	// contains filtered or unexported fields
}

AbacoSource represents all AbacoRing ring buffers and AbacoUDPReceiver objects that can potentially supply data, as well as all AbacoGroups that are discovered in the SampleData phase. We currently expect the use of EITHER ring buffers or UDP receivers but not both. We won't enforce this as a requirement unless it proves important.

func NewAbacoSource

func NewAbacoSource() (*AbacoSource, error)

NewAbacoSource creates a new AbacoSource.

func (*AbacoSource) Configure

func (as *AbacoSource) Configure(config *AbacoSourceConfig) (err error)

Configure sets up the internal buffers with given size, speed, and min/max.

func (*AbacoSource) Delete

func (as *AbacoSource) Delete()

Delete closes the ring buffers for all AbacoRings.

func (*AbacoSource) PrepareChannels

func (as *AbacoSource) PrepareChannels() error

PrepareChannels configures an AbacoSource by initializing all data structures that have to do with channels and their naming/numbering.

func (*AbacoSource) Sample

func (as *AbacoSource) Sample() error

Sample determines key data facts by sampling some initial data.

func (*AbacoSource) StartRun

func (as *AbacoSource) StartRun() error

StartRun tells the hardware to switch into data streaming mode. Discard existing data, then launch a goroutine to consume data.

type AbacoSourceConfig

type AbacoSourceConfig struct {
	ActiveCards    []int
	AvailableCards []int
	HostPortUDP    []string // host:port pairs to listen for UDP packets
	AbacoUnwrapOptions
}

AbacoSourceConfig holds the arguments needed to call AbacoSource.Configure by RPC.

type AbacoUDPReceiver added in v0.2.10

type AbacoUDPReceiver struct {
	// contains filtered or unexported fields
}

AbacoUDPReceiver represents a single Abaco device producing data by UDP packets to a single UDP port.

func NewAbacoUDPReceiver added in v0.2.10

func NewAbacoUDPReceiver(hostport string) (dev *AbacoUDPReceiver, err error)

NewAbacoUDPReceiver creates a new AbacoUDPReceiver and binds as a server to the requested host:port

func (*AbacoUDPReceiver) ReadAllPackets added in v0.2.10

func (device *AbacoUDPReceiver) ReadAllPackets() ([]*packets.Packet, error)

ReadAllPackets returns an array of *packet.Packet, as read from the device's network connection. It is a non-blocking wrapper around the UDP connection read (a blocking operation).

type AbacoUnwrapOptions added in v0.2.13

type AbacoUnwrapOptions struct {
	RescaleRaw bool  // are we rescaling (left-shifting) the raw data? If not, the rest is ignored
	Unwrap     bool  // are we even unwrapping at all?
	Bias       bool  // should the unwrapping be biased?
	ResetAfter int   // reset the phase unwrap offset back to 0 after this many samples (≈auto-relock)
	PulseSign  int   // direction data will go when pulse arrives, used to calculate bias level
	InvertChan []int // invert channels with these numbers before unwrapping phase
}

AbacoUnwrapOptions contains options to control phase unwrapping.
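Phase unwrapping itself can be illustrated with a toy version (not this package's actual algorithm): when successive samples jump by more than half the wrap range, assume a wrap occurred and accumulate an offset.

```go
package main

import "fmt"

// unwrap removes artificial jumps from data that wraps modulo `wrap`:
// a step larger than half the wrap range is treated as a wraparound.
func unwrap(data []int, wrap int) []int {
	out := make([]int, len(data))
	offset := 0
	for i, v := range data {
		if i > 0 {
			diff := v - data[i-1]
			if diff > wrap/2 {
				offset -= wrap
			} else if diff < -wrap/2 {
				offset += wrap
			}
		}
		out[i] = v + offset
	}
	return out
}

func main() {
	// With a 16-sample wrap range, 14 -> 1 is really a step of +3, not -13.
	fmt.Println(unwrap([]int{12, 14, 1, 3}, 16))
}
```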

type AnySource

type AnySource struct {
	// contains filtered or unexported fields
}

AnySource implements features common to any object that implements DataSource, including the output channels and the abort channel.

func (*AnySource) ArchiveDataBlock added in v0.3.0

func (ds *AnySource) ArchiveDataBlock(N int, file *os.File, finalName string) error

ArchiveDataBlock stores a minimum amount of data (`N` samples at least) for each channel, in the form of a `storeableDataBlock` struct, then when it's done, writes that info to the numpy-style npz file `file`. Finally, it closes that file and renames it to `finalName`.

func (*AnySource) ChanGroups

func (ds *AnySource) ChanGroups() []GroupIndex

ChanGroups returns the sorted slice of channel GroupIndex values.

func (*AnySource) ChangeGroupTrigger added in v0.2.12

func (ds *AnySource) ChangeGroupTrigger(turnon bool, gts *GroupTriggerState) error

ChangeGroupTrigger either adds or deletes the connections in `gts` (add when `turnon` is true, otherwise delete).

func (*AnySource) ChangeTriggerState

func (ds *AnySource) ChangeTriggerState(state *FullTriggerState) error

ChangeTriggerState changes the trigger state for 1 or more channels.

func (*AnySource) ChannelNames

func (ds *AnySource) ChannelNames() []string

ChannelNames returns a slice of the channel names

func (*AnySource) ChannelsWithProjectors

func (ds *AnySource) ChannelsWithProjectors() []int

ChannelsWithProjectors returns a list of the ChannelIndices of channels that have projectors loaded

func (*AnySource) ComputeFullTriggerState

func (ds *AnySource) ComputeFullTriggerState() []FullTriggerState

ComputeFullTriggerState uses a map to collect channels with identical TriggerStates, so they can be sent all together as one unit.

func (*AnySource) ComputeGroupTriggerState added in v0.2.12

func (ds *AnySource) ComputeGroupTriggerState() GroupTriggerState

ComputeGroupTriggerState returns the current `GroupTriggerState`.

func (*AnySource) ComputeWritingState

func (ds *AnySource) ComputeWritingState() *WritingState

ComputeWritingState returns a partial copy of the writingState

func (*AnySource) ConfigureMixFraction

func (ds *AnySource) ConfigureMixFraction(mfo *MixFractionObject) ([]float64, error)

ConfigureMixFraction provides a default implementation for all non-lancero sources that don't need the mix

func (*AnySource) ConfigureProjectorsBases

func (ds *AnySource) ConfigureProjectorsBases(channelIndex int, projectors *mat.Dense, basis *mat.Dense, modelDescription string) error

ConfigureProjectorsBases calls SetProjectorsBasis on ds.processors[channelIndex]

func (*AnySource) ConfigurePulseLengths

func (ds *AnySource) ConfigurePulseLengths(nsamp, npre int) error

ConfigurePulseLengths sets the pulse record length and pre-samples.

func (*AnySource) GetState

func (ds *AnySource) GetState() SourceState

GetState returns the sourceState value in a race-free fashion

func (*AnySource) HandleDataDrop

func (ds *AnySource) HandleDataDrop(droppedFrames, firstFrameIndex int) error

HandleDataDrop writes to a file in the case that a data drop is detected. "Data drop" refers to a case where a read from a source (e.g., the LanceroSource) misses some frames of data.

func (*AnySource) HandleExternalTriggers

func (ds *AnySource) HandleExternalTriggers(externalTriggerRowcounts []int64) error

HandleExternalTriggers writes external triggers to a file, creating that file if necessary, and sends out messages with the number of external triggers observed

func (*AnySource) Nchan

func (ds *AnySource) Nchan() int

Nchan returns the current number of valid channels in the data source.

func (*AnySource) PrepareChannels

func (ds *AnySource) PrepareChannels() error

PrepareChannels configures an AnySource by initializing all data structures that have to do with channels. Other objects that meet the DataSource spec might override this default version.

func (*AnySource) PrepareRun

func (ds *AnySource) PrepareRun(Npresamples int, Nsamples int) error

PrepareRun configures an AnySource by initializing all data structures that cannot be prepared until we know the number of channels. It's an error for ds.nchan to be less than 1.

func (*AnySource) ProcessSegments

func (ds *AnySource) ProcessSegments(block *dataBlock) error

ProcessSegments processes a single outstanding segment for each of ds.processors in parallel. Returns when all segments have been processed. It's more synchronous than our original plan of each dsp launching its own goroutine.

func (*AnySource) RunDoneActivate

func (ds *AnySource) RunDoneActivate()

RunDoneActivate adds one to ds.runDone; it should only be called in Start

func (*AnySource) RunDoneDeactivate

func (ds *AnySource) RunDoneDeactivate()

RunDoneDeactivate calls Done on ds.runDone; it should only be called (by defer) in Start

func (*AnySource) RunDoneWait

func (ds *AnySource) RunDoneWait()

RunDoneWait returns when the source run is done, i.e., the source is stopped

func (*AnySource) Running

func (ds *AnySource) Running() bool

Running tells whether the source is actively running.

func (*AnySource) SamplePeriod

func (ds *AnySource) SamplePeriod() time.Duration

SamplePeriod returns the sample period of the underlying source.

func (*AnySource) SetCoupling

func (ds *AnySource) SetCoupling(status CouplingStatus) error

SetCoupling sets the FB/Err coupling status. SetCoupling(NoCoupling) is allowed for generic data sources, but other values are not.

func (*AnySource) SetExperimentStateLabel

func (ds *AnySource) SetExperimentStateLabel(timestamp time.Time, stateLabel string) error

SetExperimentStateLabel writes to a file with a name like XXX_experiment_state.txt. The file is created upon the first call to this function for a given file writing.

func (*AnySource) SetStateInactive

func (ds *AnySource) SetStateInactive() error

SetStateInactive sets the sourceState value to Inactive in a race-free fashion

func (*AnySource) SetStateStarting

func (ds *AnySource) SetStateStarting() error

SetStateStarting sets the sourceState value to Starting in a race-free fashion

func (*AnySource) ShouldAutoRestart

func (ds *AnySource) ShouldAutoRestart() bool

ShouldAutoRestart returns true if the source should be auto-restarted after an error

func (*AnySource) Stop

func (ds *AnySource) Stop() error

Stop tells the data supply to deactivate.

func (*AnySource) StopTriggerCoupling added in v0.2.12

func (ds *AnySource) StopTriggerCoupling() error

StopTriggerCoupling turns off all trigger coupling, including all group triggers and FB/Err coupling.

func (*AnySource) VoltsPerArb

func (ds *AnySource) VoltsPerArb() []float32

VoltsPerArb returns a per-channel value scaling raw into volts.

func (*AnySource) WriteControl

func (ds *AnySource) WriteControl(config *WriteControlConfig) error

WriteControl changes the data-writing start/stop/pause/unpause state. For WriteLJH22 == true and/or WriteLJH3 == true, all channels will have writing enabled. For WriteOFF == true, only channels with projectors set will have writing enabled.

func (*AnySource) WritingIsActive

func (ds *AnySource) WritingIsActive() bool

WritingIsActive returns whether the current writers are active

type AsciiBouncer added in v0.3.2

type AsciiBouncer struct {
	// contains filtered or unexported fields
}

AsciiBouncer supplies a cute one-line ASCII art to show at the Dastard terminal that Dastard is still alive (the old way was to fill the terminal with ugly messages).

func (*AsciiBouncer) String added in v0.3.2

func (b *AsciiBouncer) String() string

type BuffersChanType

type BuffersChanType struct {
	// contains filtered or unexported fields
}

BuffersChanType is an internal message type used to allow a goroutine to read from the Lancero card and put data on a buffered channel

type BuildInfo

type BuildInfo struct {
	Version string
	Githash string
	Date    string
}

BuildInfo can contain compile-time information about the build

type ByGroup

type ByGroup []GroupIndex

ByGroup implements sort.Interface for []GroupIndex so we can sort such slices.

func (ByGroup) Len

func (g ByGroup) Len() int

func (ByGroup) Less

func (g ByGroup) Less(i, j int) bool

func (ByGroup) Swap

func (g ByGroup) Swap(i, j int)

type ClientUpdate

type ClientUpdate struct {
	// contains filtered or unexported fields
}

ClientUpdate carries the messages to be published on the status port.

type CouplingStatus

type CouplingStatus int

CouplingStatus describes the status of FB / error coupling

const (
	NoCoupling CouplingStatus = iota + 1 // NoCoupling turns off FB + error coupling
	FBToErr                              // FB triggers cause secondary triggers in error channels
	ErrToFB                              // Error triggers cause secondary triggers in FB channels
)

Specific allowed values for status of FB / error coupling

type DataPublisher

type DataPublisher struct {
	PubRecordsChan   chan []*DataRecord
	PubSummariesChan chan []*DataRecord
	LJH22            *ljh.Writer
	LJH3             *ljh.Writer3
	OFF              *off.Writer
	WritingPaused    bool
	// contains filtered or unexported fields
}

DataPublisher contains several optional outputs for publishing data; any members that are non-nil will be used in each call to PublishData

func (*DataPublisher) Flush

func (dp *DataPublisher) Flush()

Flush calls Flush for each writer that has a Flush command (LJH22, LJH3, OFF)

func (*DataPublisher) HasLJH22

func (dp *DataPublisher) HasLJH22() bool

HasLJH22 returns true if LJH22 is non-nil, used to decide whether writing to LJH22 should occur

func (*DataPublisher) HasLJH3

func (dp *DataPublisher) HasLJH3() bool

HasLJH3 returns true if LJH3 is non-nil, e.g., if writing to LJH3 is occurring

func (*DataPublisher) HasOFF

func (dp *DataPublisher) HasOFF() bool

HasOFF returns true if OFF is non-nil, e.g., if writing to OFF is occurring

func (*DataPublisher) HasPubRecords

func (dp *DataPublisher) HasPubRecords() bool

HasPubRecords returns true if publishing records on the PortTrigs Pub socket is occurring

func (*DataPublisher) HasPubSummaries

func (dp *DataPublisher) HasPubSummaries() bool

HasPubSummaries returns true if publishing summaries on the PortSummaries Pub socket is occurring

func (*DataPublisher) PublishData

func (dp *DataPublisher) PublishData(records []*DataRecord) error

PublishData looks at each member of DataPublisher, and if it is non-nil, publishes each record into that member

func (*DataPublisher) RemoveLJH22

func (dp *DataPublisher) RemoveLJH22()

RemoveLJH22 closes any existing LJH22 file and assigns .LJH22 = nil

func (*DataPublisher) RemoveLJH3

func (dp *DataPublisher) RemoveLJH3()

RemoveLJH3 closes any existing LJH3 file and assigns .LJH3 = nil

func (*DataPublisher) RemoveOFF

func (dp *DataPublisher) RemoveOFF()

RemoveOFF closes any existing OFF file and assigns .OFF = nil

func (*DataPublisher) RemovePubRecords

func (dp *DataPublisher) RemovePubRecords()

RemovePubRecords stops publishing records on PortTrigs

func (*DataPublisher) RemovePubSummaries

func (dp *DataPublisher) RemovePubSummaries()

RemovePubSummaries stops publishing summaries on PortSummaries

func (*DataPublisher) SetLJH22

func (dp *DataPublisher) SetLJH22(ChannelIndex int, Presamples int, Samples int, FramesPerSample int,
	Timebase float64, TimestampOffset time.Time,
	NumberOfRows, NumberOfColumns, NumberOfChans, SubframeDivisions, rowNum, colNum, SubframeOffset int,
	FileName, sourceName, chanName string, ChannelNumberMatchingName int, pixel Pixel)

SetLJH22 adds an LJH22 writer to dp; the .file attribute is nil and will be instantiated upon the next call to dp.WriteRecord

func (*DataPublisher) SetLJH3

func (dp *DataPublisher) SetLJH3(ChannelIndex int, Timebase float64,
	NumberOfRows, NumberOfColumns, SubframeDivisions, SubframeOffset int,
	FileName string)

SetLJH3 adds an LJH3 writer to dp; the .file attribute is nil and will be instantiated upon the next call to dp.WriteRecord

func (*DataPublisher) SetOFF

func (dp *DataPublisher) SetOFF(ChannelIndex int, Presamples int, Samples int, FramesPerSample int,
	Timebase float64, TimestampOffset time.Time,
	NumberOfRows, NumberOfColumns, NumberOfChans, SubframeDivisions, rowNum, colNum, SubframeOffset int,
	FileName, sourceName, chanName string, ChannelNumberMatchingName int,
	Projectors *mat.Dense, Basis *mat.Dense, ModelDescription string, pixel Pixel)

SetOFF adds an OFF writer to dp; the .file attribute is nil and will be instantiated upon the next call to dp.WriteRecord

func (*DataPublisher) SetPause

func (dp *DataPublisher) SetPause(pause bool)

SetPause changes the paused state to the given value of pause

func (*DataPublisher) SetPubRecords

func (dp *DataPublisher) SetPubRecords()

SetPubRecords starts publishing records with zmq4 over tcp at port=PortTrigs

func (*DataPublisher) SetPubSummaries

func (dp *DataPublisher) SetPubSummaries()

SetPubSummaries starts publishing records with zmq4 over tcp at port=PortSummaries

type DataRecord

type DataRecord struct {
	// contains filtered or unexported fields
}

DataRecord contains a single triggered pulse record.

type DataSegment

type DataSegment struct {
	// contains filtered or unexported fields
}

DataSegment is a continuous, single-channel raw data buffer, plus info about (e.g.) raw-physical units, first sample’s frame number and sample time. Not yet triggered.

func NewDataSegment

func NewDataSegment(data []RawType, framesPerSample int, firstFrame FrameIndex,
	firstTime time.Time, period time.Duration) *DataSegment

NewDataSegment generates a pointer to a new, initialized DataSegment object.

func (*DataSegment) TimeOf

func (seg *DataSegment) TimeOf(sampleNum int) time.Time

TimeOf returns the absolute time of sample # sampleNum within the segment.

type DataSource

type DataSource interface {
	Sample() error
	PrepareRun(int, int) error
	PrepareChannels() error
	StartRun() error
	Stop() error
	Running() bool
	GetState() SourceState
	SetStateStarting() error
	SetStateInactive() error

	Nchan() int
	ChanGroups() []GroupIndex
	SamplePeriod() time.Duration
	VoltsPerArb() []float32
	ComputeGroupTriggerState() GroupTriggerState
	ComputeFullTriggerState() []FullTriggerState
	ComputeWritingState() *WritingState
	WritingIsActive() bool
	ChannelNames() []string
	ConfigurePulseLengths(int, int) error
	ConfigureProjectorsBases(int, *mat.Dense, *mat.Dense, string) error
	ChangeTriggerState(*FullTriggerState) error
	ConfigureMixFraction(*MixFractionObject) ([]float64, error)
	WriteControl(*WriteControlConfig) error
	SetCoupling(CouplingStatus) error
	ChangeGroupTrigger(turnon bool, gts *GroupTriggerState) error
	StopTriggerCoupling() error
	SetExperimentStateLabel(time.Time, string) error
	ChannelsWithProjectors() []int
	ProcessSegments(*dataBlock) error
	RunDoneActivate()
	RunDoneDeactivate()
	ShouldAutoRestart() bool

	ArchiveDataBlock(int, *os.File, string) error
	// contains filtered or unexported methods
}

DataSource is the interface for hardware or simulated data sources that produce data.

type DataStream

type DataStream struct {
	DataSegment
	// contains filtered or unexported fields
}

DataStream models a continuous stream of data, though we have only a finite amount at any time. For now, it's semantically different from a DataSegment, yet they need the same information.

func NewDataStream

func NewDataStream(data []RawType, framesPerSample int, firstFrame FrameIndex,
	firstTime time.Time, period time.Duration) *DataStream

NewDataStream generates a pointer to a new, initialized DataStream object.

func (*DataStream) AppendSegment

func (stream *DataStream) AppendSegment(segment *DataSegment)

AppendSegment will append the data in segment to the DataStream. It will update the frame/time counters to be consistent with the appended segment, not necessarily with the previous values.

func (*DataStream) TrimKeepingN

func (stream *DataStream) TrimKeepingN(N int) int

TrimKeepingN will trim (discard) all but the last N values in the DataStream. N larger than the number of available values is NOT an error. Returns the number of values in the stream after trimming (should be <= N).
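The trimming semantics can be sketched on a plain slice. Here `trimKeepingN` is a hypothetical standalone analogue; the real method also updates the stream's frame/time counters and returns the post-trim count:

```go
package main

import "fmt"

// trimKeepingN mimics DataStream.TrimKeepingN on a plain slice: discard all
// but the last N values. N larger than the available count is not an error.
func trimKeepingN(data []uint16, N int) []uint16 {
	if N >= len(data) {
		return data
	}
	return data[len(data)-N:]
}

func main() {
	d := []uint16{1, 2, 3, 4, 5}
	fmt.Println(trimKeepingN(d, 3))  // [3 4 5]
	fmt.Println(trimKeepingN(d, 10)) // [1 2 3 4 5]: oversized N keeps everything
}
```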

type DataStreamProcessor

type DataStreamProcessor struct {
	ChannelNumber        int
	Name                 string
	Broker               *TriggerBroker
	NSamples             int
	NPresamples          int
	SampleRate           float64
	LastTrigger          FrameIndex
	LastEdgeMultiTrigger FrameIndex

	DecimateState
	TriggerState
	DataPublisher
	// contains filtered or unexported fields
}

DataStreamProcessor contains all the state needed to decimate, trigger, write, and publish data.

func NewDataStreamProcessor

func NewDataStreamProcessor(channelIndex int, broker *TriggerBroker, NPresamples int, NSamples int) *DataStreamProcessor

NewDataStreamProcessor creates and initializes a new DataStreamProcessor.

func (*DataStreamProcessor) AnalyzeData

func (dsp *DataStreamProcessor) AnalyzeData(records []*DataRecord)

AnalyzeData computes pulse-analysis values in-place for all elements of a slice of DataRecord values.

func (*DataStreamProcessor) ConfigurePulseLengths

func (dsp *DataStreamProcessor) ConfigurePulseLengths(nsamp, npre int) error

ConfigurePulseLengths sets this stream's pulse length and # of presamples. Also removes any existing projectors and basis.

func (*DataStreamProcessor) ConfigureTrigger

func (dsp *DataStreamProcessor) ConfigureTrigger(state TriggerState) error

ConfigureTrigger sets this stream's trigger state.

func (*DataStreamProcessor) DecimateData

func (dsp *DataStreamProcessor) DecimateData(segment *DataSegment)

DecimateData decimates data in-place.

func (*DataStreamProcessor) HasProjectors

func (dsp *DataStreamProcessor) HasProjectors() bool

HasProjectors returns true if projectors are loaded.

func (*DataStreamProcessor) SetProjectorsBasis

func (dsp *DataStreamProcessor) SetProjectorsBasis(projectors *mat.Dense, basis *mat.Dense, modelDescription string) error

SetProjectorsBasis sets .projectors and .basis to the arguments and returns an error if the sizes are not right.

func (*DataStreamProcessor) TriggerData

func (dsp *DataStreamProcessor) TriggerData() (records []*DataRecord)

TriggerData analyzes a DataSegment to find and generate triggered records. Either all edge-multi triggers are found, or else edge triggers are found first, then level triggers, then auto and noise triggers. It returns a slice of complete DataRecord objects, while dsp.lastTrigList stores a triggerList object recording just when the triggers happened.

func (*DataStreamProcessor) TriggerDataSecondary added in v0.2.12

func (dsp *DataStreamProcessor) TriggerDataSecondary(secondaryTrigList []FrameIndex) (secRecords []*DataRecord)

TriggerDataSecondary converts a slice of secondary trigger frame numbers into a slice of records, the secondary trigger records.

func (*DataStreamProcessor) TrimStream added in v0.2.12

func (dsp *DataStreamProcessor) TrimStream()

TrimStream trims a DataStreamProcessor's stream to contain a limited amount of data. When the zero-threshold model is enabled, we might look back at least 4 samples, but the EMTState.NToKeepOnTrim() method figures out the necessary number.

type DecimateState

type DecimateState struct {
	DecimateLevel   int
	Decimate        bool
	DecimateAvgMode bool
}

DecimateState contains all the state needed to decimate data.

type EMTBackwardCompatibleRPCFields added in v0.2.12

type EMTBackwardCompatibleRPCFields struct {
	EdgeMultiNoise                   bool
	EdgeMultiMakeShortRecords        bool
	EdgeMultiMakeContaminatedRecords bool
	EdgeMultiDisableZeroThreshold    bool
	EdgeMultiLevel                   int32
	EdgeMultiVerifyNMonotone         int
}

EMTBackwardCompatibleRPCFields allows the old RPC messages to still work (consider it a temporary expedient 3/16/2022).

type EMTMode added in v0.2.12

type EMTMode int

EMTMode enumerates the possible ways to handle overlapping triggers.

const (
	EMTRecordsTwoFullLength      EMTMode = iota // Generate two full-length records even if they overlap.
	EMTRecordsVariableLength                    // Generate a variable-length record when triggers are too close together.
	EMTRecordsFullLengthIsolated                // Generate no records when triggers are too close together.
)

Enumerates the EMTMode constants.
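A sketch of how the three modes could treat two triggers that are closer together than one record length. The function `firstRecordLength` is illustrative only, under the documented semantics, and is not the dastard implementation:

```go
package main

import "fmt"

type EMTMode int

const (
	EMTRecordsTwoFullLength EMTMode = iota
	EMTRecordsVariableLength
	EMTRecordsFullLengthIsolated
)

// firstRecordLength decides the length of the first record when a second
// trigger follows `gap` frames later and a full record is nsamp frames.
func firstRecordLength(mode EMTMode, gap, nsamp int) int {
	if gap >= nsamp {
		return nsamp // no overlap: always a full-length record
	}
	switch mode {
	case EMTRecordsTwoFullLength:
		return nsamp // overlapping full-length records are allowed
	case EMTRecordsVariableLength:
		return gap // shorten the record to end at the next trigger
	default: // EMTRecordsFullLengthIsolated
		return 0 // triggers too close together: no record at all
	}
}

func main() {
	fmt.Println(firstRecordLength(EMTRecordsVariableLength, 40, 100))     // 40
	fmt.Println(firstRecordLength(EMTRecordsFullLengthIsolated, 40, 100)) // 0
}
```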

type EMTState added in v0.2.12

type EMTState struct {
	// contains filtered or unexported fields
}

EMTState enables the search for EMTs by storing the needed state.

func (EMTState) NToKeepOnTrim added in v0.2.12

func (s EMTState) NToKeepOnTrim() int

NToKeepOnTrim returns how many samples to keep when trimming a stream. GCO thinks the minimum is (s.nsamp + 4) based on the lookback in the kink model, but the function keeps more just to be conservative (so we can experiment with slight changes in trigger algorithm in the future).

type ErroringSource

type ErroringSource struct {
	AnySource
	// contains filtered or unexported fields
}

ErroringSource is used to test the behavior of errors in blockingRead. An error in blockingRead should change the state of the source such that the next call to blockingRead returns io.EOF, after which source.Stop() will be called. The source should be able to be restarted after this. ErroringSource needs to exist in dastard (not just in tests) so that it can be exposed through the RPC server, to test that the RPC server also recovers properly.

func NewErroringSource

func NewErroringSource() *ErroringSource

NewErroringSource returns a new ErroringSource; it requires no configuration.

func (*ErroringSource) Sample

func (es *ErroringSource) Sample() error

Sample is part of the required interface.

func (*ErroringSource) StartRun

func (es *ErroringSource) StartRun() error

StartRun is part of the required interface and ensures future failure

type FactorArgs

type FactorArgs struct {
	A, B int
}

FactorArgs holds the arguments to a Multiply operation (for testing!).

type FrameIdxSlice

type FrameIdxSlice []FrameIndex

FrameIdxSlice attaches the methods of sort.Interface to []FrameIndex, sorting in increasing order.

func (FrameIdxSlice) Len

func (p FrameIdxSlice) Len() int

func (FrameIdxSlice) Less

func (p FrameIdxSlice) Less(i, j int) bool

func (FrameIdxSlice) Swap

func (p FrameIdxSlice) Swap(i, j int)

type FrameIndex

type FrameIndex int64

FrameIndex is used for counting raw data frames.

type FrameTimingCorrepondence added in v0.3.1

type FrameTimingCorrepondence struct {
	TimestampCountsPerSubframe uint64 // Ratio of timestamp rate to subframe division rate
	LastFirmwareTimestamp      packets.PacketTimestamp
	LastSubframeCount          FrameIndex
}

FrameTimingCorrespondence tracks how we convert timestamps to FrameIndex

type FullTriggerState

type FullTriggerState struct {
	ChannelIndices []int
	TriggerState
}

FullTriggerState is used to collect channels that share the same TriggerState.

type GroupIndex

type GroupIndex struct {
	Firstchan int // first channel number in this group
	Nchan     int // how many channels in this group
}

GroupIndex represents the specifics of a channel group. Channel numbers should be globally unique across a DataSource.

type GroupTriggerState added in v0.2.12

type GroupTriggerState struct {
	Connections map[int][]int // Map sense is connections[source] = []int{rxA, rxB, ...}
}

GroupTriggerState contains all the state that controls all group trigger connections. It is also used to communicate with clients about connections to add or remove.
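The map sense means one source channel fans out to many receivers. This sketch builds a Connections-style map and inverts it into the per-receiver view that TriggerBroker.SourcesForReceiver provides; `sourcesForReceiver` here is a standalone illustration, not the dastard implementation:

```go
package main

import "fmt"

// sourcesForReceiver inverts connections[source] = []receivers into
// a set of sources per receiver.
func sourcesForReceiver(connections map[int][]int) map[int]map[int]bool {
	out := make(map[int]map[int]bool)
	for src, receivers := range connections {
		for _, rx := range receivers {
			if out[rx] == nil {
				out[rx] = make(map[int]bool)
			}
			out[rx][src] = true
		}
	}
	return out
}

func main() {
	// Channel 0 triggers channels 1 and 2; channel 3 triggers channel 2.
	conns := map[int][]int{0: {1, 2}, 3: {2}}
	inv := sourcesForReceiver(conns)
	fmt.Println(len(inv[2])) // channel 2 receives from 2 sources (0 and 3)
}
```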

type Heartbeat

type Heartbeat struct {
	Running      bool
	Time         float64
	HWactualMB   float64 // raw data received from hardware
	DataMB       float64 // raw data processed (may exceed HWactualMB if missing data were filled in)
	AsciiBouncer         // include the features of an AsciiBounder
}

Heartbeat is the info sent in the regular heartbeat to clients

type LanceroDastardOutputJSON

type LanceroDastardOutputJSON struct {
	Nsamp            int
	ClockMHz         int
	AvailableCards   []int
	Lsync            int
	Settle           int
	SequenceLength   int
	PropagationDelay int
	BAD16CardDelay   int
}

LanceroDastardOutputJSON is used to return, over the JSON-RPC interface, values that cannot be set over JSON-RPC.

type LanceroDevice

type LanceroDevice struct {
	// contains filtered or unexported fields
}

LanceroDevice represents one lancero device.

type LanceroSource

type LanceroSource struct {
	Mix []*Mix

	AnySource
	// contains filtered or unexported fields
}

LanceroSource is a DataSource that handles 1 or more lancero devices.

func NewLanceroSource

func NewLanceroSource() (*LanceroSource, error)

NewLanceroSource creates a new LanceroSource.

func (*LanceroSource) Configure

func (ls *LanceroSource) Configure(config *LanceroSourceConfig) (err error)

Configure sets up the internal buffers with the given size, speed, and min/max. FiberMask must be identical across all cards; 0xFFFF uses all fibers, 0x0001 uses only fiber 0. ClockMhz must be identical across all cards; as of June 2018 it's always 125. CardDelay can have one value, which is shared across all cards, or must have one entry per card. ActiveCards is a slice of indices into ls.devices to activate. AvailableCards is an output; it contains a sorted slice of valid indices for use in ActiveCards.

func (*LanceroSource) ConfigureMixFraction

func (ls *LanceroSource) ConfigureMixFraction(mfo *MixFractionObject) ([]float64, error)

ConfigureMixFraction sets the MixFraction, potentially for many channels, and returns the list of current mix values: mix = fb + errorScale*err.

func (*LanceroSource) Delete

func (ls *LanceroSource) Delete()

Delete closes all Lancero cards

func (*LanceroSource) PrepareChannels

func (ls *LanceroSource) PrepareChannels() error

PrepareChannels configures a LanceroSource by initializing all data structures that have to do with channels and their naming/numbering.

func (*LanceroSource) Sample

func (ls *LanceroSource) Sample() error

Sample determines key data facts by sampling some initial data.

func (*LanceroSource) SetCoupling

func (ls *LanceroSource) SetCoupling(status CouplingStatus) error

SetCoupling sets up the trigger broker to connect err->FB, FB->err, or neither.

func (*LanceroSource) StartRun

func (ls *LanceroSource) StartRun() error

StartRun tells the hardware to switch into data streaming mode. For lancero TDM systems, we need to consume any initial data that constitutes a fraction of a frame.

type LanceroSourceConfig

type LanceroSourceConfig struct {
	FiberMask         uint32
	CardDelay         []int
	ActiveCards       []int
	ShouldAutoRestart bool
	FirstRow          int // Channel number of the 1st row (default 1)
	ChanSepCards      int // Channel separation between cards (or 0 to indicate number sequentially)
	ChanSepColumns    int // Channel separation between columns (or 0 to indicate number sequentially)
	DastardOutput     LanceroDastardOutputJSON
}

LanceroSourceConfig holds the arguments needed to call LanceroSource.Configure by RPC. For now, we'll make the FiberMask equal for all cards. That need not be permanent, but I do think ClockMhz is necessarily the same for all cards.

type Map

type Map struct {
	Spacing  int
	Pixels   []Pixel
	Filename string
}

Map represents an entire array of pixel locations

type MapServer

type MapServer struct {
	Map *Map
	// contains filtered or unexported fields
}

MapServer is the RPC service that loads and broadcasts TES maps

func (*MapServer) Load

func (ms *MapServer) Load(filename *string, reply *bool) error

Load reads a map file and broadcasts it to clients

func (*MapServer) Unload

func (ms *MapServer) Unload(zero *int, reply *bool) error

Unload forgets the current map file

type Mix

type Mix struct {
	// contains filtered or unexported fields
}

Mix performs the mix for Lancero data and retards the raw data stream by one sample so it can be mixed with the appropriate error sample. This corrects for a poor choice in the TDM firmware design, but so it goes.

fb_physical[n] refers to the feedback signal applied during tick [n]; err_physical[n] refers to the error signal measured during tick [n].

Unfortunately, the data stream pairs them up differently: fb_data[n] = fb_physical[n+1] err_data[n] = err_physical[n]

At frame [n] we get data for the error measured during frame [n] and the feedback computed based on it, which is the feedback that will be _applied_ during frame [n+1].

We want mix[n] = fb_physical[n] + errorScale * err_physical[n], so mix[n] = fb_data[n-1] + errorScale * err_data[n], or mix[n+1] = fb_data[n] + errorScale * err_data[n+1]

Second issue: the error signal we work with is a sum of NSAMP samples from the ADC, but autotune's values assume that we work with the _mean_ (because it lets autotune communicate an NSAMP-agnostic value). So we store NOT the auto- tune value but the value that actually multiplies the error sum.
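The retard-and-mix recipe above can be sketched in a few lines. Here `mixRetardFb` is a simplified stand-in for Mix.MixRetardFb, assuming errorScale already includes the 1/NSAMP factor and omitting the clamping to the RawType range that real code would need:

```go
package main

import "fmt"

type RawType uint16

// mixRetardFb retards the feedback stream by one sample and mixes in the
// (signed) error: mix[n] = fb_data[n-1] + errorScale*err_data[n].
// lastFb carries state between consecutive calls on consecutive data;
// the returned value is the lastFb for the next call.
func mixRetardFb(fbs, errs []RawType, lastFb RawType, errorScale float64) RawType {
	for n := range fbs {
		prev := lastFb
		lastFb = fbs[n]
		// Error samples are signed, so reinterpret the raw value as int16.
		mixed := float64(prev) + errorScale*float64(int16(errs[n]))
		fbs[n] = RawType(mixed) // real code must clamp to [0, 65535]
	}
	return lastFb
}

func main() {
	fbs := []RawType{100, 200, 300}
	errs := []RawType{4, 0xFFFC /* -4 as int16 */, 8}
	mixRetardFb(fbs, errs, 90, 0.5)
	fmt.Println(fbs) // [92 98 204]: each mixes the previous fb with the current err
}
```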

func (*Mix) MixRetardFb

func (m *Mix) MixRetardFb(fbs *[]RawType, errs *[]RawType)

MixRetardFb mixes errs into fbs, altering fbs in place to contain the mixed values; consecutive calls must be on consecutive data. The following ASSUMES that error signals are signed. That holds for Lancero TDM systems, at least, and that is the only source that uses Mix.

type MixFractionObject

type MixFractionObject struct {
	ChannelIndices []int
	MixFractions   []float64
}

MixFractionObject is the RPC-usable structure for ConfigureMixFraction

type NextTriggerIndResult added in v0.2.12

type NextTriggerIndResult struct {
	// contains filtered or unexported fields
}

NextTriggerIndResult is the result of one search for an EMT. It says whether one was found and (if so) at what index.

type PacketProducer added in v0.2.10

type PacketProducer interface {
	ReadAllPackets() ([]*packets.Packet, error)
	// contains filtered or unexported methods
}

PacketProducer is the interface for data sources that produce packets. Implementations wrap ring buffers and UDP servers.

type PhaseUnwrapper

type PhaseUnwrapper struct {
	// contains filtered or unexported fields
}

PhaseUnwrapper makes phase values continuous by adding integer multiples of 2π to the phase as needed. It also optionally inverts channels' data.

func NewPhaseUnwrapper

func NewPhaseUnwrapper(fractionBits, lowBitsToDrop uint, enable bool, biasLevel, resetAfter, pulseSign int,
	invertData bool) *PhaseUnwrapper

NewPhaseUnwrapper creates a new PhaseUnwrapper object

func (*PhaseUnwrapper) UnwrapInPlace

func (u *PhaseUnwrapper) UnwrapInPlace(data *[]RawType)

UnwrapInPlace unwraps the phase data in place.
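The core of phase unwrapping on signed fixed-point data: whenever consecutive samples jump by more than half the wrap span (one full 2π in fixed-point counts), shift all subsequent samples by whole spans. This sketch omits the low-bit dropping, bias level, pulse sign, and reset-after logic that PhaseUnwrapper also handles:

```go
package main

import "fmt"

// unwrap makes a wrapped phase series continuous: span is the number of
// counts corresponding to 2π, and jumps exceeding span/2 are treated as wraps.
func unwrap(data []int, span int) {
	offset := 0
	for i := 1; i < len(data); i++ {
		raw := data[i]
		diff := raw + offset - data[i-1]
		if diff > span/2 {
			offset -= span
		} else if diff < -span/2 {
			offset += span
		}
		data[i] = raw + offset
	}
}

func main() {
	// A rising phase ramp that wraps at span=256: -126 is 130 wrapped into [-128,128).
	d := []int{100, 120, -126, -106}
	unwrap(d, 256)
	fmt.Println(d) // [100 120 130 150]
}
```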

type Pixel

type Pixel struct {
	X, Y int
	Name string
}

Pixel represents the physical location of a TES

type Portnumbers

type Portnumbers struct {
	RPC            int
	Status         int
	Trigs          int
	SecondaryTrigs int
	Summaries      int
}

A Portnumbers struct contains all TCP port numbers used by Dastard.

var Ports Portnumbers

Ports globally holds all TCP port numbers used by Dastard.

type ProjectorsBasisObject

type ProjectorsBasisObject struct {
	ChannelIndex     int
	ProjectorsBase64 string
	BasisBase64      string
	ModelDescription string
}

ProjectorsBasisObject is the RPC-usable structure for ConfigureProjectorsBases

type RawType

type RawType uint16

RawType holds raw signal data.

type RecordSlice

type RecordSlice []*DataRecord

RecordSlice attaches the methods of sort.Interface to slices of *DataRecords, sorting in increasing order.

func (RecordSlice) Len

func (p RecordSlice) Len() int

func (RecordSlice) Less

func (p RecordSlice) Less(i, j int) bool

func (RecordSlice) Swap

func (p RecordSlice) Swap(i, j int)

type RecordSpec added in v0.2.12

type RecordSpec struct {
	// contains filtered or unexported fields
}

RecordSpec is the specification for a single variable-length record.

type RoachDevice

type RoachDevice struct {
	// contains filtered or unexported fields
}

RoachDevice represents a single ROACH device producing data by UDP packets

func NewRoachDevice

func NewRoachDevice(host string, rate float64) (dev *RoachDevice, err error)

NewRoachDevice creates a new RoachDevice.

type RoachSource

type RoachSource struct {
	Ndevices int

	AnySource
	// contains filtered or unexported fields
}

RoachSource represents multiple ROACH devices

func NewRoachSource

func NewRoachSource() (*RoachSource, error)

NewRoachSource creates a new RoachSource.

func (*RoachSource) Configure

func (rs *RoachSource) Configure(config *RoachSourceConfig) (err error)

Configure sets up the internal buffers with given size, speed, and min/max.

func (*RoachSource) Delete

func (rs *RoachSource) Delete()

Delete closes all active RoachDevices.

func (*RoachSource) PrepareChannels added in v0.2.10

func (rs *RoachSource) PrepareChannels() error

PrepareChannels configures a RoachSource by initializing all data structures that have to do with channels and their naming/numbering.

func (*RoachSource) Sample

func (rs *RoachSource) Sample() error

Sample determines key data facts by sampling some initial data.

func (*RoachSource) StartRun

func (rs *RoachSource) StartRun() error

StartRun tells the hardware to switch into data streaming mode. For ROACH µMUX systems, streaming is always happening; what we do have to do is start one goroutine per UDP source to wait on the data and package it properly.

type RoachSourceConfig

type RoachSourceConfig struct {
	HostPort []string
	Rates    []float64
	AbacoUnwrapOptions
}

RoachSourceConfig holds the arguments needed to call RoachSource.Configure by RPC.

type RowColCode

type RowColCode uint64

RowColCode holds an 8-byte summary of the row-column geometry

type ServerStatus

type ServerStatus struct {
	Running                bool
	SourceName             string
	Nchannels              int
	Nsamples               int
	Npresamp               int
	SamplePeriod           time.Duration // time per sample
	ChanGroups             []GroupIndex  // the channel groups
	ChannelsWithProjectors []int         // move this to something that reports mix also? and experimentStateLabel

}

ServerStatus is the status that SourceControl reports to clients.

type SimPulseSource

type SimPulseSource struct {
	AnySource
	// contains filtered or unexported fields
}

SimPulseSource simulates simple pulsed sources

func NewSimPulseSource

func NewSimPulseSource() *SimPulseSource

NewSimPulseSource creates a new SimPulseSource.

func (*SimPulseSource) Configure

func (sps *SimPulseSource) Configure(config *SimPulseSourceConfig) error

Configure sets up the internal buffers with given size, speed, and pedestal and amplitude.

func (*SimPulseSource) Sample

func (sps *SimPulseSource) Sample() error

Sample determines key data facts by sampling some initial data. It's a no-op for simulated (software) sources

func (*SimPulseSource) StartRun

func (sps *SimPulseSource) StartRun() error

StartRun launches the repeated loop that generates simulated pulse data.

type SimPulseSourceConfig

type SimPulseSourceConfig struct {
	Nchan      int
	SampleRate float64
	Pedestal   float64
	Amplitudes []float64
	Nsamp      int
}

SimPulseSourceConfig holds the arguments needed to call SimPulseSource.Configure by RPC

type SizeObject

type SizeObject struct {
	Nsamp int
	Npre  int
}

SizeObject is the RPC-usable structure for ConfigurePulseLengths to change pulse record sizes.

type SourceControl

type SourceControl struct {
	ActiveSource DataSource
	// contains filtered or unexported fields
}

SourceControl is the sub-server that handles configuration and operation of the Dastard data sources. TODO: consider renaming -> DastardControl (5/11/18)

func NewSourceControl

func NewSourceControl() *SourceControl

NewSourceControl creates a new SourceControl object with correctly initialized contents.

func (*SourceControl) AddGroupTriggerCoupling added in v0.2.12

func (s *SourceControl) AddGroupTriggerCoupling(gts GroupTriggerState, reply *bool) error

AddGroupTriggerCoupling adds all the trigger couplings listed in `gts`

func (*SourceControl) ConfigureAbacoSource

func (s *SourceControl) ConfigureAbacoSource(args *AbacoSourceConfig, reply *bool) error

ConfigureAbacoSource configures the Abaco cards.

func (*SourceControl) ConfigureLanceroSource

func (s *SourceControl) ConfigureLanceroSource(args *LanceroSourceConfig, reply *bool) error

ConfigureLanceroSource configures the lancero cards.

func (*SourceControl) ConfigureMixFraction

func (s *SourceControl) ConfigureMixFraction(mfo *MixFractionObject, reply *bool) error

ConfigureMixFraction sets the MixFraction for the channels listed in ChannelIndices: mix = fb + mixFraction*err/Nsamp. The MixFractionObject contains mix fractions as reported by autotune, where error/Nsamp is used. Thus, we will internally store not MixFraction but errorScale := MixFraction/Nsamp. Supported by LanceroSource only.

This does not need to be sent on the queuedRequests channel, because internally LanceroSource.ConfigureMixFraction will queue these requests. The reason is that queuedRequests keeps RPC requests separate from the data-*processing* step, but changes to the mix settings need to be kept separate from LanceroSource.distributeData, which is part of the data-*production* step, not the data-processing step.

func (*SourceControl) ConfigureProjectorsBasis

func (s *SourceControl) ConfigureProjectorsBasis(pbo *ProjectorsBasisObject, reply *bool) error

ConfigureProjectorsBasis takes ProjectorsBase64, which must be a base64-encoded string with binary data matching that from mat.Dense.MarshalBinary.

func (*SourceControl) ConfigurePulseLengths

func (s *SourceControl) ConfigurePulseLengths(sizes SizeObject, reply *bool) error

ConfigurePulseLengths is the RPC-callable service to change pulse record sizes.

func (*SourceControl) ConfigureRoachSource

func (s *SourceControl) ConfigureRoachSource(args *RoachSourceConfig, reply *bool) error

ConfigureRoachSource configures the ROACH devices.

func (*SourceControl) ConfigureSimPulseSource

func (s *SourceControl) ConfigureSimPulseSource(args *SimPulseSourceConfig, reply *bool) error

ConfigureSimPulseSource configures the source of simulated pulses.

func (*SourceControl) ConfigureTriangleSource

func (s *SourceControl) ConfigureTriangleSource(args *TriangleSourceConfig, reply *bool) error

ConfigureTriangleSource configures the source of simulated pulses.

func (*SourceControl) ConfigureTriggers

func (s *SourceControl) ConfigureTriggers(state *FullTriggerState, reply *bool) error

ConfigureTriggers configures the trigger state for 1 or more channels.

func (*SourceControl) CoupleErrToFB

func (s *SourceControl) CoupleErrToFB(couple *bool, reply *bool) error

CoupleErrToFB turns on or off coupling of Error -> FB

func (*SourceControl) CoupleFBToErr

func (s *SourceControl) CoupleFBToErr(couple *bool, reply *bool) error

CoupleFBToErr turns on or off coupling of FB -> Error

func (*SourceControl) DeleteGroupTriggerCoupling added in v0.2.12

func (s *SourceControl) DeleteGroupTriggerCoupling(gts *GroupTriggerState, reply *bool) error

DeleteGroupTriggerCoupling removes all the trigger couplings listed in `gts`

func (*SourceControl) Multiply

func (s *SourceControl) Multiply(args *FactorArgs, reply *int) error

Multiply is a silly RPC service that multiplies its two arguments (for testing!).

func (*SourceControl) ReadComment

func (s *SourceControl) ReadComment(zero *int, reply *string) error

ReadComment reads the contents of comment.txt if it exists; otherwise it returns an error.

func (*SourceControl) SendAllStatus

func (s *SourceControl) SendAllStatus(dummy *string, reply *bool) error

SendAllStatus causes a broadcast to clients containing all broadcastable status info

func (*SourceControl) SetExperimentStateLabel

func (s *SourceControl) SetExperimentStateLabel(config *StateLabelConfig, reply *bool) error

SetExperimentStateLabel sets the experiment state label in the _experiment_state file. The timestamp is fixed as soon as the RPC command is received.

func (*SourceControl) Start

func (s *SourceControl) Start(sourceName *string, reply *bool) error

Start will identify the source given by sourceName, then Sample and Start it.

func (*SourceControl) Stop

func (s *SourceControl) Stop(dummy *string, reply *bool) error

Stop stops the running data source, if any

func (*SourceControl) StopTriggerCoupling added in v0.2.12

func (s *SourceControl) StopTriggerCoupling(dummy *bool, reply *bool) error

StopTriggerCoupling turns off all trigger coupling

func (*SourceControl) StoreRawDataBlock added in v0.3.0

func (s *SourceControl) StoreRawDataBlock(N int, reply *string) error

StoreRawDataBlock causes a block of raw data to be stored in a temporary file.

func (*SourceControl) WaitForStopTestingOnly

func (s *SourceControl) WaitForStopTestingOnly(dummy *string, reply *bool) error

WaitForStopTestingOnly will block until the running data source finishes, which sets s.isSourceActive to false.

func (*SourceControl) WriteComment

func (s *SourceControl) WriteComment(comment *string, reply *bool) error

WriteComment writes the comment to comment.txt

func (*SourceControl) WriteControl

func (s *SourceControl) WriteControl(config *WriteControlConfig, reply *bool) error

WriteControl requests start/stop/pause/unpause data writing

type SourceState

type SourceState int

SourceState is used to indicate the active/inactive/transition state of data sources

const (
	Inactive SourceState = iota // Source is not active
	Starting                    // Source is in transition to Active state
	Active                      // Source is actively acquiring data
	Stopping                    // Source is in transition to Inactive state
)

Names for the possible values of SourceState

type StateLabelConfig

type StateLabelConfig struct {
	Label        string
	WaitForError bool // False (the default) will return ASAP and panic if there is an error

}

StateLabelConfig is the argument type of SetExperimentStateLabel

type TriangleSource

type TriangleSource struct {
	AnySource
	// contains filtered or unexported fields
}

TriangleSource is a DataSource that synthesizes triangle waves.

func NewTriangleSource

func NewTriangleSource() *TriangleSource

NewTriangleSource creates a new TriangleSource.

func (*TriangleSource) Configure

func (ts *TriangleSource) Configure(config *TriangleSourceConfig) error

Configure sets up the internal buffers with given size, speed, and min/max.

func (*TriangleSource) Sample

func (ts *TriangleSource) Sample() error

Sample determines key data facts by sampling some initial data. It's a no-op for simulated (software) sources

func (*TriangleSource) StartRun

func (ts *TriangleSource) StartRun() error

StartRun launches the repeated loop that generates Triangle data.

type TriangleSourceConfig

type TriangleSourceConfig struct {
	Nchan      int
	SampleRate float64
	Min, Max   RawType
}

TriangleSourceConfig holds the arguments needed to call TriangleSource.Configure by RPC

type TriggerBroker

type TriggerBroker struct {
	// contains filtered or unexported fields
}

TriggerBroker communicates with DataChannel objects to allow them to operate independently yet still share group triggering information.

func NewTriggerBroker

func NewTriggerBroker(nchan int) *TriggerBroker

NewTriggerBroker creates a new TriggerBroker object for nchan channels to share group triggers.

func (*TriggerBroker) AddConnection

func (broker *TriggerBroker) AddConnection(source, receiver int) error

AddConnection connects source -> receiver for group triggers. It is safe to add connections that already exist.

func (*TriggerBroker) DeleteConnection

func (broker *TriggerBroker) DeleteConnection(source, receiver int) error

DeleteConnection disconnects source -> receiver for group triggers. It is safe to delete connections whether they exist or not.

func (*TriggerBroker) Distribute added in v0.2.12

func (broker *TriggerBroker) Distribute(primaries map[int]triggerList) (map[int][]FrameIndex, error)

Distribute runs one pass of brokering trigger frame numbers from sources to receivers, given the map of primary triggers as a map[int]triggerList.

func (*TriggerBroker) GenerateTriggerMessages added in v0.2.12

func (broker *TriggerBroker) GenerateTriggerMessages()

GenerateTriggerMessages makes one or more trigger-rate messages. It combines all channels' trigger-rate info into a single message and sends that message on `clientMessageChan`. There might be more than one count stored in triggerCounters[].messages, so this might generate multiple messages.

func (*TriggerBroker) SourcesForReceiver added in v0.2.12

func (broker *TriggerBroker) SourcesForReceiver(receiver int) map[int]bool

SourcesForReceiver returns a set of all sources for the given receiver.

func (*TriggerBroker) StopTriggerCoupling added in v0.2.12

func (broker *TriggerBroker) StopTriggerCoupling() error

StopTriggerCoupling ends all trigger coupling: both group triggering and TDM-style FB-Err coupling.

type TriggerCounter

type TriggerCounter struct {
	// contains filtered or unexported fields
}

TriggerCounter is a per-channel struct that counts triggers over an interval of FrameIndex values and stores a slice of messages about the count. It does not send these messages anywhere; that's the job of the TriggerBroker. It takes advantage of the fact that the TriggerBroker provides a synchronization point, so several TriggerCounters can count triggers for all channels in sync. Triggers are counted between the FrameIndex values [lo, hi] to learn the trigger rate.

func NewTriggerCounter

func NewTriggerCounter(channelIndex int, stepDuration time.Duration) TriggerCounter

NewTriggerCounter returns a TriggerCounter

type TriggerRateMessage

type TriggerRateMessage struct {
	HiTime     time.Time
	Duration   time.Duration
	CountsSeen []int
}

TriggerRateMessage is used to publish trigger rate info over zmq

type TriggerState

type TriggerState struct {
	AutoTrigger   bool // Whether to have automatic (timed) triggers
	AutoDelay     time.Duration
	AutoVetoRange RawType // Veto any auto triggers when (max-min) exceeds this value (if it's >0)

	LevelTrigger bool // Whether to trigger records when the level exceeds some value
	LevelRising  bool
	LevelLevel   RawType

	EdgeTrigger bool // Whether to trigger records when the "local derivative" exceeds some value
	EdgeRising  bool
	EdgeFalling bool
	EdgeLevel   int32
	EdgeMulti   bool // enable EdgeMulti (actually used in triggering)

	EMTBackwardCompatibleRPCFields // used to allow the old RPC messages to still work
	EMTState
}

TriggerState contains all the state that controls trigger logic

type WriteControlConfig

type WriteControlConfig struct {
	Request         string // "Start", "Stop", "Pause", or "Unpause", or "Unpause label"
	Path            string // write in a new directory under this path
	WriteLJH22      bool   // turn on one or more file formats
	WriteOFF        bool
	WriteLJH3       bool
	MapInternalOnly *Map // for dastard internal use only, used to pass map info to DataStreamProcessors
}

WriteControlConfig is an object to control start/stop/pause of data writing. Path and the file-format flags (WriteLJH22, WriteOFF, WriteLJH3) are ignored for any Request other than "Start".

type WritingState

type WritingState struct {
	Active          bool
	Paused          bool
	BasePath        string
	FilenamePattern string

	ExperimentStateFilename      string
	ExperimentStateLabel         string
	ExperimentStateLabelUnixNano int64
	ExternalTriggerFilename      string

	DataDropFilename string

	sync.Mutex
	// contains filtered or unexported fields
}

WritingState monitors the state of file writing.

func (*WritingState) ComputeState

func (ws *WritingState) ComputeState() *WritingState

ComputeState will return a property-by-property copy of the WritingState. It will not copy the "active" features like open files, tickers, etc.

func (*WritingState) IsActive

func (ws *WritingState) IsActive() bool

IsActive will return ws.Active, with proper locking

func (*WritingState) SetExperimentStateLabel

func (ws *WritingState) SetExperimentStateLabel(timestamp time.Time, stateLabel string) error

SetExperimentStateLabel writes to a file with a name like XXX_experiment_state.txt. The file is created upon the first call to this function for a given file writing. This exported version locks the WritingState object.

func (*WritingState) Start

func (ws *WritingState) Start(filenamePattern, path string) error

Start will set the WritingState to begin writing

func (*WritingState) Stop

func (ws *WritingState) Stop() error

Stop will set the WritingState to be completely stopped

Directories

Path Synopsis
cmd
Package lancero provides an interface to all Lancero scatter-gather DMA character devices: read/write from/to registers of SOPC slaves, wait for SOPC component interrupt events, and handle the cyclic mode of SGDMA.
Package ljh provides classes that read or write from the LJH x-ray pulse data file format.
Package off provides classes that write OFF files. OFF files store TES pulses projected into a linear basis. An OFF file has a JSON header followed by a single newline; after the header, records are written sequentially in little-endian format:

bytes   type     meaning
0-3     int32    recordSamples (could be calculated from nearest-neighbor pulses in principle)
4-7     int32    recordPreSamples (could be calculated from nearest-neighbor pulses in principle)
8-15    int64    framecount
16-23   int64    timestamp from time.Time.UnixNano()
24-27   float32  pretriggerMean (from raw data, not from the modeled pulse; really shouldn't be necessary, just in case for now!)
28-31   float32  residualStdDev (in raw data space, not Mahalanobis distance)
32-Z    float32  the NumberOfBases model coefficients of the pulse projected into the model; Z = 31+4*NumberOfBases
