Documentation ¶
Overview ¶
Defines all paths/file names used in S3 for storage of our data. This gives a more centralised view of where our data lives in S3 and makes changing storage locations/paths easier
Example (MakeQuantSummaryFileName) ¶
Check the summary filename gets created correctly
fn := MakeQuantSummaryFileName("myquantid")
fmt.Printf("%v", fn)
Output: summary-myquantid.json
Index ¶
- Constants
- func GetAnnotationsPath(userID string, datasetID string) string
- func GetCollectionPath(userID string, datasetID string, id string) string
- func GetConfigFilePath(fileName string) string
- func GetCustomImagePath(datasetID string, imgType string, fileName string) string
- func GetCustomMetaPath(datasetID string) string
- func GetDatasetListPath() string
- func GetDatasetUploadPath(datasetID string, fileName string) string
- func GetDatasetsAuthPath() string
- func GetDetectorConfigFilePath(configName string) string
- func GetDetectorConfigPath(configName string, version string, fileName string) string
- func GetElementSetPath(userID string) string
- func GetImageCacheFilePath(imagePath string) string
- func GetImageFilePath(imagePath string) string
- func GetJobDataPath(datasetID string, jobID string, fileName string) string
- func GetJobStatusPath(datasetID string, jobID string) string
- func GetJobSummaryPath(datasetID string) string
- func GetMultiQuantZStackPath(userID string, datasetID string) string
- func GetPublicObjectsPath() string
- func GetQuantPath(userId string, scanId string, fileName string) string
- func GetRGBMixPath(userID string) string
- func GetROIPath(userID string, datasetID string) string
- func GetScanFilePath(scanID string, fileName string) string
- func GetTagPath(userID string) string
- func GetUserContentDatasetPath(userID string, datasetID string, fileName string) string
- func GetUserContentPath(userID string, fileName string) string
- func GetUserLastPiquantOutputPath(userID string, datasetID string, piquantCommand string, fileName string) string
- func GetViewStatePath(userID string, datasetID string, fileName string) string
- func GetWorkspacePath(userID string, datasetID string, id string) string
- func MakeQuantCSVFileName(quantID string) string
- func MakeQuantDataFileName(quantID string) string
- func MakeQuantLogDirName(quantID string) string
- func MakeQuantSummaryFileName(quantID string) string
Examples ¶
Constants ¶
const Auth0PemFileName = "auth0.pem"
Auth0 PEM file which API uses to verify JWTs
const BadDatasetIDsFile = "bad-dataset-ids.json"
Contains a list of dataset IDs to ignore when generating dataset tiles. This file is hand-maintained and only used when a bad dataset has been downloaded that will never be usable. This way we can prevent it from being written to <Config bucket>/PixliseConfig/datasets.json. It's just a work-around for the OCS fetcher putting files there without our control - if a bad dataset download happens, there's no point in showing a broken tile for it forever!
const CSVFileSuffix = ".csv"
CSVFileSuffix - CSV files are <jobid>.csv
const DatasetCustomMetaFileName = "custom-meta.json"
File name for dataset custom meta file containing the title and other settings
const DatasetCustomRoot = "dataset-addons"
Root directory for all dataset add-ons. These are custom files that can be uploaded for a dataset to set its title, which the "default" image is, etc.
- dataset-addons/
- ----<dataset-id>/
- --------custom-meta.json - Custom metadata for this dataset, usually to set dataset title, but can also contain matched image scale/bias or other fields
- --------UNALIGNED/
- ------------image, *.png or *.jpg
- --------MATCHED/
- ------------image, *.png, *.jpg or *.tif (if TIF it's considered an RGBU multi-spectral image)
- --------RGBU/
- ------------images, *.tif - NOTE: Went unused, these are now all stored as MATCHED images
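Given the layout above, GetCustomImagePath presumably composes these segments into an S3 key. A minimal sketch, assuming path.Join-style composition (the lowercase helper name customImagePath is hypothetical, not the package's actual implementation):

```go
package main

import (
	"fmt"
	"path"
)

// customImagePath is an illustrative sketch only - NOT the package's actual
// GetCustomImagePath. It composes the layout shown above:
// dataset-addons/<dataset-id>/<UNALIGNED|MATCHED|RGBU>/<file-name>
func customImagePath(datasetID string, imgType string, fileName string) string {
	return path.Join("dataset-addons", datasetID, imgType, fileName)
}

func main() {
	fmt.Println(customImagePath("dataset-123", "MATCHED", "context.png"))
	// dataset-addons/dataset-123/MATCHED/context.png
}
```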
const DatasetFileName = "dataset.bin"
The dataset file containing all spectra, housekeeping, beam locations, etc. Created by data-converter
const DatasetImageCacheRoot = "Image-Cache"
const DatasetImagesRoot = "Images"
Paths for v4 API:
const DatasetScansRoot = "Scans"
const DatasetUploadRoot = "UploadedDatasets"
Root directory to store uploaded dataset "raw" artifacts. These are then read by dataset importer to create a dataset in the dataset bucket
- UploadedDatasets/
- ----<dataset-id>/
- --------Files for that dataset importer type. For example, with breadboards we expect:
- --------import.json <-- Describes what's what
- --------spectra.zip <-- Spectra .msa files zipped up
- --------context_image_1.jpg <-- 1 or more context images
const DiffractionDBFileName = "diffraction-db.bin"
Diffraction peak database, generated by diffraction-detector when dataset is imported
const DiffractionPeakManualFileName = "manual-diffraction-peaks.json"
Name of user manually entered diffraction peaks file. NOTE: this file only exists as a shared file!
const DiffractionPeakStatusFileName = "diffraction-peak-statuses.json"
Name of the file containing statuses of diffraction peaks. The diffraction DB is generated when the dataset is created, but users can view a peak and mark it with a status, eg to delete it because it's not a real diffraction peak. NOTE: this file only exists as a shared file!
const JobStatusSuffix = "-status.json"
Job status file name suffix. Appended to the job ID
const JobSummarySuffix = "-jobs.json"
Job summary file name suffix. Appended to dataset ID
const MultiQuantZStackFile = "multi-quant-z-stack.json"
Multi-quant z-stack file name
const PiquantConfigFileName = "config.json"
File name of "overall" piquant config file, which references the individual files PIQUANT will need
const PiquantConfigSubDir = "PiquantConfigs"
Piquant configs sub-dir
- NOTE: Quant creation doesn't use GetDetectorConfigPath, maybe DetectorConfig is hard-coded into the docker container! TODO: remove that
const PiquantDownloadPath = "piquant"
PIQUANT binaries root path - this kind of went unused and is likely not working because our build process doesn't write to the bucket any more
- piquant/
- ----piquant-linux-*.zip - Built PIQUANT executables (zipped)
const PiquantLogSubdir = "piquant-logs"
Piquant logs sub-directory
const PiquantVersionFileName = "piquant-version.json"
Config contains the docker container to use for PIQUANT. Separate from config.json because users can configure this in UI
const QuantFileSuffix = ".bin"
QuantFileSuffix - quant files are <jobid>.bin
const QuantLastOutputFileName = "output_data"
File name of last piquant output (used with fit command). Extension added as needed
const QuantLastOutputLogName = "output.log"
File name of last piquant output log file (used with fit command)
const QuantLogsSubDirSuffix = "-logs"
QuantLogsSubDirSuffix - this goes after job ID to form a directory name that stores the quant logs
const QuantSummaryFilePrefix = "summary-"
QuantSummaryFilePrefix - summary files are summary-<jobid>.json
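The Make* helpers listed in the index follow the naming scheme documented by these constants. A sketch of how such helpers might look (the lowercase names are illustrative re-implementations of the documented scheme, not the package source):

```go
package main

import "fmt"

// Illustrative re-implementations of the quant file-name helpers, built from
// the documented constants: summary-<id>.json, <id>.bin, <id>.csv, <id>-logs.
const (
	quantSummaryFilePrefix = "summary-"
	quantFileSuffix        = ".bin"
	csvFileSuffix          = ".csv"
	quantLogsSubDirSuffix  = "-logs"
)

func makeQuantSummaryFileName(quantID string) string { return quantSummaryFilePrefix + quantID + ".json" }
func makeQuantDataFileName(quantID string) string    { return quantID + quantFileSuffix }
func makeQuantCSVFileName(quantID string) string     { return quantID + csvFileSuffix }
func makeQuantLogDirName(quantID string) string      { return quantID + quantLogsSubDirSuffix }

func main() {
	fmt.Println(makeQuantSummaryFileName("myquantid"))
	// summary-myquantid.json
}
```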
const RootArchive = "Archive"
Root directory containing all archived data set zips as we downloaded them
- Archive/
const RootDatasetConfig = "DatasetConfig"
Root directory containing all dataset configs
- DatasetConfig/
- ----import-times.json - Specifies when each dataset was imported (map id->unix time)
const RootDetectorConfig = "DetectorConfig"
Root directory containing all detector configs
- DetectorConfig/
- ----<config-name>/ - Name shown on UI, eg PIXL or Breadboard
- --------pixlise-config.json - UI config values for this detector, eg detector energy range, window material, etc
- --------PiquantConfigs/
- ------------<version>/ - eg v1, v2, v3
- ----------------config.json - The PIQUANT config file, used by quant "runner", in docker container. References other files
- ----------------<other files>.msa or .csv - These are referenced by config.json and passed to PIQUANT exe as parameters
const RootJobData = "JobData"
This contains temporary files generated when running a long-running job (eg PIQUANT): parameters to the job, status files, log files from the job, and intermediate calculation files. These are in separate directories to aid listing - instead of returning hundreds of files per job, you may only want a list of job statuses, where you'd get just one file per job
- JobData/
- ----<dataset-id>/
- --------<job-id>/
- ------------node*.pmcs - PMC list for a given node running the job
- ------------params.json - Job parameters as specified when created
- ------------output/
- ----------------node*.pmcs_result.csv - CSV generated by a single node, intermediate output
- ----------------combined.csv - The final output generated by combining all the node*.pmcs_result.csv files
- ------------piquant-logs/
- ----------------node*.pmcs_piquant.log - PIQUANT log file for a given node
- ----------------node*.pmcs_stdout.log - stdout for running PIQUANT on a given node
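GetJobDataPath is documented below as omitting blank job IDs and file names from the path. A sketch of that behaviour against the layout above (the lowercase helper name jobDataPath is hypothetical, not the package's implementation):

```go
package main

import (
	"fmt"
	"path"
)

// jobDataPath sketches the documented behaviour of GetJobDataPath:
// blank job ID and/or file name are simply omitted from the path.
// Illustrative only - not the package's actual implementation.
func jobDataPath(datasetID string, jobID string, fileName string) string {
	p := path.Join("JobData", datasetID)
	if jobID != "" {
		p = path.Join(p, jobID)
	}
	if fileName != "" {
		p = path.Join(p, fileName)
	}
	return p
}

func main() {
	fmt.Println(jobDataPath("dataset-123", "job-456", "params.json"))
	// JobData/dataset-123/job-456/params.json
	fmt.Println(jobDataPath("dataset-123", "", ""))
	// JobData/dataset-123
}
```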
const RootJobStatus = "JobStatus"
Root directory for all job statuses. These are stored separately to JobData so we can easily list all jobs and query their statuses
- JobStatus/
- ----<dataset-id>/<job-id>-status.json
const RootJobSummaries = "JobSummaries"
Root directory for all job summaries. This is stored separately to JobData so we can easily list all jobs and get their metadata (summary) files
- JobSummaries/
- ----<dataset-id>-jobs.json - Summary files describing all jobs for a dataset
const RootPixliseConfigPath = "PixliseConfig"
Root directory of PIXLISE-specific config files
- PixliseConfig/
- ----auth0.pem - Certificate needed by Auth0 to verify a user request is valid
- ----datasets.json - Dataset list (tiles)
- ----piquant-version.json - Docker container for running PIQUANT
- ----bad-dataset-ids.json - Contains a list of Dataset IDs to ignore when generating dataset tiles
const RootQuantificationPath = "Quantifications"
const RootUserActivity = "Activity"
Root directory containing all user activity stored, to track clicks and user flows for research purposes
- Activity/
- ----<datestamp>/<GUID>.json - User activity files (things captured by middleware logger)
const RootUserContent = "UserContent"
Root directory for bucket containing all user-created content
- UserContent/
- ----<user-id>/
- --------ElementSets.json - User created element sets
- --------DataExpressions.json - User created expressions
- --------RGBMixes.json - User created RGB mixes
- --------<dataset-id>/
- ------------ROI.json - User created ROIs
- ------------Tags.json - Dataset tags
- ------------SpectrumAnnotation.json
- ------------multi-quant-z-stack.json - The current z-stack on multi-quant panel
- ------------Quantifications/
- ----------------<quant-id>.bin - The combined.csv file converted to protobuf binary format by quant-converter
- ----------------<quant-id>.csv - Copied from Job Bucket/JobData/<job-id>/output/combined.csv
- ----------------<quant-id>-logs/ - Copied from Job Bucket/JobData/<job-id>/piquant-logs/
- ----------------summary-<quant-id>.json - Quant summary file
- ------------LastPiquantOutput/ - Last output of fit command
- ------------ViewState/ - User view states for each dataset. This stores UI info about how the view was configured
- ----------------quantification.json - The quantification loaded on UI top toolbar
- ----------------roi.json - Colours assigned to ROIs on the UI
- ----------------selection.json - The users current selection of PMCs and/or pixels on UI
- ----------------analysisLayout.json - What widgets go where, top row/bottom row
- ----------------<panel-type>-<location>.json - States of various UI panels and where they are. See: GetViewStatePath()
- ----------------Workspaces/
- --------------------<workspace-name>.json - View state files (like the ones up one directory) flattened to a single file and given a workspace name. Note the file also contains the workspace name; the file name may have been modified for saving, eg removal of /. See: GetWorkspacePath()
- ----------------WorkspaceCollections/
- --------------------<collection-name>.json - See: GetCollectionPath()
- ----shared/ - All shared objects go here. Kind of like if they belong to a user called "shared". NOTE: User diffraction/roughness files are shared by default!
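Per the layout above, user content sits under UserContent/<user-id>/<dataset-id>/, and shared objects behave as if they belong to a user called "shared". A sketch of that composition (the lowercase helper name userContentDatasetPath is hypothetical, not the package's implementation):

```go
package main

import (
	"fmt"
	"path"
)

// userContentDatasetPath sketches the user-content layout shown above.
// Illustrative only - not the package's actual GetUserContentDatasetPath.
func userContentDatasetPath(userID string, datasetID string, fileName string) string {
	return path.Join("UserContent", userID, datasetID, fileName)
}

func main() {
	fmt.Println(userContentDatasetPath("user-1", "dataset-123", "ROI.json"))
	// UserContent/user-1/dataset-123/ROI.json

	// Shared objects would sit under the "shared" pseudo-user:
	fmt.Println(userContentDatasetPath("shared", "dataset-123", "ROI.json"))
	// UserContent/shared/dataset-123/ROI.json
}
```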
const ViewStateCollectionsSubpath = "WorkspaceCollections"
Sub-dir containing all workspace collections. These are flat files containing all workspaces they were created from
const ViewStateSavedSubpath = "Workspaces"
Sub-dir containing all workspaces. These are saved copies of view states
Variables ¶
This section is empty.
Functions ¶
func GetAnnotationsPath ¶
Gets the spectrum annotations file path for a user and dataset
func GetCollectionPath ¶
Gets the collection file path for a user, dataset and workspace ID. Note that if id is blank, this just returns the directory. Validates IDs to make sure they are valid (because the id forms part of the file name)
func GetConfigFilePath ¶
Getting a config file path relative to the root of the bucket
func GetCustomImagePath ¶
Get the custom image path for a given dataset ID. Note imageType must be one of UNALIGNED, MATCHED or RGBU
func GetCustomMetaPath ¶
Get the custom meta file path for a given dataset ID
func GetDatasetListPath ¶
func GetDatasetListPath() string
Gets the dataset list file (which the UI shows as tiles). NOTE: This is regenerated by the lambda function dataset-tile-updater every time Datasets/ changes
func GetDatasetUploadPath ¶
Gets the path to a file in the dataset upload area, for a given dataset id and file name
func GetDatasetsAuthPath ¶
func GetDatasetsAuthPath() string
func GetDetectorConfigFilePath ¶
Get a PIXLISE detector config file path given the config name
func GetDetectorConfigPath ¶
Get a detector config path (used by PIQUANT) given the config name, version and optionally a file name. If file name is blank then the directory path above it is returned
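The directory structure this targets is shown under RootDetectorConfig above. A sketch of the documented behaviour, where a blank file name returns the version directory (the lowercase helper name detectorConfigPath is hypothetical, not the package's implementation):

```go
package main

import (
	"fmt"
	"path"
)

// detectorConfigPath sketches the documented behaviour of GetDetectorConfigPath:
// DetectorConfig/<config-name>/PiquantConfigs/<version>[/<file>], where a blank
// file name returns the version directory. Illustrative only.
func detectorConfigPath(configName string, version string, fileName string) string {
	p := path.Join("DetectorConfig", configName, "PiquantConfigs", version)
	if fileName == "" {
		return p
	}
	return path.Join(p, fileName)
}

func main() {
	fmt.Println(detectorConfigPath("PIXL", "v1", "config.json"))
	// DetectorConfig/PIXL/PiquantConfigs/v1/config.json
	fmt.Println(detectorConfigPath("PIXL", "v1", ""))
	// DetectorConfig/PIXL/PiquantConfigs/v1
}
```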
func GetElementSetPath ¶
Gets the element set file path for a user
func GetImageCacheFilePath ¶
func GetImageFilePath ¶
func GetJobDataPath ¶
Retrieves the path of a given file for dataset, job id and file name. NOTE: if job ID is blank, it's omitted from the path, and same for file name
func GetJobStatusPath ¶
Retrieves the path of a job status file for a given dataset ID and job ID. If job ID is blank, this just returns the root of all job statuses for the given dataset ID
func GetJobSummaryPath ¶
Gets the job summary path for a given dataset ID
func GetMultiQuantZStackPath ¶
Gets the multi-quant z-stack file path for a user and dataset
func GetPublicObjectsPath ¶
func GetPublicObjectsPath() string
func GetROIPath ¶
Gets the ROI file path for a user and dataset
func GetScanFilePath ¶
func GetUserContentDatasetPath ¶
Gets user content file path for a given dataset id. Also requires user id, and the file name
func GetUserContentPath ¶
Gets user content file path by user id and file name
func GetUserLastPiquantOutputPath ¶
func GetUserLastPiquantOutputPath(userID string, datasetID string, piquantCommand string, fileName string) string
Retrieves the path for the last PIQUANT outputs, eg from the last run of the fit command (the command is actually "quant"), which sit here alongside the log file
func GetViewStatePath ¶
Gets the view state file path for a user, dataset and file name. Note that if file name is blank, this just returns the directory
func GetWorkspacePath ¶
Gets the workspace file path for a user, dataset and workspace ID. Note that if id is blank, this just returns the directory. Validates IDs to make sure they are valid (because the id forms part of the file name)
func MakeQuantCSVFileName ¶
func MakeQuantDataFileName ¶
func MakeQuantLogDirName ¶
func MakeQuantSummaryFileName ¶
MakeQuantSummaryFileName - Given a quant ID, generates the file name: summary-<jobid>.json (use this for searchability/consistency)
Types ¶
This section is empty.