filepaths

package
v3.12.0
Published: Nov 22, 2023 License: Apache-2.0 Imports: 4 Imported by: 0

Documentation

Overview

Defines all paths/file names used in S3 for storage of our data. This gives a more centralised view of where our data is in S3 and makes changing storage locations/paths easier.

Index

Constants

const Auth0PemFileName = "auth0.pem"

Auth0 PEM file which API uses to verify JWTs

const BadDatasetIDsFile = "bad-dataset-ids.json"

Contains a list of Dataset IDs to ignore when generating dataset tiles. This is hand-maintained, and only used when a bad dataset that will never be usable is downloaded. This way we can prevent it from being written to <Config bucket>/PixliseConfig/datasets.json. It's just a work-around for having the OCS fetcher put files there without our control - if a bad dataset download happens, there's no point in showing a broken tile for it forever!

const CSVFileSuffix = ".csv"

CSVFileSuffix - CSV files are <jobid>.csv

const DatasetCustomMetaFileName = "custom-meta.json"

File name for dataset custom meta file containing the title and other settings

const DatasetCustomRoot = "dataset-addons"

Root directory for all dataset add-ons. These are custom files that can be uploaded for a dataset to set its title, which the "default" image is, etc. A usage sketch follows the layout below.

  • dataset-addons/
  • ----<dataset-id>/
  • --------custom-meta.json - Custom metadata for this dataset, usually to set dataset title, but can also contain matched image scale/bias or other fields
  • --------UNALIGNED/
  • ------------image, *.png or *.jpg
  • --------MATCHED/
  • ------------image, *.png, *.jpg or *.tif (if TIF it's considered an RGBU multi-spectral image)
  • --------RGBU/
  • ------------images, *.tif - NOTE: Went unused, these are now all stored as MATCHED images
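
GetCustomMetaPath and GetCustomImagePath (listed under Functions below) are the accessors for this layout. A minimal sketch of using them, assuming the directory structure above; the import path and the commented results are assumptions, not verified output:

    package main

    import (
        "fmt"

        "github.com/pixlise/core/v3/core/filepaths" // import path assumed - adjust to wherever this package lives
    )

    func main() {
        // "dataset-123" is a made-up dataset ID for illustration
        fmt.Println(filepaths.GetCustomMetaPath("dataset-123"))
        // Likely: dataset-addons/dataset-123/custom-meta.json

        fmt.Println(filepaths.GetCustomImagePath("dataset-123", "MATCHED", "context.tif"))
        // Likely: dataset-addons/dataset-123/MATCHED/context.tif
    }
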
const DatasetFileName = "dataset.bin"

The dataset file containing all spectra, housekeeping, beam locations, etc. Created by data-converter

const DatasetLastImportTimesPath = RootDatasetConfig + "/import-times.json"

Dataset last import time path, used by importer

const DatasetSummaryFileName = "summary.json"

Contains metadata for the dataset, easily downloadable/browsable. dataset-tile-updater pulls these in to form the datasets.json file

const DatasetUploadRoot = "UploadedDatasets"

Root directory to store uploaded dataset "raw" artifacts. These are then read by dataset importer to create a dataset in the dataset bucket

  • UploadedDatasets/
  • ----<dataset-id>/
  • --------Files for that dataset importer type. For example, with breadboards we expect:
  • --------import.json <-- Describes what's what
  • --------spectra.zip <-- Spectra .msa files zipped up
  • --------context_image_1.jpg <-- 1 or more context images
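
GetDatasetUploadPath (under Functions below) builds paths into this upload area. A minimal sketch, assuming the layout above; the import path and commented result are assumptions:

    package main

    import (
        "fmt"

        "github.com/pixlise/core/v3/core/filepaths" // import path assumed
    )

    func main() {
        // "dataset-123" and "import.json" are illustrative values
        fmt.Println(filepaths.GetDatasetUploadPath("dataset-123", "import.json"))
        // Likely: UploadedDatasets/dataset-123/import.json
    }
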
const DiffractionDBFileName = "diffraction-db.bin"

Diffraction peak database, generated by diffraction-detector when dataset is imported

const DiffractionPeakManualFileName = "manual-diffraction-peaks.json"

Name of user manually entered diffraction peaks file. NOTE: this file only exists as a shared file!

const DiffractionPeakStatusFileName = "diffraction-peak-statuses.json"

Name of file containing the status of diffraction peaks - the diffraction DB is generated when the dataset is created, but users can view a peak and mark it with a status, eg to delete it because it's not a real diffraction peak. NOTE: this file only exists as a shared file!

const JobStatusSuffix = "-status.json"

Job status file name suffix. Appended to the job ID

const JobSummarySuffix = "-jobs.json"

Job summary file name suffix. Appended to dataset ID

const MultiQuantZStackFile = "multi-quant-z-stack.json"

Multi-quant z-stack file name

const PiquantConfigFileName = "config.json"

File name of "overall" piquant config file, which references the individual files PIQUANT will need

const PiquantConfigSubDir = "PiquantConfigs"

Piquant configs sub-dir

  • NOTE: Quant creation doesn't use GetDetectorConfigPath, maybe DetectorConfig is hard-coded into docker container! TODO: remove that
const PiquantDownloadPath = "piquant"

PIQUANT binaries root directory - this kind of went unused and is likely not working because our build process doesn't write to the bucket any more

  • piquant/
  • ----piquant-linux-*.zip - Built PIQUANT executables (zipped)
const PiquantLogSubdir = "piquant-logs"

Piquant logs sub-directory

const PiquantVersionFileName = "piquant-version.json"

Config file specifying the docker container to use for PIQUANT. Kept separate from config.json because users can configure this in the UI

const QuantFileSuffix = ".bin"

QuantFileSuffix - quant files are <jobid>.bin

const QuantLastOutputFileName = "output_data"

File name of last piquant output (used with fit command). Extension added as needed

const QuantLastOutputLogName = "output.log"

File name of last piquant output log file (used with fit command)

const QuantLogsSubDirSuffix = "-logs"

QuantLogsSubDirSuffix - this goes after job ID to form a directory name that stores the quant logs

const QuantSummaryFilePrefix = "summary-"

QuantSummaryFilePrefix - summary files are summary-<jobid>.json

const RootArchive = "Archive"

Root directory containing all archived data set zips as we downloaded them

  • Archive/
const RootDatasetConfig = "DatasetConfig"

Root directory containing all dataset configs

  • DatasetConfig/
  • ----import-times.json - Specifies when each dataset was imported (map id->unix time)
const RootDatasetSummaries = "DatasetSummaries"

Root directory containing dataset summaries (named <dataset-id>.json). This makes it easier to list all datasets, rather than listing all files in Datasets/

  • DatasetSummaries/
const RootDatasets = "Datasets"

Data Bucket

Root directory containing our dataset files

  • Datasets/
  • ----<dataset-id>/
  • --------dataset.bin
  • --------Context image files (.png or .jpg)
  • --------RGBU multi-spectral files (.tif)
  • --------diffraction-db.bin
  • --------summary.json
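
GetDatasetFilePath and GetDatasetSummaryFilePath (under Functions below) resolve paths into these roots. A minimal sketch, assuming the Datasets/ layout above and the DatasetSummaries root; the import path and commented results are assumptions:

    package main

    import (
        "fmt"

        "github.com/pixlise/core/v3/core/filepaths" // import path assumed
    )

    func main() {
        fmt.Println(filepaths.GetDatasetFilePath("dataset-123", filepaths.DatasetFileName))
        // Likely: Datasets/dataset-123/dataset.bin

        fmt.Println(filepaths.GetDatasetSummaryFilePath("dataset-123"))
        // Likely: DatasetSummaries/dataset-123.json
    }
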
const RootDetectorConfig = "DetectorConfig"

Root directory containing all detector configs

  • DetectorConfig/
  • ----<config-name>/ - Name shown on UI, eg PIXL or Breadboard
  • --------pixlise-config.json - UI config values for this detector, eg detector energy range, window material, etc
  • --------PiquantConfigs/
  • ------------<version>/ - eg v1, v2, v3
  • ----------------config.json - The PIQUANT config file, used by quant "runner", in docker container. References other files
  • ----------------<other files>.msa or .csv - These are referenced by config.json and passed to PIQUANT exe as parameters
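
GetDetectorConfigFilePath and GetDetectorConfigPath (under Functions below) resolve paths into this tree. A minimal sketch, assuming the layout above; "PIXL" and "v1" are illustrative values, and the import path and commented results are assumptions:

    package main

    import (
        "fmt"

        "github.com/pixlise/core/v3/core/filepaths" // import path assumed
    )

    func main() {
        fmt.Println(filepaths.GetDetectorConfigFilePath("PIXL"))
        // Likely: DetectorConfig/PIXL/pixlise-config.json

        fmt.Println(filepaths.GetDetectorConfigPath("PIXL", "v1", filepaths.PiquantConfigFileName))
        // Likely: DetectorConfig/PIXL/PiquantConfigs/v1/config.json
    }
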
const RootJobData = "JobData"

This contains temporary files generated when running a long-running job (eg PIQUANT): parameters to the job, status files, log files from the job, and intermediate calculation files. These are kept in separate directories to aid listing - for example, if you only want a list of job statuses you get one file per job, rather than the hundreds of files a job may produce. A usage sketch follows the layout below.

  • JobData/
  • ----<dataset-id>/
  • --------<job-id>/
  • ------------node*.pmcs - PMC list for a given node running the job
  • ------------params.json - Job parameters as specified when created
  • ------------output/
  • ----------------node*.pmcs_result.csv - CSV generated by a single node, intermediate output
  • ----------------combined.csv - The final output generated by combining all the node*.pmcs_result.csv files
  • ------------piquant-logs/
  • ----------------node*.pmcs_piquant.log - PIQUANT log file for a given node
  • ----------------node*.pmcs_stdout.log - stdout for running PIQUANT on a given node
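
GetJobDataPath (under Functions below) builds paths inside this tree; per its NOTE, a blank job ID or file name is simply omitted. A minimal sketch, assuming the layout above; the import path and commented results are assumptions:

    package main

    import (
        "fmt"

        "github.com/pixlise/core/v3/core/filepaths" // import path assumed
    )

    func main() {
        fmt.Println(filepaths.GetJobDataPath("dataset-123", "job-456", "params.json"))
        // Likely: JobData/dataset-123/job-456/params.json

        fmt.Println(filepaths.GetJobDataPath("dataset-123", "", ""))
        // Likely: JobData/dataset-123 (blank job ID and file name are omitted)
    }
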
const RootJobStatus = "JobStatus"

Root directory for all job statuses. These are stored separately to JobData so we can easily list all jobs and query their statuses

  • JobStatus/
  • ----<dataset-id>/<job-id>-status.json
const RootJobSummaries = "JobSummaries"

Root directory for all job summaries. This is stored separately to JobData so we can easily list all jobs and get their metadata (summary) files

  • JobSummaries/
  • ----<dataset-id>-jobs.json - Summary files describing all jobs for a dataset
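
GetJobStatusPath and GetJobSummaryPath (under Functions below) combine these roots with the JobStatusSuffix and JobSummarySuffix constants. A minimal sketch, assuming the layouts above; the import path and commented results are assumptions:

    package main

    import (
        "fmt"

        "github.com/pixlise/core/v3/core/filepaths" // import path assumed
    )

    func main() {
        fmt.Println(filepaths.GetJobStatusPath("dataset-123", "job-456"))
        // Likely: JobStatus/dataset-123/job-456-status.json

        fmt.Println(filepaths.GetJobSummaryPath("dataset-123"))
        // Likely: JobSummaries/dataset-123-jobs.json
    }
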
const RootPixliseConfigPath = "PixliseConfig"

Root directory of PIXLISE-specific config files

  • PixliseConfig/
  • ----auth0.pem - Certificate needed by Auth0 to verify a user request is valid
  • ----datasets.json - Dataset list (tiles)
  • ----piquant-version.json - Docker container for running PIQUANT
  • ----bad-dataset-ids.json - Contains a list of Dataset IDs to ignore when generating dataset tiles
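
GetConfigFilePath and GetDatasetListPath (under Functions below) point into this root. A minimal sketch, assuming the layout above; the import path and commented results are assumptions:

    package main

    import (
        "fmt"

        "github.com/pixlise/core/v3/core/filepaths" // import path assumed
    )

    func main() {
        fmt.Println(filepaths.GetConfigFilePath(filepaths.Auth0PemFileName))
        // Likely: PixliseConfig/auth0.pem

        fmt.Println(filepaths.GetDatasetListPath())
        // Likely: PixliseConfig/datasets.json
    }
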
const RootUserActivity = "Activity"

Root directory containing all user activity stored, to track clicks and user flows for research purposes

  • Activity/
  • ----<datestamp>/<GUID>.json - User activity files (things captured by middleware logger)
const RootUserContent = "UserContent"

Root directory for bucket containing all user-created content

  • UserContent/
  • ----<user-id>/
  • --------ElementSets.json - User created element sets
  • --------DataExpressions.json - User created expressions
  • --------RGBMixes.json - User created RGB mixes
  • --------<dataset-id>/
  • ------------ROI.json - User created ROIs
  • ------------Tags.json - Dataset tags
  • ------------SpectrumAnnotation.json
  • ------------multi-quant-z-stack.json - The current z-stack on multi-quant panel
  • ------------Quantifications/
  • ----------------<quant-id>.bin - The combined.csv file converted to protobuf binary format by quant-converter
  • ----------------<quant-id>.csv - Copied from Job Bucket/JobData/<job-id>/output/combined.csv
  • ----------------<quant-id>-logs/ - Copied from Job Bucket/JobData/<job-id>/piquant-logs/
  • ----------------summary-<quant-id>.json - Quant summary file
  • ------------LastPiquantOutput/ - Last output of fit command
  • ------------ViewState/ - User view states for each dataset. This stores UI info about how the view was configured
  • ----------------quantification.json - The quantification loaded on UI top toolbar
  • ----------------roi.json - Colours assigned to ROIs on the UI
  • ----------------selection.json - The users current selection of PMCs and/or pixels on UI
  • ----------------analysisLayout.json - What widgets go where, top row/bottom row
  • ----------------<panel-type>-<location>.json - States of various UI panels and where they are
  • ---------------- See: GetViewStatePath()
  • ----------------Workspaces/
  • --------------------<workspace-name>.json - View state files (like up one directory) flattened to a file and given a workspace name. Note the file also contains the workspace name, the file name may have been modified for saving, eg removal of /
  • -------------------- See: GetWorkspacePath()
  • ----------------WorkspaceCollections/
  • --------------------<collection-name>.json
  • -------------------- See: GetCollectionPath()
  • ----shared/ - All shared objects go here. Kind of like if they belong to a user called "shared". NOTE: User diffraction/roughness files are shared by default!
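
The GetUserContent*, GetUserQuantPath and GetViewStatePath functions (under Functions below) resolve paths inside this tree, with the GetShared* variants doing the same for the "shared" pseudo-user. A minimal sketch, assuming the layout above; the user/dataset IDs, the import path and the commented results are assumptions:

    package main

    import (
        "fmt"

        "github.com/pixlise/core/v3/core/filepaths" // import path assumed
    )

    func main() {
        fmt.Println(filepaths.GetUserContentPath("user-1", "ElementSets.json"))
        // Likely: UserContent/user-1/ElementSets.json

        fmt.Println(filepaths.GetUserContentDatasetPath("user-1", "dataset-123", "ROI.json"))
        // Likely: UserContent/user-1/dataset-123/ROI.json

        fmt.Println(filepaths.GetUserQuantPath("user-1", "dataset-123", "summary-q1.json"))
        // Likely: UserContent/user-1/dataset-123/Quantifications/summary-q1.json

        fmt.Println(filepaths.GetViewStatePath("user-1", "dataset-123", "selection.json"))
        // Likely: UserContent/user-1/dataset-123/ViewState/selection.json
    }
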
const ViewStateCollectionsSubpath = "WorkspaceCollections"

Sub-dir containing all workspace collections. These are flat files containing all workspaces they were created from

const ViewStateSavedSubpath = "Workspaces"

Sub-dir containing all workspaces. These are saved copies of view states

Variables

This section is empty.

Functions

func GetAnnotationsPath

func GetAnnotationsPath(userID string, datasetID string) string

Getting spectrum annotations file path for a user and dataset

func GetCollectionPath

func GetCollectionPath(userID string, datasetID string, id string) string

Getting collection file path for a user, dataset and collection ID. Note if id is blank, this just returns the directory. Validates ids to make sure they are valid (because the id is actually part of the file name)

func GetConfigFilePath

func GetConfigFilePath(fileName string) string

Getting a config file path relative to the root of the bucket

func GetCustomImagePath

func GetCustomImagePath(datasetID string, imgType string, fileName string) string

Get the custom image path for a given dataset ID. Note imageType must be one of UNALIGNED, MATCHED or RGBU

func GetCustomMetaPath

func GetCustomMetaPath(datasetID string) string

Get the custom meta file path for a given dataset ID

func GetDatasetFilePath

func GetDatasetFilePath(datasetID string, fileName string) string

func GetDatasetListPath

func GetDatasetListPath() string

Gets dataset list file (which UI shows as tiles). NOTE: This is regenerated by the dataset-tile-updater lambda function every time Datasets/ changes

func GetDatasetSummaryFilePath

func GetDatasetSummaryFilePath(datasetID string) string

Get a dataset summary file path given a dataset ID

func GetDatasetUploadPath

func GetDatasetUploadPath(datasetID string, fileName string) string

Gets the path to a file in the dataset upload area, for a given dataset id and file name

func GetDatasetsAuthPath added in v3.4.1

func GetDatasetsAuthPath() string

func GetDetectorConfigFilePath

func GetDetectorConfigFilePath(configName string) string

Get a PIXLISE detector config file path given the config name

func GetDetectorConfigPath

func GetDetectorConfigPath(configName string, version string, fileName string) string

Get a detector config path (used by PIQUANT) given the config name, version and optionally a file name. If file name is blank then the directory path above it is returned

func GetElementSetPath

func GetElementSetPath(userID string) string

Getting element set file path for a user

func GetJobDataPath

func GetJobDataPath(datasetID string, jobID string, fileName string) string

Retrieves the path of a given file for dataset, job id and file name. NOTE: if job ID is blank, it's omitted from the path, and same for file name

func GetJobStatusPath

func GetJobStatusPath(datasetID string, jobID string) string

Retrieves the path of a job status file for given dataset id and job id. If job id is blank, this just returns the root of all job statuses for the given dataset id

func GetJobSummaryPath

func GetJobSummaryPath(datasetID string) string

Gets the job summary path for a given dataset ID

func GetMultiQuantZStackPath

func GetMultiQuantZStackPath(userID string, datasetID string) string

Getting multi-quant z-stack file path for a user and dataset

func GetPublicObjectsPath added in v3.4.1

func GetPublicObjectsPath() string

func GetRGBMixPath

func GetRGBMixPath(userID string) string

Getting RGB mix file path for a user

func GetROIPath

func GetROIPath(userID string, datasetID string) string

Getting ROI file path for a user and dataset

func GetSharedContentDatasetPath

func GetSharedContentDatasetPath(datasetID string, fileName string) string

Same as GetUserContentDatasetPath() but for shared user

func GetSharedQuantPath

func GetSharedQuantPath(datasetID string, fileName string) string

Same as GetUserQuantPath() but for shared user

func GetTagPath

func GetTagPath(userID string) string

Getting tag file path for a user

func GetUserContentDatasetPath

func GetUserContentDatasetPath(userID string, datasetID string, fileName string) string

Gets user content file path for a given dataset id. Also requires user id, and the file name

func GetUserContentPath

func GetUserContentPath(userID string, fileName string) string

Gets user content file path by user id and file name

func GetUserLastPiquantOutputPath

func GetUserLastPiquantOutputPath(userID string, datasetID string, piquantCommand string, fileName string) string

Retrieves the path for the last PIQUANT outputs, eg the last run of the fit command (the command is actually "quant"), which sit in here with their log file

func GetUserQuantPath

func GetUserQuantPath(userID string, datasetID string, fileName string) string

Retrieves the quantification file path for a user and dataset ID. If fileName is blank, it only returns the directory path

func GetViewStatePath

func GetViewStatePath(userID string, datasetID string, fileName string) string

Getting view state file path for a user, dataset and file name. Note if file name is blank, this just returns the directory

func GetWorkspacePath

func GetWorkspacePath(userID string, datasetID string, id string) string

Getting workspace file path for a user, dataset and workspace ID. Note if id is blank, this just returns the directory. Validates ids to make sure they are valid (because the id is actually part of the file name)

func MakeQuantCSVFileName

func MakeQuantCSVFileName(quantID string) string

func MakeQuantDataFileName

func MakeQuantDataFileName(quantID string) string

func MakeQuantLogDirName

func MakeQuantLogDirName(quantID string) string

func MakeQuantSummaryFileName

func MakeQuantSummaryFileName(quantID string) string

MakeQuantSummaryFileName - Given a quant ID, generates the file name: summary-<jobid>.json (use this for searchability/consistency)
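
These MakeQuant* helpers pair with the QuantFileSuffix, CSVFileSuffix, QuantLogsSubDirSuffix and QuantSummaryFilePrefix constants above. A minimal sketch; "q1" is an illustrative quant ID, and the import path and commented results are assumptions based on those constants:

    package main

    import (
        "fmt"

        "github.com/pixlise/core/v3/core/filepaths" // import path assumed
    )

    func main() {
        fmt.Println(filepaths.MakeQuantDataFileName("q1"))    // Likely: q1.bin
        fmt.Println(filepaths.MakeQuantCSVFileName("q1"))     // Likely: q1.csv
        fmt.Println(filepaths.MakeQuantLogDirName("q1"))      // Likely: q1-logs
        fmt.Println(filepaths.MakeQuantSummaryFileName("q1")) // Likely: summary-q1.json
    }
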

Types

This section is empty.
