Documentation
Overview ¶
Package dashboard hosts the data + transport layer for the `olares-cli dashboard` CLI subtree. This is the "heavy" half of the thin-cmd / pkg-core split documented in cli/skills/olares-dashboard/SKILL.md: every fetcher, gate, builder, HTTP client, schema loader, and runtime envelope shape lives here so the cmd-side area subpackages (cli/cmd/ctl/dashboard/{overview, applications, schema, ...}) stay thin Cobra wrappers.
Files in this package:
- envelope.go — Envelope / Item / Meta / TimeWindow shapes
- output.go — OutputFormat parsing + table / JSON renderers
- flags.go — CommonFlags data type + Validate + ResolveWindow
- runner.go — Runner / RunOnce one-shot or watch ticker
- client.go — *Client: HTTP wrappers, EnsureUser, RequireAdmin
- schema.go — Kind constants + embedded JSON Schema bundle
- json_shim.go — internal json marshal helpers
Per-domain subpackages (monitoring/, apps/, system/, gpu/, gates/, workloads/) host fetchers and computation that are shared across more than one cmd area. Subpackages MUST NOT import each other; cross-domain composition happens in the cmd leaf instead, never inside pkg.
Index ¶
- Constants
- Variables
- func AggregateByDeployment(data map[string]format.MonitoringResult, metric string) map[string]float64
- func AggregateByNamespace(data map[string]format.MonitoringResult, metric string) map[string]float64
- func AllKinds() []string
- func BuildCustomNamespaceFilter(customApps []WorkloadApp) string
- func BuildLsblkTreePrefix(depth int, lastStack []bool) string
- func BuildSystemDeploymentFilter(systemApps []WorkloadApp) string
- func ClassifyKind(status int) string
- func ClassifyTransportErr(err error) string
- func DisplayRole(r string) string
- func DisplayString(it Item, key string) string
- func DoMonitoring(ctx context.Context, c *Client, path string, q url.Values) (map[string]format.MonitoringResult, error)
- func EmitDefault(env Envelope, fmtMode OutputFormat) error
- func ExtractHAMIMessage(body string) string
- func FetchClusterMetrics(ctx context.Context, c *Client, cf *CommonFlags, metrics []string, ...) (map[string]format.MonitoringResult, error)
- func FetchGraphicsDetail(ctx context.Context, c *Client, uuid string) (map[string]any, error)
- func FetchGraphicsList(ctx context.Context, c *Client, filters map[string]string) ([]map[string]any, error)
- func FetchNodeMetrics(ctx context.Context, c *Client, cf *CommonFlags, metrics []string, ...) (map[string]format.MonitoringResult, error)
- func FetchNodesList(ctx context.Context, c *Client) ([]string, error)
- func FetchTaskDetail(ctx context.Context, c *Client, name, podUID, sharemode string) (map[string]any, error)
- func FetchTaskList(ctx context.Context, c *Client, filters map[string]string) ([]map[string]any, error)
- func FetchUserMetric(ctx context.Context, c *Client, cf *CommonFlags, username string, ...) (map[string]format.MonitoringResult, error)
- func FirstAnyInArray(v any) any
- func FormatFloat(v float64) string
- func FormatRateAny(v any) string
- func GPUAdvisory(ctx context.Context, c *Client, cf *CommonFlags, stderr io.Writer) (note, reason string)
- func GPUHealthLabel(raw any) string
- func GPUModeLabel(raw any) string
- func GPUStepPreset(minutes int) (string, bool)
- func GPUTrendStep(start, end time.Time) string
- func GPUTrendTimestampISO(t time.Time) string
- func GPUVRAMHuman(mibVal any) string
- func HasCUDANode(ctx context.Context, c *Client) (bool, error)
- func HasPknameLabels(rows []LsblkRow) bool
- func LastSampleFromRow(values [][]any, value []any) format.LastMonitoringSample
- func LastValueOfRow(value []interface{}, values [][]interface{}) (float64, bool)
- func MonitoringQuery(cf *CommonFlags, metricsFilter []string, w MonitoringWindow, now time.Time, ...) url.Values
- func ParseRFCTimestamp(s string) int64
- func ParseStep(raw string) (time.Duration, error)
- func PercentDirect(pct float64) string
- func PercentString(ratio float64) string
- func PodCountByDeployment(data map[string]format.MonitoringResult) map[string]int
- func PodDeploymentName(pod string) string
- func ReadSchemaFile(file string) ([]byte, error)
- func RenderTemperature(celsius float64, target format.TempUnit) string
- func ResetCUDANodeCache(c *Client)
- func ResolveParent(r LsblkRow, rootName string, nameSet map[string]bool) string
- func SafeRatio(num, den float64) float64
- func SampleFloat(s format.LastMonitoringSample) float64
- func ScalarFloat(v interface{}) (float64, bool)
- func SortWorkloadAggregates(rows []WorkloadAggregate, sortBy, dir string)
- func ToFloat(v any) float64
- func ValidOutputFormats() []string
- func WriteJSON(w io.Writer, env Envelope) error
- func WriteTable(w io.Writer, columns []TableColumn, items []Item) error
- type Client
- func (c *Client) BaseURL() string
- func (c *Client) DoEmpty(ctx context.Context, method, path string, query url.Values, body any) error
- func (c *Client) DoJSON(ctx context.Context, method, path string, query url.Values, body any, out any) error
- func (c *Client) DoRaw(ctx context.Context, method, path string, query url.Values, body any) (status int, payload []byte, err error)
- func (c *Client) EnsureSystemStatus(ctx context.Context) (*SystemStatus, error)
- func (c *Client) EnsureUser(ctx context.Context) (*UserDetail, error)
- func (c *Client) HTTPClient() *http.Client
- func (c *Client) IsOlaresOne(ctx context.Context) (bool, error)
- func (c *Client) OlaresID() string
- func (c *Client) RequireAdmin(ctx context.Context) (*UserDetail, error)
- type CommonFlags
- type Envelope
- func BuildRankingEnvelope(ctx context.Context, c *Client, cf *CommonFlags, ...) (Envelope, error)
- func GateOlaresOne(ctx context.Context, c *Client, cf *CommonFlags, kind string, now time.Time, ...) (Envelope, bool)
- func VgpuUnavailableFromError(c *Client, cf *CommonFlags, err error, kind string, now time.Time, ...) (Envelope, bool)
- type FanCurveRow
- type GraphicsListBody
- type HTTPError
- type InstantVectorSample
- type Item
- type LsblkFlatRow
- type LsblkRow
- type Meta
- type MonitoringWindow
- type OutputFormat
- type RangeVectorPoint
- type RangeVectorRange
- type RangeVectorSeries
- type RawAppListItem
- type RunOnce
- type Runner
- type SchemaEntry
- type SystemFanData
- type SystemIFSItem
- type SystemStatus
- type TableColumn
- type TimeWindow
- type UserDetail
- type WorkloadAggregate
- type WorkloadApp
- type WorkloadRequest
Constants ¶
const (
	// `dashboard overview` (default action) — sections envelope with three
	// keys: physical / user / ranking. Each section is itself a leaf
	// envelope (physical / user / ranking kinds below).
	KindOverview = "dashboard.overview"

	// Overview leaf commands — one per detail page in the SPA.
	KindOverviewPhysical = "dashboard.overview.physical" // 9 cluster metric rows
	KindOverviewUser     = "dashboard.overview.user"     // user CPU / memory quota
	KindOverviewRanking  = "dashboard.overview.ranking"  // workload-grain rank
	KindOverviewCPU      = "dashboard.overview.cpu"      // per-node CPU details
	KindOverviewMemory   = "dashboard.overview.memory"   // per-node memory (physical|swap)
	KindOverviewDisk     = "dashboard.overview.disk"     // sections (main + per-disk partitions)
	KindOverviewDiskMain = "dashboard.overview.disk.main"
	KindOverviewDiskPart = "dashboard.overview.disk.partitions"
	KindOverviewPods     = "dashboard.overview.pods"    // per-node running-pod counters
	KindOverviewNetwork  = "dashboard.overview.network" // per-iface system-ifs
	KindOverviewFan      = "dashboard.overview.fan"     // sections (live + curve)
	KindOverviewFanLive  = "dashboard.overview.fan.live"
	KindOverviewFanCurve = "dashboard.overview.fan.curve"

	KindOverviewGPUList        = "dashboard.overview.gpu.list"
	KindOverviewGPUTasks       = "dashboard.overview.gpu.tasks"
	KindOverviewGPUDetail      = "dashboard.overview.gpu.detail"
	KindOverviewGPUTaskDet     = "dashboard.overview.gpu.task.detail"
	KindOverviewGPUDetailFull  = "dashboard.overview.gpu.detail.full"
	KindOverviewGPUTaskDetFull = "dashboard.overview.gpu.task.detail.full"
	KindOverviewGPUGauges      = "dashboard.overview.gpu.gauges"
	KindOverviewGPUTrends      = "dashboard.overview.gpu.trends"

	// Applications tree.
	KindApplicationsList = "dashboard.applications.list"

	// Schema introspection.
	KindSchemaIndex = "dashboard.schema.index"
)
Kind names every distinct payload shape the CLI can emit. Both `meta.kind` in the JSON envelope and the `dashboard schema` introspection table reference these constants — adding a new command means adding a Kind here AND a JSON Schema under schemas/.
Naming convention: `dashboard.<area>.<verb>` (lowercase, dot-separated). The leading `dashboard.` namespace prevents collisions if other CLI subtrees ever adopt the same envelope.
const (
	FanSpeedMaxCPU = 2900
	FanSpeedMaxGPU = 3100
)
FanSpeedMaxCPU / FanSpeedMaxGPU mirror the same constants in Fan/config.ts. Used by overview fan live to expose the "max RPM" column alongside the live RPM reading.
const SystemFrontendDeployment = "system-frontend-deployment"
SystemFrontendDeployment mirrors the SPA's `SYSTEM_FRONTEND_DEPLOYMENT` constant (Applications2/config.ts:25). Multiple "entrance" apps share the same physical deployment; the merge step has to clone the metric to each app entry so the cards line up with what users see in the UI.
Variables ¶
var ErrAlreadyReported = errors.New("dashboard: error already reported")
ErrAlreadyReported is the sentinel cmd subpackages return when their RunE has already written a user-visible diagnostic to stderr (for example, the `unknownSubcommandRunE` helper that prints a typo suggestion before returning). The dashboard root's leaf-error wrapper (cmd/ctl/dashboard/root.go::wrapLeafErrors) checks for this with errors.Is and skips the redundant Fprintln, while still propagating the error up so cobra exits non-zero.
var FanCurveTable = []FanCurveRow{
{Step: 1, CPUFanRPM: 0, GPUFanRPM: 0, CPUTempRange: "0 - 54", GPUTempRange: "0 - 48"},
{Step: 2, CPUFanRPM: 1100, GPUFanRPM: 1300, CPUTempRange: "47 - 64", GPUTempRange: "39 - 58"},
{Step: 3, CPUFanRPM: 1300, GPUFanRPM: 1500, CPUTempRange: "54 - 71", GPUTempRange: "48 - 65"},
{Step: 4, CPUFanRPM: 1500, GPUFanRPM: 1700, CPUTempRange: "64 - 74", GPUTempRange: "58 - 68"},
{Step: 5, CPUFanRPM: 1800, GPUFanRPM: 2000, CPUTempRange: "71 - 77", GPUTempRange: "65 - 71"},
{Step: 6, CPUFanRPM: 2100, GPUFanRPM: 2300, CPUTempRange: "74 - 80", GPUTempRange: "68 - 74"},
{Step: 7, CPUFanRPM: 2300, GPUFanRPM: 2500, CPUTempRange: "77 - 83", GPUTempRange: "71 - 77"},
{Step: 8, CPUFanRPM: 2300, GPUFanRPM: 2700, CPUTempRange: "80 - 86", GPUTempRange: "75 - 80"},
{Step: 9, CPUFanRPM: 2700, GPUFanRPM: 2900, CPUTempRange: "83 - 88", GPUTempRange: "77 - 83"},
{Step: 10, CPUFanRPM: 2900, GPUFanRPM: 3100, CPUTempRange: "86 - 96", GPUTempRange: "80 - 86"},
}
FanCurveTable is the hardcoded fan-curve specification — 1:1 with the SPA's `apps/packages/app/src/apps/dashboard/pages/Overview2/Fan/config.ts` `tableData` constant. NEVER drift from upstream without updating both sides; the iteration red-line in SKILL.md pins this.
var SchemaFS embed.FS
Functions ¶
func AggregateByDeployment ¶
func AggregateByDeployment(data map[string]format.MonitoringResult, metric string) map[string]float64
AggregateByDeployment groups the per-pod result rows by deployment name (derived via PodDeploymentName(metric.pod)) and sums the last sample of each pod under that deployment. Matches how the SPA ends up with one row per deployment after the lodash chain in `getTabOptions`.
func AggregateByNamespace ¶
func AggregateByNamespace(data map[string]format.MonitoringResult, metric string) map[string]float64
AggregateByNamespace returns the last-sample of `metric` keyed by `metric.namespace`. Each namespace appears at most once per metric in the BFF response, so no summation is needed (unlike the per-pod path).
func AllKinds ¶
func AllKinds() []string
AllKinds returns every Kind known to this CLI. Used by the schema-completeness unit test to ensure new commands don't forget to register their JSON Schema.
func BuildCustomNamespaceFilter ¶
func BuildCustomNamespaceFilter(customApps []WorkloadApp) string
BuildCustomNamespaceFilter mirrors `resources_filter_custom` — pipe-joined list of namespaces, no anchors.
func BuildLsblkTreePrefix ¶
func BuildLsblkTreePrefix(depth int, lastStack []bool) string

BuildLsblkTreePrefix mirrors the SPA's `buildLsblkTreePrefix` (Overview2/Disk/config.ts:332). `lastStack[i]==true` means the ancestor at depth `i` is the last sibling at that level — we draw " " under it so the trunk doesn't dribble down past a finished sibling.
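The lastStack mechanics can be sketched as follows. This is a hypothetical reconstruction, not the package's implementation, and the specific box-drawing glyphs are assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// buildTreePrefix sketches the lsblk tree-prefix logic: lastStack[i]==true
// means the ancestor at depth i is the last sibling at that level, so the
// vertical trunk stops under it. Glyphs here are assumed, not the
// package's actual output.
func buildTreePrefix(depth int, lastStack []bool) string {
	var b strings.Builder
	for i := 0; i < depth; i++ {
		last := i < len(lastStack) && lastStack[i]
		switch {
		case i == depth-1 && last:
			b.WriteString("└─") // branch connector for a last child
		case i == depth-1:
			b.WriteString("├─") // branch connector with siblings below
		case last:
			b.WriteString("  ") // finished ancestor: no trunk dribble
		default:
			b.WriteString("│ ") // trunk continues past this level
		}
	}
	return b.String()
}

func main() {
	fmt.Println("sda")
	fmt.Println(buildTreePrefix(1, []bool{false}) + "sda1")
	fmt.Println(buildTreePrefix(1, []bool{true}) + "sda2")
}
```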
func BuildSystemDeploymentFilter ¶
func BuildSystemDeploymentFilter(systemApps []WorkloadApp) string
BuildSystemDeploymentFilter turns systemApps into the SPA's per-deployment pipe-joined regex with `.*` suffix per deployment, mirroring `resources_filter_system` in Applications2/config.ts.
func ClassifyKind ¶
ClassifyKind maps an HTTP status code to the error_kind enum surfaced via Meta.ErrorKind / HTTPError.ErrorKind. Centralised so leaves don't drift.
func ClassifyTransportErr ¶
ClassifyTransportErr maps an arbitrary error returned by hc.Do into an ErrorKind enum. Used by runner.go to populate Meta.ErrorKind for failed iterations. Order matters: typed credential errors first, then HTTPError, then generic transport.
func DisplayRole ¶
DisplayRole pretty-prints an empty / unknown role string for the stderr hint so humans see "(unset)" rather than two consecutive spaces.
func DisplayString ¶
DisplayString is a small helper for table column getters: pull `key` out of item.Display and stringify it, falling back to "-" when missing or empty. Centralises the rendering of `nil` / "" / 0-length so callers stay declarative.
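A minimal sketch of the fallback chain, assuming Item.Display is a map[string]any (the Item type's fields aren't reproduced here):

```go
package main

import "fmt"

// displayString sketches the table-getter helper: pull key out of a
// display map and stringify it, rendering missing / nil / empty values
// as "-" so column getters stay declarative.
func displayString(display map[string]any, key string) string {
	v, ok := display[key]
	if !ok || v == nil {
		return "-"
	}
	s := fmt.Sprintf("%v", v)
	if s == "" {
		return "-"
	}
	return s
}

func main() {
	d := map[string]any{"node": "node-1", "note": ""}
	fmt.Println(displayString(d, "node")) // node-1
	fmt.Println(displayString(d, "note")) // -
	fmt.Println(displayString(d, "gone")) // -
}
```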
func DoMonitoring ¶
func DoMonitoring(ctx context.Context, c *Client, path string, q url.Values) (map[string]format.MonitoringResult, error)
DoMonitoring shells out to the BFF, decodes the standard `{ results: [...] }` envelope, and returns a metric_name → MonitoringResult map ready for format.GetLastMonitoringData.
func EmitDefault ¶
func EmitDefault(env Envelope, fmtMode OutputFormat) error
EmitDefault is a tiny helper for leaf commands that don't have custom table columns: emit JSON in JSON mode, fall back to a generic key / value dump in table mode (sorted column headers based on the union of all items' Display keys). Most leaves prefer their own TableColumn slice and don't call this — it's the catch-all for free-form GPU / task detail responses where the column set isn't fixed.
Hoisted to the pkg layer so cmd-area subpackages don't each redeclare it. Settings precedent allows light duplication, but this one is non-trivial enough (~25 lines) to centralize.
func ExtractHAMIMessage ¶
ExtractHAMIMessage tries to surface the `message` field from a HAMI JSON-shaped body (`{"code": <int>, "message": "..."}`); falls back to the trimmed body itself capped at 256 bytes. Caller pre-strips the body via the *HTTPError struct.
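The fallback chain can be sketched like this (a sketch of the behaviour described above, not the package's exact code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// extractHAMIMessage prefers the `message` field of a HAMI-shaped JSON
// body ({"code": <int>, "message": "..."}), falling back to the trimmed
// body capped at 256 bytes.
func extractHAMIMessage(body string) string {
	var e struct {
		Message string `json:"message"`
	}
	if json.Unmarshal([]byte(body), &e) == nil && e.Message != "" {
		return e.Message
	}
	s := strings.TrimSpace(body)
	if len(s) > 256 {
		s = s[:256]
	}
	return s
}

func main() {
	fmt.Println(extractHAMIMessage(`{"code":500,"message":"device busy"}`))
	fmt.Println(extractHAMIMessage("  plain text error  "))
}
```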
func FetchClusterMetrics ¶
func FetchClusterMetrics(ctx context.Context, c *Client, cf *CommonFlags, metrics []string, w MonitoringWindow, now time.Time, instant bool) (map[string]format.MonitoringResult, error)
FetchClusterMetrics issues GET /kapis/monitoring.kubesphere.io/v1alpha3/cluster with metrics_filter and returns the per-metric raw payload (one entry per `metric_name`). Used by overview physical / overview ranking sections.
Wire shape (from the SPA's getClusterMonitoring):
GET /kapis/monitoring.kubesphere.io/v1alpha3/cluster?metrics_filter=...$&start=...&end=...&step=600s&times=20
Response: { results: [ { metric_name, data: { result: [ { metric, values, value } ] } } ] }
func FetchGraphicsDetail ¶
FetchGraphicsDetail returns HAMI's `/v1/gpu` payload directly — the SPA's `GraphicsDetailsResponse` is a flat object, no `data` envelope.
func FetchGraphicsList ¶
func FetchGraphicsList(ctx context.Context, c *Client, filters map[string]string) ([]map[string]any, error)
FetchGraphicsList posts to /hami/api/vgpu/v1/gpus and returns the (HAMI-flat) `list` array. Caller is responsible for the 404 (no HAMI integration) and 5xx (HAMI unhealthy) branches via VgpuUnavailableFromError / IsHTTPError.
func FetchNodeMetrics ¶
func FetchNodeMetrics(ctx context.Context, c *Client, cf *CommonFlags, metrics []string, w MonitoringWindow, now time.Time, instant bool) (map[string]format.MonitoringResult, error)
FetchNodeMetrics issues GET /kapis/.../v1alpha3/nodes — used by every per-node detail page (CPU / Memory / Disk / Pods / Fan-via-monitoring, when applicable). Response shape is identical to FetchClusterMetrics.
func FetchNodesList ¶
FetchNodesList returns the sorted set of node names. Used by overview physical (to label cluster-level rows with hostnames).
func FetchTaskDetail ¶
func FetchTaskDetail(ctx context.Context, c *Client, name, podUID, sharemode string) (map[string]any, error)
FetchTaskDetail returns HAMI's `/v1/container` payload directly — like the GPU detail, the response is a flat object.
func FetchTaskList ¶
func FetchTaskList(ctx context.Context, c *Client, filters map[string]string) ([]map[string]any, error)
FetchTaskList posts to /hami/api/vgpu/v1/containers and returns the flat `items` array. Same 404 / 5xx semantics as FetchGraphicsList.
func FetchUserMetric ¶
func FetchUserMetric(ctx context.Context, c *Client, cf *CommonFlags, username string, metrics []string, w MonitoringWindow, now time.Time, instant bool) (map[string]format.MonitoringResult, error)
FetchUserMetric issues GET /kapis/.../v1alpha3/users/<username> — used by overview user. Same monitoring-response shape as cluster / nodes.
func FirstAnyInArray ¶
FirstAnyInArray returns the first element of a slice-shaped value (e.g. `[]any` or `[]string`) decoded from JSON. HAMI returns per-device fields like `devicesCoreUtilizedPercent` as arrays — the SPA uses `val[0]` because tasks observed in the wild only ever bind a single device. We mirror that decision here, while preserving the full slice in `Raw` for multi-GPU consumers down the line.
func FormatFloat ¶
FormatFloat renders v with the minimum number of decimals required for a round-trip representation (Go's 'f' verb with prec=-1). Used everywhere SPA's number column expects an unbounded-precision string before being run through format.GetDiskSize / format.GetThroughput / format.WorthValue.
Hoisted to the pkg layer so cmd area subpackages don't each duplicate the one-liner.
func FormatRateAny ¶
FormatRateAny coerces an arbitrary tx/rx value (could be number or string) to a SPA-style "X B/s" throughput line. The system-ifs payload returns strings on some Olares versions and numbers on others; this function unifies both shapes.
func GPUAdvisory ¶
func GPUAdvisory(ctx context.Context, c *Client, cf *CommonFlags, stderr io.Writer) (note, reason string)
GPUAdvisory is the soft-gate companion to GateOlaresOne. The SPA's GPU detail pages (`Overview2/GPU/IndexPage.vue`) carry NO admin or CUDA gate themselves — the only hard gate in the SPA is at the sidebar card (Overview2/ClusterResource.vue:232+278-293) which just hides the entry. Anyone landing on the URL directly hits HAMI without pre-checks.
To match that behaviour the CLI no longer blocks data fetches; it only emits a one-line stderr advisory and tags the envelope `meta.note` with the reason the SPA would have hidden the card. Two soft signals:
- non-admin profile → "gpu_sidebar_hidden_non_admin"
- no CUDA-capable node → "gpu_sidebar_hidden_no_cuda_node"
Both are advisory-only; the caller continues to fetch and renders data when HAMI returns it. Returns ("", "") when no advisory applies, and also when EnsureUser / HasCUDANode fail (we fall silent rather than mislead agents); otherwise both note and reason are populated.
func GPUHealthLabel ¶
GPUHealthLabel turns HAMI's boolean `health` into a human-readable status. The SPA leaves it as "true"/"false"; we surface the friendlier "healthy"/"unhealthy" pair (raw envelope still carries the original bool for agents that prefer the wire shape).
func GPUModeLabel ¶
GPUModeLabel translates HAMI's `shareMode` string into the SPA-rendered label. SPA mapping (constant/index.ts:VRAMModeLabel):
- "0" → "App exclusive"
- "1" → "Memory slicing"
- "2" → "Time slicing"
Real fixtures sometimes return "3" (observed on `olarestest005` — HAMI's WebUI silently falls back to showing the raw value). To avoid surfacing an empty cell in the CLI table we pass unknown values through unchanged, prefixed with `mode=` so a human can tell that we preserved the wire byte instead of mistranslating.
func GPUStepPreset ¶
GPUStepPreset reproduces the `timeReflection` table in utils.js. Returns the preset step + true when minutes match a known window.
func GPUTrendStep ¶
GPUTrendStep is a 1:1 port of `timeRangeFormate(diff_s, 16)` from packages/app/src/apps/controlPanelCommon/containers/Monitoring/utils.js. Algorithm:
- Convert (end-start) to whole minutes (rounded down).
- If the minutes count matches one of the SPA's preset windows (10/20/30, 60, 120, 180, 300, 480, 720, 1440, 4320, 10080), use the matching preset step.
- Otherwise compute `floor(minutes/16)m` (same as `getStep(value, 16)`), then enforce a [1m..60m] range.
func GPUTrendTimestampISO ¶
GPUTrendTimestampISO formats a time.Time the way the SPA's `timeParse(date)` does for monitor queries: `YYYY-MM-DD HH:mm:ss` in the caller's timezone (no offset suffix). HAMI's WebUI accepts either Unix-seconds or this human-readable form; the SPA exclusively sends the latter, so we match.
func GPUVRAMHuman ¶
GPUVRAMHuman formats a MiB count (HAMI's `memoryTotal` / `memoryUsed` units) as a SPA-style "1.5GiB"-shaped string. Mirrors the SPA's `getDiskSize(val * 1024 * 1024)` call; treats 0 as "-" so the table doesn't show a misleading "0B" for honest "no allocation" cases.
func HasCUDANode ¶
func HasCUDANode(ctx context.Context, c *Client) (bool, error)

HasCUDANode reports whether the cluster has at least one node with label `gpu.bytetrade.io/cuda-supported=true`. Mirrors the SPA's `checkGpu` (Overview2/ClusterResource.vue:278-293) which iterates the nodes list and looks for the cuda-supported label.
Cached per-Client; the second call inside the same CLI invocation is free. The label-only fast path keeps payloads small even on large clusters since we just need a presence check.
func HasPknameLabels ¶
HasPknameLabels mirrors `Overview2/Disk/config.ts:267` — trim/non-empty pkname on at least one row turns the resolver onto the label-aware path; otherwise we fall back to prefix matching.
func LastSampleFromRow ¶
func LastSampleFromRow(values [][]any, value []any) format.LastMonitoringSample
LastSampleFromRow picks the last (timestamp, value) tuple out of a PromQL-style range result. It prefers `values[-1]` (the actual range data) and falls back to `value` (the instant-vector shape some leaves also accept) when the range is empty. Returns Empty=true when neither shape carries usable data.
Hoisted to pkg so disk's per-partition rendering and overview's per-node rendering share one canonical decoder.
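The range-then-instant preference reads like this. A sketch only: the real helper returns format.LastMonitoringSample, while this stand-in returns the raw tuple plus an ok flag:

```go
package main

import "fmt"

// lastSample prefers values[-1] (the actual range data) and falls back to
// the instant-vector `value` when the range is empty; ok=false mirrors
// the Empty=true case when neither shape carries usable data.
func lastSample(values [][]any, value []any) (ts, v any, ok bool) {
	if n := len(values); n > 0 && len(values[n-1]) >= 2 {
		row := values[n-1]
		return row[0], row[1], true
	}
	if len(value) >= 2 {
		return value[0], value[1], true
	}
	return nil, nil, false
}

func main() {
	rng := [][]any{{1700000000.0, "1.5"}, {1700000600.0, "2.5"}}
	_, v, _ := lastSample(rng, nil)
	fmt.Println(v) // last range tuple wins
	_, v, _ = lastSample(nil, []any{1700000000.0, "9"})
	fmt.Println(v) // instant fallback
}
```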
func LastValueOfRow ¶
LastValueOfRow extracts the numeric last sample from either Values (matrix range) or Value (instant). Returns (v, true) on success.
func MonitoringQuery ¶
func MonitoringQuery(cf *CommonFlags, metricsFilter []string, w MonitoringWindow, now time.Time, instant bool) url.Values
MonitoringQuery composes the URL query for a /cluster or /nodes monitoring fetch. Caller passes `cf` so the helper honours --since / --start / --end without reaching into a global; everything else (step, times, instant) defaults to the SPA contract.
func ParseRFCTimestamp ¶
ParseRFCTimestamp converts an RFC3339 timestamp string to milliseconds since epoch (the unit format.FormatTime expects). Returns 0 on parse failure.
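A minimal sketch of the described conversion:

```go
package main

import (
	"fmt"
	"time"
)

// parseRFCTimestamp converts an RFC3339 string to milliseconds since
// epoch, returning 0 on any parse failure.
func parseRFCTimestamp(s string) int64 {
	t, err := time.Parse(time.RFC3339, s)
	if err != nil {
		return 0
	}
	return t.UnixMilli()
}

func main() {
	fmt.Println(parseRFCTimestamp("1970-01-01T00:00:01Z")) // 1000
	fmt.Println(parseRFCTimestamp("not-a-timestamp"))      // 0
}
```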
func ParseStep ¶
func ParseStep(raw string) (time.Duration, error)

ParseStep is a small helper for `--step` flags that accept either a Go duration ("30s") or a Prometheus-flavoured integer of seconds ("30"). Returned as time.Duration. Used by metric commands that expose --step.
func PercentDirect ¶
PercentDirect formats a value already expressed as a percent (e.g. HAMI returns `coreUtilizedPercent: 25.5`, NOT 0.255). The SPA renders these with `round(val, 2) + '%'`; we match that and trim trailing zeros for readability ("25%" instead of "25.00%").
func PercentString ¶
PercentString formats a 0..1 ratio as "N.NN%" (SPA style — utilisation metrics are percent of unit interval).
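The two formatters differ in input unit and zero-trimming; a sketch under the rendering rules stated above:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
)

// percentDirect takes a value already expressed in percent, rounds to two
// decimals, and trims trailing zeros ("25%" rather than "25.00%").
func percentDirect(pct float64) string {
	rounded := math.Round(pct*100) / 100
	return strconv.FormatFloat(rounded, 'f', -1, 64) + "%"
}

// percentString takes a 0..1 ratio and always prints two decimals.
func percentString(ratio float64) string {
	return fmt.Sprintf("%.2f%%", ratio*100)
}

func main() {
	fmt.Println(percentDirect(25.5))  // 25.5%
	fmt.Println(percentDirect(25.0))  // 25%
	fmt.Println(percentString(0.255)) // 25.50%
}
```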
func PodCountByDeployment ¶
func PodCountByDeployment(data map[string]format.MonitoringResult) map[string]int
PodCountByDeployment counts result rows per deployment from the pod_cpu_usage metric (one row per pod). Mirrors `podCountByDeploymentFromPodMetrics` in Applications2/config.ts:280.
func PodDeploymentName ¶
func PodDeploymentName(pod string) string

PodDeploymentName mirrors `podDeploymentName(item)` in Applications2/config.ts:277 — strip the trailing two `-` segments (replicaset hash + pod index) from `metric.pod` to recover the deployment name.
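The segment strip can be sketched as below. The behaviour for pod names with fewer than three `-` segments is an assumption (returned unchanged here):

```go
package main

import (
	"fmt"
	"strings"
)

// podDeploymentName drops the last two `-` segments (replicaset hash +
// pod suffix) to recover the deployment name.
func podDeploymentName(pod string) string {
	parts := strings.Split(pod, "-")
	if len(parts) <= 2 {
		return pod // assumed: too few segments, pass through
	}
	return strings.Join(parts[:len(parts)-2], "-")
}

func main() {
	fmt.Println(podDeploymentName("nginx-7d9cdcb9b4-x2k5q"))
	fmt.Println(podDeploymentName("system-frontend-deployment-5f6d8-abcde"))
}
```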
func ReadSchemaFile ¶
ReadSchemaFile returns the raw bytes of the named schema file from the embedded FS. Used by the cmd-side `dashboard schema <path>` command to pretty-print one document. Returned errors come straight from fs — the caller wraps them with the user-friendly file path.
func RenderTemperature ¶
RenderTemperature picks the right unit suffix for ConvertTemperature. Used by overview cpu / overview fan live.
func ResetCUDANodeCache ¶
func ResetCUDANodeCache(c *Client)
ResetCUDANodeCache forgets the cached HasCUDANode result for `c`. Test-only escape hatch — production code never calls this; tests use it between fixtures to guarantee each scenario re-issues the upstream label scan.
func ResolveParent ¶
ResolveParent picks the parent for a row, mirroring `Overview2/Disk/config.ts:313`:
- root row has no parent
- trimmed pkname wins if it points at a row in the set
- otherwise pick the longest other-name prefix of `r.Name`
- last-ditch fallback: the root itself
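The four rules above can be sketched as one function. Field access is flattened into plain parameters since the LsblkRow shape isn't reproduced in this doc:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveParent applies the rules in order: root has no parent; a trimmed
// pkname pointing at a known row wins; then the longest other-name prefix
// of name; finally the root itself.
func resolveParent(name, pkname, rootName string, names map[string]bool) string {
	if name == rootName {
		return "" // root row has no parent
	}
	if p := strings.TrimSpace(pkname); p != "" && names[p] {
		return p // pkname wins when it points at a row in the set
	}
	best := ""
	for n := range names {
		if n != name && n != "" && strings.HasPrefix(name, n) && len(n) > len(best) {
			best = n // longest other-name prefix
		}
	}
	if best != "" {
		return best
	}
	return rootName // last-ditch fallback
}

func main() {
	names := map[string]bool{"sda": true, "sda1": true, "nvme0n1": true, "nvme0n1p1": true}
	fmt.Println(resolveParent("sda1", "", "sda", names))             // prefix match
	fmt.Println(resolveParent("nvme0n1p1", "nvme0n1", "sda", names)) // pkname path
}
```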
func SafeRatio ¶
SafeRatio returns num/den, but yields 0 (instead of NaN/Inf) when den is zero. Mirrors the SPA's many `total > 0 ? value / total : 0` guards.
func SampleFloat ¶
func SampleFloat(s format.LastMonitoringSample) float64
SampleFloat extracts the numeric reading from a single last-sample entry returned by format.GetLastMonitoringData. Empty samples (status "no_data" or absent metric) yield 0 rather than NaN; non-numeric raw values also fall through to 0 so leaf renderers can keep their arithmetic simple.
func ScalarFloat ¶
ScalarFloat turns a JSON-decoded scalar into a float64. Returns (0, false) for nil / empty / unparsable input.
func SortWorkloadAggregates ¶
func SortWorkloadAggregates(rows []WorkloadAggregate, sortBy, dir string)
SortWorkloadAggregates orders rows by `sortBy` (cpu|memory|net_in|net_out) in `dir` (asc|desc). Ties break on Title (mirrors the SPA's `orderBy(total, ['value','title'], [sort,'asc'])` in formatResult).
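The ordering with its ascending title tie-break can be sketched as follows. `aggregate` is a stand-in for WorkloadAggregate with a single metric column (the real type carries cpu / memory / net_in / net_out):

```go
package main

import (
	"fmt"
	"sort"
)

type aggregate struct {
	Title string
	Value float64
}

// sortAggregates orders by Value in the requested direction; ties break
// ascending on Title regardless of the primary direction, mirroring the
// orderBy(['value','title'], [sort,'asc']) shape described above.
func sortAggregates(rows []aggregate, desc bool) {
	sort.SliceStable(rows, func(i, j int) bool {
		if rows[i].Value != rows[j].Value {
			if desc {
				return rows[i].Value > rows[j].Value
			}
			return rows[i].Value < rows[j].Value
		}
		return rows[i].Title < rows[j].Title
	})
}

func main() {
	rows := []aggregate{{"b", 2}, {"a", 2}, {"c", 5}}
	sortAggregates(rows, true)
	for _, r := range rows {
		fmt.Println(r.Title, r.Value) // c 5, then the tie: a 2, b 2
	}
}
```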
func ToFloat ¶
ToFloat coerces an arbitrary JSON-decoded scalar into a float64. Returns 0 for nil / empty / unparsable inputs. Used by the cmd-side leaves' display building plus GPUVRAMHuman.
func ValidOutputFormats ¶
func ValidOutputFormats() []string
ValidOutputFormats returns the values cobra uses for ValidArgs / shell completion of the `--output` flag.
func WriteJSON ¶
WriteJSON marshals env as a single-line JSON document terminated by `\n`. Used for both one-shot output and individual NDJSON lines in `--watch` mode (the iteration / Error fields handle the per-line state).
func WriteTable ¶
func WriteTable(w io.Writer, columns []TableColumn, items []Item) error
WriteTable emits a tabwriter-based table for items. Empty inputs emit a single "-" row so the user has a visible signal that the call succeeded but produced nothing.
Header / footer matching the SPA aesthetic (two-space gutter, no border chars) is intentional — bash piping (`| awk '{print $2}'`) stays simple.
Types ¶
type Client ¶
type Client struct {
// contains filtered or unexported fields
}
Client is a thin wrapper around the factory's authenticated *http.Client. Every command in the dashboard tree obtains one via the area-local `prepareClient` declared in each cmd subpackage's `common.go` (cli/cmd/ctl/dashboard/<area>/common.go) and calls DoJSON / DoEmpty / Get* helpers; the underlying refreshingTransport handles X-Authorization injection + transparent /api/refresh on 401/403.
Why a typed Client (vs. a raw http.Client + URL constants per file)? Because we need to reformat 4xx/5xx into agent-friendly errors, expose an `EnsureUser` cache (used by every --user-aware command), and keep base-URL stitching in one place for the schemas / e2e tests to mock.
func NewClient ¶
func NewClient(hc *http.Client, rp *credential.ResolvedProfile) *Client
NewClient builds a Client from a factory-provided http.Client (already wired with refreshingTransport) and the resolved profile. Strips a trailing "/" from rp.DashboardURL so callers can concatenate paths without juggling slashes.
func (*Client) DoEmpty ¶
func (c *Client) DoEmpty(ctx context.Context, method, path string, query url.Values, body any) error
DoEmpty is DoJSON for callers that just want to check the status, e.g. health probes. Body is read+discarded for connection re-use.
func (*Client) DoJSON ¶
func (c *Client) DoJSON(ctx context.Context, method, path string, query url.Values, body any, out any) error
DoJSON issues `method <baseURL><path>?<query>` with optional JSON body and decodes the 2xx response into `out`. `out` may be nil to discard.
Behaviour on non-2xx:
- 401/403 → reformatted as "server rejected the access token (HTTP X)" with the standard CTA. The factory's refreshingTransport already had a chance to refresh+retry; reaching us means the grant is dead.
- 4xx → *HTTPError with ErrorKind="http_4xx".
- 5xx → *HTTPError with ErrorKind="http_5xx".
A response body up to 512 bytes is captured into HTTPError.Body so error messages stay actionable without leaking arbitrary upstream payloads.
func (*Client) DoRaw ¶
func (c *Client) DoRaw(ctx context.Context, method, path string, query url.Values, body any) (status int, payload []byte, err error)
DoRaw is the escape hatch for endpoints that don't return JSON or where the caller wants to peek at the status before deciding how to decode (used by the GPU `404=no-integration` three-state). The returned body is already drained — caller treats it as []byte.
func (*Client) EnsureSystemStatus ¶
func (c *Client) EnsureSystemStatus(ctx context.Context) (*SystemStatus, error)
EnsureSystemStatus fetches `/user-service/api/system/status` once per Client and caches the trimmed `SystemStatus` view. Subsequent callers — including any nested `overview fan / overview gpu` invocations inside an aggregated `dashboard overview` — reuse the result.
Behaviour on error mirrors EnsureUser: the error is cached so a failure is sticky for the rest of the process; re-running the whole command is the supported path for a transient outage.
func (*Client) EnsureUser ¶
func (c *Client) EnsureUser(ctx context.Context) (*UserDetail, error)
EnsureUser fetches /capi/app/detail and caches the result for the lifetime of the Client. Subsequent callers — including the watch loop iterating — reuse the same result; the cache is never invalidated mid-process.
Any HTTP error from /capi/app/detail is cached too; callers should not retry by re-calling EnsureUser. (Re-running the whole command is the supported path for a transient outage.)
func (*Client) HTTPClient ¶
HTTPClient returns the underlying authenticated *http.Client. Surface area for callers that need to issue raw requests bypassing DoJSON/DoRaw — kept small on purpose so the auth + error-reformatting story stays centralised.
func (*Client) IsOlaresOne ¶
IsOlaresOne is a convenience wrapper for callers that just need the boolean. Returns false on any underlying error so gates fail open.
func (*Client) OlaresID ¶
OlaresID returns the OlaresID of the active profile (for Meta.Profile and reformatted error messages).
func (*Client) RequireAdmin ¶
func (c *Client) RequireAdmin(ctx context.Context) (*UserDetail, error)
RequireAdmin is a guard for `--user`-aware commands. It calls EnsureUser and returns a friendly error if the active profile is not an admin.
type CommonFlags ¶
type CommonFlags struct {
// Output is the resolved --output / -o value. Defaults to OutputTable.
Output OutputFormat
// OutputRaw is the raw flag string before normalisation. Cobra
// binds onto this; Validate() turns it into Output.
OutputRaw string
// Watch toggles the polling ticker (see runner.go). Defaults off.
Watch bool
// WatchInterval is the cadence between iterations. 0 means "use the
// command's RecommendedPollSeconds". Lower values emit a stderr
// warning but proceed.
WatchInterval time.Duration
// WatchIterations caps the total number of iterations. 0 = unbounded.
WatchIterations int
// WatchTimeout caps the total wall-clock duration. 0 = unbounded.
WatchTimeout time.Duration
// Since is the relative sliding window used by metric commands in
// `--watch` mode. e.g. "1h" → request the last 1h on every
// iteration. Mutually exclusive with Start/End.
Since time.Duration
// Start / End are the absolute fixed window. Same window on every
// iteration; useful for replaying a known incident. Mutually
// exclusive with Since.
Start time.Time
End time.Time
// StartRaw / EndRaw / SinceRaw / TimezoneRaw store the raw flag
// strings before parsing so Validate() can produce friendly error
// messages.
StartRaw string
EndRaw string
SinceRaw string
TimezoneRaw string
// Timezone, when non-nil, overrides time.Local for FormatTime /
// Meta.FetchedAt rendering. Defaults to time.Local.
Timezone *format.Location
// TempUnit is the user's preferred temperature display unit (C / F /
// K). Defaults to TempC. JSON `raw` always emits Celsius regardless;
// this only affects table view + display.rendered fields.
TempUnit format.TempUnit
// TempUnitRaw is the raw flag string before normalisation.
TempUnitRaw string
// User, when non-empty, targets a different user than the active
// profile (admin-only). Surfaced via Meta.User.
User string
// Limit / Page mirror the BFF's pagination knobs for endpoints that
// support them (apps list, etc.). 0 means "use default" (no
// override sent).
Limit int
Page int
// Head truncates the rendered output to the first N rows after
// sorting (client-side). 0 = no truncation.
Head int
}
CommonFlags holds the persistent + shared flag values every leaf command in the dashboard tree consumes. cmd-side area subpackages embed pointers to it (set by the cobra binding in cli/cmd/ctl/dashboard/options.go) and PreRun calls Validate() before the leaf's RunE fires.
Why a single struct rather than per-command flags? Because every `dashboard *` invocation goes through the same authentication / output / timezone / temperature pipeline, and we want one place that documents the cross-flag rules (e.g. --start/--end mutually exclusive with --since; --watch requires a non-zero recommended-poll-seconds).
Raw fields (OutputRaw, StartRaw, EndRaw, SinceRaw, TimezoneRaw, TempUnitRaw) are EXPORTED so the cobra binding layer in cmd-side options.go can wire pflags onto them without poking at unexported state. Validate() turns each raw string into the typed field above.
func (*CommonFlags) HasAbsoluteWindow ¶
func (cf *CommonFlags) HasAbsoluteWindow() bool
HasAbsoluteWindow reports whether --start/--end were both provided.
func (*CommonFlags) ResolveWindow ¶
func (cf *CommonFlags) ResolveWindow(now time.Time, defaultDuration time.Duration) (start, end time.Time)
ResolveWindow returns the effective [start, end] for the iteration at `now`. For absolute windows the same pair is returned every call; for sliding windows we anchor end=now and back off by Since.
When neither flag is set we fall back to defaultDuration ending at `now` — defaultDuration is whatever the per-page config.ts uses (e.g. 5m for CPU). Caller passes it.
func (*CommonFlags) Validate ¶
func (cf *CommonFlags) Validate() error
Validate parses the raw flag strings into the typed fields and enforces cross-flag rules. cmd-side BindPersistent calls this from PreRunE, after cobra has populated the raw strings.
Invariants enforced here (the test suite asserts each one):
- --output must be one of {table, json}
- --temp-unit must be one of {C, F, K}
- --since is mutually exclusive with --start/--end
- --start and --end must come together; --start must be < --end
- --watch-iterations / --watch-timeout require --watch
- negative durations are rejected
Validate is idempotent — calling it twice returns the same result.
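A few of the cross-flag invariants above can be sketched as a standalone check. This is illustrative only; the real Validate also parses the raw strings and normalises the output / temp-unit enums:

```go
package main

import (
	"errors"
	"fmt"
)

// validateWindowFlags sketches three of the documented invariants:
// --since excludes --start/--end, --start and --end come together,
// and --watch-iterations requires --watch.
func validateWindowFlags(sinceSet, startSet, endSet, watch bool, iterations int) error {
	if sinceSet && (startSet || endSet) {
		return errors.New("--since is mutually exclusive with --start/--end")
	}
	if startSet != endSet {
		return errors.New("--start and --end must be provided together")
	}
	if iterations > 0 && !watch {
		return errors.New("--watch-iterations requires --watch")
	}
	return nil
}

func main() {
	fmt.Println(validateWindowFlags(true, true, false, false, 0))
}
```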
type Envelope ¶
type Envelope struct {
// Kind is one of the constants in schema.go. Required.
Kind string `json:"kind"`
// Meta carries non-payload context: timestamps, the active profile,
// pagination hints, polling cadence, and (for failed `--watch`
// iterations) the typed error message.
Meta Meta `json:"meta"`
// Items is the leaf shape: a flat list of records the command
// produced. Each Item carries a stable machine-friendly `raw` and an
// SPA-aligned `display`.
Items []Item `json:"items,omitempty"`
// Sections is the aggregated shape: a map of section-key → nested
// Envelope. Only `dashboard overview` (default action) uses this.
// Section keys MUST be stable: `identity`, `quota`, `cluster`,
// `ranking`. Iteration order is fixed by the parent command (encoded
// JSON keeps insertion order via json.Marshaler-ish wrapping).
Sections map[string]Envelope `json:"sections,omitempty"`
}
Envelope is the top-level JSON document a command renders. Exactly one of Items or Sections is populated; the other is left zero / nil so its key is suppressed by `omitempty`.
func BuildRankingEnvelope ¶
func BuildRankingEnvelope(ctx context.Context, c *Client, cf *CommonFlags, target, sortBy, sortDir string, now time.Time) (Envelope, error)
BuildRankingEnvelope is the shared workload-grain ranking builder behind both `dashboard overview ranking` (default sortBy="cpu") and `dashboard applications` (which exposes --sort-by). It mirrors the SPA's `formatResult` in Applications2/config.ts: discover the user's app inventory via fetchAppsList, fan out to fetchWorkloadsMetrics, then emit one Item per workload carrying title / icon / state / pod count + the four metric values (cpu / memory / net_in / net_out).
Hoisted to the pkg layer because it is the ONE legitimate horizontal share between cmd area subpackages: `applications` MUST NOT import `overview/ranking`. The cf parameter replaces the old cmd-side `common` global so this function stays cobra-agnostic.
target is the optional --user override (empty = active profile owner). sortBy must be one of "cpu" / "memory" / "net_in" / "net_out". sortDir must be "asc" or "desc".
func GateOlaresOne ¶
func GateOlaresOne(ctx context.Context, c *Client, cf *CommonFlags, kind string, now time.Time, stderr io.Writer) (Envelope, bool)
GateOlaresOne returns (gatedEnvelope, true) when the active device is not Olares One; the caller should emit `gatedEnvelope` and skip any data fetch. The hint message is also written to `stderr` (when non-nil and `cf.Output != OutputJSON`) so humans see why the table is empty.
On error from EnsureSystemStatus we let the caller proceed (gated=false, nil envelope) — the downstream BFF call will surface the real error itself rather than masking it with a confused "not Olares One" hint.
func VgpuUnavailableFromError ¶
func VgpuUnavailableFromError(c *Client, cf *CommonFlags, err error, kind string, now time.Time, stderr io.Writer) (Envelope, bool)
VgpuUnavailableFromError converts a HAMI-side error into the (empty=true, empty_reason=vgpu_unavailable) envelope when the upstream came back with a 5xx. The caller is responsible for the 404 branch (no_vgpu_integration) which keeps existing semantics.
`err` is the result of one of the fetch* helpers; `kind` / `now` / `c` provide envelope context. Returns (env, true) when the error matches the 5xx HAMI-down pattern; (zero, false) otherwise so the caller can re-raise.
We extract a short body message (capped at 256 bytes) and stash it in `meta.error` so agents can drill in without parsing free-form strings. Stderr in non-JSON mode prints a single advisory line.
type FanCurveRow ¶
type FanCurveRow struct {
Step int
CPUFanRPM int
GPUFanRPM int
CPUTempRange string
GPUTempRange string
}
FanCurveRow is one row of FanCurveTable.
type GraphicsListBody ¶
type GraphicsListBody struct {
Filters map[string]string `json:"filters"`
PageRequest map[string]string `json:"pageRequest"`
}
GraphicsListBody mirrors the SPA's GraphicsListParams. The fields are emitted UNCONDITIONALLY (no `omitempty`) — HAMI's WebUI rejects a body missing the `filters` key with a 500 "unknown request error" because downstream code dereferences the (would-be) Filters struct without a nil guard. The SPA always sends `"filters": {}` (see `Overview2/GPU/GPUsTable.vue:195-201`); we match that wire shape.
History: an earlier revision used `omitempty` on both fields. With a nil-input filter map, encoding/json emits `{"pageRequest":{...}}` — HAMI then panics, the gin recovery middleware returns a generic 5xx, and `olares-cli dashboard overview gpu` lights up `vgpu_unavailable` while the SPA in the same browser tab continues to render data. `TestGraphicsListBody_AlwaysIncludesFiltersKey` is the regression net for this.
type HTTPError ¶
type HTTPError struct {
Status int
URL string
Body string // truncated to 512 bytes
ErrorKind string
}
HTTPError is the typed error every Do* helper returns for non-200 responses. ErrorKind is the small enum surfaced via Meta.ErrorKind so agents can branch without parsing free-form text.
func IsHTTPError ¶
IsHTTPError unwraps err looking for *HTTPError. Returns the typed error + true on hit; nil + false otherwise.
type InstantVectorSample ¶
type InstantVectorSample struct {
Metric map[string]string `json:"metric"`
Value float64 `json:"value"`
Timestamp string `json:"timestamp"`
}
InstantVectorSample mirrors HAMI's `data[i]` row. Value is a number on the wire but float64 covers HAMI's full range (it caps at 1e308).
func FetchInstantVector ¶
func FetchInstantVector(ctx context.Context, c *Client, query string) ([]InstantVectorSample, error)
FetchInstantVector posts `query` to /hami/api/vgpu/v1/monitor/query/instant-vector and returns the decoded `data` array. HAMI returns one element per matching series; most CLI gauges just read `data[0]`, but a query can theoretically expand into >1 series so we hand back the slice.
type Item ¶
type Item struct {
Raw map[string]any `json:"raw,omitempty"`
Display map[string]any `json:"display,omitempty"`
}
Item is one row of a leaf Envelope.
The split between Raw and Display is deliberate:
Raw carries machine-friendly canonical values (numbers as numbers, timestamps as Unix-seconds float64, no thousand separators, no temperature unit conversion). Agents read this.
Display mirrors the SPA's rendered strings (units appended, percentages formatted, `--temp-unit` honored). Humans read the table view, which pulls columns out of Display.
type LsblkFlatRow ¶
LsblkFlatRow is one rendered row in the partitions table. `Depth` is 0 for the root and increments per nesting level; `TreePrefix` is the ASCII-art prefix to prepend to `Name` for human display. `Parent` carries the resolved parent name so agents can rebuild the tree from the flat list without re-reading pkname / prefix logic.
func FlattenLsblkHierarchy ¶
func FlattenLsblkHierarchy(rows []LsblkRow, rootName string) []LsblkFlatRow
FlattenLsblkHierarchy walks the rows pre-order, decorating each row with depth + tree prefix + resolved parent. Mirrors `Overview2/Disk/config.ts:342`. When `rootName` isn't in the row set we degrade to a flat list (no tree), matching the SPA.
type LsblkRow ¶
type LsblkRow struct {
Name string
Node string
Pkname string
Size string
Fstype string
Mountpoint string
Fsused string
FsusePercent string
}
LsblkRow is one row pulled from the per-node lsblk metric.
func CollectSubtreeByPkname ¶
CollectSubtreeByPkname BFS-walks the pkname graph from `rootName`, returning the rows in their original order. Mirrors `collectSubtreeByPkname` in Overview2/Disk/config.ts:273.
When the BFS hits an empty `seen` set (e.g. root itself absent from the rows) the SPA recursively gathers descendants directly — we replicate that fallback so empty rooted views still produce sane data.
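A self-contained sketch of the pkname walk, using a fixed-point pass in place of an explicit BFS queue and omitting the empty-seen fallback described above:

```go
package main

import "fmt"

// row is a trimmed stand-in for LsblkRow: just the name and parent link.
type row struct {
	Name   string
	Pkname string
}

// collectSubtree marks rootName and everything reachable through pkname
// links, then returns the matching rows in their original order.
func collectSubtree(rows []row, rootName string) []row {
	seen := map[string]bool{rootName: true}
	// Repeat until no new names are marked; equivalent to a BFS for the
	// shallow trees lsblk produces.
	for changed := true; changed; {
		changed = false
		for _, r := range rows {
			if seen[r.Pkname] && !seen[r.Name] {
				seen[r.Name] = true
				changed = true
			}
		}
	}
	var out []row
	for _, r := range rows {
		if seen[r.Name] {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	rows := []row{
		{Name: "nvme0n1"},
		{Name: "nvme0n1p1", Pkname: "nvme0n1"},
		{Name: "nvme0n1p2", Pkname: "nvme0n1"},
		{Name: "sda"}, // unrelated disk, excluded from the subtree
	}
	for _, r := range collectSubtree(rows, "nvme0n1") {
		fmt.Println(r.Name)
	}
}
```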
type Meta ¶
type Meta struct {
// FetchedAt is the wall-clock time at which the CLI received the
// terminal HTTP response. RFC3339 with timezone, honoring the
// `--timezone` override.
FetchedAt string `json:"fetched_at"`
// Profile is the OlaresID of the currently-selected profile
// (switch with `olares-cli profile use <name>`). Surfaced for
// log auditing — agents should NOT use it for routing; routing
// is implicit in the URL.
Profile string `json:"profile,omitempty"`
// User, when present, is the per-command target user a `--user`
// override resolved to. Empty for self-targeting commands.
User string `json:"user,omitempty"`
// RecommendedPollSeconds is the cadence the SPA polls this endpoint
// at. The watch loop refuses `--watch` against commands with 0 here
// (one-shot commands like `applications users` etc.).
RecommendedPollSeconds int `json:"recommended_poll_seconds,omitempty"`
// Iteration / TotalIterations are populated only by `--watch`. The
// first iteration is 1, not 0, to mirror the way humans count.
Iteration int `json:"iteration,omitempty"`
TotalIterations int `json:"total_iterations,omitempty"`
// Error, when non-empty, signals this iteration / section failed but
// the surrounding stream / aggregate continued. The CLI exits non-
// zero only when the whole command failed; per-iteration / per-
// section failures keep the stream alive (NDJSON discipline) so
// agents can post-hoc detect outages.
Error string `json:"error,omitempty"`
// ErrorKind classifies Error into a small enum so agents can branch
// without parsing free-form text. Values: "timeout", "http_4xx",
// "http_5xx", "transport", "decode", "auth", "unknown".
ErrorKind string `json:"error_kind,omitempty"`
// Empty signals that the upstream returned no data — distinct from
// "data was loaded and happens to be []". Used by GPU and other
// optional-hardware endpoints; lets agents distinguish "feature not
// installed" from "no items match".
Empty bool `json:"empty,omitempty"`
// EmptyReason is the human-friendly cause of Empty. Common values:
// "no_vgpu_integration" — HAMI integration not installed (HTTP 404)
// "vgpu_unavailable" — HAMI installed but unhealthy (HTTP 5xx);
// Meta.Error carries the upstream message,
// Meta.HTTPStatus carries the original status
// "no_gpu_detected" — HAMI installed and healthy but the
// list / detail returned an empty payload
// "no_pods" / "no_users" — query had no matches
// "not_olares_one" — fan / cooling features need Olares One hardware
// "no_fan_integration" — capi /system/fan absent on this BFF
//
// GPU subtree NEVER hard-blocks on admin role or CUDA labels — those
// surface as advisory `Note` instead, mirroring the SPA which only
// hides the sidebar card. Reasons "requires_admin" / "no_cuda_node"
// are reserved for soft hints; current code path emits them via Note.
EmptyReason string `json:"empty_reason,omitempty"`
// Note is a free-form, single-sentence explanation that complements
// EmptyReason for human readers. JSON consumers should branch on
// EmptyReason; Note exists so a `--watch` NDJSON stream stays self-
// describing without an agent having to memorise the reason enum.
Note string `json:"note,omitempty"`
// DeviceName mirrors the `device_name` field of
// `/user-service/api/system/status` — populated by gates that depend
// on the Olares One vs. generic-box distinction so agents can branch
// on hardware profile without re-querying.
DeviceName string `json:"device_name,omitempty"`
// HTTPStatus is the upstream HTTP status when it's worth keeping
// (mostly the empty-by-404 cases). Suppressed for 200s.
HTTPStatus int `json:"http_status,omitempty"`
// Window describes the time range used to build this envelope, when
// applicable. Populated by the GPU detail-full / task-detail-full
// commands so agents can replay the same Prom-style range query
// without re-deriving start/end/step.
Window *TimeWindow `json:"window,omitempty"`
// Warnings collects per-section / per-query soft failures that did
// NOT abort the command. Typical use: in detail-full, one of N
// gauges hit a 5xx — its raw entry carries `error` and the parent
// envelope's Warnings gets an entry like
// `gauges[2] (gpu_utilization): HAMI returned HTTP 502`. Agents
// branching on partial data should check len(Warnings) first.
Warnings []string `json:"warnings,omitempty"`
}
Meta is the context block attached to every Envelope (top-level and per- section). Optional fields are suppressed when zero so the JSON stays terse for one-shot leaf commands and richer for multi-iteration `--watch` runs.
type MonitoringWindow ¶
type MonitoringWindow struct {
Step time.Duration // upstream "step" param (Prometheus-flavoured)
Times int // upstream "times" param (sample count)
DefaultDur time.Duration // fallback when --since / --start are unset
}
MonitoringWindow encapsulates the time-window flags (--since / --start / --end) and the SPA's default fallback. The SPA's controlPanelCommon/containers/Monitoring/config.ts uses step=600s, times=20 for the cluster headline and step=60s for detail pages; we mirror the cluster default and let leaves override.
func DefaultClusterWindow ¶
func DefaultClusterWindow() MonitoringWindow
DefaultClusterWindow is the headline cluster window. 600s × 20 = 200min, matching the SPA's `getParams({})` defaults for the overview page.
func DefaultDetailWindow ¶
func DefaultDetailWindow() MonitoringWindow
DefaultDetailWindow is the per-detail-page sliding window. 60s × 50 = 50min, matching the SPA's CPU/Memory/Pods detail page `timeRangeDefault`.
type OutputFormat ¶
type OutputFormat string
OutputFormat is the on-the-wire choice for `--output / -o`. Default is `table` (human-readable, fixed columns) — agents pass `json` for the canonical envelope defined in envelope.go.
const (
    OutputTable OutputFormat = "table"
    OutputJSON  OutputFormat = "json"
)
func ParseOutputFormat ¶
func ParseOutputFormat(s string) (OutputFormat, error)
ParseOutputFormat normalises and validates a user-supplied `--output` value. Empty defaults to `table` (matches the SPA's "human view" default and what `olares-cli files ls` already does).
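A sketch of the normalisation contract, string-typed so it stays self-contained (the real function returns OutputFormat, and the exact normalisation rules are an assumption here):

```go
package main

import (
	"fmt"
	"strings"
)

// parseOutputFormat sketches the documented behaviour: empty defaults to
// table; otherwise the value must be table or json after trimming and
// lower-casing.
func parseOutputFormat(s string) (string, error) {
	switch strings.ToLower(strings.TrimSpace(s)) {
	case "", "table":
		return "table", nil
	case "json":
		return "json", nil
	default:
		return "", fmt.Errorf("unsupported --output %q (expected table or json)", s)
	}
}

func main() {
	f, _ := parseOutputFormat("")
	fmt.Println(f) // table
}
```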
type RangeVectorPoint ¶
RangeVectorPoint is one (timestamp, value) pair inside a series. Both fields are strings on the wire (per the SPA's `RangeVector.values` type definition); the CLI parses them lazily on render.
type RangeVectorRange ¶
type RangeVectorRange struct {
Start string `json:"start"`
End string `json:"end"`
Step string `json:"step"`
}
RangeVectorRange mirrors the SPA's `RangeVectorParams.range` object.
type RangeVectorSeries ¶
type RangeVectorSeries struct {
Metric map[string]string `json:"metric"`
Values []RangeVectorPoint `json:"values"`
}
RangeVectorSeries mirrors HAMI's `data[i]` series row.
func FetchRangeVector ¶
func FetchRangeVector(ctx context.Context, c *Client, query, start, end, step string) ([]RangeVectorSeries, error)
FetchRangeVector posts a range query (start/end/step) to HAMI's /v1/monitor/query/range-vector. SPA's `getStepWithTimeRange` builds `step` (a string like "30m"); ISO-formatted start/end are computed by the caller via `GPUTrendTimestampISO`.
type RawAppListItem ¶
type RawAppListItem struct {
ID string `json:"id"`
Name string `json:"name"`
Title string `json:"title"`
Icon string `json:"icon"`
Namespace string `json:"namespace"`
Deployment string `json:"deployment"`
OwnerKind string `json:"ownerKind"`
State string `json:"state"`
Entrances []map[string]interface{} `json:"entrances"`
}
RawAppListItem mirrors the subset of `AppListItem` (controlPanelCommon/network/network.ts:280) the workload merge consumes. We tolerate unknown extra fields — the BFF freely adds new ones, and encoding/json silently ignores keys with no matching struct field.
func FetchAppsList ¶
func FetchAppsList(ctx context.Context, c *Client) ([]RawAppListItem, error)
FetchAppsList queries `/user-service/api/myapps_v2` and returns the SPA's `appsWithNamespace` selector (entries with at least one entrance). Empty entrance list ⇒ filtered out, mirroring `appsWithNamespace` in stores/AppList.ts.
type RunOnce ¶
RunOnce is the per-iteration callback every leaf command supplies. The watch loop calls it once per tick; the leaf is responsible for building and emitting (or returning) one Envelope.
`iter` is 1-based. `now` is the wall clock anchor for the current iteration — the leaf should use it to compute time windows when --since is set.
Returning a non-nil error signals iteration failure. The watch loop will:
- emit a single NDJSON line with Meta.Error / Meta.ErrorKind populated to keep the stream intact (JSON output);
- log a warning to stderr (table output);
- bump the consecutive-failure counter; after FailureThreshold (default 3) consecutive failures, the loop exits non-zero;
- on credential.ErrTokenInvalidated / ErrNotLoggedIn, exit immediately non-zero regardless of failure count (re-running login is a hard prerequisite).
One-shot commands (no --watch) reuse RunOnce too: the runner just calls it exactly once and returns its error.
type Runner ¶
type Runner struct {
// Flags is the resolved CommonFlags after Validate() ran.
Flags *CommonFlags
// Recommended is the SPA-side polling cadence (typically pulled from
// the metric catalog). 0 means "this command is not poll-able"; in
// that case --watch is rejected up front. Otherwise it's the default
// for --watch-interval when the user didn't set one.
Recommended time.Duration
// FailureThreshold is the consecutive-failure cap before the watch
// loop bails out. Defaults to 3.
FailureThreshold int
// Stdout / Stderr default to os.Stdout / os.Stderr. Tests override
// them.
Stdout io.Writer
Stderr io.Writer
// Now is the clock used by the watch loop. nil = time.Now. Tests
// inject a fake clock to drive the ticker deterministically.
Now func() time.Time
// Sleep is the scheduling primitive. nil = time.NewTimer-based; tests
// inject a fake sleeper that advances Now in lockstep.
Sleep func(ctx context.Context, d time.Duration) error
// Iter is the per-iteration callback. Required. We name the field
// Iter (not Run) so it doesn't collide with the entry-point method
// Runner.Run.
Iter RunOnce
}
Runner is the per-command "execute me with the user's --watch flags" orchestrator. Embed CommonFlags into your command, register a RunOnce, and call Runner.Run(ctx) from RunE.
type SchemaEntry ¶
type SchemaEntry struct {
Path string // user-facing command path, e.g. "overview cpu"
Kind string // dashboard.* constant
File string // filename inside the embedded schemas/ FS
}
SchemaEntry is one row of the schema index — exported for both tests and the cmd-side schema introspection command.
func LoadSchemaIndex ¶
func LoadSchemaIndex() []SchemaEntry
LoadSchemaIndex reads the embedded schemas/ directory + augments each entry with the user-facing command path. Order: the static table below is the source of truth for path↔kind mapping; the FS walk only verifies every static entry has a matching file (a missing file returns the entry with File="" so `dashboard schema` still surfaces it).
type SystemFanData ¶
type SystemFanData struct {
GPUFanSpeed float64 `json:"gpu_fan_speed"`
GPUTemperature float64 `json:"gpu_temperature"`
CPUFanSpeed float64 `json:"cpu_fan_speed"`
CPUTemperature float64 `json:"cpu_temperature"`
}
SystemFanData mirrors the SPA's getSystemFan response payload (.data field of /user-service/api/mdns/olares-one/cpu-gpu).
func FetchSystemFan ¶
func FetchSystemFan(ctx context.Context, c *Client) (*SystemFanData, error)
FetchSystemFan queries `/user-service/api/mdns/olares-one/cpu-gpu` and returns the live RPM + temperature pair the overview fan live leaf renders.
type SystemIFSItem ¶
type SystemIFSItem struct {
Iface string `json:"iface"`
IsHostIp bool `json:"isHostIp,omitempty"`
Hostname string `json:"hostname,omitempty"`
Method string `json:"method,omitempty"`
MTU any `json:"mtu,omitempty"`
IP string `json:"ip,omitempty"`
IPv4Mask string `json:"ipv4Mask,omitempty"`
IPv4Gateway string `json:"ipv4Gateway,omitempty"`
IPv4DNS string `json:"ipv4DNS,omitempty"`
IPv6Address string `json:"ipv6Address,omitempty"`
IPv6Gateway string `json:"ipv6Gateway,omitempty"`
IPv6DNS string `json:"ipv6DNS,omitempty"`
InternetConnected bool `json:"internetConnected,omitempty"`
IPv6Connectivity bool `json:"ipv6Connectivity,omitempty"`
TxRate any `json:"txRate,omitempty"`
RxRate any `json:"rxRate,omitempty"`
}
SystemIFSItem mirrors the dashboard SystemIFSItem type: the union of fields the SPA's overview network page reads.
func FetchSystemIFS ¶
FetchSystemIFS queries `/capi/system/ifs?testConnectivity=...`. The SPA's initial fetch passes testConnectivity=true so the server probes outgoing connectivity per-iface.
type SystemStatus ¶
SystemStatus is the subset of `/user-service/api/system/status`'s payload the CLI uses to decide whether fan / gpu subtrees apply on this device. The wire shape is `data.{device_name, ...}`; we only keep the bits that drive the gates.
Mirrors `TerminusStatus` in `packages/app/src/services/abstractions/mdns/service.ts:237` and the `device_name === 'Olares One'` check in `packages/app/src/apps/dashboard/stores/Fan.ts:67`.
func (*SystemStatus) IsOlaresOne ¶
func (s *SystemStatus) IsOlaresOne() bool
IsOlaresOne reports whether this Olares instance is running on an Olares One device. Returns the cached value from EnsureSystemStatus. On error from the underlying call we report `false` so the gates fail open (no fan section) rather than blocking with stale data.
type TableColumn ¶
type TableColumn struct {
Header string
// Get pulls the cell value out of an Item — typically by reading
// item.Display[<key>]. Should never return nil; render "-" instead.
Get func(Item) string
}
TableColumn names a table column and how to extract its value. Each leaf command supplies its own []TableColumn so the renderer stays agnostic.
type TimeWindow ¶
type TimeWindow struct {
Since string `json:"since,omitempty"`
Start string `json:"start"`
End string `json:"end"`
Step string `json:"step,omitempty"`
}
TimeWindow describes a relative + absolute time range. All fields are strings so JSON consumers don't have to deal with timezone parsing — `since` is the user-supplied "1h"/"8h" form (or "" when --start/--end drove the window), `start`/`end` are RFC-3339 wall-clock, `step` is the SPA-style coarse-grain step ("30m" / "10m" / etc.).
type UserDetail ¶
UserDetail captures the subset of /capi/app/detail the CLI cares about. The wire payload nests the identity inside `.user.{username,globalrole}` (matching `AppDetailResponse` in `controlPanelCommon/network/network.ts`); EnsureUser does the lifting so the rest of the package keeps a flat view.
func ResolveTargetUser ¶
ResolveTargetUser returns the user to operate on for `--user`-aware commands (overview user, etc.).
- explicit (positional or --user) is honoured if the active profile is admin; non-admins targeting a third party get an admin-required error.
- empty falls back to the active profile's identity.
func (*UserDetail) IsAdmin ¶
func (u *UserDetail) IsAdmin() bool
IsAdmin reports whether the resolved user has the platform-admin role. The server canonical value is `platform-admin`; we tolerate `admin` too for forward-compat with the role rename done in early Olares releases.
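The tolerant role check is small enough to sketch directly (a stand-alone sketch of the documented rule, not the method itself):

```go
package main

import "fmt"

// isAdmin accepts the canonical "platform-admin" role plus the legacy
// "admin" spelling, as the documentation above describes.
func isAdmin(role string) bool {
	return role == "platform-admin" || role == "admin"
}

func main() {
	fmt.Println(isAdmin("platform-admin"), isAdmin("admin"), isAdmin("workspace-manager")) // true true false
}
```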
type WorkloadAggregate ¶
type WorkloadAggregate struct {
Name string
Title string
Icon string
Namespace string
Deployment string
OwnerKind string
State string
IsSystem bool
PodCount int
CPU float64 // pod_cpu_usage / namespace_cpu_usage (last sample)
Memory float64 // pod_memory_usage_wo_cache / namespace_memory_usage_wo_cache
NetIn float64 // pod_net_bytes_received / namespace_net_bytes_received
NetOut float64 // pod_net_bytes_transmitted / namespace_net_bytes_transmitted
}
WorkloadAggregate is the merged shape returned by FetchWorkloadsMetrics. Each row carries the application's logical identity (name / title / namespace / system flag / owner) plus the four metric values plus the running-pod count for the row. State / Title / Icon ride along so the CLI can present the same per-app card the SPA does without a second round-trip.
func FetchWorkloadsMetrics ¶
func FetchWorkloadsMetrics(ctx context.Context, c *Client, cf *CommonFlags, req WorkloadRequest, w MonitoringWindow, now time.Time) ([]WorkloadAggregate, error)
FetchWorkloadsMetrics fans out two BFF calls (pods inside the user's namespace + per-namespace summary), merges them into a per-app aggregate, and orders the result.
func MergeWorkloadMetrics ¶
func MergeWorkloadMetrics(apps []WorkloadApp, podData, nsData map[string]format.MonitoringResult) []WorkloadAggregate
MergeWorkloadMetrics walks the apps list once, looking up each app's row in either the pod result (system apps, keyed by deployment via PodDeploymentName(metric.pod)) or the namespace result (custom apps, keyed by metric.namespace). Last-sample values are pulled per-metric.
The system-frontend special case mirrors `getTabOptions` in Applications2/config.ts:347 — multiple entrance apps share the same deployment, and each entrance gets a clone of the same metric.
type WorkloadApp ¶
type WorkloadApp struct {
Name string
Title string
Icon string
Namespace string
Deployment string
OwnerKind string
State string
IsSystem bool
}
WorkloadApp is one row of the SPA's app inventory (post `entrances` filter). Mirrors `AppListItem` from `controlPanelCommon/network/network.ts:280` for the fields the workload merge actually reads.
func LoadAppsForRanking ¶
func LoadAppsForRanking(ctx context.Context, c *Client, target string) ([]WorkloadApp, string, error)
LoadAppsForRanking discovers the active user's app inventory the way the SPA does — via /user-service/api/myapps_v2 (FetchAppsList). Each app entry is then tagged `IsSystem` based on whether it lives in the user's `user-space-<username>` namespace, mirroring Applications2/IndexPage.vue:330 (`userNamespace = "user-space-${username}"`).
Returns the apps + the user's `user-space-` namespace so the per-pod monitoring fetch can target the right ns.
type WorkloadRequest ¶
type WorkloadRequest struct {
Apps []WorkloadApp
UserNamespace string
Sort string // "asc" | "desc"
SortBy string // "cpu" (default) | "memory" | "net_in" | "net_out"
}
WorkloadRequest is what callers feed into FetchWorkloadsMetrics: a flattened list of registered apps + the active user's namespace.
Source Files ¶
Directories ¶
| Path | Synopsis |
|---|---|
| applications | Package applications hosts the business logic for the `olares-cli dashboard applications` cobra leaf — a workload-grain table mirroring the SPA's Applications page. |
| format | Package format is a 1:1 Go port of the dashboard SPA's number / unit formatting helpers. |
| overview | Package overview hosts the business logic for the `olares-cli dashboard overview` cobra subtree — physical / user / ranking / cpu / memory / pods / network leaves plus the default fan-out aggregator. |
| overview/disk | Package disk hosts the business logic for the `olares-cli dashboard overview disk` subtree (root + main + partitions). |
| overview/fan | Package fan hosts the business logic for the `olares-cli dashboard overview fan` subtree (root + live + curve). |
| overview/gpu | Package gpu hosts the business logic for the `olares-cli dashboard overview gpu` subtree (root + list + tasks + get + task + detail + task-detail). |