pathwalk

Published: Mar 27, 2026 License: MIT Imports: 19 Imported by: 0

README


A Go library and CLI that executes Bland AI-style conversational pathway JSON files as agentic pipelines. Define your workflow as a graph of nodes and edges; the engine walks the graph, calls your LLM at each step, extracts variables, and routes to the next node automatically.

Installation

go get github.com/wricardo/pathwalk

CLI

Build and run a pathway from the command line:

go build ./cmd/pathwalk/

./pathwalk run \
  --pathway examples/pizzeria_ops.json \
  --task "Create an order for John: 2x Margherita" \
  --model gpt-4o \
  --api-key $OPENAI_API_KEY
run flags
Flag Default Description
--pathway, -p required Path to the pathway JSON file
--task, -t required Initial task description
--model gpt-4o LLM model name
--api-key $OPENAI_API_KEY API key
--base-url $OPENAI_BASE_URL Base URL (for OpenAI-compatible APIs)
--max-steps 50 Maximum nodes to traverse
--verbose, -v false Print each step's output and routing decision
--graphql-endpoint $GRAPHQL_ENDPOINT Enables the built-in GraphQL tools
--graphql-header Extra HTTP headers (Key=Value, repeatable)
validate command

Validates a pathway JSON file against the bundled JSON schema and structural rules:

./pathwalk validate examples/pizzeria_ops.json

Outputs schema errors and parse errors separately and exits with code 1 on failure.

Library usage

package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/wricardo/pathwalk"
)

func main() {
    pathway, err := pathwalk.ParsePathway("my_pathway.json")
    if err != nil {
        log.Fatal(err)
    }

    apiKey := os.Getenv("OPENAI_API_KEY")
    llm := pathwalk.NewOpenAIClient(apiKey, "", "gpt-4o")

    engine := pathwalk.NewEngine(pathway, llm,
        pathwalk.WithMaxSteps(30),
    )

    result, err := engine.Run(context.Background(), "My task description")
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println(result.Output)
    fmt.Println(result.Variables)
}
Step-by-step execution

For fine-grained control, use Step() to process one node at a time:

ctx := context.Background()
state := pathwalk.NewState("My task description")
nodeID := pathway.StartNode.ID

for {
    result, err := engine.Step(ctx, state, nodeID)
    if err != nil || result.Done {
        break
    }
    nodeID = result.NextNodeID
}
Engine options
Option Description
WithMaxSteps(n) Maximum nodes to visit in a single Run() call (default 50)
WithTools(tools...) Register global tools available to all LLM nodes
WithLogger(log) Set a custom *slog.Logger (default slog.Default())
WithGlobalNodeCheck(bool) Enable/disable per-step global node interception (auto-enabled when pathway has global nodes)
Adding tools
myTool := pathwalk.Tool{
    Name:        "lookup_user",
    Description: "Look up a user by email",
    Parameters: map[string]any{
        "type": "object",
        "properties": map[string]any{
            "email": map[string]any{"type": "string"},
        },
        "required": []string{"email"},
    },
    Fn: func(ctx context.Context, args map[string]any) (any, error) {
        // your implementation
        return map[string]any{"id": "123", "name": "Alice"}, nil
    },
}

engine := pathwalk.NewEngine(pathway, llm, pathwalk.WithTools(myTool))
GraphQL tools

The tools package provides six GraphQL tools. They are wired in automatically when a GraphQL endpoint is configured (the --graphql-endpoint flag or the pathway's graphqlEndpoint field), and can also be registered manually via WithTools when using the library:

Tool Description
graphql_query Execute a GraphQL query
graphql_mutation Execute a GraphQL mutation
graphql_queries List available queries with argument types and return types
graphql_mutations List available mutations with argument types and return types
graphql_types List all named non-scalar types (objects, inputs, enums, interfaces)
graphql_type Describe a specific type with fields expanded 2 levels deep

The list/describe tools support optional filter and withDescription parameters.

import "github.com/wricardo/pathwalk/tools"

gt := &tools.GraphQLTool{
    Endpoint: "http://localhost:4000/graphql",
    Headers:  map[string]string{"Authorization": "Bearer " + token},
}
engine := pathwalk.NewEngine(pathway, llm, pathwalk.WithTools(gt.AsTools()...))

When Name is set on GraphQLTool, all tool names get a _<Name> suffix (e.g. graphql_query_sheets) so multiple endpoints can coexist.

RunResult
type RunResult struct {
    Output     string         // final text output
    Variables  map[string]any // accumulated extracted variables
    Steps      []Step         // one entry per visited node
    Reason     string         // why the run ended (see below)
    FailedNode string         // node that caused the stop (on "error" or "max_node_visits")
    Logs       []LogEntry     // structured log records emitted during the run
}

Reason values:

Value Meaning
"terminal" Reached a terminal (End Call) node
"max_steps" Hit the step limit (WithMaxSteps or pathway maxTurns)
"error" An error occurred during execution
"dead_end" Node has no outgoing edges and isn't terminal
"missing_node" Referenced node ID not found in the pathway
"max_node_visits" A node exceeded its per-node visit limit

Run() can return both a non-nil *RunResult and a non-nil error when Reason is "error" or "missing_node". The result contains partial execution state (steps taken, variables extracted so far).

StepResult
type StepResult struct {
    Step       Step       // the step record for this execution
    NextNodeID string     // empty when Done=true
    Done       bool       // true when the run should terminate
    Reason     string     // same values as RunResult.Reason
    Output     string     // text output from the node
    Error      string     // error message if applicable
    FailedNode string     // node name that caused the stop
    Logs       []LogEntry // log records emitted during this step
}
Validation

Validate pathway JSON programmatically:

data, _ := os.ReadFile("pathway.json")
result := pathwalk.ValidatePathwayBytes(data)

if !result.Valid() {
    for _, err := range result.Errors() {
        fmt.Println(err)
    }
}

ValidatePathwayBytes runs both JSON schema validation (against an embedded schema) and structural parsing. Both checks run independently so all errors are returned in a single call.

Pathway JSON format

Pathways are JSON files with nodes and edges arrays, compatible with the Bland AI export format.

Top-level fields
{
  "nodes": [...],
  "edges": [...],
  "graphqlEndpoint": "http://localhost:4000/graphql",
  "graphqlEndpoints": { "sheets": "http://localhost:4001/graphql" },
  "maxTurns": 30,
  "maxVisitsPerNode": 5
}
Field Description
graphqlEndpoint Default GraphQL endpoint; the CLI flag overrides this
graphqlEndpoints Named endpoints; tools get _<name> suffix
maxTurns Caps total node transitions (overrides engine default if lower)
maxVisitsPerNode Default per-node visit cap for all nodes (0 = no limit)
Node types

Default (LLM node) -- runs an LLM prompt, optionally extracts variables, then routes to the next node.

{
  "id": "classify",
  "type": "Default",
  "data": {
    "name": "Classify Request",
    "isStart": true,
    "prompt": "Classify the incoming request.",
    "condition": "Exit when classification is complete.",
    "extractVars": [
      ["operation_type", "string", "The operation category", true]
    ],
    "modelOptions": { "newTemperature": 0.1 },
    "maxVisits": 3
  }
}

extractVars tuple: [name, type, description, required]
Supported types: "string", "integer", "boolean"

maxVisits overrides the pathway-level maxVisitsPerNode for this node.
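Decoding the tuple form is straightforward once the JSON is unmarshaled into a []any. A minimal sketch of such a converter (illustrative only; the library's actual parser may validate differently):

```go
package main

import "fmt"

// VariableDef mirrors the library's variable definition.
type VariableDef struct {
	Name, Type, Description string
	Required                bool
}

// parseExtractVar converts one [name, type, description, required] tuple,
// as decoded from JSON, into a VariableDef.
func parseExtractVar(t []any) (VariableDef, error) {
	if len(t) != 4 {
		return VariableDef{}, fmt.Errorf("want 4 elements, got %d", len(t))
	}
	name, ok1 := t[0].(string)
	typ, ok2 := t[1].(string)
	desc, ok3 := t[2].(string)
	req, ok4 := t[3].(bool)
	if !ok1 || !ok2 || !ok3 || !ok4 {
		return VariableDef{}, fmt.Errorf("bad tuple element types")
	}
	switch typ {
	case "string", "integer", "boolean":
	default:
		return VariableDef{}, fmt.Errorf("unsupported type %q", typ)
	}
	return VariableDef{name, typ, desc, req}, nil
}

func main() {
	v, err := parseExtractVar([]any{"operation_type", "string", "The operation category", true})
	fmt.Println(v, err)
}
```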

Route -- branches based on extracted variables (no LLM call).

{
  "id": "router",
  "type": "Route",
  "data": {
    "name": "Route to Handler",
    "routes": [
      {
        "conditions": [{ "field": "operation_type", "value": "orders", "operator": "is" }],
        "targetNodeId": "orders-node"
      }
    ],
    "fallbackNodeId": "end"
  }
}

Supported operators: "is", "is not", "contains", "not contains", ">", "<", ">=", "<="

Multiple conditions within a rule are AND-ed; rules are evaluated in order; first match wins. String comparisons are case-insensitive. Numeric operators (>, <, >=, <=) parse values as float64.
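The operator semantics just described can be sketched in isolation. The following is an illustrative re-implementation, not the library's code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Condition mirrors a single route condition from the pathway JSON.
type Condition struct {
	Field, Operator, Value string
}

// matches evaluates one condition against the extracted variables:
// string comparisons are case-insensitive, numeric operators parse
// both sides as float64 and fail closed when parsing fails.
func matches(c Condition, vars map[string]any) bool {
	got := strings.ToLower(fmt.Sprint(vars[c.Field]))
	want := strings.ToLower(c.Value)
	switch c.Operator {
	case "is":
		return got == want
	case "is not":
		return got != want
	case "contains":
		return strings.Contains(got, want)
	case "not contains":
		return !strings.Contains(got, want)
	case ">", "<", ">=", "<=":
		a, err1 := strconv.ParseFloat(got, 64)
		b, err2 := strconv.ParseFloat(want, 64)
		if err1 != nil || err2 != nil {
			return false
		}
		switch c.Operator {
		case ">":
			return a > b
		case "<":
			return a < b
		case ">=":
			return a >= b
		default:
			return a <= b
		}
	}
	return false
}

func main() {
	vars := map[string]any{"operation_type": "Orders", "qty": 3}
	fmt.Println(matches(Condition{"operation_type", "is", "orders"}, vars)) // true
	fmt.Println(matches(Condition{"qty", ">", "2"}, vars))                  // true
}
```

A full route rule would AND several of these checks and take the first rule whose conditions all pass.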

End Call -- terminal node; returns text as the run output.

{
  "id": "end",
  "type": "End Call",
  "data": { "name": "Done", "text": "Operation complete." }
}

Webhook -- makes an HTTP request; supports {{variable}} placeholders in the body.

{
  "id": "notify",
  "type": "Webhook",
  "data": {
    "name": "Notify",
    "url": "https://example.com/hook",
    "method": "POST",
    "headers": { "Authorization": "Bearer token" },
    "body": { "customer": "{{customer_name}}" },
    "extractVars": [["order_id", "string", "Created order ID", true]]
  }
}
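The {{variable}} substitution can be approximated with a small helper. A sketch assuming placeholder names are word characters; the engine's real substitution may differ in edge cases:

```go
package main

import (
	"fmt"
	"regexp"
)

var placeholder = regexp.MustCompile(`\{\{(\w+)\}\}`)

// substitute replaces {{name}} placeholders with values from vars,
// leaving unknown placeholders untouched.
func substitute(tmpl string, vars map[string]any) string {
	return placeholder.ReplaceAllStringFunc(tmpl, func(m string) string {
		name := placeholder.FindStringSubmatch(m)[1]
		if v, ok := vars[name]; ok {
			return fmt.Sprint(v)
		}
		return m
	})
}

func main() {
	body := `{"customer": "{{customer_name}}", "note": "{{missing}}"}`
	fmt.Println(substitute(body, map[string]any{"customer_name": "John"}))
	// {"customer": "John", "note": "{{missing}}"}
}
```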
Global nodes

Nodes marked with "isGlobal": true act as interrupt handlers. Before each step, the engine asks the LLM whether any global node's condition matches the current state. If one matches, execution jumps to that node instead.

{
  "id": "escalate",
  "type": "Default",
  "data": {
    "name": "Escalate to Manager",
    "isGlobal": true,
    "globalLabel": "Customer asks to speak with a manager",
    "prompt": "Transfer to manager..."
  }
}

Global node checking is auto-enabled when the pathway has at least one global node. Override with WithGlobalNodeCheck(false).

Node-level tools

Nodes can declare their own tools in node.data.tools. These are scoped to the node -- the LLM only sees them when executing that specific node. They are merged with any global tools registered via WithTools().

Currently only "webhook" type tools are supported. The engine performs the HTTP call with {{variable}} template substitution.

{
  "tools": [
    {
      "name": "save_customer",
      "description": "Save customer data. Call when name and email are confirmed.",
      "type": "webhook",
      "behavior": "feed_context",
      "config": {
        "url": "https://api.example.com/customers",
        "method": "POST",
        "headers": { "Content-Type": "application/json" },
        "body": "{\"name\": \"{{customer_name}}\", \"email\": \"{{customer_email}}\"}",
        "timeout": 10,
        "retries": 1
      },
      "extractVars": [["customer_id", "string", "Assigned customer ID", true]],
      "responsePathways": [
        { "type": "BlandStatusCode", "operator": "==", "value": "409", "nodeId": "already_exists" },
        { "type": "default", "nodeId": "" }
      ]
    }
  ]
}

Key fields:

  • type: "webhook" -- makes an HTTP call with the configured method/URL/body
  • behavior: "feed_context" -- the response is fed back to the LLM conversation
  • config.timeout: per-tool HTTP timeout in seconds (0 = default 30s)
  • config.retries: number of retry attempts on failure (0 = no retries)
  • extractVars: variables to extract from the webhook JSON response into state
  • responsePathways: conditional routing based on the tool's response:
    • "default" -- always matches (fallback)
    • "BlandStatusCode" -- matches on HTTP status code with an operator/value condition
    • When a pathway with a nodeId matches, it overrides normal edge-based routing
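The response-pathway selection above can be sketched as follows (illustrative only, limited to the == and != operators for brevity; the engine supports the full operator set):

```go
package main

import (
	"fmt"
	"strconv"
)

// ResponsePathway mirrors the responsePathways entries in the JSON.
type ResponsePathway struct {
	Type, Operator, Value, NodeID string
}

// selectPathway returns the NodeID of the first matching pathway:
// "BlandStatusCode" entries are checked against the HTTP status,
// "default" matches unconditionally. An empty result means the run
// falls back to normal edge-based routing.
func selectPathway(status int, pathways []ResponsePathway) string {
	for _, p := range pathways {
		switch p.Type {
		case "default":
			return p.NodeID
		case "BlandStatusCode":
			want, err := strconv.Atoi(p.Value)
			if err != nil {
				continue
			}
			if (p.Operator == "==" && status == want) ||
				(p.Operator == "!=" && status != want) {
				return p.NodeID
			}
		}
	}
	return ""
}

func main() {
	pathways := []ResponsePathway{
		{Type: "BlandStatusCode", Operator: "==", Value: "409", NodeID: "already_exists"},
		{Type: "default", NodeID: ""},
	}
	fmt.Println(selectPathway(409, pathways)) // already_exists
	fmt.Println(selectPathway(200, pathways)) // "" (default: normal routing)
}
```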

See examples/node_tools_example.json for a complete working example.

Edges
{
  "id": "e1",
  "source": "classify",
  "target": "router",
  "data": { "label": "continue", "description": "When classification is done" }
}

When a Default node has multiple outgoing edges, the LLM picks the route using the edge labels and descriptions as options.

LLM client

LLMClient is the interface for making LLM completions:

type LLMClient interface {
    Complete(ctx context.Context, req CompletionRequest) (*CompletionResponse, error)
}

The built-in OpenAIClient works with any OpenAI-compatible API (OpenAI, Groq, Ollama, OpenRouter, venu, etc.) via the baseURL parameter, e.g. NewOpenAIClient("", "http://localhost:11434/v1", "llama3") for a local Ollama server. It handles the tool-call loop internally (up to 25 rounds).

Context keys

Two context keys are set before each LLM call, useful for mocking:

  • NodeIDContextKey ("nodeID") -- which node triggered the call
  • CallPurposeContextKey ("callPurpose") -- "execute", "extract_vars", "route", or "check_global"

Temporal integration

The temporalworker package runs pathways as distributed Temporal workflows, executing each node as a separate activity.

Worker

Build and run the Temporal worker:

go build ./cmd/pathwalk-worker/

TEMPORAL_HOST=localhost:7233 TEMPORAL_NAMESPACE=default ./pathwalk-worker
Env var Default Description
TEMPORAL_HOST localhost:7233 Temporal server address
TEMPORAL_NAMESPACE default Temporal namespace

Or embed the worker in your own service:

import "github.com/wricardo/pathwalk/temporalworker"

w, err := temporalworker.StartWorker(temporalClient, &temporalworker.PathwayActivities{})
Starting a workflow
import "github.com/wricardo/pathwalk/temporalworker"

pathwayJSON, _ := os.ReadFile("my_pathway.json")

input := temporalworker.PathwayInput{
    PathwayJSON: pathwayJSON,
    Task:        "Create an order for John",
    LLMModel:    "gpt-4o",
    LLMAPIKey:   os.Getenv("OPENAI_API_KEY"),
    MaxSteps:    30,
}

// Start async -- returns immediately with the workflow ID.
workflowID, err := temporalworker.StartRun(ctx, temporalClient, input, temporalworker.RunOptions{
    WorkflowID: "my-idempotent-id", // optional; Temporal generates a UUID if empty
})
Querying status
// Non-blocking: get current state of a running workflow.
snapshot, err := temporalworker.GetResult(ctx, temporalClient, workflowID)
// snapshot.Status is "running" or a terminal reason
// snapshot.CurrentNodeID, snapshot.Variables, snapshot.Steps, snapshot.Output

// Blocking: wait for the workflow to finish.
result, err := temporalworker.WaitForResult(ctx, temporalClient, workflowID, "")
// result is *pathwalk.RunResult
Completion callbacks

Optionally invoke an activity on a different task queue when the workflow finishes:

input := temporalworker.PathwayInput{
    // ... pathway config ...
    CompletionTaskQueue:    "my-app-queue",
    CompletionActivityName: "HandlePathwayComplete",
    CompletionData:         "execution-123", // opaque; echoed back in the callback
}

The callback receives a CompletionCallbackInput with the RunResult and the echoed CompletionData.

Features
  • Each node executes as a separate Temporal activity with heartbeats
  • Pathway JSON is cached by SHA-256 hash across activity invocations
  • Built-in "get-result" query handler for mid-run status checks
  • Graceful shutdown via SIGINT/SIGTERM

Web UI

A React SPA for visualizing pathway JSON files, served by a Go HTTP server.

Build and run
# Build the React app (required once before running)
cd ui && npm install && npm run build && cd ..

# Build and start the server
go build ./cmd/pathwalk-ui/
./pathwalk-ui

Open http://localhost:8080 in your browser.

Flags
Flag Default Description
--addr :8080 Listen address
--ui ui/dist Path to React build output
--pathways examples Directory containing pathway JSON files
Development mode

Run the Go server and the Vite dev server side by side. The Vite proxy forwards /api requests to the Go server:

# Terminal 1 -- Go API server
./pathwalk-ui -addr :8080

# Terminal 2 -- React dev server with hot reload
cd ui && npm run dev
# open http://localhost:5173
Features
  • Sidebar -- lists all .json files in the pathways directory; click one to load it.
  • Flow diagram -- nodes are rendered at their position coordinates from the JSON. Nodes without positions are laid out automatically using BFS from the start node.
  • Node types are color-coded:
    • Blue -- LLM (Default)
    • Orange -- Route
    • Purple -- Webhook
    • Red -- Terminal (End Call)
  • A green dot marks the start node.
  • Pan and zoom -- drag to pan, scroll to zoom centered on the cursor.
  • Node details panel -- click any node to open a panel showing its prompt, exit condition, extract variables, routes, tools, and other fields.
API endpoints

The Go server exposes two JSON endpoints:

Endpoint Description
GET /api/pathways Returns a JSON array of .json filenames from the pathways directory
GET /api/pathway?file=<name> Returns the raw JSON content of a single pathway file

Testing

MockLLMClient lets you script LLM responses without network calls:

mock := pathwaytest.NewMockLLMClient()

// Match by node ID
mock.OnNode("n1", pathwaytest.MockResponse{Content: "Hello!"})

// Match by node ID + call purpose ("execute", "extract_vars", "route", or "check_global")
mock.OnNodePurpose("classify", "extract_vars", pathwaytest.MockResponse{
    ToolCalls: []pathwaytest.MockToolCall{
        {Name: "set_variables", Args: map[string]any{"operation_type": "orders"}},
    },
})

// Mock global node checks
mock.OnNodePurpose(pathwalk.GlobalCheckNodeID, "check_global", pathwaytest.MockResponse{
    ToolCalls: []pathwaytest.MockToolCall{
        {Name: "select_global_node", Args: map[string]any{"node": 0}},
    },
})

// Fallback for any unmatched call
mock.SetDefault(pathwaytest.MockResponse{Content: "ok"})

engine := pathwalk.NewEngine(pathway, mock)
result, err := engine.Run(ctx, "test task")

// Assertions
mock.CallCount("n1")  // number of LLM calls for that node
mock.Calls            // []RecordedCall -- full call log

Run the tests:

go test ./...

Claude Code skill

A pathwalk-engineer skill is bundled in this repo. It covers the pathway JSON format, engine API, testing patterns, tools, and Temporal integration — useful when building with pathwalk in any project.

Install
/plugin marketplace add wricardo/pathwalk
/plugin install pathwalk-engineer@pathwalk

Then invoke it with /pathwalk-engineer in any Claude Code session.

Manual install (copy)
cp -r .claude-plugin/../../plugins/pathwalk-engineer/skills/pathwalk-engineer ~/.claude/skills/

Or if you cloned this repo, Claude Code will prompt you to install the plugin automatically via .claude/settings.json.

Examples

File Description
examples/pizzeria_ops.json Multi-node pizzeria operations pathway with classification, routing, and GraphQL tools
examples/node_tools_example.json Demonstrates node-level webhook tools with response pathways and conditional routing
examples/pizzeria-server/ A gqlgen GraphQL server that backs the pizzeria pathway

Documentation

Overview

Package pathwalk executes conversational pathway JSON files as agentic pipelines. A pathway is a directed graph of nodes connected by edges. The Engine walks the graph step-by-step: NodeTypeLLM nodes invoke an LLM, NodeTypeRoute nodes evaluate conditions, NodeTypeWebhook nodes make HTTP calls, and NodeTypeTerminal nodes terminate the run.

Quick start:

pathway, err := pathwalk.ParsePathway("my_pathway.json")
llm := pathwalk.NewOpenAIClient(apiKey, "", "gpt-4o")
engine := pathwalk.NewEngine(pathway, llm)
result, err := engine.Run(ctx, "your task description")

Index

Constants

const (
	// NodeIDContextKey is set in the context before each LLM call so mocks
	// can inspect which node triggered the call.
	NodeIDContextKey contextKey = "nodeID"

	// CallPurposeContextKey distinguishes the purpose of an LLM call.
	// Values: "execute", "extract_vars", "route", "check_global"
	CallPurposeContextKey contextKey = "callPurpose"
)
const GlobalCheckNodeID = "$global_check"

GlobalCheckNodeID is the node ID placed in context during the global-node-check LLM call each step. Use it with MockLLMClient.OnNodePurpose in tests:

mock.OnNodePurpose(pathwalk.GlobalCheckNodeID, "check_global", ...)

Variables

This section is empty.

Functions

func CallPurposeFromContext

func CallPurposeFromContext(ctx context.Context) string

CallPurposeFromContext retrieves the call purpose from the context.

func NodeIDFromContext

func NodeIDFromContext(ctx context.Context) string

NodeIDFromContext retrieves the node ID from the context.

func WithCallPurpose

func WithCallPurpose(ctx context.Context, purpose string) context.Context

WithCallPurpose returns a context carrying the call purpose.

func WithNodeID

func WithNodeID(ctx context.Context, nodeID string) context.Context

WithNodeID returns a context carrying the current node ID.

Types

type CompletionRequest

type CompletionRequest struct {
	Model       string
	Messages    []Message
	Tools       []Tool
	Temperature float64
	MaxTokens   int
}

CompletionRequest is the input to LLMClient.Complete.

type CompletionResponse

type CompletionResponse struct {
	Content   string
	ToolCalls []ToolCall
}

CompletionResponse is the output from LLMClient.Complete.

type Edge

type Edge struct {
	ID     string
	Source string
	Target string
	Label  string
	Desc   string
}

Edge represents a directed connection between two nodes.

type Engine

type Engine struct {
	// contains filtered or unexported fields
}

Engine executes a parsed pathway using an LLM and optional tools.

func NewEngine

func NewEngine(pathway *Pathway, llm LLMClient, opts ...EngineOption) *Engine

NewEngine creates an Engine for the given pathway and LLM client. Panics if pathway or llm is nil.

func (*Engine) Run

func (e *Engine) Run(ctx context.Context, task string) (*RunResult, error)

Run executes the pathway with `task` as the initial context.

Unlike Step, Run can return both a non-nil *RunResult and a non-nil error simultaneously when Reason is "error" or "missing_node". Callers should always inspect both: the result contains the partial execution state (steps taken, variables extracted so far) and the error describes what went wrong.

func (*Engine) Step

func (e *Engine) Step(ctx context.Context, state *State, nodeID string) (*StepResult, error)

Step executes a single node in the pathway and returns the result. State is mutated in place (variables merged, step appended). Call this repeatedly with the returned NextNodeID until StepResult.Done is true.

Example:

state := NewState("my task")
nodeID := startNodeID
for {
    result, err := engine.Step(ctx, state, nodeID)
    if err != nil || result.Done {
        break
    }
    nodeID = result.NextNodeID
}

type EngineOption

type EngineOption func(*Engine)

EngineOption is a functional option for Engine.

func WithGlobalNodeCheck

func WithGlobalNodeCheck(enabled bool) EngineOption

WithGlobalNodeCheck enables or disables the per-step global node interception. By default it is enabled whenever the pathway has at least one global node.

func WithLogger

func WithLogger(log *slog.Logger) EngineOption

WithLogger sets the logger for the engine. If not set, slog.Default() is used.

func WithMaxSteps

func WithMaxSteps(n int) EngineOption

WithMaxSteps sets the maximum number of nodes to visit.

func WithTools

func WithTools(tools ...Tool) EngineOption

WithTools adds tools to the engine's tool registry.

type LLMClient

type LLMClient interface {
	// Complete sends messages to the LLM, executes any tool calls, and returns
	// the final text content plus a record of all tool calls made.
	Complete(ctx context.Context, req CompletionRequest) (*CompletionResponse, error)
}

LLMClient is the interface for making LLM completions. The implementation is responsible for handling the tool-call loop.

type LogEntry

type LogEntry struct {
	Time    time.Time      `json:"time"`
	Level   string         `json:"level"`
	Message string         `json:"message"`
	Attrs   map[string]any `json:"attrs,omitempty"`
}

LogEntry is a single captured log record, safe for serialisation.

type Message

type Message struct {
	Role    string // "system", "user", "assistant"
	Content string
}

Message is a single turn in an LLM conversation.

type Node

type Node struct {
	ID          string
	Type        NodeType
	Name        string
	IsStart     bool
	IsGlobal    bool
	GlobalLabel string

	// LLM node
	Prompt      string
	Text        string
	Condition   string
	ExtractVars []VariableDef
	Temperature float64

	// Terminal node
	TerminalText string

	// Webhook node
	WebhookURL     string
	WebhookMethod  string
	WebhookHeaders map[string]string
	WebhookBody    any

	// Node-level tools (parsed from JSON, scoped to this node only)
	Tools []NodeTool

	// Route node
	Routes         []RouteRule
	FallbackNodeID string

	// MaxVisits caps how many times this node may be visited in a single run.
	// 0 means use the pathway-level MaxVisitsPerNode default (or no limit).
	MaxVisits int
}

Node is a parsed node from the pathway.

type NodeTool

type NodeTool struct {
	Name        string
	Description string
	Type        string // "webhook" or "custom_tool"
	Behavior    string // "feed_context" — response fed back into conversation

	// Webhook config
	URL     string
	Method  string
	Headers map[string]string
	Body    string // raw body template with {{variable}} placeholders

	// Timeout in seconds for the HTTP request. 0 means use the default (30s).
	Timeout int
	// Retries is the number of retry attempts on failure. 0 means no retries.
	Retries int

	// Speech is optional text the agent speaks while the tool executes
	// (relevant for voice agents; ignored by the default engine).
	Speech string

	// Variables to extract from the tool's response
	ExtractVars []VariableDef

	// ResponsePathways defines conditional routing based on the tool's response.
	// When Behavior is "feed_context", the LLM sees the result and continues.
	// When pathways have conditions, a matching pathway can redirect the
	// conversation to a different node, overriding normal edge-based routing.
	ResponsePathways []ToolResponsePathway
}

NodeTool is a declarative tool definition attached to a specific node in the pathway JSON. Unlike Tool, it carries no Go function — the engine constructs an executable Tool from the config at runtime (e.g. performing a webhook call).

type NodeType

type NodeType string

NodeType identifies the kind of node in a pathway.

const (
	// NodeTypeLLM invokes the LLM to execute the node's prompt and optionally
	// extracts variables from the response.
	NodeTypeLLM NodeType = "llm"
	// NodeTypeTerminal terminates the pathway run and returns the node's TerminalText.
	NodeTypeTerminal NodeType = "terminal"
	// NodeTypeWebhook performs an HTTP request and optionally extracts variables
	// from the JSON response.
	NodeTypeWebhook NodeType = "webhook"
	// NodeTypeRoute evaluates conditions against the current state variables to
	// pick the next node without calling the LLM.
	NodeTypeRoute NodeType = "route"
)

type OpenAIClient

type OpenAIClient struct {
	// contains filtered or unexported fields
}

OpenAIClient implements LLMClient using the openai-go SDK. It is compatible with any OpenAI-compatible API (venu, Groq, Ollama, OpenRouter, etc.).

func NewOpenAIClient

func NewOpenAIClient(apiKey, baseURL, model string) *OpenAIClient

NewOpenAIClient creates a new OpenAIClient. apiKey and baseURL can be empty to use environment defaults.

func (*OpenAIClient) Complete

func (c *OpenAIClient) Complete(ctx context.Context, req CompletionRequest) (*CompletionResponse, error)

Complete sends the request to the OpenAI API, handles tool call loops, and returns the final assistant content.

type Pathway

type Pathway struct {
	Nodes            []*Node
	Edges            []*Edge
	NodeByID         map[string]*Node
	EdgesFrom        map[string][]*Edge // source nodeID → outgoing edges
	StartNode        *Node
	GlobalNodes      []*Node           // nodes with IsGlobal == true and a non-empty GlobalLabel
	GraphQLEndpoint  string            // optional single GraphQL endpoint (unnamed tools)
	GraphQLEndpoints map[string]string // named endpoints → tools get _<name> suffix

	// MaxTurns caps the total number of node-to-node transitions in a run.
	// 0 means use the engine's WithMaxSteps value (default 50).
	MaxTurns int
	// MaxVisitsPerNode is the default per-node visit cap for all nodes in the pathway.
	// 0 means no limit unless a node's own MaxVisits overrides it.
	MaxVisitsPerNode int
}

Pathway holds parsed nodes and edges with lookup indexes.

func ParsePathway

func ParsePathway(path string) (*Pathway, error)

ParsePathway reads a pathway JSON file and returns a Pathway.

func ParsePathwayBytes

func ParsePathwayBytes(data []byte) (*Pathway, error)

ParsePathwayBytes parses a pathway from raw JSON bytes.

type RouteCondition

type RouteCondition struct {
	Field    string
	Operator string // "is", "is not", "contains", "not contains", ">", "<", ">=", "<="
	Value    string
}

RouteCondition is a single field/operator/value check.

type RouteRule

type RouteRule struct {
	Conditions []RouteCondition
	TargetID   string
}

RouteRule maps a set of conditions (AND-logic) to a target node.

type RunResult

type RunResult struct {
	// Output is the last meaningful content produced by the run — the output
	// of the last LLM or webhook step that executed before the terminal node.
	// Terminal nodes do not contribute to Output.
	Output    string
	Variables map[string]any
	Steps     []Step
	// Reason explains why the run ended. Values: "terminal", "max_steps",
	// "error", "dead_end", "missing_node", "max_node_visits".
	Reason string
	// FailedNode is the name of the node that caused the run to stop when
	// Reason is "error" or "max_node_visits". Empty otherwise.
	FailedNode string
	// Logs contains all log records emitted during this run.
	Logs []LogEntry
}

RunResult is the final result of running a pathway.

type State

type State struct {
	Task        string
	Variables   map[string]any
	Steps       []Step
	VisitCounts map[string]int // nodeID → number of times visited this run
}

State holds the mutable runtime state of a pathway execution.

func NewState

func NewState(task string) *State

NewState creates a new execution state for the given task. Use this to initialize state for calls to Engine.Step.

func (*State) SetVars

func (s *State) SetVars(vars map[string]any)

SetVars merges vars into state, skipping nil values.
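The merge behavior can be illustrated with a standalone sketch (not the library code):

```go
package main

import "fmt"

// setVars mirrors the documented State.SetVars behavior: merge src
// into dst, skipping nil values so they never clobber existing data.
func setVars(dst, src map[string]any) {
	for k, v := range src {
		if v == nil {
			continue
		}
		dst[k] = v
	}
}

func main() {
	vars := map[string]any{"name": "Alice"}
	setVars(vars, map[string]any{"name": nil, "qty": 2})
	fmt.Println(vars) // map[name:Alice qty:2]
}
```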

func (*State) StepsSummary

func (s *State) StepsSummary() string

StepsSummary returns a concise summary of the steps taken so far.

func (*State) VarsSummary

func (s *State) VarsSummary() string

VarsSummary returns a human-readable summary of the current variables.

type Step

type Step struct {
	NodeID   string
	NodeName string
	Output   string
	Vars     map[string]any
	// ToolCalls holds every tool invocation made by the LLM during this node's
	// execution. Empty for Route and Terminal nodes.
	ToolCalls []ToolCall
	// RouteReason is the human-readable explanation for why the engine
	// chose the next node (e.g. "single edge", "selected route 2").
	RouteReason string
	NextNode    string
}

Step records what happened at a single node.

type StepResult

type StepResult struct {
	Step       Step       // The step record for this execution
	NextNodeID string     // Empty when Done=true (terminal, dead_end, error, or max_node_visits)
	Done       bool       // True when the run should terminate
	Reason     string     // "terminal", "dead_end", "error", "missing_node", "max_node_visits"
	Output     string     // Text output from the node
	Error      string     // Error message if Reason=="error" or "max_node_visits"
	FailedNode string     // Name of the node that caused the stop when Reason is "error" or "max_node_visits"
	Logs       []LogEntry // Log records emitted during this step
}

StepResult captures what happened during a single node execution.

type Tool

type Tool struct {
	Name        string
	Description string
	Parameters  map[string]any // JSON schema
	Fn          func(ctx context.Context, args map[string]any) (any, error)
}

Tool is a callable function exposed to the LLM.

type ToolCall

type ToolCall struct {
	ID     string
	Name   string
	Args   map[string]any
	Result any
	Error  string
}

ToolCall records a single tool invocation and its outcome.

type ToolResponsePathway

type ToolResponsePathway struct {
	// Type is the trigger type: "default" (always matches)
	// or "BlandStatusCode" (matches on HTTP status code).
	Type string `json:"type"`

	// Condition operator: "==", "!=", ">", "<", ">=", "<=", "contains", "!contains", "is".
	// Empty means no condition (always matches).
	Operator string `json:"operator,omitempty"`

	// Value to compare against (e.g. "200", "error").
	Value string `json:"value,omitempty"`

	// NodeID is the target node to route to when the condition matches.
	NodeID string `json:"nodeId"`

	// Name is a human-readable label for this pathway.
	Name string `json:"name,omitempty"`
}

ToolResponsePathway describes how to handle a tool's response. It can act as a conditional offramp: if the response matches the condition, route to the specified node instead of continuing normal flow.

type ValidationError

type ValidationError struct {
	Field   string // JSON path to the failing field
	Message string // Human-readable description
}

ValidationError is a single JSON schema violation.

func (ValidationError) Error

func (e ValidationError) Error() string

type ValidationResult

type ValidationResult struct {
	SchemaErrors []ValidationError // JSON schema violations
	ParseError   error             // Structural error from ParsePathwayBytes
}

ValidationResult holds the outcome of a schema + parse validation.

func ValidatePathwayBytes

func ValidatePathwayBytes(data []byte) *ValidationResult

ValidatePathwayBytes validates pathway JSON against the bundled JSON schema and then attempts to parse it. Both checks run independently so all errors are visible in a single call.

func (*ValidationResult) Errors

func (r *ValidationResult) Errors() []error

Errors returns all errors as a flat slice, schema errors first then parse error.

func (*ValidationResult) Valid

func (r *ValidationResult) Valid() bool

Valid returns true when there are no schema errors and no parse error.

type VariableDef

type VariableDef struct {
	Name        string
	Type        string // "string", "integer", "boolean"
	Description string
	Required    bool
}

VariableDef describes a variable to extract from LLM output.

Directories

Path Synopsis
cmd
pathwalk command
pathwalk-ui command
pathwalk-worker command
evals Package evals provides a lightweight evaluation framework for pathwalk pathways.
pathwaytest Package pathwaytest provides test helpers for the pathwalk library, including a mock LLM client for unit testing without hitting a real API.
tools Package tools provides built-in tool implementations for use with pathwalk.Engine.
