# logq

**Your logs, queryable. Instantly.**

A fast, interactive terminal log explorer that treats your log files like a database.
Install · Quick Start · Query Syntax · Full Reference
## Why logq?

Debugging with logs today means chaining `grep | jq | less` or scrolling through a cloud UI. There's no fast, local, interactive way to explore structured logs the way you explore data in a spreadsheet.
logq changes that. Point it at a file (or pipe logs in) and get:
- Instant filtering — type a query, results update as you type
- Match highlighting — matching text is highlighted in yellow so you can instantly see why each record matched
- Field auto-complete — press `Tab` to complete field names and values; ghost text previews the suggestion inline
- Multiple files — `logq app.log db.log` merges files into a unified timeline, with `source:filename` queries
- Follow mode — `logq -f` tails growing files with live updates (like `tail -f`, but queryable)
- Time histogram — see log volume and error spikes at a glance
- Record detail — press `Enter` to inspect any log line, `c` to copy to clipboard
- JSON drill-down — nested JSON objects are rendered as a collapsible tree; fold/expand with `Enter`, copy dot-paths with `d`
- Export & batch mode — `logq -q "level:error" -o errors.jsonl` for scripting, or press `s` to save from the TUI
- Aggregations — `--group-by service --top 5` for quick field-value summaries
- Column mode — `--columns timestamp,level,service,message` for a structured table view
- Color themes — auto-detects dark/light terminal, or set `--theme dark` / `--theme light`
- Persistent query history — queries are saved across sessions; Up/Down arrows to recall
- Query aliases — `@err`, `@slow`, `@warn` built-in shortcuts, plus custom aliases via `.logq.toml`
- Config file — per-project `.logq.toml` for theme, columns, custom aliases, and saved views
- Saved views — define named views in `.logq.toml` with query + columns; switch with `1`-`9` keys, `0` to clear
- Trace following — press `t` on any record to follow its trace/request ID across all files
- Pattern clustering — press `p` to group similar log messages by template; drill into clusters with `Enter`
- Log diff — `logq diff before.log after.log` compares pattern distributions, level changes, and new/gone patterns
- Bookmarks — `m` to mark records, `'` to jump between them, `B` to filter to bookmarks only
- Multi-line grouping — stack traces and multi-line exceptions are grouped into single entries automatically
- Zero setup — auto-detects JSON, logfmt, and plain text; config file is optional
- Single binary — no dependencies, just run it
## Install

```sh
# Go
go install github.com/riccardomerenda/logq@latest

# Homebrew (macOS / Linux)
brew install riccardomerenda/tap/logq

# Scoop (Windows)
scoop bucket add logq https://github.com/riccardomerenda/scoop-bucket
scoop install logq

# Or download a binary from GitHub Releases
# https://github.com/riccardomerenda/logq/releases
```
### Updating

```sh
# If installed via go install
logq update

# Homebrew
brew upgrade logq

# Scoop
scoop update logq

# Or manually
go install github.com/riccardomerenda/logq@latest
```
## Quick Start

```sh
# Explore a log file
logq server.log

# Follow a growing file (like tail -f, but interactive)
logq -f /var/log/app.log

# Pipe from anywhere
kubectl logs myapp | logq
docker logs mycontainer 2>&1 | logq

# Merge multiple files (sorted by timestamp)
logq app.log db.log auth.log

# Gzipped? No problem
logq server.log.gz
```
## Batch Mode & Export

Run queries without the TUI for scripting and pipelines:

```sh
# Filter and print to stdout
logq server.log -q "level:error"

# Save to file
logq server.log -q "level:error AND service:auth" -o errors.jsonl

# Output as JSON (re-serialized fields) or CSV
logq server.log -q "latency>1000" --format json
logq server.log -q "latency>1000" --format csv

# Count matches only
logq server.log -q "level:error" --count

# Aggregations — group by field and show counts
logq server.log --group-by level
logq server.log -q "level:error" --group-by service --top 5
logq server.log --group-by level --format json

# Column mode — select specific fields
logq server.log -q "level:error" --columns timestamp,level,message --format csv

# Pattern clustering — group similar messages by template
logq server.log --patterns
logq server.log -q "level:error" --patterns --top 10
```

In the TUI, press `s` to save the current filtered results to a file.

The `--columns` flag also works in TUI mode, rendering a structured table view.
## Diff Mode

Compare two log files to see what changed — new patterns, gone patterns, level distribution shifts:

```sh
# Compare two files
logq diff before.log after.log

# Filter both files with a query
logq diff before.log after.log -q "level:error"

# JSON output for scripting
logq diff before.log after.log --format json

# Show only top 5 patterns per category
logq diff before.log after.log --top 5

# Lower the change threshold (default 50%)
logq diff before.log after.log --threshold 20
```
## Multiple Files

Open multiple files and logq merges them into a unified timeline sorted by timestamp:

```sh
# Merge multiple files
logq app.log db.log auth.log

# Mix plain and gzipped files
logq app.log.1.gz app.log.2.gz app.log

# Shell glob expansion works naturally
logq /var/log/app/*.log
```

Each record gets a `source` field with the originating filename, so you can filter by file:

```
source:app.log AND level:error
source~"auth.*" AND latency>500
```

The source file is shown as `<filename>` in the log view when multiple files are loaded.
## Config File

Drop a `.logq.toml` in your project root to set per-project defaults:

```toml
theme = "dark"
columns = ["timestamp", "level", "service", "message"]

[aliases]
noisy = "NOT service:healthcheck AND NOT service:ping"
auth = "service:auth OR service:gateway"

[aliases.oncall]
query = "level:error AND last:15m"
columns = ["timestamp", "service", "message"]

[views.errors]
query = "level:error"

[views.oncall]
query = "level:error AND last:15m"
columns = ["timestamp", "service", "message"]
```

Views are assigned to keys `1`-`9` in alphabetical order by name. Press `0` to clear the active view.

Run `logq init` to scaffold a starter config. logq auto-discovers it by walking up from the current directory. CLI flags always override config settings.
## Built-in Aliases

These are always available, even without a config file:

| Alias | Expands to |
|---|---|
| `@err` | `level:error OR level:fatal` |
| `@warn` | `level:warn OR level:warning` |
| `@slow` | `latency>1000` |

```sh
logq server.log -q "@err AND service:auth"
logq server.log -q "@slow" --count
```
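Alias expansion amounts to substituting each `@name` token with its (parenthesized) query before parsing, so the alias composes safely with surrounding `AND`/`OR`. A minimal sketch of that idea in Go — not logq's actual implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// builtins mirrors the table above; expansions are parenthesized so an
// alias behaves as a single operand inside a larger query.
var builtins = map[string]string{
	"@err":  "(level:error OR level:fatal)",
	"@warn": "(level:warn OR level:warning)",
	"@slow": "(latency>1000)",
}

var aliasRe = regexp.MustCompile(`@\w+`)

// expand replaces known aliases in a query string and leaves unknown
// @tokens untouched.
func expand(query string) string {
	return aliasRe.ReplaceAllStringFunc(query, func(a string) string {
		if exp, ok := builtins[a]; ok {
			return exp
		}
		return a
	})
}

func main() {
	fmt.Println(expand("@err AND service:auth"))
}
```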
## Query Syntax

Type queries in the filter bar (`/`). Results update live.

| Pattern | Meaning | Example |
|---|---|---|
| `word` | Full-text search across all fields | `timeout` |
| `field:value` | Exact match on a field | `level:error` |
| `field>n` | Numeric comparison (`>`, `>=`, `<`, `<=`) | `latency>500` |
| `field~"regex"` | Regex match | `message~"timeout.*retry"` |
| `timestamp>"time"` | Time range (absolute) | `timestamp>"2026-03-08T10:00:00Z"` |
| `last:duration` | Time range (relative to now) | `last:5m`, `last:1h`, `last:2d` |
| `A AND B` | Both conditions must match | `level:error AND service:auth` |
| `A OR B` | Either condition matches | `level:error OR level:fatal` |
| `NOT A` | Negate a condition | `NOT service:healthcheck` |
| `source:filename` | Filter by source file (multi-file mode) | `source:app.log AND level:error` |
| `@alias` | Query alias (built-in or custom) | `@err`, `@slow AND service:api` |
| `(A OR B) AND C` | Group with parentheses | `(level:error OR level:fatal) AND service:api` |

Compound queries work naturally:

```
level:error AND latency>1000 AND NOT service:healthcheck
```

See the full query reference for details.
## Keyboard Shortcuts

| Key | Action |
|---|---|
| `/` | Focus the filter bar |
| `j` / `k` or `Up` / `Down` | Scroll through logs |
| `Up` / `Down` (in filter bar) | Browse query history |
| `Tab` (in filter bar) | Accept auto-complete suggestion |
| `PgUp` / `PgDn` | Page scroll |
| `Home` / `End` | Jump to start / end |
| `Enter` | Show full record detail / toggle fold (in tree view) |
| `c` | Copy raw record to clipboard (in detail view) |
| `d` | Copy dot-path to clipboard (in JSON tree view) |
| `←` / `→` | Collapse / expand node (in JSON tree view) |
| `t` | Follow trace/request ID (in detail view) |
| `T` | Clear trace filter and restore previous query |
| `p` | Toggle pattern clustering view |
| `m` | Toggle bookmark on current record |
| `'` | Jump to next bookmark |
| `B` | Filter to bookmarked records only |
| `1`-`9` | Switch to saved view (from `.logq.toml`) |
| `0` | Clear saved view, restore default |
| `s` | Save filtered results to file |
| `Escape` | Clear filter / close detail overlay |
| `Tab` | Toggle focus between log view and histogram |
| `q` | Quit |
## Supported Log Formats

logq auto-detects the format of each line independently:

| Format | Example |
|---|---|
| JSON Lines | `{"level":"error","message":"timeout","latency":523}` |
| logfmt | `level=error msg="timeout" latency=523` |
| Plain text | `ERROR: connection timeout after 523ms` |

Timestamps are auto-parsed from RFC3339, ISO 8601, Unix epoch, syslog, nginx/Apache formats, time-only (`HH:MM:SS`), and more. Log levels are normalized from dozens of variants (`WARNING`, `WARN`, `WRN`, `W` all become `warn`).

Mixed formats in the same file are handled gracefully.
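The level normalization described above boils down to a case-insensitive lookup from known spellings to a small canonical set. A minimal sketch in Go — illustrative only, not logq's actual normalizer:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeLevel maps the many level spellings found in the wild onto a
// small canonical set; unrecognized levels pass through lowercased.
func normalizeLevel(raw string) string {
	switch strings.ToUpper(strings.TrimSpace(raw)) {
	case "WARNING", "WARN", "WRN", "W":
		return "warn"
	case "ERROR", "ERR", "E":
		return "error"
	case "INFORMATION", "INFO", "INF", "I":
		return "info"
	case "DEBUG", "DBG", "D":
		return "debug"
	case "FATAL", "CRITICAL", "PANIC":
		return "fatal"
	default:
		return strings.ToLower(raw)
	}
}

func main() {
	for _, l := range []string{"WARNING", "WRN", "w", "Error"} {
		fmt.Println(normalizeLevel(l))
	}
}
```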
## Multi-Line Log Entries

logq automatically groups multi-line log entries like stack traces, exception dumps, and multi-line error messages into single records. The grouping strategy is auto-detected:

- Timestamp-anchored — entries start with a timestamp; continuation lines (indented stack traces, JSON payloads, etc.) are grouped with the preceding entry
- Structured — entries start with `{` (JSON) or `key=value` (logfmt); everything else is a continuation
- Single-line — for files where every line is its own entry (standard JSON Lines, logfmt), no grouping overhead is added

This works out of the box for .NET exceptions, Java stack traces, Python tracebacks, and any log format where entries start with a timestamp.

Example: a 1300-line .NET exception log with embedded Elasticsearch JSON errors is automatically grouped into 15 logical entries, each with its full stack trace accessible via the detail view (`Enter`).
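To make timestamp-anchored grouping concrete, here is an illustrative (made-up) input where three physical lines become one record:

```
2026-03-08 10:00:01 ERROR payment failed
    at com.example.Charge.run(Charge.java:42)
    at com.example.Worker.loop(Worker.java:17)
```

The indented `at ...` lines carry no timestamp, so they attach as continuations of the preceding entry.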
## Plain Text Timestamp & Level Detection

For unstructured plain text logs, logq extracts:

- Timestamps from the start of lines: `12:43:10 ...`, `2026-03-08 10:00:01 ...`, `Mar 8 10:00:01 ...`, etc.
- Log levels from keywords near the start: `ERROR`, `WARN`, `INFO`, `DEBUG`, `FATAL`, `CRITICAL`, `PANIC`
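A minimal sketch of this kind of extraction with anchored regexes, covering just the three timestamp shapes listed above. This is a hypothetical illustration, not logq's parser, which handles many more formats:

```go
package main

import (
	"fmt"
	"regexp"
)

// tsRe matches a few common line-leading timestamp shapes:
// "2026-03-08 10:00:01", "12:43:10", "Mar  8 10:00:01".
var tsRe = regexp.MustCompile(`^(\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}|\d{2}:\d{2}:\d{2}|[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2})`)

// levelRe finds a level keyword; a real parser would additionally
// restrict the match to near the start of the line.
var levelRe = regexp.MustCompile(`\b(ERROR|WARN|INFO|DEBUG|FATAL|CRITICAL|PANIC)\b`)

// extract pulls a leading timestamp and a level keyword out of an
// unstructured log line; either may come back empty.
func extract(line string) (ts, level string) {
	return tsRe.FindString(line), levelRe.FindString(line)
}

func main() {
	ts, lvl := extract("2026-03-08 10:00:01 ERROR connection timeout after 523ms")
	fmt.Println(ts, lvl)
}
```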
## Architecture

```text
File / stdin / .gz
        |
        v
+---------+    +----------+    +---------+    +---------+    +-----------+
|  Input  |--->|Multiline |--->| Parser  |--->|  Index  |--->|  Query    |
|  Reader |    | Grouper  |    |  JSON   |    | Inverted|    |  Engine   |
|         |    |          |    |  logfmt |    | Numeric |    |  Lexer    |
|  gzip   |    |  auto-   |    |  plain  |    |  Time   |    |  Parser   |
+---------+    |  detect  |    +---------+    +---------+    | Evaluator |
               +----------+                                  +-----+-----+
                                                                   |
                                                             +-----+-----+
                                                             |           |
                                                             v           v
                                                        +---------+ +---------+
                                                        |   TUI   | |  Batch  |
                                                        |Log View | | Export  |
                                                        |Histogram| |raw/json |
                                                        |QueryBar | |  csv    |
                                                        | Detail  | | stdout  |
                                                        +---------+ +---------+
```
Performance by design:
- Field lookups are O(1) via inverted indexes
- Numeric range queries use binary search — O(log n)
- Time navigation uses sorted indexes — O(log n)
- Full-text search scans sequentially with early exit — fast enough for millions of lines
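The O(1) field-lookup claim comes down to a map from field/value pairs to posting lists of record IDs. A minimal sketch of the idea — an illustration, not logq's actual index:

```go
package main

import "fmt"

// InvertedIndex maps field -> value -> record IDs, so an exact-match
// query like level:error is a constant-time map lookup followed by a
// walk over only the matching records.
type InvertedIndex struct {
	postings map[string]map[string][]int
}

func NewInvertedIndex() *InvertedIndex {
	return &InvertedIndex{postings: map[string]map[string][]int{}}
}

// Add records that the record with the given ID has field=value.
func (ix *InvertedIndex) Add(id int, field, value string) {
	if ix.postings[field] == nil {
		ix.postings[field] = map[string][]int{}
	}
	ix.postings[field][value] = append(ix.postings[field][value], id)
}

// Lookup returns the IDs of records where field exactly equals value.
func (ix *InvertedIndex) Lookup(field, value string) []int {
	return ix.postings[field][value]
}

func main() {
	ix := NewInvertedIndex()
	ix.Add(0, "level", "error")
	ix.Add(1, "level", "info")
	ix.Add(2, "level", "error")
	fmt.Println(ix.Lookup("level", "error")) // [0 2]
}
```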
## Roadmap

See `docs/v2-roadmap.md` for full details and design notes.

### ✅ Shipped

| Feature | Version |
|---|---|
| Core engine — multi-format parsing, indexing, query language | v0.1 |
| Interactive TUI — log view, histogram, query bar, detail overlay | v0.2 |
| Input flexibility — file, stdin, gzip, follow mode, multi-line | v0.3 |
| Time queries (`last:5m`), batch export, query history | v0.4 |
| Multi-file support with merged timeline | v0.5 |
| Match highlighting, field auto-complete | v0.6 |
| Persistent history, color themes, aggregations, column mode, Homebrew & Scoop | v0.7 |
| Config file (`.logq.toml`), query aliases (`@err`, `@slow`, custom) | v0.8 |
| Trace following — press `t` to follow trace/request IDs across files | v0.9 |
| Pattern clustering, bookmarks | v1.0 |
| JSON drill-down, saved views | v1.1 |
| Log diff | v1.2 |
## Building From Source

```sh
git clone https://github.com/riccardomerenda/logq.git
cd logq
make build
./logq testdata/sample.jsonl   # or logq.exe on Windows
```

### Requirements

- Go 1.22+

### Development

```sh
make test   # run all tests
make lint   # run linter (requires golangci-lint)
make run    # build and run with sample data
```

### Benchmarks

```sh
go test ./benchmarks/ -bench=. -benchmem
```
## Project Structure

```text
logq/
├── main.go                  # CLI entry point
├── internal/
│   ├── input/
│   │   ├── reader.go        # File, stdin, gzip reading
│   │   ├── multiline.go     # Multi-line entry grouping
│   │   └── follow.go        # File tailing for follow mode (-f)
│   ├── parser/              # JSON, logfmt, plain text, timestamps
│   ├── index/               # In-memory inverted + numeric + time indexes
│   ├── query/               # Lexer, recursive descent parser, evaluator
│   ├── config/              # .logq.toml parser with auto-discovery
│   ├── diff/                # Log file comparison engine
│   ├── alias/               # Query alias registry and expansion
│   ├── trace/               # Trace/correlation ID detection and following
│   ├── pattern/             # Log pattern clustering and template extraction
│   ├── history/             # Persistent query history
│   ├── output/              # Export writers (raw, JSON, CSV, aggregations)
│   └── ui/                  # Bubbletea TUI components
├── benchmarks/              # Performance benchmarks
└── testdata/                # Sample log files for testing
```
## License