Omen - Code Analysis CLI

A multi-language code analysis CLI built in Go. Omen uses tree-sitter for parsing source code across 13 languages, providing insights into complexity, technical debt, code duplication, and defect prediction.

Why "Omen"? An omen is a sign of things to come - good or bad. Your codebase is full of omens: low complexity and clean architecture signal smooth sailing ahead, while high churn, technical debt, and code clones warn of trouble brewing. Omen surfaces these signals so you can act before that "temporary fix" celebrates its third anniversary in production.

Features

Complexity Analysis - How hard your code is to understand and test

There are two types of complexity:

  • Cyclomatic Complexity counts the number of different paths through your code. Every if, for, while, or switch creates a new path. A function with cyclomatic complexity of 10 means there are 10 different ways to run through it. The higher the number, the more test cases you need to cover all scenarios.

  • Cognitive Complexity measures how hard code is for a human to read. It penalizes deeply nested code (like an if inside a for inside another if) more than flat code. Two functions can have the same cyclomatic complexity, but the one with deeper nesting will have higher cognitive complexity because it's harder to keep track of.
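
To make the difference concrete, here's a small Go sketch annotated with how each metric typically counts (illustrative counting rules, not necessarily Omen's exact implementation):

```go
// Cyclomatic complexity here is 4 (1 base path + if + for + if):
// each branch adds one independent path. Cognitive complexity is 6,
// because nesting compounds: the outer if costs 1, the loop costs 2
// (1 + 1 for nesting), and the inner if costs 3 (1 + 2 for nesting).
func sumPositives(nums []int, enabled bool) int {
	total := 0
	if enabled { // +1 cyclomatic, +1 cognitive
		for _, n := range nums { // +1 cyclomatic, +2 cognitive (nested)
			if n > 0 { // +1 cyclomatic, +3 cognitive (doubly nested)
				total += n
			}
		}
	}
	return total
}
```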

Why it matters: Research shows that complex code has more bugs and takes longer to fix. McCabe's original 1976 paper found that functions with complexity over 10 are significantly harder to maintain. SonarSource's cognitive complexity builds on this by measuring what actually confuses developers.

[!TIP] Keep cyclomatic complexity under 10 and cognitive complexity under 15 per function.

Self-Admitted Technical Debt (SATD) - Comments where developers admit they took shortcuts

When developers write TODO: fix this later or HACK: this is terrible but works, they're creating technical debt and admitting it. Omen finds these comments and groups them by type:

| Category | Markers | What it means |
|----------|---------|---------------|
| Design | HACK, KLUDGE, SMELL | Architecture shortcuts that need rethinking |
| Defect | BUG, FIXME, BROKEN | Known bugs that haven't been fixed |
| Requirement | TODO, FEAT | Missing features or incomplete implementations |
| Test | FAILING, SKIP, DISABLED | Tests that are broken or turned off |
| Performance | SLOW, OPTIMIZE, PERF | Code that works but needs to be faster |
| Security | SECURITY, VULN, UNSAFE | Known security issues |
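
For example, a hypothetical snippet with comments Omen would flag, mapped to the categories above:

```go
// HACK: bypass the cache until the invalidation bug is understood.  -> Design
// FIXME: returns the wrong total when both slices are empty.        -> Defect
// TODO: support pagination once the API exposes cursors.            -> Requirement
// OPTIMIZE: this merge is O(n^2); fine for small inputs.            -> Performance
func mergeTotals(a, b []int) []int {
	return append(append([]int{}, a...), b...)
}
```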

Why it matters: Potdar and Shihab's 2014 study found that SATD comments often stay in codebases for years. The longer they stay, the harder they are to fix because people forget the context. Maldonado and Shihab (2015) showed that design debt is the most common and most dangerous type.

[!TIP] Review SATD weekly. If a TODO is older than 6 months, either fix it or delete it.

Dead Code Detection - Code that exists but never runs

Dead code includes:

  • Functions that are never called
  • Variables that are assigned but never used
  • Classes that are never instantiated
  • Code after a return statement that can never execute
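
In Go terms, a hypothetical example of the first, second, and last of these (classes don't apply):

```go
package example

import "fmt"

var maxRetries = 5 // assigned but never read anywhere: a dead variable

// legacyHandler has no callers left in the codebase: a dead function.
func legacyHandler() {
	fmt.Println("I never run")
}

func compute(x int) int {
	return x + 1
	fmt.Println("unreachable") // code after return: can never execute
}
```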

Why it matters: Dead code isn't just clutter. It confuses new developers who think it must be important. It increases build times and binary sizes. Worst of all, it can hide bugs - if someone "fixes" dead code thinking it runs, they've wasted time. Romano et al. (2020) found that dead code is a strong predictor of other code quality problems.

[!TIP] Delete dead code. Version control means you can always get it back if needed.

Git Churn Analysis - How often files change over time

Churn looks at your git history and counts:

  • How many times each file was modified
  • How many lines were added and deleted
  • Which files change together

Files with high churn are "hotspots" - they're constantly being touched, which could mean they're:

  • Central to the system (everyone needs to modify them)
  • Poorly designed (constant bug fixes)
  • Missing good abstractions (features keep getting bolted on)

Why it matters: Nagappan and Ball's 2005 research at Microsoft found that code churn is one of the best predictors of bugs. Files that change a lot tend to have more defects. Combined with complexity data, churn helps you find the files that are both complicated AND frequently modified - your highest-risk code.

[!TIP] If a file has high churn AND high complexity, prioritize refactoring it.

Code Clone Detection - Duplicated code that appears in multiple places

There are three types of clones:

| Type | Description | Example |
|------|-------------|---------|
| Type-1 | Exact copies (maybe different whitespace/comments) | Copy-pasted code |
| Type-2 | Same structure, different names | Same function with renamed variables |
| Type-3 | Similar code with some modifications | Functions that do almost the same thing |

Why it matters: When you fix a bug in one copy, you have to remember to fix all the other copies too. Juergens et al. (2009) found that cloned code has significantly more bugs because fixes don't get applied consistently. The more clones you have, the more likely you'll miss one during updates.

[!TIP] Anything copied more than twice should probably be a shared function. Aim for duplication ratio under 5%.

Defect Prediction - The likelihood that a file contains bugs

Omen combines multiple signals to predict defect probability:

  • Complexity (complex code = more bugs)
  • Churn (frequently changed code = more bugs)
  • Size (bigger files = more bugs)
  • Age (newer code = more bugs, counterintuitively)
  • Coupling (code with many dependencies = more bugs)

Each file gets a risk score from 0% to 100%.
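
The precise model isn't spelled out here, but conceptually it's a blend of normalized signals. A rough sketch with purely illustrative weights - not Omen's actual model:

```go
// Illustrative only: a weighted blend of normalized risk signals.
// The weights and the normalization are assumptions for this sketch.
// Inputs are assumed pre-scaled to [0, 1], where 1 is riskier
// ("recency" captures the age signal: newer code -> closer to 1).
func defectRisk(complexity, churn, size, recency, coupling float64) float64 {
	score := 0.30*complexity + 0.30*churn + 0.15*size + 0.10*recency + 0.15*coupling
	return score * 100 // percent, 0-100
}
```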

Why it matters: You can't review everything equally. Menzies et al. (2007) showed that defect prediction helps teams focus testing and code review on the files most likely to have problems. Rahman et al. (2014) found that even simple models outperform random file selection for finding bugs.

[!TIP] Prioritize code review for files with >70% defect probability.

Technical Debt Gradient (TDG) - A composite "health score" for each file

TDG combines multiple metrics into a single score (0-100 scale, higher is better):

| Component | Weight | What it measures |
|-----------|--------|------------------|
| Structural Complexity | 20% | Cyclomatic complexity and nesting depth |
| Semantic Complexity | 15% | Cognitive complexity |
| Duplication | 15% | Amount of cloned code |
| Coupling | 15% | Dependencies on other modules |
| Hotspot | 10% | Churn x complexity interaction |
| Temporal Coupling | 10% | Co-change patterns with other files |
| Consistency | 10% | Code style and pattern adherence |
| Documentation | 5% | Comment coverage |

Scores are classified into letter grades (A+ to F), where:

  • A/A+ (90-100): Excellent - well-maintained code
  • B (75-89): Good - minor improvements possible
  • C (60-74): Needs attention - technical debt accumulating
  • D (50-59): Poor - significant refactoring needed
  • F (<50): Critical - immediate action required
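
Mechanically, a composite score like this is a weighted sum plus a banding step. A sketch using the weights and grade bands above, assuming each component is pre-scored 0-100 with higher = healthier (Omen's internals may differ):

```go
// TdgComponents holds per-component scores, each assumed 0-100.
// Weights mirror the table above.
type TdgComponents struct {
	Structural, Semantic, Duplication, Coupling float64
	Hotspot, Temporal, Consistency, Docs        float64
}

func tdgScore(c TdgComponents) float64 {
	return 0.20*c.Structural + 0.15*c.Semantic + 0.15*c.Duplication +
		0.15*c.Coupling + 0.10*c.Hotspot + 0.10*c.Temporal +
		0.10*c.Consistency + 0.05*c.Docs
}

func grade(score float64) string {
	switch {
	case score >= 90:
		return "A" // or A+ at the top of the band
	case score >= 75:
		return "B"
	case score >= 60:
		return "C"
	case score >= 50:
		return "D"
	default:
		return "F"
	}
}
```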

Why it matters: Technical debt is like financial debt - a little is fine, too much kills you. Cunningham coined the term in 1992, and Kruchten et al. (2012) formalized how to measure and manage it. TDG gives you a single number to track over time and compare across files.

[!TIP] Fix files with grade C or lower before adding new features. Track average TDG over time - it should go up, not down.

Dependency Graph - How your modules connect to each other

Omen builds a graph showing which files import which other files, then calculates:

  • PageRank: Which files are most "central" (many things depend on them)
  • Betweenness: Which files are "bridges" between different parts of the codebase
  • Coupling: How interconnected modules are

Why it matters: Highly coupled code is fragile - changing one file breaks many others. Parnas's 1972 paper on modularity established that good software design minimizes dependencies between modules. The dependency graph shows you where your architecture is clean and where it's tangled.

[!TIP] Files with high PageRank should be especially stable and well-tested. Consider breaking up files that appear as "bridges" everywhere.

Halstead Metrics - Software complexity based on operators and operands

Maurice Halstead developed these metrics in 1977 to measure programs the way one might measure physical objects:

| Metric | Formula | What it means |
|--------|---------|---------------|
| Vocabulary | n1 + n2 | Unique operators + unique operands |
| Length | N1 + N2 | Total operators + total operands |
| Volume | N * log2(n) | Size of the implementation |
| Difficulty | (n1/2) * (N2/n2) | How hard to write and understand |
| Effort | Volume * Difficulty | Mental effort required |
| Time | Effort / 18 | Estimated coding time in seconds |
| Bugs | Effort^(2/3) / 3000 | Estimated number of bugs |
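
The derived metrics follow directly from the four raw counts. A Go sketch of the table's formulas:

```go
package halstead

import "math"

// Metrics derives Halstead's measures from the four raw counts:
// n1, n2 = unique operators/operands; N1, N2 = total occurrences.
func Metrics(n1, n2, N1, N2 float64) (volume, difficulty, effort, timeSec, bugs float64) {
	vocabulary := n1 + n2
	length := N1 + N2
	volume = length * math.Log2(vocabulary)
	difficulty = (n1 / 2) * (N2 / n2)
	effort = volume * difficulty
	timeSec = effort / 18 // 18 = Stroud number (mental discriminations per second)
	bugs = math.Pow(effort, 2.0/3.0) / 3000
	return
}
```

For instance, n1=10, n2=20, N1=50, N2=60 gives a vocabulary of 30, a length of 110, and a volume of about 540.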

Why it matters: Halstead metrics give you objective measurements for comparing different implementations of the same functionality. They can estimate how long code took to write and predict how many bugs it might contain.

[!TIP] Use Halstead for comparing alternative implementations. Lower effort and predicted bugs = better.

Hotspot Analysis - High-risk files where complexity meets frequent changes

Hotspots are files that are both complex AND frequently modified. A simple file that changes often is probably fine - it's easy to work with. A complex file that rarely changes is also manageable - you can leave it alone. But a complex file that changes constantly? That's where bugs breed.

Omen calculates hotspot scores using the geometric mean of normalized churn and complexity:

hotspot = sqrt(churn_percentile * complexity_percentile)

Both factors are normalized against industry benchmarks using empirical CDFs, so scores are comparable across projects:

  • Churn percentile - Where this file's commit count ranks against typical OSS projects
  • Complexity percentile - Where the average cognitive complexity ranks against industry benchmarks

| Hotspot Score | Severity | Action |
|---------------|----------|--------|
| >= 0.6 | Critical | Prioritize immediately |
| >= 0.4 | High | Schedule for review |
| >= 0.25 | Moderate | Monitor |
| < 0.25 | Low | Healthy |
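
In code, the score and severity bands reduce to a few lines (a sketch mirroring the formula and table above):

```go
package hotspot

import "math"

// hotspotScore is the geometric mean of the two percentiles (both in [0, 1]).
func hotspotScore(churnPercentile, complexityPercentile float64) float64 {
	return math.Sqrt(churnPercentile * complexityPercentile)
}

func severity(score float64) string {
	switch {
	case score >= 0.6:
		return "Critical"
	case score >= 0.4:
		return "High"
	case score >= 0.25:
		return "Moderate"
	default:
		return "Low"
	}
}
```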

Why it matters: Adam Tornhill's "Your Code as a Crime Scene" introduced hotspot analysis as a way to find the most impactful refactoring targets. His research shows that a small percentage of files (typically 4-8%) contain most of the bugs. Graves et al. (2000) and Nagappan et al. (2005) demonstrated that relative code churn is a strong defect predictor.

[!TIP] Start refactoring with your top 3 hotspots. Reducing complexity in high-churn files has the highest ROI.

Temporal Coupling - Files that change together reveal hidden dependencies

When two files consistently change in the same commits, they're temporally coupled. This often reveals:

  • Hidden dependencies not visible in import statements
  • Logical coupling where a change in one file requires a change in another
  • Accidental coupling from copy-paste or inconsistent abstractions

Omen analyzes your git history to find file pairs that change together:

| Coupling Strength | Meaning |
|-------------------|---------|
| > 80% | Almost always change together - likely tight dependency |
| 50-80% | Frequently coupled - investigate the relationship |
| 20-50% | Moderately coupled - may be coincidental |
| < 20% | Weakly coupled - probably independent |
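
One common way to compute the strength is shared commits divided by commits touching either file; a sketch of that definition (Omen's exact formula may differ):

```go
// couplingStrength estimates how often two files change together.
// shared = commits touching both files; a, b = commits touching each file.
func couplingStrength(shared, a, b int) float64 {
	either := a + b - shared // commits touching at least one of the pair
	if either == 0 {
		return 0
	}
	return float64(shared) / float64(either)
}
```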

Why it matters: Ball et al. (1997) first studied co-change patterns at AT&T and found they reveal architectural violations invisible to static analysis. Beyer and Noack (2005) showed that temporal coupling predicts future changes - if files changed together before, they'll likely change together again.

[!TIP] If two files have >50% temporal coupling but no import relationship, consider extracting a shared module or merging them.

Code Ownership/Bus Factor - Knowledge concentration and team risk

Bus factor asks: "How many people would need to be hit by a bus before this code becomes unmaintainable?" Low bus factor means knowledge is concentrated in too few people.

Omen uses git blame to calculate:

  • Primary owner - Who wrote most of the code
  • Ownership ratio - What percentage one person owns
  • Contributor count - How many people have touched the file
  • Bus factor - Number of major contributors (>5% of code)

| Ownership Ratio | Risk Level | What it means |
|-----------------|------------|---------------|
| > 90% | High risk | Single point of failure |
| 70-90% | Medium risk | Limited knowledge sharing |
| 50-70% | Low risk | Healthy distribution |
| < 50% | Very low | Broad ownership |
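
Given per-author line counts from git blame, the metrics fall out directly. A sketch applying the >5% major-contributor rule above:

```go
// ownershipStats derives the ownership ratio and bus factor
// from git blame line counts per author.
func ownershipStats(linesByAuthor map[string]int) (ratio float64, busFactor int) {
	total, top := 0, 0
	for _, n := range linesByAuthor {
		total += n
		if n > top {
			top = n
		}
	}
	if total == 0 {
		return 0, 0
	}
	for _, n := range linesByAuthor {
		if float64(n)/float64(total) > 0.05 { // major contributor: >5% of the code
			busFactor++
		}
	}
	return float64(top) / float64(total), busFactor
}
```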

Why it matters: Bird et al. (2011) found that code with many minor contributors has more bugs than code with clear ownership, but code owned by a single person creates organizational risk. The sweet spot is 2-4 significant contributors per module. Nagappan et al. (2008) showed that organizational metrics (like ownership) predict defects better than code metrics alone.

[!TIP] Files with >80% single ownership should have documented knowledge transfer. Critical files should have at least 2 people who understand them.

CK Metrics - Object-oriented design quality measurements

The Chidamber-Kemerer (CK) metrics suite measures object-oriented design quality:

| Metric | Name | What it measures | Threshold |
|--------|------|------------------|-----------|
| WMC | Weighted Methods per Class | Sum of method complexities | < 20 |
| CBO | Coupling Between Objects | Number of other classes used | < 10 |
| RFC | Response for Class | Methods that can be invoked | < 50 |
| LCOM | Lack of Cohesion in Methods | Methods not sharing fields | < 3 |
| DIT | Depth of Inheritance Tree | Inheritance chain length | < 5 |
| NOC | Number of Children | Direct subclasses | < 6 |

LCOM (Lack of Cohesion) is particularly important. Low LCOM means methods in a class use similar instance variables - the class is focused. High LCOM means the class is doing unrelated things and should probably be split.
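
LCOM has several formulations; the classic LCOM1 counts method pairs that share no fields minus pairs that share at least one, floored at zero. A sketch of that variant (the variant Omen computes may differ):

```go
// lcom1 maps each method to the set of fields it touches, then counts
// method pairs sharing no field (P) minus pairs sharing at least one (Q).
func lcom1(fieldsUsed map[string]map[string]bool) int {
	methods := make([]string, 0, len(fieldsUsed))
	for m := range fieldsUsed {
		methods = append(methods, m)
	}
	p, q := 0, 0
	for i := 0; i < len(methods); i++ {
		for j := i + 1; j < len(methods); j++ {
			if sharesField(fieldsUsed[methods[i]], fieldsUsed[methods[j]]) {
				q++
			} else {
				p++
			}
		}
	}
	if p > q {
		return p - q
	}
	return 0
}

func sharesField(a, b map[string]bool) bool {
	for f := range a {
		if b[f] {
			return true
		}
	}
	return false
}
```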

Why it matters: Chidamber and Kemerer's 1994 paper established these metrics as the foundation of OO quality measurement. Basili et al. (1996) validated them empirically, finding that WMC and CBO strongly correlate with fault-proneness. These metrics have been cited thousands of times and remain the standard for OO design analysis.

[!TIP] Classes violating multiple CK thresholds are candidates for refactoring. High WMC + high LCOM often indicates a "god class" that should be split.

Repository Map - PageRank-ranked symbol index for LLM context

Repository maps provide a compact summary of your codebase's important symbols, ranked by structural importance using PageRank. This is designed for LLM context windows - you get the most important functions and types first.

For each symbol, the map includes:

  • Name and kind (function, class, method, interface)
  • File location and line number
  • Signature for quick understanding
  • PageRank score based on how many other symbols depend on it
  • In/out degree showing dependency connections

Why it matters: LLMs have limited context windows. Stuffing them with entire files wastes tokens on less important code. PageRank, developed by Brin and Page (1998), identifies structurally important nodes in a graph. Applied to code, it surfaces the symbols that are most central to understanding the codebase.

Scalability: Omen uses a sparse power iteration algorithm for PageRank computation, scaling linearly with the number of edges O(E) rather than quadratically with nodes O(V^2). This enables fast analysis of large monorepos with 25,000+ symbols in under 30 seconds.
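
For intuition, here's what sparse power iteration looks like over an adjacency list - a simplified sketch with a fixed iteration count and no special handling for dangling nodes:

```go
// pagerank runs damped power iteration over an adjacency list,
// touching each edge once per pass: O(E) work per iteration.
func pagerank(outEdges [][]int, iterations int, damping float64) []float64 {
	n := len(outEdges)
	rank := make([]float64, n)
	for i := range rank {
		rank[i] = 1.0 / float64(n)
	}
	for it := 0; it < iterations; it++ {
		next := make([]float64, n)
		for i := range next {
			next[i] = (1 - damping) / float64(n)
		}
		for u, targets := range outEdges {
			if len(targets) == 0 {
				continue // dangling node: mass dropped in this simplified sketch
			}
			share := damping * rank[u] / float64(len(targets))
			for _, v := range targets {
				next[v] += share
			}
		}
		rank = next
	}
	return rank
}
```

Each pass costs one visit per edge, which is why symbol graphs in the tens of thousands of nodes stay fast.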

Example output:

# Repository Map (Top 20 symbols by PageRank)

## parser.ParseFile (function) - pkg/parser/parser.go:45
  PageRank: 0.0823 | In: 12 | Out: 5
  func ParseFile(path string) (*Result, error)

## models.TdgScore (struct) - pkg/models/tdg.go:28
  PageRank: 0.0651 | In: 8 | Out: 3
  type TdgScore struct

[!TIP] Use omen context --repo-map --top 50 to generate context for LLM prompts. The top 50 symbols usually capture the essential architecture.

MCP Server - LLM tool integration via Model Context Protocol

Omen includes a Model Context Protocol (MCP) server that exposes all analyzers as tools for LLMs like Claude. This enables AI assistants to analyze codebases directly through standardized tool calls.

Available tools:

  • analyze_complexity - Cyclomatic and cognitive complexity
  • analyze_satd - Self-admitted technical debt detection
  • analyze_deadcode - Unused functions and variables
  • analyze_churn - Git file change frequency
  • analyze_duplicates - Code clone detection
  • analyze_defect - Defect probability prediction
  • analyze_tdg - Technical Debt Gradient scores
  • analyze_graph - Dependency graph generation
  • analyze_hotspot - High churn + complexity files
  • analyze_temporal_coupling - Files that change together
  • analyze_ownership - Code ownership and bus factor
  • analyze_cohesion - CK OO metrics
  • analyze_repo_map - PageRank-ranked symbol map

Each tool includes detailed descriptions with interpretation guidance, helping LLMs understand what metrics mean and when to use each analyzer.

Tool outputs default to TOON (Token-Oriented Object Notation) format, a compact serialization designed for LLM workflows that reduces token usage by 30-60% compared to JSON while maintaining high comprehension accuracy. JSON and Markdown formats are also available.

Why it matters: LLMs work best when they have access to structured tools rather than parsing unstructured output. MCP is the emerging standard for LLM tool integration, supported by Claude Desktop and other AI assistants. TOON output maximizes the information density within context windows.

[!TIP] Configure omen as an MCP server in your AI assistant to enable natural language queries like "find the most complex functions" or "show me technical debt hotspots."

Supported Languages

Go, Rust, Python, TypeScript, JavaScript, TSX/JSX, Java, C, C++, C#, Ruby, PHP, Bash

Installation

Homebrew (macOS/Linux)

brew install panbanda/omen/omen

Go Install

go install github.com/panbanda/omen/cmd/omen@latest

Download Binary

Download pre-built binaries from the releases page.

Build from Source

git clone https://github.com/panbanda/omen.git
cd omen
go build -o omen ./cmd/omen

Quick Start

# Run all analyzers
omen analyze ./src

# Analyze complexity
omen analyze complexity ./src

# Detect technical debt
omen analyze satd ./src

# Find dead code
omen analyze deadcode ./src

# Analyze git churn (last 30 days)
omen analyze churn ./

# Detect code clones
omen analyze duplicates ./src

# Predict defect probability
omen analyze defect ./src

# Calculate TDG scores
omen analyze tdg ./src

# Generate dependency graph
omen analyze graph ./src --metrics

# Find hotspots (high churn + complexity)
omen analyze hotspot ./src

# Detect temporal coupling
omen analyze temporal ./

# Analyze code ownership
omen analyze ownership ./src

# Calculate CK cohesion metrics
omen analyze cohesion ./src

Commands

Top-level Commands

| Command | Alias | Description |
|---------|-------|-------------|
| analyze | a | Run analyzers (all if no subcommand, or a specific one) |
| context | ctx | Deep context generation for LLMs |
| mcp | - | Start MCP server for LLM tool integration |

Analyzer Subcommands (omen analyze <subcommand>)

| Subcommand | Alias | Description |
|------------|-------|-------------|
| complexity | cx | Cyclomatic and cognitive complexity analysis |
| satd | debt | Self-admitted technical debt detection |
| deadcode | dc | Unused code detection |
| churn | - | Git history analysis for file churn |
| duplicates | dup | Code clone detection |
| defect | predict | Defect probability prediction |
| tdg | - | Technical Debt Gradient scores |
| graph | dag | Dependency graph (Mermaid output) |
| hotspot | hs | Churn x complexity risk analysis |
| temporal-coupling | tc | Temporal coupling detection |
| ownership | own, bus-factor | Code ownership and bus factor |
| cohesion | ck | CK object-oriented metrics |
| lint-hotspot | lh | Lint violation density |

Output Formats

All commands support multiple output formats:

omen analyze complexity ./src -f text      # Default, colored terminal output
omen analyze complexity ./src -f json      # JSON for programmatic use
omen analyze complexity ./src -f markdown  # Markdown tables
omen analyze complexity ./src -f toon      # TOON format

Write output to a file:

omen analyze ./src -f json -o report.json

Configuration

Create omen.toml, .omen.toml, or .omen/omen.toml:

[exclude]
patterns = ["vendor/**", "node_modules/**", "**/*_test.go"]
dirs = [".git", "dist", "build"]

[thresholds]
cyclomatic = 10
cognitive = 15
duplicate_min_lines = 6
duplicate_similarity = 0.8
dead_code_confidence = 0.8

[analysis]
churn_days = 30

See omen.example.toml for all options.

Examples

Find Complex Functions

omen analyze complexity ./pkg --functions-only --cyclomatic-threshold 15

High-Risk Files Only

omen analyze defect ./src --high-risk-only

Top 5 TDG Hotspots

omen analyze tdg ./src --hotspots 5

Generate LLM Context

omen context ./src --include-metrics --include-graph

Repository Map for LLM Context

omen context ./src --repo-map --top 50

Find Hotspots (High-Risk Files)

omen analyze hotspot ./src --top 10

Analyze Temporal Coupling

omen analyze temporal ./ --min-coupling 0.5 --min-commits 5

Check Bus Factor Risk

omen analyze ownership ./src --top 20

CK Metrics for Classes

omen analyze cohesion ./src --sort lcom

MCP Server

Omen includes a Model Context Protocol (MCP) server that exposes all analyzers as tools for LLMs like Claude. This enables AI assistants to analyze codebases directly.

Configuration

Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):

{
  "mcpServers": {
    "omen": {
      "command": "omen",
      "args": ["mcp"]
    }
  }
}
Available Tools

| Tool | Description |
|------|-------------|
| analyze_complexity | Cyclomatic and cognitive complexity analysis |
| analyze_satd | Self-admitted technical debt detection |
| analyze_deadcode | Unused functions and variables |
| analyze_churn | Git file change frequency |
| analyze_duplicates | Code clones and copy-paste detection |
| analyze_defect | Defect probability prediction |
| analyze_tdg | Technical Debt Gradient scores |
| analyze_graph | Dependency graph generation |
| analyze_hotspot | High churn + high complexity files |
| analyze_temporal_coupling | Files that change together |
| analyze_ownership | Code ownership and bus factor |
| analyze_cohesion | CK OO metrics (LCOM, WMC, CBO, DIT) |
| analyze_repo_map | PageRank-ranked symbol map |

Each tool includes detailed descriptions with interpretation guidance, helping LLMs understand what metrics mean and when to use each analyzer.

Example Usage

Once configured, you can ask Claude:

  • "Analyze the complexity of this codebase"
  • "Find technical debt in the src directory"
  • "What are the hotspot files that need refactoring?"
  • "Show me the bus factor risk for this project"

Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -am 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Create a Pull Request

Acknowledgments

Omen draws heavy inspiration from paiml-mcp-agent-toolkit - a fantastic CLI and comprehensive suite of code analysis tools for LLM workflows. Omen exists as a streamlined alternative for teams who want a focused subset of analyzers without the additional dependencies. If you're doing serious AI-assisted development, or you want a Rust-focused MCP/agent toolkit, paiml-mcp-agent-toolkit is well worth checking out.

License

MIT License - see LICENSE for details.
