ghosttype
Terminal Command Prediction
Ghosttype is your AI-powered command assistant for the terminal.
It learns how you work from your command history, project context, and shell configuration, and predicts what you're most likely to type next.
Using a hybrid of traditional and AI-enhanced models, Ghosttype intelligently suggests your next move with:
- Markov chains: learning the flow of your typical command sequences
- Frequency analysis: surfacing your most common commands quickly
- LLM-based embeddings: understanding semantic similarity via vector search
- Shell aliases: integrating your custom shortcuts
- Project context awareness: reading from Makefile, package.json, pom.xml, and more
It's like having autocomplete, but for the way you use the terminal.
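To give a flavor of the first technique, here is a toy, standalone sketch of a first-order Markov predictor over command history. The names (markovPredictor, Train, Predict) are hypothetical illustrations, not Ghosttype's actual model code:

```go
package main

import "fmt"

// markovPredictor counts how often command B follows command A in history,
// then suggests the most frequent successor. A simplified illustration of
// the idea; Ghosttype's real model lives in model/.
type markovPredictor struct {
	transitions map[string]map[string]int
}

func newMarkovPredictor() *markovPredictor {
	return &markovPredictor{transitions: map[string]map[string]int{}}
}

// Train records each consecutive pair of commands from a history slice.
func (m *markovPredictor) Train(history []string) {
	for i := 0; i+1 < len(history); i++ {
		prev, next := history[i], history[i+1]
		if m.transitions[prev] == nil {
			m.transitions[prev] = map[string]int{}
		}
		m.transitions[prev][next]++
	}
}

// Predict returns the most common command observed after prev.
func (m *markovPredictor) Predict(prev string) string {
	best, bestCount := "", 0
	for next, count := range m.transitions[prev] {
		if count > bestCount {
			best, bestCount = next, count
		}
	}
	return best
}

func main() {
	p := newMarkovPredictor()
	p.Train([]string{"git add .", "git commit", "git push", "git add .", "git commit"})
	fmt.Println(p.Predict("git add .")) // "git commit" (seen twice after "git add .")
}
```

The real ensemble blends this signal with frequency, alias, context, and embedding models rather than relying on transitions alone.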
Status: Active Development
Ghosttype is still under active development.
Expect occasional breaking changes. Contributions and issue reports are welcome!
Current Performance vs. Popular Tools
We regularly benchmark Ghosttype against established command-line tools to track our progress:
┌──────────────┬───────┬────────┬───────────┬───────────┬────────┐
│ Tool         │ Top-1 │ Top-10 │ Avg Time  │ P95 Time  │ Errors │
├──────────────┼───────┼────────┼───────────┼───────────┼────────┤
│ ghosttype    │ 16.0% │ 31.0%  │ 158.483ms │ 255.674ms │ 0.5%   │
│ fzf          │ 7.5%  │ 13.5%  │ 10.846ms  │ 15.67ms   │ 41.5%  │
└──────────────┴───────┴────────┴───────────┴───────────┴────────┘
WINNERS BY METRIC:
Best Top-1 Accuracy: ghosttype
Best Top-10 Accuracy: ghosttype
Fastest Average Response: fzf
Best P95 Latency: fzf
Most Reliable: ghosttype
GHOSTTYPE ADVANTAGES:
- ~2x more accurate than fzf (16.0% vs 7.5% Top-1)
What we're doing well:
- ~2x more accurate command predictions than traditional fuzzy finders
- 0.5% error rate vs 41.5% for string-based matching
- Better semantic understanding of command intent
What we're working on:
- Latency optimization: current ~160ms average (~256ms at P95) response time needs improvement for real-time use
- Model efficiency: Exploring lighter models and caching strategies
- Progressive loading: Show fast results immediately, then enhance with AI suggestions
- Hybrid approach: Instant prefix matching for short inputs, AI for complex queries
- Deeper contextual understanding: Providing more relevant suggestions by analyzing the current directory's files, git status, and recently executed commands.
- Intelligent error correction: Suggesting corrections for typos or common errors (e.g., correcting gti status to git status).
Demo
$ git ch   # Press Ctrl+P (zsh integration)
> git checkout main
git checkout add-slim-version
git checkout hoge
Features
- Learns from ~/.zsh_history or ~/.bash_history
- Embeds historical commands via LLM-powered vector search
- Predicts likely next commands using multiple models (Markov, freq, embedding, etc.)
- Context-aware suggestions from Makefile, package.json, pom.xml, etc.
- Zsh keybinding integration
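To illustrate the project-context idea, here is a minimal, hypothetical sketch of extracting Makefile targets as completion candidates. The function name and heuristics are illustrative only; Ghosttype's real parsing lives in parser/ and may work differently:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// makefileTargets scans Makefile text for rule lines like "build: deps"
// and collects the target names as completion candidates.
func makefileTargets(contents string) []string {
	var targets []string
	scanner := bufio.NewScanner(strings.NewReader(contents))
	for scanner.Scan() {
		line := scanner.Text()
		colon := strings.Index(line, ":")
		// Skip lines without a target: comments, tab-indented recipe
		// lines, and variable assignments (":=").
		if colon <= 0 || strings.HasPrefix(line, "\t") ||
			strings.HasPrefix(line, "#") || strings.HasPrefix(line[colon:], ":=") {
			continue
		}
		targets = append(targets, strings.TrimSpace(line[:colon]))
	}
	return targets
}

func main() {
	mk := "build: deps\n\tgo build ./...\ntest:\n\tgo test ./...\n"
	fmt.Println(makefileTargets(mk)) // [build test]
}
```

The same pattern extends to package.json scripts or pom.xml goals: parse the project file, surface its runnable entries as suggestions.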
Installation
1. Install ghosttype
go install github.com/trknhr/ghosttype@latest
This will install the ghosttype command to your $GOBIN (usually ~/go/bin).
Zsh Integration
Add the following to your .zshrc:
# Predict a command using ghosttype + TUI, then replace current shell input with the selection
function ghosttype_predict() {
local result=$(ghosttype "$BUFFER")
if [[ -n "$result" ]]; then
BUFFER="$result"
CURSOR=${#BUFFER}
zle reset-prompt
fi
}
zle -N ghosttype_predict
bindkey '^p' ghosttype_predict
Then reload your shell:
source ~/.zshrc
Now press Ctrl+P in your terminal to trigger Ghosttype suggestions.
Enable LLM-Powered Suggestions (via Ollama)
Ghosttype supports LLM-based predictions and vector embeddings powered by Ollama.
To use these features, follow the steps below:
1. Install Ollama
Download and install from the official site:
https://ollama.com/download
Verify installation:
ollama --version
2. Pull required models
Ghosttype uses the following models:
llama3.2: for next-command prediction
nomic-embed-text: for semantic similarity via embedding
Download the models:
ollama run llama3.2:1b # Starts and downloads the LLM model
ollama pull nomic-embed-text # Downloads the embedding model
Note: ollama run llama3.2:1b must be running in the background to enable LLM-powered suggestions.
You can run it in a separate terminal window:
ollama run llama3.2:1b
Once Ollama is running and the models are downloaded, Ghosttype will automatically use them to enhance prediction accuracy.
Architecture
Ghosttype uses an ensemble of models:
markov: Lightweight transition-based predictor
freq: Frequency-based suggestion engine
alias: Shell aliases from .zshrc/.bashrc
context: Targets from Makefile, package.json, pom.xml, etc.
embedding: LLM-powered vector search via Ollama
All models implement a unified SuggestModel interface and are combined via ensemble.Model.
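Assuming a Suggest-style method (the exact signature here is a guess; see model/ for the real SuggestModel definition), the ensemble idea can be sketched as:

```go
package main

import "fmt"

// SuggestModel approximates the shape of the unified interface; the real
// definition lives in model/ and may differ.
type SuggestModel interface {
	Suggest(input string) []string
}

// Ensemble fans the input out to every model and merges the results,
// deduplicating while preserving order. Real weighting/ranking is omitted.
type Ensemble struct {
	models []SuggestModel
}

func (e *Ensemble) Suggest(input string) []string {
	seen := map[string]bool{}
	var out []string
	for _, m := range e.models {
		for _, s := range m.Suggest(input) {
			if !seen[s] {
				seen[s] = true
				out = append(out, s)
			}
		}
	}
	return out
}

// staticModel returns fixed suggestions; it stands in for markov, freq, etc.
type staticModel struct{ suggestions []string }

func (s staticModel) Suggest(string) []string { return s.suggestions }

func main() {
	e := &Ensemble{models: []SuggestModel{
		staticModel{[]string{"git checkout main"}},
		staticModel{[]string{"git checkout main", "git status"}},
	}}
	fmt.Println(e.Suggest("git ch")) // [git checkout main git status]
}
```

Because every model satisfies the same interface, new predictors can be added to the ensemble without touching the callers.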
Project Structure
.
├── cmd/       # CLI (tui, suggest, root)
├── history/   # Loaders for bash/zsh history
├── model/     # All prediction models
├── internal/  # Logging, utils, alias sync
├── ollama/    # LLM/embedding interface
├── parser/    # RC and alias parsing
├── script/    # Shell helper scripts
├── main.go
└── go.mod
License
Apache-2.0
See LICENSE for full terms.