dank-mcp
dank-mcp is a Model Context Protocol (MCP) server that responds to questions about supported cannabis datasets.
Currently, the following datasets are supported:
- Connecticut (CT) cannabis brand data from ct.data.gov
You can find snapshots of these datasets in the AgentDank dank-data repository.
Using with LLMs
To use this dank-mcp MCP server, you must configure your host program to use it. We will illustrate with Claude Desktop. First, find the dank-mcp program on your system; the example below shows where dank-mcp is installed by macOS Homebrew (or build your own and point at that). The following configuration JSON (also in the repo as mcp-config.json) sets this up:
{
  "mcpServers": {
    "dank": {
      "command": "/opt/homebrew/bin/dank-mcp",
      "args": []
    }
  }
}
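As an aside, on macOS Claude Desktop typically reads this configuration from claude_desktop_config.json under ~/Library/Application Support/Claude/. That path comes from Anthropic's documentation rather than this repository, so verify it against their tutorial; if you have no existing Claude Desktop configuration, copying the file into place is one way to start:

# Assumed macOS location for Claude Desktop's MCP configuration (verify against Anthropic's docs).
$ cp mcp-config.json ~/Library/Application\ Support/Claude/claude_desktop_config.json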
Claude Desktop
Using Claude Desktop, you can follow their configuration tutorial but substitute the configuration above. With that in place, you can ask Claude questions and it will use the dank-mcp server. Here are some example conversations:
Ollama and mcphost
For local inferencing, there are MCP hosts that support Ollama. You can use any Ollama LLM that supports "Tools". We experimented with mcphost, authored by the developer of the mcp-go library that performed the heavy lifting for us. Here's how to install and run it with the configuration above, stored in mcp-config.json:
$ go install github.com/mark3labs/mcphost@latest
$ ollama pull llama3.3
$ mcphost -m ollama:llama3.3 --config mcp-config.json
...chat away...
Command Line Usage
Here is the command-line help:
usage: ./bin/dank-mcp [opts]
--db string DuckDB data file to use (use :memory: or '' for in-memory) (default "dank-mcp.duckdb")
--dump Only download files and populate DB, no MCP server
-h, --help Show help
-l, --log-file string Log file destination (or MCP_LOG_FILE envvar). Default is stderr
-j, --log-json Log in JSON (default is plaintext)
-n, --no-fetch Don't fetch any data, only use what is in current DB
--root string Set root location of '.dank' dir
--sse Use SSE Transport (default is STDIO transport)
--sse-host string host:port to listen to SSE connections
-t, --token string ct.data.gov App Token
-v, --verbose Verbose logging
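As illustrative examples (the flag names come from the help above; the host:port value is just a placeholder), you could reuse a previously populated database without refetching, or serve over SSE instead of STDIO:

# Reuse a previously populated database without hitting ct.data.gov again:
$ dank-mcp --no-fetch --db dank-mcp.duckdb

# Serve over SSE instead of STDIO (the host:port here is only an example):
$ dank-mcp --sse --sse-host localhost:8080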
You can download and dump the data by running dank-mcp --dump. Snapshots of this data are curated at AgentDank's dank-data repository.
$ dank-mcp --dump
time=2025-04-03T18:18:03.177-04:00 level=INFO msg=dank-mcp
time=2025-04-03T18:18:03.185-04:00 level=INFO msg="fetching brands from ct.data.gov"
time=2025-04-03T18:18:24.055-04:00 level=INFO msg="inserting brands into db" count=26499
time=2025-04-03T18:18:52.533-04:00 level=INFO msg=finished
Data Cleaning
Because the upstream datasets are not perfect, we have to "clean" them. Such practices are opinionated, but since this is Open Source, you can inspect how we clean them by examining the code. For example:
- CT Brands cleaning
- Extract "trace" values from text measurements
Generally, we simply remove weird characters. We treat detected "trace" amounts as 0 -- and even then, there's a judgment call about which field entries actually mean "trace".
We also remove rows with ridiculous data -- as much as I'd love a 90,385% THC product, I don't think it really exists. In that case, it was an incorrect decimal point (confirmed by looking at the label picture), but not every error gets validated against its picture. It's also a pain -- but maybe a multi-modal vision LLM can help with that?
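To make that concrete, here is a hypothetical sketch of this kind of cleaning in Go; the function name, the set of "trace" markers, and the sanity bounds are invented for illustration and are not the repository's actual code:

package cleaning

import (
	"strconv"
	"strings"
)

// parsePercent is a hypothetical helper illustrating the rules described above:
// strip stray characters, map "trace"-style entries to 0, and reject values
// that cannot be a real percentage (such as 90,385% THC).
func parsePercent(raw string) (float64, bool) {
	s := strings.ToLower(strings.TrimSpace(raw))

	// Which entries count as "trace" is itself a judgment call; these are examples.
	if s == "trace" || s == "nd" || s == "<loq" {
		return 0, true
	}

	// Remove weird characters such as percent signs, commas, and spaces.
	s = strings.NewReplacer("%", "", ",", "", " ", "").Replace(s)

	v, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return 0, false
	}

	// Drop ridiculous rows: a percentage must lie within [0, 100].
	if v < 0 || v > 100 {
		return 0, false
	}
	return v, true
}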
Building
Building is performed with task:
$ task
task: [build] go build -o dank-mcp cmd/dank-mcp/main.go
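Based only on the build command shown in that output, a minimal Taskfile.yml for Task might look like the sketch below; the repository's actual Taskfile likely defines more than this:

# Hypothetical minimal Taskfile.yml inferred from the build output above.
version: '3'

tasks:
  build:
    cmds:
      - go build -o dank-mcp cmd/dank-mcp/main.go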
Contribution and Conduct
Pull requests and issues are welcome. Or fork it. You do you.
Either way, obey our Code of Conduct. Be shady, but don't be a jerk.
Credits and License
Copyright (c) 2025 Neomantra Corp. Authored by Evan Wies for AgentDank.
Released under the MIT License, see LICENSE.txt.
Made with 🌿 and 🔥 by the team behind AgentDank.