= next.orly.dev
:toc:
:note-caption: note 👉

image:./docs/orly.png[orly.dev]

image:https://img.shields.io/badge/godoc-documentation-blue.svg[Documentation,link=https://pkg.go.dev/next.orly.dev]
image:https://img.shields.io/badge/donate-geyser_crowdfunding_project_page-orange.svg[Support this project,link=https://geyser.fund/project/orly]
zap me: ⚡️mlekudev@getalby.com
follow me on link:https://jumble.social/users/npub1fjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsmmleku[nostr]

== about

ORLY is a nostr relay written from the ground up to be performant and low latency, with a feature set that makes it well suited for

- personal relays
- small community relays
- business deployments and RaaS (Relay as a Service), with a nostr-native NWC client for accepting payments through NWC-capable lightning nodes
- high availability clusters for reliability and/or providing a unified data set across multiple regions

ORLY uses the fast embedded link:https://github.com/hypermodeinc/badger[badger] key/value store, with a storage layout designed for high-performance querying and event storage.

On Linux platforms, it uses https://github.com/bitcoin/secp256k1[libsecp256k1]-based signing and signature verification (see link:pkg/crypto/p256k/README.md[here]).

== building

ORLY is a standard Go application that can be built using the Go toolchain.

=== prerequisites

- Go 1.25.0 or later
- Git
- For the web UI: the link:https://bun.sh/[Bun] JavaScript runtime

=== basic build

To build the relay binary only:

[source,bash]
----
git clone <repository-url>
cd next.orly.dev
go build -o orly
----

=== building with web UI

To build with the embedded web interface:

[source,bash]
----
# Build the Svelte web application
cd app/web
bun install
bun run build

# Build the Go binary from project root
cd ../../
go build -o orly
----

The recommended way to build and embed the web UI is using the provided script:

[source,bash]
----
./scripts/update-embedded-web.sh
----

This script will:

- Build the Svelte app in `app/web` to `app/web/dist` using Bun (preferred) or fall back to npm/yarn/pnpm
- Run `go install` from the repository root so the binary picks up the new embedded assets
- Automatically detect and use the best available JavaScript package manager

For manual builds, you can also use:

[source,bash]
----
#!/bin/bash
# build.sh
echo "Building Svelte app..."
cd app/web
bun install
bun run build

echo "Building Go binary..."
cd ../../
go build -o orly

echo "Build complete!"
----

Make it executable with `chmod +x build.sh` and run with `./build.sh`.

== web UI

ORLY includes a modern web-based user interface built with link:https://svelte.dev/[Svelte] that provides comprehensive relay management capabilities.

=== features

The web UI offers:

* **Authentication**: Secure login using Nostr key pairs with challenge-response authentication
* **Event Management**: View, export, and import Nostr events with advanced filtering and search
* **User Administration**: Manage user permissions and roles (admin/owner)
* **Sprocket Management**: Configure and manage external event processing scripts
* **Real-time Updates**: Live event streaming and status updates
* **Dark/Light Theme**: Toggle between themes with persistent preferences
* **Responsive Design**: Works on desktop and mobile devices

=== authentication

The web UI uses Nostr-native authentication:

1. **Challenge Generation**: Server generates a cryptographic challenge
2. **Signature Verification**: Client signs the challenge with their private key
3. **Session Management**: Authenticated sessions with role-based permissions

Supported authentication methods:

- Direct private key input
- Nostr extension integration
- Hardware wallet support
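
For reference, the challenge-signing step can be sketched as a NIP-42 style auth event. This is an assumption about the web UI's wire format; the actual kind and transport may differ, and the signature itself comes from the user's signer:

[source,python]
----
# Minimal sketch of a NIP-42 style auth response event (assumption: the web UI's
# challenge-response flow follows NIP-42; the actual event kind/endpoints may differ).
import hashlib
import json
import time

def build_auth_event(pubkey_hex: str, relay_url: str, challenge: str) -> dict:
    """Build an unsigned kind 22242 event answering a relay challenge."""
    created_at = int(time.time())
    tags = [["relay", relay_url], ["challenge", challenge]]
    content = ""
    # NIP-01 event id: sha256 over the canonical array serialization
    serialized = json.dumps(
        [0, pubkey_hex, created_at, 22242, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    )
    return {
        "id": hashlib.sha256(serialized.encode()).hexdigest(),
        "pubkey": pubkey_hex,
        "created_at": created_at,
        "kind": 22242,
        "tags": tags,
        "content": content,
        # "sig" is added by the signer: private key, NIP-07 extension, or hardware wallet
    }

print(json.dumps(build_auth_event("ab" * 32, "wss://relay.example.com", "challenge-from-server"), indent=2))
----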

=== user roles

* **Guest**: Read-only access to public events
* **User**: Can publish events and manage their own content
* **Admin**: Full relay management except sprocket configuration
* **Owner**: Complete control including sprocket management and system configuration

=== event management

The interface provides comprehensive event management:

* **Event Browser**: Paginated view of all events with filtering by kind, author, and content
* **Export Functionality**: Export events in JSON format with configurable date ranges
* **Import Capability**: Bulk import events (admin/owner only)
* **Search**: Full-text search across event content and metadata
* **Event Details**: Expandable view showing full event JSON and metadata
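
An exported dump can also be post-processed offline. The sketch below assumes a JSON Lines export (one event object per line), which may differ from the actual export format:

[source,python]
----
# Hypothetical post-processing of an exported event dump (assumption: JSON Lines,
# one event per line, matching the format used by the sprocket interface).
import json
import sys

def filter_events(path, kind=None, author=None):
    """Yield events from a JSONL export matching the given kind and/or author."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            event = json.loads(line)
            if kind is not None and event.get("kind") != kind:
                continue
            if author is not None and event.get("pubkey") != author:
                continue
            yield event

if __name__ == "__main__":
    for ev in filter_events(sys.argv[1], kind=1):
        print(ev["id"], ev.get("content", "")[:60])
----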

=== sprocket integration

The web UI includes a dedicated sprocket management interface:

* **Status Monitoring**: Real-time status of sprocket scripts
* **Script Upload**: Upload and manage sprocket scripts
* **Version Control**: Track and manage multiple script versions
* **Configuration**: Configure sprocket parameters and settings
* **Logs**: View sprocket execution logs and errors

=== development mode

For development, the web UI supports hot-reloading:

[source,bash]
----
# Enable development proxy
export ORLY_WEB_DISABLE_EMBEDDED=true
export ORLY_WEB_DEV_PROXY_URL=localhost:5000

# Start relay
./orly

# In another terminal, start Svelte dev server
cd app/web
bun run dev
----

This allows for rapid development with automatic reloading of changes.

== sprocket event sifter interface

The sprocket system provides a powerful interface for external event processing scripts, allowing you to implement custom filtering, validation, and processing logic for Nostr events before they are stored in the relay.

=== overview

Sprocket scripts receive events via stdin and respond with JSONL (JSON Lines) format, enabling real-time event processing with three possible actions:

* **accept**: Continue with normal event processing
* **reject**: Return OK false to client with rejection message
* **shadowReject**: Return OK true to client but abort processing (useful for spam filtering)

=== how it works

1. **Event Reception**: Events are sent to the sprocket script as JSON objects via stdin
2. **Processing**: Script analyzes the event and applies custom logic
3. **Response**: Script responds with JSONL containing the decision and optional message
4. **Action**: Relay processes the response and either accepts, rejects, or shadow rejects the event

=== script protocol

==== input format

Events are sent as JSON objects, one per line:

[source,json]
----
{
  "id": "event_id_here",
  "kind": 1,
  "content": "Hello, world!",
  "pubkey": "author_pubkey",
  "tags": [["t", "hashtag"], ["p", "reply_pubkey"]],
  "created_at": 1640995200,
  "sig": "signature_here"
}
----

==== output format

Scripts must respond with JSONL format:

[source,json]
----
{"id": "event_id", "action": "accept", "msg": ""}
{"id": "event_id", "action": "reject", "msg": "reason for rejection"}
{"id": "event_id", "action": "shadowReject", "msg": ""}
----

=== configuration

Enable sprocket processing:

[source,bash]
----
export ORLY_SPROCKET_ENABLED=true
export ORLY_APP_NAME="ORLY"
----

The sprocket script should be placed at:
`~/.config/{ORLY_APP_NAME}/sprocket.sh`

For example, with default `ORLY_APP_NAME="ORLY"`:
`~/.config/ORLY/sprocket.sh`

Backup files are automatically created when updating sprocket scripts via the web UI, with timestamps like:
`~/.config/ORLY/sprocket.sh.20240101120000`
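
A small sketch of how the script path and its backups can be located from `ORLY_APP_NAME`, using the locations documented above:

[source,python]
----
# Resolve the sprocket script location from ORLY_APP_NAME (defaults to "ORLY")
# and list any timestamped backups sitting next to it.
import glob
import os

app_name = os.environ.get("ORLY_APP_NAME", "ORLY")
script_path = os.path.expanduser(f"~/.config/{app_name}/sprocket.sh")

print("sprocket script:", script_path)
for backup in sorted(glob.glob(script_path + ".*")):
    print("backup:", backup)
----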

=== manual sprocket updates

For manual sprocket script updates, you can use the stop/write/restart method:

1. **Stop the relay**:
+
[source,bash]
----
# Send SIGINT to gracefully stop
kill -INT <relay_pid>
----

2. **Write new sprocket script**:
+
[source,bash]
----
# Create/update the sprocket script
cat > ~/.config/ORLY/sprocket.sh << 'EOF'
#!/bin/bash
while read -r line; do
    if [[ -n "$line" ]]; then
        event_id=$(echo "$line" | jq -r '.id')
        echo "{\"id\":\"$event_id\",\"action\":\"accept\",\"msg\":\"\"}"
    fi
done
EOF

# Make it executable
chmod +x ~/.config/ORLY/sprocket.sh
----

3. **Restart the relay**:
+
[source,bash]
----
./orly
----

The relay will automatically detect the new sprocket script and start it. If the script fails, sprocket will be disabled and all events rejected until the script is fixed.

=== failure handling

When sprocket is enabled but fails to start or crashes:

1. **Automatic Disable**: Sprocket is automatically disabled
2. **Event Rejection**: All incoming events are rejected with an error message
3. **Periodic Recovery**: Every 30 seconds, the system checks whether the sprocket script has become available
4. **Auto-Restart**: If the script is found, sprocket is automatically re-enabled and restarted

This ensures that:

- Relay continues running even when sprocket fails
- No events are processed without proper sprocket filtering
- Sprocket automatically recovers when the script is fixed
- Clear error messages inform users about the sprocket status
- Error messages include the exact file location for easy fixes

When sprocket fails, the error message will show:
`sprocket disabled due to failure - all events will be rejected (script location: ~/.config/ORLY/sprocket.sh)`

This makes it easy to locate and fix the sprocket script file.

=== example script

Here's a Python example that implements various filtering criteria:

[source,python]
----
#!/usr/bin/env python3
import json
import sys

def process_event(event_json):
    event_id = event_json.get('id', '')
    event_content = event_json.get('content', '')
    event_kind = event_json.get('kind', 0)
    
    # Reject spam content
    if 'spam' in event_content.lower():
        return {
            'id': event_id,
            'action': 'reject',
            'msg': 'Content contains spam'
        }
    
    # Shadow reject test events
    if event_kind == 9999:
        return {
            'id': event_id,
            'action': 'shadowReject',
            'msg': ''
        }
    
    # Accept all other events
    return {
        'id': event_id,
        'action': 'accept',
        'msg': ''
    }

# Main processing loop
for line in sys.stdin:
    if line.strip():
        try:
            event = json.loads(line)
            response = process_event(event)
            print(json.dumps(response))
            sys.stdout.flush()
        except json.JSONDecodeError:
            continue
----

=== bash example

A simple bash script example:

[source,bash]
----
#!/bin/bash
while read -r line; do
    if [[ -n "$line" ]]; then
        # Extract event ID
        event_id=$(echo "$line" | jq -r '.id')
        
        # Check for spam content
        if echo "$line" | jq -r '.content' | grep -qi "spam"; then
            echo "{\"id\":\"$event_id\",\"action\":\"reject\",\"msg\":\"Spam detected\"}"
        else
            echo "{\"id\":\"$event_id\",\"action\":\"accept\",\"msg\":\"\"}"
        fi
    fi
done
----

=== testing

Test your sprocket script directly:

[source,bash]
----
# Test with sample event
echo '{"id":"test","kind":1,"content":"spam test"}' | python3 sprocket.py

# Expected output:
# {"id": "test", "action": "reject", "msg": "Content contains spam"}
----
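
Beyond a one-off echo, you can drive a script end to end the same way the relay does (events in on stdin, JSONL decisions out). A minimal harness sketch, assuming the Python example above is saved as `sprocket.py`:

[source,python]
----
# Black-box test harness for a sprocket script: feed sample events on stdin
# and check the JSONL decisions on stdout. The script path is an example;
# a bash sprocket would be invoked directly instead of via python3.
import json
import subprocess

SCRIPT = "./sprocket.py"

samples = [
    {"id": "ok1", "kind": 1, "content": "hello world"},
    {"id": "bad1", "kind": 1, "content": "buy spam now"},
]

stdin = "".join(json.dumps(e) + "\n" for e in samples)
proc = subprocess.run(
    ["python3", SCRIPT], input=stdin, capture_output=True, text=True, timeout=10
)

lines = [l for l in proc.stdout.splitlines() if l.strip()]
decisions = {d["id"]: d["action"] for d in (json.loads(l) for l in lines)}
assert decisions.get("ok1") == "accept", decisions
assert decisions.get("bad1") == "reject", decisions
print("sprocket decisions:", decisions)
----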

Run the comprehensive test suite:

[source,bash]
----
./test-sprocket-complete.sh
----

=== web UI management

The web UI provides a complete sprocket management interface:

* **Status Monitoring**: View real-time sprocket status and health
* **Script Upload**: Upload new sprocket scripts via the web interface
* **Version Management**: Track and manage multiple script versions
* **Configuration**: Configure sprocket parameters and settings
* **Logs**: View execution logs and error messages
* **Restart**: Restart sprocket scripts without relay restart

=== use cases

Common sprocket use cases include:

* **Spam Filtering**: Detect and reject spam content
* **Content Moderation**: Implement custom content policies
* **Rate Limiting**: Control event publishing rates
* **Event Validation**: Additional validation beyond Nostr protocol
* **Analytics**: Log and analyze event patterns
* **Integration**: Connect with external services and APIs
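
As an illustration of the rate-limiting case above, a sprocket can keep a sliding-window counter per pubkey. This is only a sketch with arbitrary thresholds:

[source,python]
----
#!/usr/bin/env python3
# Sketch of a rate-limiting sprocket: allow at most MAX_EVENTS per pubkey
# within WINDOW_SECONDS, shadow-rejecting the overflow.
import json
import sys
import time
from collections import defaultdict, deque

MAX_EVENTS = 30        # arbitrary example threshold
WINDOW_SECONDS = 60    # arbitrary example window
recent = defaultdict(deque)  # pubkey -> deque of publish timestamps

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    try:
        event = json.loads(line)
    except json.JSONDecodeError:
        continue
    now = time.time()
    window = recent[event.get("pubkey", "")]
    # Drop timestamps that have fallen out of the window
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_EVENTS:
        action = "shadowReject"
    else:
        window.append(now)
        action = "accept"
    print(json.dumps({"id": event.get("id", ""), "action": action, "msg": ""}))
    sys.stdout.flush()
----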

=== performance considerations

* Sprocket scripts run synchronously and can impact relay performance
* Keep processing logic efficient and fast
* Use appropriate timeouts to prevent blocking
* Consider using shadow reject for non-critical filtering to maintain user experience
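
One way to bound per-event processing time is to run the expensive check against a deadline and fall back to a default decision when it is exceeded. A sketch; the budget and the fail-open policy are arbitrary choices:

[source,python]
----
#!/usr/bin/env python3
# Sketch: bound per-event processing time so a slow check cannot stall the relay.
# Events that exceed the budget fall back to "accept" (adjust the policy as needed).
import json
import sys
from concurrent.futures import ThreadPoolExecutor, TimeoutError

BUDGET_SECONDS = 0.2  # arbitrary per-event budget

def expensive_check(event: dict) -> str:
    # Placeholder for slow logic (external lookups, heavy regexes, ...).
    return "reject" if "spam" in event.get("content", "").lower() else "accept"

with ThreadPoolExecutor(max_workers=1) as pool:
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue
        try:
            action = pool.submit(expensive_check, event).result(timeout=BUDGET_SECONDS)
        except TimeoutError:
            action = "accept"  # fail open rather than block the pipeline
        print(json.dumps({"id": event.get("id", ""), "action": action, "msg": ""}))
        sys.stdout.flush()
----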

== secp256k1 dependency

ORLY uses the optimized `libsecp256k1` C library from Bitcoin Core for schnorr signatures, providing 4x faster signing and ECDH operations compared to pure Go implementations.

=== installation

For Ubuntu/Debian, you can use the provided installation script:

[source,bash]
----
./scripts/ubuntu_install_libsecp256k1.sh
----

Or install manually:

[source,bash]
----
# Install build dependencies
sudo apt -y install build-essential autoconf libtool

# Initialize and build secp256k1
cd pkg/crypto/p256k/secp256k1
git submodule init
git submodule update
./autogen.sh
./configure --enable-module-schnorrsig --enable-module-ecdh --prefix=/usr
make
sudo make install
----

=== fallback mode

If you need to build without the C library dependency, disable CGO:

[source,bash]
----
export CGO_ENABLED=0
go build -o orly
----

This uses the pure Go `btcec` fallback library, which is slower but doesn't require system dependencies.

== deployment

ORLY includes an automated deployment script that handles Go installation, dependency setup, building, and systemd service configuration.

=== automated deployment

The deployment script (`scripts/deploy.sh`) provides a complete setup solution:

[source,bash]
----
# Clone the repository
git clone <repository-url>
cd next.orly.dev

# Run the deployment script
./scripts/deploy.sh
----

The script will:

1. **Install Go 1.23.1** if not present (in `~/.local/go`)
2. **Configure environment** by creating `~/.goenv` and updating `~/.bashrc`
3. **Install build dependencies** using the secp256k1 installation script (requires sudo)
4. **Build the relay** with embedded web UI using `update-embedded-web.sh`
5. **Set capabilities** for port 443 binding (requires sudo)
6. **Install binary** to `~/.local/bin/orly`
7. **Create systemd service** and enable it

After deployment, reload your shell environment:

[source,bash]
----
source ~/.bashrc
----

=== TLS configuration

ORLY supports automatic TLS certificate management with Let's Encrypt and custom certificates:

[source,bash]
----
# Enable TLS with Let's Encrypt for specific domains
export ORLY_TLS_DOMAINS=relay.example.com,backup.relay.example.com

# Optional: Use custom certificates (will load .pem and .key files)
export ORLY_CERTS=/path/to/cert1,/path/to/cert2

# When TLS domains are configured, ORLY will:
# - Listen on port 443 for HTTPS/WSS
# - Listen on port 80 for ACME challenges
# - Ignore ORLY_PORT setting
----

Certificate files should be named with `.pem` and `.key` extensions:

- `/path/to/cert1.pem` (certificate)
- `/path/to/cert1.key` (private key)
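
Before pointing `ORLY_CERTS` at a pair, it can be useful to confirm the certificate and key actually load together. A small sketch using Python's standard `ssl` module (paths are placeholders):

[source,python]
----
# Sanity-check that a certificate/key pair loads together.
import ssl

ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
try:
    ctx.load_cert_chain(certfile="/path/to/cert1.pem", keyfile="/path/to/cert1.key")
    print("certificate and key load together")
except (OSError, ssl.SSLError) as e:
    print("certificate/key unreadable or mismatched:", e)
----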

=== systemd service management

The deployment script creates a systemd service for easy management:

[source,bash]
----
# Start the service
sudo systemctl start orly

# Stop the service
sudo systemctl stop orly

# Restart the service
sudo systemctl restart orly

# Enable service to start on boot
sudo systemctl enable orly --now

# Disable service from starting on boot
sudo systemctl disable orly --now

# Check service status
sudo systemctl status orly

# View service logs
sudo journalctl -u orly -f

# View recent logs
sudo journalctl -u orly --since "1 hour ago"
----

=== remote deployment

You can deploy ORLY on a remote server using SSH:

[source,bash]
----
# Deploy to a VPS with SSH key authentication
ssh user@your-server.com << 'EOF'
  # Clone and deploy
  git clone <repository-url>
  cd next.orly.dev
  ./scripts/deploy.sh
  
  # Configure your relay
  echo 'export ORLY_TLS_DOMAINS=relay.example.com' >> ~/.bashrc
  echo 'export ORLY_ADMINS=npub1your_admin_key_here' >> ~/.bashrc
  
  # Start the service
  sudo systemctl start orly
EOF

# Check deployment status
ssh user@your-server.com 'sudo systemctl status orly'
----

=== configuration

After deployment, configure your relay by setting environment variables in your shell profile:

[source,bash]
----
# Add to ~/.bashrc or ~/.profile
export ORLY_TLS_DOMAINS=relay.example.com
export ORLY_ADMINS=npub1your_admin_key
export ORLY_ACL_MODE=follows
export ORLY_APP_NAME="MyRelay"
----

Then restart the service:

[source,bash]
----
source ~/.bashrc
sudo systemctl restart orly
----

=== firewall configuration

Ensure your firewall allows the necessary ports:

[source,bash]
----
# For TLS-enabled relays
sudo ufw allow 80/tcp   # HTTP (ACME challenges)
sudo ufw allow 443/tcp  # HTTPS/WSS

# For non-TLS relays
sudo ufw allow 3334/tcp # Default ORLY port

# Enable firewall if not already enabled
sudo ufw enable
----

=== monitoring

Monitor your relay using systemd and standard Linux tools:

[source,bash]
----
# Service status and logs
sudo systemctl status orly
sudo journalctl -u orly -f

# Resource usage
htop
sudo ss -tulpn | grep orly

# Disk usage (database grows over time)
du -sh ~/.local/share/ORLY/

# Check TLS certificates (if using Let's Encrypt)
ls -la ~/.local/share/ORLY/autocert/
----

== stress testing

The stress tester is a tool for performance testing relay implementations under various load conditions.

=== usage

[source,bash]
----
cd cmd/stresstest
go run . [options]
----

Or use the compiled binary:

[source,bash]
----
./cmd/stresstest/stresstest [options]
----

=== options

* `--address` - Relay address (default: localhost)
* `--port` - Relay port (default: 3334)  
* `--workers` - Number of concurrent publisher workers (default: 8)
* `--duration` - How long to run the stress test (default: 60s)
* `--publish-timeout` - Timeout waiting for OK per publish (default: 15s)
* `--query-workers` - Number of concurrent query workers (default: 4)
* `--query-timeout` - Subscription timeout for queries (default: 3s)
* `--query-min-interval` - Minimum interval between queries per worker (default: 50ms)
* `--query-max-interval` - Maximum interval between queries per worker (default: 300ms)
* `--skip-cache` - Skip uploading example events before running

=== example

[source,bash]
----
# Run stress test against local relay for 2 minutes with 16 workers
go run cmd/stresstest/main.go --address localhost --port 3334 --workers 16 --duration 120s

# Test a remote relay with higher query load
go run cmd/stresstest/main.go --address relay.example.com --port 443 --query-workers 8 --duration 300s
----

The stress tester will show real-time statistics including events sent/received per second, query counts, and results.

== benchmarks

The benchmark suite provides comprehensive performance testing and comparison across multiple relay implementations.

=== quick start

1. **Setup external relays:**
+
[source,bash]
----
cd cmd/benchmark
./setup-external-relays.sh
----

2. **Run all benchmarks:**
+
[source,bash]
----
docker compose up --build
----

3. **View results:**
+
[source,bash]
----
# View aggregate report
cat reports/run_YYYYMMDD_HHMMSS/aggregate_report.txt

# List individual relay results
ls reports/run_YYYYMMDD_HHMMSS/
----

=== benchmark types

The suite includes three main benchmark patterns:

==== peak throughput test
Tests maximum event ingestion rate with concurrent workers pushing events as fast as possible. Measures events/second, latency distribution, and success rate.

==== burst pattern test  
Simulates real-world traffic with alternating high-activity bursts and quiet periods to test relay behavior under varying loads.
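
The burst idea itself is simple to sketch. The code below is not the benchmark suite, just an illustration of the pattern with a stand-in `publish_event` function:

[source,python]
----
# Sketch of a burst traffic pattern: alternate a paced high-rate burst with a
# quiet period. publish_event is a stand-in for a real websocket publisher.
import time

def publish_event(i: int) -> None:
    pass  # stand-in: a real test would publish a signed event here

def run_burst_pattern(cycles=5, burst_seconds=5, quiet_seconds=10, target_eps=500):
    sent = 0
    for _ in range(cycles):
        deadline = time.time() + burst_seconds
        while time.time() < deadline:
            publish_event(sent)
            sent += 1
            time.sleep(1.0 / target_eps)  # pace the burst at roughly target_eps
        time.sleep(quiet_seconds)         # quiet period between bursts
    return sent

if __name__ == "__main__":
    print("events sent:", run_burst_pattern(cycles=1, burst_seconds=1, quiet_seconds=0))
----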

==== mixed read/write test
Concurrent read and write operations to test query performance while events are being ingested. Measures combined throughput and latency.

=== tested relays

The benchmark suite compares:

* **next.orly.dev** (this repository) - BadgerDB-based relay
* **Khatru** - SQLite and Badger variants  
* **Relayer** - Basic example implementation
* **Strfry** - C++ LMDB-based relay
* **nostr-rs-relay** - Rust-based relay with SQLite

=== metrics reported

* **Throughput**: Events processed per second
* **Latency**: Average, P95, and P99 response times  
* **Success Rate**: Percentage of successful operations
* **Memory Usage**: Peak memory consumption during tests
* **Error Analysis**: Detailed error reporting and categorization

Results are timestamped and stored in the `reports/` directory for tracking performance improvements over time.
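
For reference, average/P95/P99 figures like those above can be derived from raw latency samples; a generic sketch (the suite's own aggregation may differ):

[source,python]
----
# Compute average, P95, and P99 latency from a list of raw samples (milliseconds).
import statistics

def summarize(latencies_ms):
    latencies_ms = sorted(latencies_ms)
    cuts = statistics.quantiles(latencies_ms, n=100)  # cut points P1..P99
    return {
        "avg_ms": statistics.mean(latencies_ms),
        "p95_ms": cuts[94],
        "p99_ms": cuts[98],
    }

print(summarize([12.0, 15.5, 9.8, 40.2, 22.1, 18.7, 11.3, 95.0, 14.9, 13.2]))
----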

== follows ACL

The follows ACL (Access Control List) system provides a flexible way to control relay access based on social relationships in the Nostr network. It grants different access levels to users based on whether they are followed by designated admin users.

=== how it works

The follows ACL system operates by:

1. **Admin Configuration**: Designated admin users are specified in the relay configuration
2. **Follow List Discovery**: The system fetches follow lists (kind 3 events) from admin users
3. **Access Level Assignment**:
   - **Admin access**: Users listed as admins get full administrative privileges
   - **Write access**: Users followed by any admin can publish events to the relay
   - **Read access**: All other users can only read events from the relay
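
To make the mapping concrete, the sketch below derives a write allowlist from the `p` tags of the admins' kind 3 follow-list events and classifies a pubkey. It is an illustration of the rule set above, not the relay's internal implementation:

[source,python]
----
# Classify access levels from admin follow lists (kind 3 events, NIP-02 "p" tags).
ADMINS = {"admin_pubkey_hex"}  # hex pubkeys of configured admins (placeholder)

def allowed_writers(admin_follow_events):
    """admin_follow_events: iterable of kind 3 events published by admin pubkeys."""
    follows = set()
    for event in admin_follow_events:
        if event.get("kind") != 3 or event.get("pubkey") not in ADMINS:
            continue
        follows.update(
            tag[1] for tag in event.get("tags", [])
            if tag and tag[0] == "p" and len(tag) > 1
        )
    return follows

def access_level(pubkey, writers):
    if pubkey in ADMINS:
        return "admin"
    if pubkey in writers:
        return "write"
    return "read"

follow_event = {"kind": 3, "pubkey": "admin_pubkey_hex", "tags": [["p", "friend_pubkey_hex"]]}
writers = allowed_writers([follow_event])
print(access_level("friend_pubkey_hex", writers))  # -> write
----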

=== configuration

Enable the follows ACL system by setting the ACL mode:

[source,bash]
----
export ORLY_ACL_MODE=follows
export ORLY_ADMINS=npub1abc...,npub1xyz...
----

Or in your environment configuration:

[source,env]
----
ORLY_ACL_MODE=follows
ORLY_ADMINS=npub1abc123...,npub1xyz456...
----

=== usage example

[source,bash]
----
# Set up a relay with follows ACL
export ORLY_ACL_MODE=follows
export ORLY_ADMINS=npub1fjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsmmleku

# Start the relay
./orly
----

The relay will automatically:

- Load the follow lists of the specified admin users
- Grant write access to anyone followed by these admins
- Provide read-only access to everyone else
- Update follow lists dynamically as admins modify their follows

Directories

app
config
Package config provides a go-simpler.org/env configuration table and helpers for working with the list of key/value lists stored in .env files.
cmd
aggregator command
benchmark command
convert command
policytest command
stresstest command
pkg
acl
crypto/ec
Package btcec implements support for the elliptic curves needed for bitcoin.
crypto/ec/base58
Package base58 provides an API for working with modified base58 and Base58Check encodings.
crypto/ec/bech32
Package bech32 provides a Go implementation of the bech32 format specified in BIP 173.
crypto/ec/chaincfg
Package chaincfg provides basic parameters for bitcoin chain and testnets.
crypto/ec/chainhash
Package chainhash provides abstracted hash functionality.
crypto/ec/ecdsa
Package ecdsa provides secp256k1-optimized ECDSA signing and verification.
crypto/ec/musig2
Package musig2 provides an implementation of the musig2 protocol for bitcoin.
crypto/ec/schnorr
Package schnorr provides custom Schnorr signing and verification via secp256k1.
crypto/ec/secp256k1
Package secp256k1 implements optimized secp256k1 elliptic curve operations in pure Go.
crypto/ec/secp256k1/precomps command
Package main provides a generator for precomputed constants for secp256k1 signatures.
crypto/ec/taproot
Package taproot provides a collection of tools for encoding bitcoin taproot addresses.
crypto/ec/wire
Package wire contains a set of data structure definitions for the bitcoin blockchain.
crypto/encryption
Package encryption contains the message encryption schemes defined in NIP-04 and NIP-44, used for encrypting the content of nostr messages.
crypto/keys
Package keys is a set of helpers for generating and converting public/secret keys to hex and back to binary.
crypto/p256k
Package p256k is a signer interface that (by default) uses the bitcoin/libsecp256k1 library for fast signature creation and verification of the BIP-340 nostr X-only signatures and public keys, and ECDH.
crypto/sha256
Package sha256 is taken from github.com/minio/sha256-simd, implementing, where available, an accelerated SIMD implementation of sha256.
encoders/bech32encoding
Package bech32encoding implements NIP-19 entities, which are bech32 encoded data that describes nostr data types.
encoders/bech32encoding/pointers
Package pointers is a set of basic nip-19 data types for generating bech32 encoded nostr entities.
encoders/bech32encoding/tlv
Package tlv implements a simple Type Length Value encoder for nostr NIP-19 bech32 encoded entities.
encoders/envelopes
Package envelopes provides common functions for marshaling and identifying nostr envelopes (JSON arrays containing protocol messages).
encoders/envelopes/authenvelope
Package authenvelope defines the auth challenge (relay message) and response (client message) of the NIP-42 authentication protocol.
encoders/envelopes/closedenvelope
Package closedenvelope defines the nostr message type CLOSED which is sent from a relay to indicate the relay-side termination of a subscription or the demand for authentication associated with a subscription.
encoders/envelopes/closeenvelope
Package closeenvelope provides the encoder for the client message CLOSE which is a request to terminate a subscription.
encoders/envelopes/countenvelope
Package countenvelope is an encoder for the COUNT request (client) and response (relay) message types.
encoders/envelopes/eoseenvelope
Package eoseenvelope provides an encoder for the EOSE (End Of Stored Events) event that signifies that a REQ has found all stored events and from here on the request morphs into a subscription, until the limit, if requested, or until CLOSE or CLOSED.
encoders/envelopes/eventenvelope
Package eventenvelope is a codec for the event Submission request EVENT envelope (client) and event Result (to a REQ) from a relay.
encoders/envelopes/messages
Package messages is a collection of example/common messages and machine-readable prefixes to use with OK and CLOSED envelopes.
encoders/envelopes/noticeenvelope
Package noticeenvelope is a codec for the NOTICE envelope, which is used to serve (mostly ignored) messages that are supposed to be shown to a user in the client.
encoders/envelopes/okenvelope
Package okenvelope is a codec for the OK message, which is an acknowledgement for an EVENT eventenvelope.Submission, containing true/false and if false a message with a machine readable error type as found in the messages package.
encoders/envelopes/reqenvelope
Package reqenvelope is a message from a client to a relay containing a subscription identifier and an array of filters to search for events.
encoders/event/examples
Package examples is an embedded jsonl format of a collection of events intended to be used to test an event codec.
encoders/hex
Package hex is a set of aliases and helpers for using the templexxx SIMD hex encoder.
encoders/ints
Package ints is an optimised encoder for decimal numbers in ASCII format, that simplifies and accelerates encoding and decoding decimal strings.
encoders/ints/gen command
Package main is a generator for the base10000 (4 digit) encoding of the ints library.
encoders/kind
Package kind includes a type for convenient handling of event kinds, and a kind database with reverse lookup for human-readable information about event kinds.
encoders/tag
Package tag provides an implementation of a nostr tag list, an array of strings with a usually single letter first "key" field, including methods to compare, marshal/unmarshal and access elements with their proper semantics.
encoders/tag/atag
Package atag implements a special, optimized handling for keeping a tags (address) in a more memory efficient form while working with these tags.
encoders/timestamp
Package timestamp is a set of helpers for working with timestamps including encoding and conversion to various integer forms, from time.Time and varints.
encoders/varint
Package varint is a variable integer encoding that works in reverse compared to the stdlib binary Varint.
interfaces/acl
Package acl is an interface for implementing arbitrary access control lists.
interfaces/signer
Package signer defines server for management of signatures, used to abstract the signature algorithm from the usage.
interfaces/store
Package store is an interface and ancillary helpers and types for defining a series of API elements for abstracting the event storage from the implementation.
interfaces/typer
Package typer is an interface for server to use to identify their type simply for aggregating multiple self-registered server such that the top level can recognise the type of a message and match it to the type of handler.
protocol/directory
Package directory implements the distributed directory consensus protocol as defined in NIP-XX for Nostr relay operators.
protocol/directory-client
Package directory_client provides a client library for the Distributed Directory Consensus Protocol (NIP-XX).
protocol/httpauth
Package httpauth provides helpers and encoders for nostr NIP-98 HTTP authentication header messages and a new JWT authentication message and delegation event kind 13004 that enables time limited expiring delegations of authentication (as with NIP-42 auth) for the HTTP API.
utils/apputil
Package apputil provides utility functions for file and directory operations.
utils/atomic
Package atomic provides simple wrappers around numerics to enforce atomic access.
utils/atomic/internal/gen-atomicint command
gen-atomicint generates an atomic wrapper around an integer type.
utils/atomic/internal/gen-atomicwrapper command
gen-atomicwrapper generates wrapper types around other atomic types.
utils/interrupt
Package interrupt is a library for providing handling for Ctrl-C/Interrupt handling and triggering callbacks for such things as closing files, flushing buffers, and other elements of graceful shutdowns.
utils/normalize
Package normalize is a set of tools for cleaning up URLs and formatting nostr OK and CLOSED messages.
utils/number
Package number implements a simple number list, used with relayinfo package for NIP support lists.
utils/qu
Package qu is a library for making handling signal (chan struct{}) channels simpler, as well as monitoring the state of the signal channels in an application.
utils/units
Package units is a convenient set of names designating data sizes in bytes using common ISO names (base 10).
