Documentation
Index
Constants
This section is empty.
Variables
var ErrSessionBadSignature = errors.New("session signature invalid")
ErrSessionBadSignature is returned by Verify for cookies whose HMAC does not match. This covers both tampered payloads and cookies signed by a previous AdminSessionSigner instance (i.e., issued before a key rotation).
var ErrSessionExpired = errors.New("session expired")
ErrSessionExpired is returned by Verify for cookies whose embedded expiry has passed.
var ErrSessionIPMismatch = errors.New("session IP mismatch")
ErrSessionIPMismatch is returned when the cookie was issued for a different IP than the one presenting it, so a stolen cookie cannot be replayed from another network.
var ErrSessionMalformed = errors.New("session payload malformed")
ErrSessionMalformed wraps decoding errors so a corrupt cookie has a distinct sentinel from a tampered one.
Functions
func CompareAdminSecret
CompareAdminSecret returns true when stored and presented secrets match in constant time. Empty stored secret always returns false so a misconfigured admin_secret cannot accidentally accept any caller.
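A minimal sketch of that contract. compareSecret below is a hypothetical stand-in, not the package's implementation; it only illustrates the constant-time comparison and the empty-secret guard described above.

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// compareSecret is a hypothetical stand-in for CompareAdminSecret:
// constant-time equality, with an empty stored secret always rejected
// so a misconfigured admin_secret never accepts arbitrary callers.
func compareSecret(stored, presented string) bool {
	if stored == "" {
		return false // misconfiguration guard: never match on empty secret
	}
	return subtle.ConstantTimeCompare([]byte(stored), []byte(presented)) == 1
}

func main() {
	fmt.Println(compareSecret("hunter2", "hunter2")) // true
	fmt.Println(compareSecret("", ""))               // false: empty stored secret rejected
}
```

subtle.ConstantTimeCompare runs in time dependent only on input length, which is what prevents timing attacks from leaking how many leading bytes of the secret were correct.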
Types
type AdminSessionSigner
type AdminSessionSigner struct {
// contains filtered or unexported fields
}
AdminSessionSigner mints and verifies the signed cookies that let authenticated operators bypass the PoW. The signing key is generated on construction, so rebuilding the signer (e.g., on daemon restart) invalidates every previously issued cookie; that is the rotation contract.
func NewAdminSessionSigner
func NewAdminSessionSigner(ttl time.Duration) (*AdminSessionSigner, error)
NewAdminSessionSigner generates a fresh 32-byte signing key. The caller must keep the returned pointer for the lifetime of the challenge server; constructing a replacement signer for the same server invalidates every already-issued cookie mid-session.
func (*AdminSessionSigner) Issue
func (s *AdminSessionSigner) Issue(ip string) string
Issue returns a cookie value of the form "<base64(payload)>.<base64 hmac>". The payload binds the cookie to a single IP and a single expiry so a stolen cookie does not work elsewhere or after the TTL.
func (*AdminSessionSigner) TTL
func (s *AdminSessionSigner) TTL() time.Duration
TTL exposes the configured cookie lifetime so the server can set the matching Max-Age on the Set-Cookie header.
func (*AdminSessionSigner) Verify
func (s *AdminSessionSigner) Verify(cookieValue, ip string) error
Verify checks the HMAC, payload format, expiry, and IP binding. Use errors.Is to branch on the failure mode.
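The errors.Is pattern might look like the following. The sentinel variables here are local stand-ins mirroring the package's exported errors, and the policy choices in classify (what to re-challenge versus log) are illustrative, not prescribed by the package.

```go
package main

import (
	"errors"
	"fmt"
)

// Local stand-ins mirroring the package's exported sentinels.
var (
	errBadSignature = errors.New("session signature invalid")
	errExpired      = errors.New("session expired")
	errIPMismatch   = errors.New("session IP mismatch")
	errMalformed    = errors.New("session payload malformed")
)

// classify shows the errors.Is branching a Verify caller would do.
func classify(err error) string {
	switch {
	case err == nil:
		return "ok"
	case errors.Is(err, errExpired), errors.Is(err, errIPMismatch):
		// Routine failures: the operator simply re-authenticates.
		return "re-challenge"
	case errors.Is(err, errBadSignature), errors.Is(err, errMalformed):
		// Tampering or corruption: worth logging before rejecting.
		return "log-and-reject"
	default:
		return "reject"
	}
}

func main() {
	fmt.Println(classify(errExpired))      // re-challenge
	fmt.Println(classify(errBadSignature)) // log-and-reject
}
```

Using errors.Is rather than == keeps the branch working even if the package later wraps the sentinels with additional context.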
type CaptchaProvider
type CaptchaProvider interface {
Name() string
Verify(ctx context.Context, token, remoteIP string) (bool, error)
}
CaptchaProvider verifies a third-party CAPTCHA token. Implementations post the operator's secret and the visitor's response token to the provider's siteverify endpoint and return a single bool: whether the provider accepted this submission.
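A sketch of an implementation satisfying that interface. demoProvider, its field names, and the JSON response shape follow the common siteverify convention but are assumptions, not this package's code; the httptest server stands in for a real provider so the round trip runs offline.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"net/url"
	"strings"
	"time"
)

// demoProvider is a hypothetical CaptchaProvider implementation.
type demoProvider struct {
	name, secret, endpoint string
	client                 *http.Client
}

func (p *demoProvider) Name() string { return p.name }

// Verify posts secret + response token + remote IP as a form and reads
// the conventional {"success": bool} reply.
func (p *demoProvider) Verify(ctx context.Context, token, remoteIP string) (bool, error) {
	form := url.Values{"secret": {p.secret}, "response": {token}, "remoteip": {remoteIP}}
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, p.endpoint,
		strings.NewReader(form.Encode()))
	if err != nil {
		return false, err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	resp, err := p.client.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var body struct {
		Success bool `json:"success"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return false, err
	}
	return body.Success, nil
}

// roundTrip spins up a fake siteverify endpoint that accepts only the
// token "good", then runs one verification against it.
func roundTrip(token string) (bool, error) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.ParseForm()
		fmt.Fprintf(w, `{"success": %t}`, r.FormValue("response") == "good")
	}))
	defer srv.Close()
	p := &demoProvider{name: "demo", secret: "s3cret", endpoint: srv.URL,
		client: &http.Client{Timeout: 5 * time.Second}}
	return p.Verify(context.Background(), token, "203.0.113.7")
}

func main() {
	ok, err := roundTrip("good")
	fmt.Println(ok, err) // true <nil>
}
```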
func NewCaptchaProvider
func NewCaptchaProvider(name, secret string, timeout time.Duration) (CaptchaProvider, error)
NewCaptchaProvider returns the provider matching the configured name. It returns (nil, nil) when the operator has not enabled CAPTCHA; the server treats a nil provider as "feature off".
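The "nil means feature off" contract in caller terms. The interface and helper names here are local stand-ins for illustration, not the package's symbols.

```go
package main

import (
	"context"
	"fmt"
)

// captchaProvider mirrors the CaptchaProvider interface locally.
type captchaProvider interface {
	Name() string
	Verify(ctx context.Context, token, remoteIP string) (bool, error)
}

// staticProvider is a trivial always-accept implementation for the demo.
type staticProvider struct{}

func (staticProvider) Name() string { return "demo" }
func (staticProvider) Verify(ctx context.Context, token, remoteIP string) (bool, error) {
	return true, nil
}

// challengeMode sketches the server-side branch: a nil provider (the
// (nil, nil) constructor result) disables the CAPTCHA step entirely.
func challengeMode(p captchaProvider) string {
	if p == nil {
		return "pow-only" // CAPTCHA feature off
	}
	return "pow+captcha:" + p.Name()
}

func main() {
	var p captchaProvider // operator left CAPTCHA unconfigured
	fmt.Println(challengeMode(p))                // pow-only
	fmt.Println(challengeMode(staticProvider{})) // pow+captcha:demo
}
```

Returning (nil, nil) rather than an error keeps "not configured" distinct from "misconfigured": only an unknown name or bad secret should surface as a startup failure.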
type CrawlerVerifier
type CrawlerVerifier struct {
// contains filtered or unexported fields
}
CrawlerVerifier classifies an IP as a verified search crawler if and only if the IP's reverse-DNS PTR matches one of the configured suffixes AND the PTR forward-resolves back to the same IP. The verifier caches both positive and negative results: the positive cache TTL is the configured cacheTTL, and the negative TTL is one-fifth of that, so a transiently broken resolver cannot lock out a legitimate crawler for long.
func NewCrawlerVerifier
func NewCrawlerVerifier(providers []string, cacheTTL time.Duration, resolver Resolver) *CrawlerVerifier
NewCrawlerVerifier builds a verifier with the named crawler families enabled. Unknown names are ignored (operators may have configured a family this binary does not know about; that is harmless).
func (*CrawlerVerifier) Enabled
func (v *CrawlerVerifier) Enabled() bool
Enabled reports whether at least one crawler family is configured; the server uses this to skip the verifier entirely (no DNS round trip) when the operator has not opted in.
func (*CrawlerVerifier) Verified
func (v *CrawlerVerifier) Verified(ctx context.Context, ip string) bool
Verified performs the reverse-DNS lookup plus forward confirmation for ip and caches the result. It returns true only when the PTR ends in one of the allowed suffixes AND a forward lookup of that PTR includes ip in its result set.
type ExpiredEntry
ExpiredEntry is returned by ExpiredEntries for escalation.
type IPList
type IPList struct {
// contains filtered or unexported fields
}
IPList manages the set of IPs that should see challenge pages. Apache RewriteMap reads the file to redirect IPs to the challenge server.
func (*IPList) CleanExpired
func (l *IPList) CleanExpired()
CleanExpired removes expired entries without returning them. Use ExpiredEntries() instead when escalation is needed.
func (*IPList) ExpiredEntries
func (l *IPList) ExpiredEntries() []ExpiredEntry
ExpiredEntries removes and returns all expired entries for escalation. The caller is expected to hard-block these IPs.
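The caller's side of that contract might look like the following. expiredEntry's field names and the hardBlocker interface are assumptions for illustration; the real ExpiredEntry's fields are not shown in this documentation.

```go
package main

import (
	"fmt"
	"time"
)

// expiredEntry stands in for ExpiredEntry; field names are assumed.
type expiredEntry struct {
	IP        string
	ExpiredAt time.Time
}

// hardBlocker abstracts whatever firewall integration the daemon uses
// to permanently block an IP.
type hardBlocker interface{ Block(ip string) }

// logBlocker records blocks instead of touching a firewall, for the demo.
type logBlocker struct{ blocked []string }

func (b *logBlocker) Block(ip string) { b.blocked = append(b.blocked, ip) }

// escalate shows the documented contract: drain the expired entries
// and hard-block each one.
func escalate(entries []expiredEntry, b hardBlocker) {
	for _, e := range entries {
		b.Block(e.IP)
	}
}

func main() {
	b := &logBlocker{}
	escalate([]expiredEntry{{IP: "203.0.113.7", ExpiredAt: time.Now()}}, b)
	fmt.Println(b.blocked) // [203.0.113.7]
}
```

Draining and returning in one call keeps the operation race-free: an entry is either still in the challenge list or already handed to the escalator, never both.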
type IPUnblocker
IPUnblocker is the interface for temporarily allowing an IP.
type Resolver
type Resolver interface {
LookupAddr(ctx context.Context, addr string) (names []string, err error)
LookupHost(ctx context.Context, host string) (addrs []string, err error)
}
Resolver matches the subset of net.Resolver that CrawlerVerifier uses, so tests can swap in a fake without spinning up a real DNS server.
type Server
type Server struct {
// contains filtered or unexported fields
}
Server serves challenge pages to gray-listed IPs. When an IP passes the challenge, the server grants it a temporary allow through the configured IPUnblocker.
func New
func New(cfg *config.Config, unblocker IPUnblocker, ipList *IPList) *Server
New creates a challenge server.
func (*Server) CleanExpired
func (s *Server) CleanExpired()
CleanExpired removes old verification records, prunes the admin-failure log, and evicts stale crawler-cache entries. Called from the daemon's challengeEscalator ticker every 60 seconds; under a sustained scan from many source IPs, this is the only thing keeping per-IP map entries from accumulating until restart.