fastrpc

An experimental fork of valyala/fastrpc, intended for testing against high-latency p2p networks.

Features

  • Optimized for speed.
  • Zero memory allocations in hot paths.
  • Compression saves network bandwidth.

How does it work?

It sends batched rpc requests and responses over a single compressed connection. This addresses the following issues (see the sketch after this list):

  • High network bandwidth usage.
  • High network packets rate.
  • A lot of open TCP connections.
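
A minimal end-to-end sketch of this flow is below. The echoRequest, echoResponse, and echoCtx types are hypothetical implementations of the package's RequestWriter, ResponseReader, and HandlerCtx interfaces that frame every message with a 4-byte length prefix; the Handshake fields are left unset, so add one on both ends if your deployment negotiates compression or encryption at connection setup.

package main

import (
	"bufio"
	"encoding/binary"
	"fmt"
	"io"
	"log"
	"net"
	"time"

	"github.com/iwasaki-kenta/fastrpc"
	"github.com/valyala/fasthttp"
)

// writeFrame and readFrame give requests and responses a symmetric,
// self-delimiting wire format: a 4-byte big-endian length prefix, then the payload.
func writeFrame(bw *bufio.Writer, p []byte) error {
	var n [4]byte
	binary.BigEndian.PutUint32(n[:], uint32(len(p)))
	if _, err := bw.Write(n[:]); err != nil {
		return err
	}
	_, err := bw.Write(p)
	return err
}

func readFrame(br *bufio.Reader) ([]byte, error) {
	var n [4]byte
	if _, err := io.ReadFull(br, n[:]); err != nil {
		return nil, err
	}
	p := make([]byte, binary.BigEndian.Uint32(n[:]))
	_, err := io.ReadFull(br, p)
	return p, err
}

// echoRequest and echoResponse are hypothetical implementations of the
// RequestWriter and ResponseReader interfaces.
type echoRequest struct{ payload []byte }

func (r *echoRequest) WriteRequest(bw *bufio.Writer) error { return writeFrame(bw, r.payload) }

type echoResponse struct{ payload []byte }

func (r *echoResponse) ReadResponse(br *bufio.Reader) (err error) {
	r.payload, err = readFrame(br)
	return
}

// echoCtx is a hypothetical HandlerCtx: it reads one framed request and
// echoes the same payload back as the response.
type echoCtx struct{ payload []byte }

func (ctx *echoCtx) ConcurrencyLimitError(concurrency int)      { ctx.payload = []byte("busy") }
func (ctx *echoCtx) Init(conn net.Conn, logger fasthttp.Logger) { ctx.payload = ctx.payload[:0] }
func (ctx *echoCtx) ReadRequest(br *bufio.Reader) (err error) {
	ctx.payload, err = readFrame(br)
	return
}
func (ctx *echoCtx) WriteResponse(bw *bufio.Writer) error { return writeFrame(bw, ctx.payload) }

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}

	s := &fastrpc.Server{
		NewHandlerCtx: func() fastrpc.HandlerCtx { return &echoCtx{} },
		// The handler just echoes; returning the same ctx is allowed because
		// it is not used after the handler returns.
		Handler: func(ctx fastrpc.HandlerCtx) fastrpc.HandlerCtx { return ctx },
	}
	go func() { log.Fatal(s.Serve(ln)) }()

	c := &fastrpc.Client{
		NewResponse: func() fastrpc.ResponseReader { return &echoResponse{} },
		Addr:        ln.Addr().String(),
	}

	req := &echoRequest{payload: []byte("hello")}
	var resp echoResponse
	if err := c.DoDeadline(req, &resp, time.Now().Add(time.Second)); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("echoed: %s\n", resp.payload)
}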

Benchmark results

GOMAXPROCS=1 go test -bench=. -benchmem
goos: linux
goarch: amd64
pkg: github.com/iwasaki-kenta/fastrpc
BenchmarkCoarseTimeNow          347397258                3.39 ns/op            0 B/op          0 allocs/op
BenchmarkTimeNow                22361860                51.9 ns/op             0 B/op          0 allocs/op
BenchmarkEndToEndNoDelay1         363763              2831 ns/op          92.19 MB/s           0 B/op          0 allocs/op
BenchmarkEndToEndNoDelay10        415294              2820 ns/op          92.55 MB/s           5 B/op          0 allocs/op
BenchmarkEndToEndNoDelay100       423319              2915 ns/op          89.54 MB/s           6 B/op          0 allocs/op
BenchmarkEndToEndNoDelay1000      388750              2809 ns/op          92.91 MB/s          17 B/op          0 allocs/op
BenchmarkEndToEndNoDelay10K       400630              2903 ns/op          89.90 MB/s          91 B/op          0 allocs/op
BenchmarkEndToEndDelay1ms         463946              2352 ns/op         110.95 MB/s          13 B/op          0 allocs/op
BenchmarkEndToEndDelay2ms         381465              2832 ns/op          92.16 MB/s          15 B/op          0 allocs/op
BenchmarkEndToEndDelay4ms         196458              5626 ns/op          46.39 MB/s          31 B/op          0 allocs/op
BenchmarkEndToEndDelay8ms         124017              9416 ns/op          27.72 MB/s          47 B/op          0 allocs/op
BenchmarkEndToEndDelay16ms         59052             17158 ns/op          15.21 MB/s         100 B/op          0 allocs/op
BenchmarkEndToEnd                 476716              2739 ns/op          95.30 MB/s          12 B/op          0 allocs/op
BenchmarkEndToEndPipeline1        479089              2745 ns/op          95.10 MB/s           0 B/op          0 allocs/op
BenchmarkEndToEndPipeline10       445239              2660 ns/op          98.12 MB/s           4 B/op          0 allocs/op
BenchmarkEndToEndPipeline100      486576              2533 ns/op         103.02 MB/s           5 B/op          0 allocs/op
BenchmarkEndToEndPipeline1000     391005              2862 ns/op          91.19 MB/s          16 B/op          0 allocs/op
BenchmarkSendNowait              3698457               320 ns/op               0 B/op          0 allocs/op

GOMAXPROCS=4 go test -bench=. -benchmem
goos: linux
goarch: amd64
pkg: github.com/iwasaki-kenta/fastrpc
BenchmarkCoarseTimeNow-4                1000000000               0.868 ns/op           0 B/op          0 allocs/op
BenchmarkTimeNow-4                      77040884                13.8 ns/op             0 B/op          0 allocs/op
BenchmarkEndToEndNoDelay1-4              1000000              1040 ns/op         250.99 MB/s           1 B/op          0 allocs/op
BenchmarkEndToEndNoDelay10-4             1213870              1061 ns/op         245.99 MB/s           1 B/op          0 allocs/op
BenchmarkEndToEndNoDelay100-4            1225502               986 ns/op         264.69 MB/s           2 B/op          0 allocs/op
BenchmarkEndToEndNoDelay1000-4           1126610              1096 ns/op         238.07 MB/s           8 B/op          0 allocs/op
BenchmarkEndToEndNoDelay10K-4            1012008              1123 ns/op         232.42 MB/s          41 B/op          0 allocs/op
BenchmarkEndToEndDelay1ms-4              1150018              1024 ns/op         254.77 MB/s           7 B/op          0 allocs/op
BenchmarkEndToEndDelay2ms-4              1088492              1163 ns/op         224.48 MB/s           7 B/op          0 allocs/op
BenchmarkEndToEndDelay4ms-4               745650              1477 ns/op         176.70 MB/s          10 B/op          0 allocs/op
BenchmarkEndToEndDelay8ms-4               343885              2948 ns/op          88.52 MB/s          23 B/op          0 allocs/op
BenchmarkEndToEndDelay16ms-4              226666              5369 ns/op          48.61 MB/s          35 B/op          0 allocs/op
BenchmarkEndToEnd-4                      1204228               974 ns/op         267.85 MB/s           7 B/op          0 allocs/op
BenchmarkEndToEndPipeline1-4             1469049               857 ns/op         304.64 MB/s           0 B/op          0 allocs/op
BenchmarkEndToEndPipeline10-4            1440090               871 ns/op         299.54 MB/s           1 B/op          0 allocs/op
BenchmarkEndToEndPipeline100-4           1341830               864 ns/op         302.15 MB/s           2 B/op          0 allocs/op
BenchmarkEndToEndPipeline1000-4          1397589               901 ns/op         289.65 MB/s           6 B/op          0 allocs/op
BenchmarkSendNowait-4                    6482055               178 ns/op               0 B/op          0 allocs/op

Documentation

Constants

const (
	// DefaultMaxPendingRequests is the default number of pending requests
	// a single Client may queue before sending them to the server.
	//
	// This parameter may be overridden by Client.MaxPendingRequests.
	DefaultMaxPendingRequests = 1000

	// DefaultConcurrency is the default maximum number of concurrent
	// Server.Handler goroutines the server may run.
	DefaultConcurrency = 10000

	// DefaultHandshakeTimeout is the default duration after which an
	// unfinished handshake is considered to have failed.
	DefaultHandshakeTimeout = 3 * time.Second

	// DefaultReadBufferSize is the default size for read buffers.
	DefaultReadBufferSize = 64 * 1024

	// DefaultWriteBufferSize is the default size for write buffers.
	DefaultWriteBufferSize = 64 * 1024
)
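
Each of these defaults is only a fallback: the corresponding Client and Server fields override them when set. A minimal sketch, where newTunedClient and newResponse are hypothetical names, showing a client tuned for a deeper request queue and larger buffers:

package example

import "github.com/iwasaki-kenta/fastrpc"

// newTunedClient is an illustrative helper (not part of the package): unset
// fields fall back to the defaults above, so only the values being changed
// need to be set. newResponse is a hypothetical ResponseReader constructor.
func newTunedClient(addr string, newResponse func() fastrpc.ResponseReader) *fastrpc.Client {
	return &fastrpc.Client{
		NewResponse:        newResponse,
		Addr:               addr,
		MaxPendingRequests: 4 * fastrpc.DefaultMaxPendingRequests, // deeper queue for high-latency links
		ReadBufferSize:     2 * fastrpc.DefaultReadBufferSize,
		WriteBufferSize:    2 * fastrpc.DefaultWriteBufferSize,
	}
}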

Variables

var (
	// ErrTimeout is returned from timed out calls.
	ErrTimeout = fasthttp.ErrTimeout

	// ErrPendingRequestsOverflow is returned when Client cannot send
	// more requests to the server due to Client.MaxPendingRequests limit.
	ErrPendingRequestsOverflow = errors.New("pending requests overflowed")
)

Types

type Client

type Client struct {
	// NewResponse must return new response object.
	NewResponse func() ResponseReader

	// Addr is the Server address to connect to.
	Addr string

	// Dial is a custom function used for connecting to the Server.
	//
	// fasthttp.Dial is used by default.
	Dial func(addr string) (net.Conn, error)

	// Handshake, if set, is invoked on a newly established connection
	// and may return a replacement connection to use in its place.
	Handshake func(conn net.Conn) (net.Conn, error)

	// HandshakeTimeout is the maximum duration allowed for Handshake.
	//
	// DefaultHandshakeTimeout is used by default.
	HandshakeTimeout time.Duration

	// MaxPendingRequests is the maximum number of pending requests
	// the client may issue until the server responds to them.
	//
	// DefaultMaxPendingRequests is used by default.
	MaxPendingRequests int

	// MaxBatchDelay is the maximum duration before pending requests
	// are sent to the server.
	//
	// Requests' batching may reduce network bandwidth usage and CPU usage.
	//
	// By default requests are sent immediately to the server.
	MaxBatchDelay time.Duration

	// Maximum duration for full response reading (including body).
	//
	// This also limits idle connection lifetime duration.
	//
	// By default response read timeout is unlimited.
	ReadTimeout time.Duration

	// Maximum duration for full request writing (including body).
	//
	// By default request write timeout is unlimited.
	WriteTimeout time.Duration

	// ReadBufferSize is the size for read buffer.
	//
	// DefaultReadBufferSize is used by default.
	ReadBufferSize int

	// WriteBufferSize is the size for write buffer.
	//
	// DefaultWriteBufferSize is used by default.
	WriteBufferSize int

	// Prioritizes new requests over older pending requests once the
	// MaxPendingRequests limit is reached.
	PrioritizeNewRequests bool

	OnMessageSent func(conn net.Conn)
	OnMessageRecv func(conn net.Conn)
	// contains filtered or unexported fields
}

Client sends rpc requests to the Server over a single connection.

Use multiple clients to establish multiple connections to the server if processing a single connection consumes 100% of a single CPU core on either a multi-core client or server.
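
The OnMessageSent and OnMessageRecv hooks can be used for lightweight instrumentation, e.g. to observe how often batches are flushed over a high-latency link. A minimal sketch, assuming the hooks fire once per message written to or read from the connection (the fork does not document their exact semantics):

package example

import (
	"net"
	"sync/atomic"

	"github.com/iwasaki-kenta/fastrpc"
)

// instrument is an illustrative helper (not part of the package): it attaches
// atomic counters to the client's message callbacks.
func instrument(c *fastrpc.Client, sent, recv *uint64) {
	c.OnMessageSent = func(conn net.Conn) { atomic.AddUint64(sent, 1) }
	c.OnMessageRecv = func(conn net.Conn) { atomic.AddUint64(recv, 1) }
}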

func (*Client) Close

func (c *Client) Close()

func (*Client) Conn

func (c *Client) Conn() net.Conn

func (*Client) DoDeadline

func (c *Client) DoDeadline(req RequestWriter, resp ResponseReader, deadline time.Time) error

DoDeadline sends the given request to the server set in Client.Addr.

ErrTimeout is returned if the server doesn't return a response by the given deadline.
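
A minimal sketch of a call helper, assuming callWithTimeout is application code; it maps the two documented error values to log messages and leaves everything else to the caller:

package example

import (
	"errors"
	"log"
	"time"

	"github.com/iwasaki-kenta/fastrpc"
)

// callWithTimeout is an illustrative helper (not part of the package): it
// issues one RPC with a per-call deadline and distinguishes the two
// documented error values from other failures.
func callWithTimeout(c *fastrpc.Client, req fastrpc.RequestWriter,
	resp fastrpc.ResponseReader, timeout time.Duration) error {

	err := c.DoDeadline(req, resp, time.Now().Add(timeout))
	switch {
	case err == nil:
		return nil
	case errors.Is(err, fastrpc.ErrTimeout):
		log.Printf("rpc timed out after %v", timeout)
	case errors.Is(err, fastrpc.ErrPendingRequestsOverflow):
		log.Printf("too many pending requests; consider raising MaxPendingRequests")
	}
	return err
}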

func (*Client) PendingRequests

func (c *Client) PendingRequests() int

PendingRequests returns the number of pending requests at the moment.

This function may be used either for informational purposes or for load balancing purposes.
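
A minimal load-balancing sketch, assuming leastLoaded is application code and each client targets a different peer:

package example

import "github.com/iwasaki-kenta/fastrpc"

// leastLoaded is an illustrative helper (not part of the package): it picks
// the client with the fewest requests currently awaiting a response.
// It assumes len(clients) > 0.
func leastLoaded(clients []*fastrpc.Client) *fastrpc.Client {
	best := clients[0]
	for _, c := range clients[1:] {
		if c.PendingRequests() < best.PendingRequests() {
			best = c
		}
	}
	return best
}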

func (*Client) SendNowait

func (c *Client) SendNowait(req RequestWriter, releaseReq func(req RequestWriter)) bool

SendNowait schedules the given request for sending to the server set in Client.Addr.

req cannot be used after SendNowait returns and until releaseReq is called. releaseReq is called when the req is no longer needed and may be re-used.

req cannot be re-used if releaseReq is nil.

Returns true if the request is successfully scheduled for sending, otherwise returns false.

Response for the given request is ignored.
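
A minimal fire-and-forget sketch, assuming pingRequest, reqPool, and fireAndForget are application code; the request object is recycled through a sync.Pool once releaseReq reports that fastrpc no longer needs it:

package example

import (
	"bufio"
	"sync"

	"github.com/iwasaki-kenta/fastrpc"
)

// pingRequest is a hypothetical RequestWriter; the wire framing of its
// payload is up to the application.
type pingRequest struct{ payload []byte }

func (r *pingRequest) WriteRequest(bw *bufio.Writer) error {
	_, err := bw.Write(r.payload)
	return err
}

// reqPool recycles request objects; this is safe because fastrpc only calls
// releaseReq once the request is no longer needed.
var reqPool = sync.Pool{New: func() interface{} { return &pingRequest{} }}

// fireAndForget is an illustrative helper (not part of the package): it
// schedules a request without waiting for a response and reports whether it
// was queued before the MaxPendingRequests limit was hit.
func fireAndForget(c *fastrpc.Client, payload []byte) bool {
	req := reqPool.Get().(*pingRequest)
	req.payload = append(req.payload[:0], payload...)
	return c.SendNowait(req, func(r fastrpc.RequestWriter) {
		reqPool.Put(r)
	})
}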

type HandlerCtx

type HandlerCtx interface {
	// ConcurrencyLimitError must set the response
	// to 'concurrency limit exceeded' error.
	ConcurrencyLimitError(concurrency int)

	// Init must prepare ctx for reading the next request.
	Init(conn net.Conn, logger fasthttp.Logger)

	// ReadRequest must read request from br.
	ReadRequest(br *bufio.Reader) error

	// WriteResponse must write response to bw.
	WriteResponse(bw *bufio.Writer) error
}

HandlerCtx is the interface for the context passed to Server.Handler.

type RequestWriter

type RequestWriter interface {
	// WriteRequest must write request to bw.
	WriteRequest(bw *bufio.Writer) error
}

RequestWriter is an interface for writing an rpc request to a buffered writer.

type ResponseReader

type ResponseReader interface {
	// ReadResponse must read response from br.
	ReadResponse(br *bufio.Reader) error
}

ResponseReader is an interface for reading an rpc response from a buffered reader.

type Server

type Server struct {
	// NewHandlerCtx must return new HandlerCtx
	NewHandlerCtx func() HandlerCtx

	// Handler must process incoming requests.
	//
	// The handler must return either ctx passed to the call
	// or new non-nil ctx.
	//
	// The handler may return ctx passed to the call only if the ctx
	// is no longer used after returning from the handler.
	// Otherwise new ctx must be returned.
	Handler func(ctx HandlerCtx) HandlerCtx

	// Handshake, if set, is invoked on a newly accepted connection
	// and may return a replacement connection to use in its place.
	Handshake func(conn net.Conn) (net.Conn, error)

	// HandshakeTimeout is the maximum duration allowed for Handshake.
	//
	// DefaultHandshakeTimeout is used by default.
	HandshakeTimeout time.Duration

	// Concurrency is the maximum number of concurrent Server.Handler
	// goroutines the server may run.
	//
	// DefaultConcurrency is used by default.
	Concurrency int

	// MaxBatchDelay is the maximum duration before ready responses
	// are sent to the client.
	//
	// Responses' batching may reduce network bandwidth usage and CPU usage.
	//
	// By default responses are sent immediately to the client.
	MaxBatchDelay time.Duration

	// Maximum duration for reading the full request (including body).
	//
	// This also limits the maximum lifetime for idle connections.
	//
	// By default request read timeout is unlimited.
	ReadTimeout time.Duration

	// Maximum duration for writing the full response (including body).
	//
	// By default response write timeout is unlimited.
	WriteTimeout time.Duration

	// ReadBufferSize is the size for read buffer.
	//
	// DefaultReadBufferSize is used by default.
	ReadBufferSize int

	// WriteBufferSize is the size for write buffer.
	//
	// DefaultWriteBufferSize is used by default.
	WriteBufferSize int

	// Logger, which is used by the Server.
	//
	// Standard logger from log package is used by default.
	Logger fasthttp.Logger

	// PipelineRequests enables requests' pipelining.
	//
	// Requests from a single client are processed serially
	// if it is set to true.
	//
	// Enabling requests' pipelining may be useful in the following cases:
	//
	//   - if requests from a single client must be processed serially;
	//   - if the Server.Handler doesn't block and maximum throughput
	//     must be achieved for requests' processing.
	//
	// By default requests from a single client are processed concurrently.
	PipelineRequests bool
	// contains filtered or unexported fields
}

Server accepts rpc requests from Client.

func (*Server) Serve

func (s *Server) Serve(ln net.Listener) error

Serve serves rpc requests accepted from the given listener.
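
A minimal serving sketch, assuming servePipelined, newCtx, and handler are application code; PipelineRequests and MaxBatchDelay are set only to illustrate the tuning knobs described above:

package example

import (
	"net"
	"time"

	"github.com/iwasaki-kenta/fastrpc"
)

// servePipelined is an illustrative helper (not part of the package): newCtx
// and handler are assumed to be supplied by the application.
func servePipelined(ln net.Listener, newCtx func() fastrpc.HandlerCtx,
	handler func(fastrpc.HandlerCtx) fastrpc.HandlerCtx) error {

	s := &fastrpc.Server{
		NewHandlerCtx:    newCtx,
		Handler:          handler,
		PipelineRequests: true,             // requests from one client are processed serially
		MaxBatchDelay:    time.Millisecond, // coalesce ready responses for up to 1ms
	}
	return s.Serve(ln)
}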

Directories

tlv: Package tlv provides 'Type-Length-Value' building blocks for fastrpc.
