evproxy

package module
v1.0.0
Published: Dec 15, 2023 License: MIT Imports: 16 Imported by: 0

README

Introduction

Evproxy is a simple, high-performance TCP proxy (UDP is not supported yet) based on epoll (currently Linux only). It aims to address the high memory usage of proxies implemented with the Go stdnet* under highly concurrent connections (>100k). It supports basic source IP hash load balancing, read/write timeouts, and bandwidth limiting.

Features

  • ~8x less memory usage compared to the Go stdnet* when connections > 100k
  • TCP support
  • Source IP hash load balancing
  • Basic connection and traffic statistics
  • UDP support (planned)
  • TLS support (planned)
  • Proxy protocol support (planned)
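
Source IP hash load balancing means the client's source IP decides which upstream receives the connection, so a given client consistently lands on the same backend as long as the upstream list is unchanged. evproxy's exact hash function is not documented here; the sketch below illustrates the idea with FNV-1a from the standard library (the function name and scheme are assumptions, not the package's implementation):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pickUpstream hashes the client's source IP onto the upstream list.
// The same IP always maps to the same upstream while the list is stable.
func pickUpstream(clientIP string, upstreams []string) string {
	h := fnv.New32a()
	h.Write([]byte(clientIP))
	return upstreams[int(h.Sum32())%len(upstreams)]
}

func main() {
	upstreams := []string{"10.0.1.4:5201", "10.0.1.5:5201"}
	for _, ip := range []string{"10.0.1.50", "10.0.1.51", "10.0.1.50"} {
		fmt.Println(ip, "->", pickUpstream(ip, upstreams))
	}
}
```

The trade-off of hashing by source IP (versus round-robin) is stickiness: it keeps a client on one backend, but many clients behind a single NAT all map to the same upstream.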

Build

go install github.com/vincentwuo/evproxy/cmd/evproxy@latest

Download

Please check releases.

Quick start

Create a simple TCP proxy

#example
#simple
evproxy -bind 10.0.1.3:5201 -upstreams 10.0.1.4:5201

#load balance
evproxy -bind 10.0.1.3:5201 -upstreams 10.0.1.4:5201,10.0.1.5:5201

#help
evproxy -h
  -type string
        proxy type (default "tcp")
  -bind string
        addr to accept downstream data. Example: 0.0.0.0:8890
  -c int
        concurrent connection limit. '0' means no limit
  -check int
        (unit: second) the interval to check whether an unreachable upstream endpoint is back online (default 30)
  -dtimeout int
        (unit: second) the timeout when dialing the upstream (default 15)
  -f string
        config file path.
  -n int
        the number of workers handling the data transfer. The default 0 sets it to the number of CPU cores
  -rmbps int
        MB per second for reading from the downstream. '0' means no limit
  -upstreams string
        upstream addrs defining where the data will be transferred, separated by commas. Example: 1.1.1.1:123,2.2.2.2:123
  -wmbps int
        MB per second for reading from the upstream. '0' means no limit
  -wtimeout int
        (unit: second) the timeout for the write operation when one of the peers is closed (default 30)
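
Several of these flags can be combined; for example, a load-balanced proxy capped at 50k concurrent connections with 100 MB/s in each direction (the values here are illustrative, not recommendations):

#combined flags
evproxy -bind 10.0.1.3:5201 -upstreams 10.0.1.4:5201,10.0.1.5:5201 -c 50000 -rmbps 100 -wmbps 100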

Create with JSON config file

evproxy -f path/to/config.json

Config file example

{
    "workernum": 16,
    "proxyconfig": [
        {
            "type": "tcp",
            "bindaddr": "10.0.1.3:8080",
            "upstreamaddrs": "10.0.1.4:8080,10.0.1.5:8080",
            "upstreamcheckinterval": 30,
            "concurrentlimit": 10000,
            "readspeed": 10,
            "writespeed": 10,
            "dialtimeout": 60,
            "writetimeout":60
        },
        {
            "type": "tcp",
            "bindaddr": "10.0.1.3:8081",
            "upstreamaddrs": "10.0.1.14:8081",
            "upstreamcheckinterval": 30,
            "concurrentlimit": 10000,
            "readspeed": 10,
            "writespeed": 10,
            "dialtimeout": 60,
            "writetimeout":60
        }
        
    ]
}
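
The config keys map straightforwardly onto Go structs. The sketch below shows how such a file could be decoded with encoding/json; the struct and field names are inferred from the example above, not taken from the package's internal types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Field names are inferred from the JSON keys in the example config;
// the package's actual internal config types may differ.
type UpstreamProxyConfig struct {
	Type                  string  `json:"type"`
	BindAddr              string  `json:"bindaddr"`
	UpstreamAddrs         string  `json:"upstreamaddrs"`
	UpstreamCheckInterval int     `json:"upstreamcheckinterval"`
	ConcurrentLimit       int64   `json:"concurrentlimit"`
	ReadSpeed             float64 `json:"readspeed"`
	WriteSpeed            float64 `json:"writespeed"`
	DialTimeout           int     `json:"dialtimeout"`
	WriteTimeout          int     `json:"writetimeout"`
}

type Config struct {
	WorkerNum   int                   `json:"workernum"`
	ProxyConfig []UpstreamProxyConfig `json:"proxyconfig"`
}

// parseConfig decodes a JSON config in the format shown above.
func parseConfig(data []byte) (Config, error) {
	var cfg Config
	err := json.Unmarshal(data, &cfg)
	return cfg, err
}

func main() {
	raw := []byte(`{
		"workernum": 16,
		"proxyconfig": [{
			"type": "tcp",
			"bindaddr": "10.0.1.3:8080",
			"upstreamaddrs": "10.0.1.4:8080,10.0.1.5:8080",
			"upstreamcheckinterval": 30,
			"concurrentlimit": 10000,
			"readspeed": 10,
			"writespeed": 10,
			"dialtimeout": 60,
			"writetimeout": 60
		}]
	}`)
	cfg, err := parseConfig(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.WorkerNum, cfg.ProxyConfig[0].BindAddr)
}
```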

Benchmark

Spec:

Ubuntu 18.04.3
Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
16GB RAM
  • wrk 25000 connections for 60 seconds

    Evproxy:

    wrk -t12 -c 25000 -d60  http://10.0.1.3:5201/index.html
    Running 1m test @ http://10.0.1.3:5201/index.html
      12 threads and 25000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   199.37ms   34.13ms 746.51ms   78.96%
        Req/Sec    10.48k     2.23k   68.46k    90.88%
      7008919 requests in 1.00m, 4.58GB read
    Requests/sec: 116622.96
    Transfer/sec:     78.12MB
    

    Go stdnet proxy*:

    wrk -t12 -c 25000 -d60  http://10.0.1.3:5201/index.html
    Running 1m test @ http://10.0.1.3:5201/index.html
      12 threads and 25000 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   179.22ms   76.40ms   1.44s    81.14%
        Req/Sec    11.69k     3.13k   61.55k    80.50%
      7817582 requests in 1.00m, 5.14GB read
    Requests/sec: 130087.08
    Transfer/sec:     87.52MB
    

    The Go stdnet implementation achieves higher throughput than Evproxy when the number of connections is not too large. So if the load is not too heavy, or RAM is not the limit, the Go stdnet proxy is the better choice.

  • TCP echo test: 200K connections, 800B packet size, each connection sending at random intervals of 1 to 10 seconds.

    Evproxy:

    ram consumption is about 1.18GB

    #top
    VIRT    RES    SHR S  %CPU  %MEM                                
    10.5g  109932  5824 S 453.0  0.7 
    

    Go stdnet proxy*:

    ram consumption is about 7.86GB

    #top
    VIRT    RES    SHR S  %CPU  %MEM                                
    19.6g   6.7g   6048 S 615.6 43.1
    

    Evproxy shows better memory and CPU utilization when the number of connections is 200K.

Go stdnet proxy

A proxy implementation using the Go standard net package.

//example
func serve(addr string) {
	listener, _ := net.Listen("tcp", addr)
	for {
		conn, _ := listener.Accept()
		go handle(conn)
	}
}

//copy with traffic shaping
func copy(dst, src net.Conn, buf []byte)

func handle(conn net.Conn) {
	upstreamConn, _ := net.Dial("tcp", upstreamAddr)

	//use buffers from the buffer pool.
	//usually we allocate a 16KB buf for each direction.
	//the problem is that while the connection is idle, there is no way
	//to reuse these bufs, which leads to non-trivial memory consumption
	//when the number of connections is large,
	//i.e. 100k connections = 100k * (8KB from 2 goroutines + 32KB of bufs) ≈ 3.8GB
	readBuf := pool.Get()
	writeBuf := pool.Get()

	done := make(chan struct{})
	go func() {
		copy(conn, upstreamConn, writeBuf)
		close(done)
	}()
	copy(upstreamConn, conn, readBuf)
	<-done

	pool.Put(readBuf)
	pool.Put(writeBuf)
}

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Conn

type Conn struct {
	Fd                 int
	PeerConn           *Conn
	LocalAddr          string
	RemoteAddr         string
	Upstream           bool
	Buffer             *[]byte
	LastTimeUnavilable int64 // ?need?

	CurPos int
	EndPos int
	// contains filtered or unexported fields
}

type PorxyType

type PorxyType uint8
const (
	TCP PorxyType = 1
	UDP PorxyType = 2
)

type Proxy

type Proxy struct {
	Type PorxyType

	LastTimeUnavilable int64
	// contains filtered or unexported fields
}

func NewTCPProxy

func NewTCPProxy(workers []*Worker, laddr string, raddrs []string, unreachableThreshold int, thresholdInterval, checkInterval time.Duration, opts ...ProxyOption) (*Proxy, error)
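
Piecing the documented signatures together, embedding the proxy in a program might look roughly like the sketch below. This is unverified against the package: the option values, worker parameters, and error handling are illustrative only.

```go
// Sketch only: wiring NewWorkers, NewTCPProxy, and Accepting together.
workers := evproxy.NewWorkers(4, 32*1024, 16) // num, bufferSize, maxReadLoop
for _, w := range workers {
	go w.Run()
}

p, err := evproxy.NewTCPProxy(
	workers,
	"0.0.0.0:8080",            // laddr
	[]string{"10.0.1.4:8080"}, // raddrs
	3,                         // unreachableThreshold
	10*time.Second,            // thresholdInterval
	30*time.Second,            // checkInterval
	evproxy.WithDialTimeout(15*time.Second),
	evproxy.WithConcurrentLimit(10000),
)
if err != nil {
	log.Fatal(err)
}
p.Accepting() // runs the accept loop
```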

func (*Proxy) Accepting

func (p *Proxy) Accepting()

Accepting runs the accept loop.

func (*Proxy) Close

func (p *Proxy) Close() int

Close is a blocking, time-consuming function that takes at least 1 second. WARNING: it might have bugs due to a race condition.

func (*Proxy) Local

func (p *Proxy) Local() string

func (*Proxy) Remotes

func (p *Proxy) Remotes() []string

func (*Proxy) SetBandLimitNumber

func (p *Proxy) SetBandLimitNumber(readFromDownStream, readFromUpStream float64)

func (*Proxy) StreamCount

func (p *Proxy) StreamCount() int64

func (*Proxy) TrafficCount

func (p *Proxy) TrafficCount() (up, down int64)

type ProxyOption

type ProxyOption func(*Proxy)

func WithBandwidthLimit

func WithBandwidthLimit(readFromSourceLimit, readFromRemoteLimit float64) ProxyOption

func WithConcurrentLimit

func WithConcurrentLimit(limit int64) ProxyOption

func WithDialTimeout

func WithDialTimeout(timeout time.Duration) ProxyOption

func WithDialworkerNum

func WithDialworkerNum(num int) ProxyOption

func WithGlobalBandwidthLimit

func WithGlobalBandwidthLimit(readFromSourceLimit, readFromRemoteLimit float64, readFromSourceLimiter, readFromRemoteLimiter *rate.Limiter) ProxyOption

func WithGlobalConcurrentLimiter

func WithGlobalConcurrentLimiter(limiter *concurrent.AtomicLimiter) ProxyOption

func WithWriteFuncAfterDial

func WithWriteFuncAfterDial(f func(*Conn)) ProxyOption

func WithWriteTimeout

func WithWriteTimeout(timeout time.Duration) ProxyOption

type Worker

type Worker struct {
	ID int

	Timer *engine.TimeQueue
	// contains filtered or unexported fields
}

func NewWorker

func NewWorker(ID int, bufferSize int, maxReadLoop int) (*Worker, error)

func NewWorkers

func NewWorkers(num int, bufferSize int, maxReadLoop int) []*Worker

func (*Worker) ClosePair

func (w *Worker) ClosePair(c *Conn)

ClosePair will close c and its peer.

func (*Worker) Run

func (w *Worker) Run()

Directories

Path Synopsis
cmd
evproxy command
internal
engine
edit from https://github.com/xtaci/gaio/blob/master/aio_generic.go
pkg
lb
nic
