geziyor

package module
v0.0.0-...-2d0df81
Published: Aug 7, 2021 License: MPL-2.0 Imports: 15 Imported by: 0

README

Geziyor

Geziyor is a blazing fast web crawling and web scraping framework. It can be used to crawl websites and extract structured data from them. Geziyor is useful for a wide range of purposes such as data mining, monitoring and automated testing.

Features

  • JS Rendering
  • 5,000+ Requests/Sec
  • Caching (Memory/Disk/LevelDB)
  • Automatic Data Exporting (JSON, CSV, or custom)
  • Metrics (Prometheus, Expvar, or custom)
  • Limit Concurrency (Global/Per Domain)
  • Request Delays (Constant/Randomized)
  • Cookies, Middlewares, robots.txt
  • Automatic response decoding to UTF-8

See scraper Options for all custom settings.
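
For instance, here is a hedged sketch that combines a few of these features through the Options type documented below (cache.NewMemoryCache is an assumption about the cache package's constructor name, so verify it against your version):

geziyor.NewGeziyor(&geziyor.Options{
    StartURLs:                   []string{"http://quotes.toscrape.com/"},
    Cache:                       cache.NewMemoryCache(), // assumed constructor for the memory backend
    ConcurrentRequests:          20,                     // global concurrency limit
    ConcurrentRequestsPerDomain: 4,                      // per-domain concurrency limit
    RequestDelay:                200 * time.Millisecond,
    RequestDelayRandomize:       true,      // random delay between 0.5x and 1.5x of RequestDelay
    ParseFunc:                   parseFunc, // your callback, defined elsewhere
}).Start()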

Status

We highly recommend using Geziyor with Go modules.

Usage

This example extracts all quotes from quotes.toscrape.com and exports them to a JSON file.

package main

import (
    "github.com/PuerkitoBio/goquery"
    "github.com/albertbronsky/geziyor"
    "github.com/albertbronsky/geziyor/client"
    "github.com/albertbronsky/geziyor/export"
)

func main() {
    geziyor.NewGeziyor(&geziyor.Options{
        StartURLs: []string{"http://quotes.toscrape.com/"},
        ParseFunc: quotesParse,
        Exporters: []export.Exporter{&export.JSON{}},
    }).Start()
}

func quotesParse(g *geziyor.Geziyor, r *client.Response) {
    r.HTMLDoc.Find("div.quote").Each(func(i int, s *goquery.Selection) {
        g.Exports <- map[string]interface{}{
            "text":   s.Find("span.text").Text(),
            "author": s.Find("small.author").Text(),
        }
    })
    if href, ok := r.HTMLDoc.Find("li.next > a").Attr("href"); ok {
        g.Get(r.JoinURL(href), quotesParse)
    }
}

See tests for more usage examples.

Documentation

Installation
go get -u github.com/albertbronsky/geziyor

If you want to make JS rendered requests, make sure you have Chrome installed.

NOTE: macOS limits the maximum number of open file descriptors. If you want to make more than 256 concurrent requests, you need to raise that limit.
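
Assuming a typical Unix shell, you can raise the limit for the current session before starting your scraper:

ulimit -n 10240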

Making Normal Requests

Initial requests start with the StartURLs []string field in Options. Geziyor makes concurrent requests to those URLs. After a response is read, ParseFunc func(g *Geziyor, r *client.Response) is called.

geziyor.NewGeziyor(&geziyor.Options{
    StartURLs: []string{"http://api.ipify.org"},
    ParseFunc: func(g *geziyor.Geziyor, r *client.Response) {
        fmt.Println(string(r.Body))
    },
}).Start()

If you want to create the first requests manually, set StartRequestsFunc; StartURLs won't be used when you create requests yourself.
You can make requests using Geziyor methods:

geziyor.NewGeziyor(&geziyor.Options{
    StartRequestsFunc: func(g *geziyor.Geziyor) {
        g.Get("https://httpbin.org/anything", g.Opt.ParseFunc)
        g.Head("https://httpbin.org/anything", g.Opt.ParseFunc)
    },
    ParseFunc: func(g *geziyor.Geziyor, r *client.Response) {
        fmt.Println(string(r.Body))
    },
}).Start()

Making JS Rendered Requests

JS rendered requests can be made with the GetRendered method. By default, Geziyor launches the locally installed Chrome browser via its CLI. Set the BrowserEndpoint option to use a different Chrome instance, such as "ws://localhost:3000".

geziyor.NewGeziyor(&geziyor.Options{
    StartRequestsFunc: func(g *geziyor.Geziyor) {
        g.GetRendered("https://httpbin.org/anything", g.Opt.ParseFunc)
    },
    ParseFunc: func(g *geziyor.Geziyor, r *client.Response) {
        fmt.Println(string(r.Body))
    },
    //BrowserEndpoint: "ws://localhost:3000",
}).Start()
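
One way to obtain such an endpoint (an assumption, not an official recommendation: any Chrome DevTools-compatible WebSocket endpoint should work) is to run a headless Chrome container such as the browserless/chrome Docker image, which listens on port 3000:

docker run -p 3000:3000 browserless/chrome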

Extracting Data

We can extract HTML elements using response.HTMLDoc. HTMLDoc is goquery's Document type.

HTMLDoc is available on the Response if the response is HTML and can be parsed by Go's built-in HTML parser. If the response isn't HTML, response.HTMLDoc is nil.

geziyor.NewGeziyor(&geziyor.Options{
    StartURLs: []string{"http://quotes.toscrape.com/"},
    ParseFunc: func(g *geziyor.Geziyor, r *client.Response) {
        r.HTMLDoc.Find("div.quote").Each(func(_ int, s *goquery.Selection) {
            log.Println(s.Find("span.text").Text(), s.Find("small.author").Text())
        })
    },
}).Start()
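
Since response.HTMLDoc is nil for non-HTML responses (and when ParseHTMLDisabled is set), a defensive callback can check it before extracting; a minimal sketch:

geziyor.NewGeziyor(&geziyor.Options{
    StartURLs: []string{"http://quotes.toscrape.com/"},
    ParseFunc: func(g *geziyor.Geziyor, r *client.Response) {
        if r.HTMLDoc == nil {
            return // response wasn't parseable HTML; nothing to extract
        }
        log.Println(r.HTMLDoc.Find("title").Text())
    },
}).Start()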

Exporting Data

You can export data automatically using exporters: just send your data to the Geziyor.Exports channel, and the configured exporters write it out. See the export package for the available exporters.

geziyor.NewGeziyor(&geziyor.Options{
    StartURLs: []string{"http://quotes.toscrape.com/"},
    ParseFunc: func(g *geziyor.Geziyor, r *client.Response) {
        r.HTMLDoc.Find("div.quote").Each(func(_ int, s *goquery.Selection) {
            g.Exports <- map[string]interface{}{
                "text":   s.Find("span.text").Text(),
                "author": s.Find("small.author").Text(),
            }
        })
    },
    Exporters: []export.Exporter{&export.JSON{}},
}).Start()
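
The zero value &export.JSON{} writes to a default file. To control the output, exporters expose configuration fields; FileName and the CSV exporter below are assumptions based on common versions of the export package, so check your version's docs. A hedged sketch:

Exporters: []export.Exporter{
    &export.JSON{FileName: "quotes.json"}, // assumed field name
    &export.CSV{FileName: "quotes.csv"},   // assumed exporter type and field name
},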

Custom Requests - Passing Metadata To Callbacks

You can create custom requests with client.NewRequest, then execute them with Geziyor.Do(request, callback):

geziyor.NewGeziyor(&geziyor.Options{
    StartRequestsFunc: func(g *geziyor.Geziyor) {
        req, _ := client.NewRequest("GET", "https://httpbin.org/anything", nil)
        req.Meta["key"] = "value"
        g.Do(req, g.Opt.ParseFunc)
    },
    ParseFunc: func(g *geziyor.Geziyor, r *client.Response) {
        fmt.Println("This is our data from request: ", r.Request.Meta["key"])
    },
}).Start()

Benchmark

8,748 requests per second on a MacBook Pro 15" (2016)

See tests for this benchmark function:

>> go test -run none -bench Requests -benchtime 10s
goos: darwin
goarch: amd64
pkg: github.com/albertbronsky/geziyor
BenchmarkRequests-8   	  200000	    108710 ns/op
PASS
ok  	github.com/albertbronsky/geziyor	22.861s

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Geziyor

type Geziyor struct {
	Opt     *Options
	Client  *client.Client
	Exports chan interface{}
	// contains filtered or unexported fields
}

Geziyor is our main scraper type

func NewGeziyor

func NewGeziyor(opt *Options) *Geziyor

NewGeziyor creates a new Geziyor with default values. If options are provided, they override the defaults.

func (*Geziyor) Do

func (g *Geziyor) Do(req *client.Request, callback func(g *Geziyor, r *client.Response))

Do sends an HTTP request

func (*Geziyor) Get

func (g *Geziyor) Get(url string, callback func(g *Geziyor, r *client.Response))

Get issues a GET to the specified URL.

func (*Geziyor) GetRendered

func (g *Geziyor) GetRendered(url string, callback func(g *Geziyor, r *client.Response))

GetRendered issues a GET request using a headless browser. It opens a new Chrome instance, makes the request, waits for the HTML DOM to render, and then closes the instance. Rendering is supported only for GET requests.

func (*Geziyor) Head

func (g *Geziyor) Head(url string, callback func(g *Geziyor, r *client.Response))

Head issues a HEAD to the specified URL

func (*Geziyor) Start

func (g *Geziyor) Start()

Start starts scraping

type Options

type Options struct {
	// AllowedDomains lists the domains to which requests may be made.
	// If empty, any domain is allowed
	AllowedDomains []string

	// Chrome headless browser WS endpoint.
	// If you want to run your own Chrome browser runner, provide its endpoint in here
	// For example: ws://localhost:3000
	BrowserEndpoint string

	// Cache storage backends.
	// - Memory
	// - Disk
	// - LevelDB
	Cache cache.Cache

	// Policies for caching.
	// - Dummy policy (default)
	// - RFC2616 policy
	CachePolicy cache.Policy

	// If true, disables response charset detection when decoding to UTF-8
	CharsetDetectDisabled bool

	// Concurrent requests limit
	ConcurrentRequests int

	// Concurrent requests per domain limit. Uses request.URL.Host
	// Subdomains are treated as different from the top-level domain
	ConcurrentRequestsPerDomain int

	// If set to true, cookies won't be sent.
	CookiesDisabled bool

	// ErrorFunc is the callback for errors.
	// If not defined, all errors will be logged.
	ErrorFunc func(g *Geziyor, r *client.Request, err error)

	// For exporting extracted data
	Exporters []export.Exporter

	// Disable logging by setting this true
	LogDisabled bool

	// Max body reading size in bytes. Default: 1GB
	MaxBodySize int64

	// Maximum number of redirects to follow. Default: 10
	MaxRedirect int

	// Scraper metrics exporting type. See metrics.Type
	MetricsType metrics.Type

	// ParseFunc is the callback for responses to StartURLs requests.
	ParseFunc func(g *Geziyor, r *client.Response)

	// If true, HTML parsing is disabled to improve performance.
	ParseHTMLDisabled bool

	// Request delays
	RequestDelay time.Duration

	// RequestDelayRandomize uses random interval between 0.5 * RequestDelay and 1.5 * RequestDelay
	RequestDelayRandomize bool

	// Called before requests are made, to manipulate them
	RequestMiddlewares []middleware.RequestProcessor

	// Called after a response is received
	ResponseMiddlewares []middleware.ResponseProcessor

	// RequestsPerSecond limits the number of requests made per second. Default: No limit
	RequestsPerSecond float64

	// Which HTTP response codes to retry.
	// Other errors (DNS lookup issues, connections lost, etc) are always retried.
	// Default: []int{500, 502, 503, 504, 522, 524, 408}
	RetryHTTPCodes []int

	// Maximum number of times to retry, in addition to the first download.
	// Set -1 to disable retrying
	// Default: 2
	RetryTimes int

	// If true, disable robots.txt checks
	RobotsTxtDisabled bool

	// StartRequestsFunc called on scraper start
	StartRequestsFunc func(g *Geziyor)

	// First requests will be made to these URLs, concurrently.
	StartURLs []string

	// Timeout is global request timeout
	Timeout time.Duration

	// Revisiting the same URLs is disabled by default; set true to enable.
	URLRevisitEnabled bool

	// User Agent.
	// Default: "Geziyor 1.0"
	UserAgent string
}

Options is custom options type for Geziyor
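
As a worked example, here is a sketch wiring several of these options together. The field names and the ErrorFunc signature come from the struct above; the r.StatusCode and r.URL accesses assume that client.Response and client.Request embed the standard net/http types, which the examples above suggest but this page does not confirm:

geziyor.NewGeziyor(&geziyor.Options{
    StartURLs:      []string{"http://quotes.toscrape.com/"},
    AllowedDomains: []string{"quotes.toscrape.com"}, // stay on the target site
    UserAgent:      "my-scraper/0.1",
    Timeout:        30 * time.Second,
    RetryTimes:     3,                         // retries in addition to the first attempt
    RetryHTTPCodes: []int{429, 500, 502, 503}, // also retry on rate limiting
    ParseFunc: func(g *geziyor.Geziyor, r *client.Response) {
        fmt.Println(r.StatusCode) // assumes embedded *http.Response
    },
    ErrorFunc: func(g *geziyor.Geziyor, r *client.Request, err error) {
        log.Println("request failed:", r.URL, err) // assumes embedded *http.Request
    },
}).Start()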

Directories

Path Synopsis
cache
Package cache provides a http.RoundTripper implementation that works as a mostly RFC-compliant cache for http responses.
cache/diskcache
Package diskcache provides an implementation of cache.Cache that uses the diskv package to supplement an in-memory map with persistent storage.
cache/leveldbcache
Package leveldbcache provides an implementation of cache.Cache that uses github.com/syndtr/goleveldb/leveldb.
