elastic

package module
v6.1.11+incompatible

This package is not in the latest version of its module.
Published: Mar 16, 2018 License: MIT Imports: 29 Imported by: 0

README

Elastic

This is a development branch that is actively being worked on. DO NOT USE IN PRODUCTION! If you want to use stable versions of Elastic, please use a dependency manager like dep.

Elastic is an Elasticsearch client for the Go programming language.

See the wiki for additional information about Elastic.

Releases

The release branches (e.g. release-branch.v6) are actively being worked on and can break at any time. If you want to use stable versions of Elastic, please use a dependency manager like dep.

Here's the version matrix:

Elasticsearch version | Elastic version | Package URL                               | Remarks
6.x                   | 6.0             | github.com/olivere/elastic (source doc)   | Use a dependency manager (see below).
5.x                   | 5.0             | gopkg.in/olivere/elastic.v5 (source doc)  | Actively maintained.
2.x                   | 3.0             | gopkg.in/olivere/elastic.v3 (source doc)  | Deprecated. Please update.
1.x                   | 2.0             | gopkg.in/olivere/elastic.v2 (source doc)  | Deprecated. Please update.
0.9-1.3               | 1.0             | gopkg.in/olivere/elastic.v1 (source doc)  | Deprecated. Please update.

Example:

You have installed Elasticsearch 6.0.0 and want to use Elastic. As listed above, you should use Elastic 6.0.

To use the required version of Elastic in your application, it is strongly advised to use a tool like dep or Glide to manage that dependency. Make sure to use a version such as ^6.0.0.

To use Elastic, simply import:

import "github.com/olivere/elastic"
Elastic 6.0

Elastic 6.0 targets Elasticsearch 6.x which was released on 14th November 2017.

Notice that there are a lot of breaking changes in Elasticsearch 6.0, and we used this as an opportunity to clean up and refactor Elastic, as we did in the transition from earlier versions of Elastic.

Elastic 5.0

Elastic 5.0 targets Elasticsearch 5.0.0 and later. Elasticsearch 5.0.0 was released on 26th October 2016.

Notice that there are a lot of breaking changes in Elasticsearch 5.0, and we used this as an opportunity to clean up and refactor Elastic, as we did in the transition from Elastic 2.0 (for Elasticsearch 1.x) to Elastic 3.0 (for Elasticsearch 2.x).

Furthermore, the jump in version numbers will give us a chance to be in sync with the Elastic Stack.

Elastic 3.0

Elastic 3.0 targets Elasticsearch 2.x and is published via gopkg.in/olivere/elastic.v3.

Elastic 3.0 will only get critical bug fixes. You should update to a recent version.

Elastic 2.0

Elastic 2.0 targets Elasticsearch 1.x and is published via gopkg.in/olivere/elastic.v2.

Elastic 2.0 will only get critical bug fixes. You should update to a recent version.

Elastic 1.0

Elastic 1.0 is deprecated. You should really update Elasticsearch and Elastic to a recent version.

However, if you cannot update for some reason, don't worry. Version 1.0 is still available. All you need to do is go-get it and change your import path as described above.

Status

We have been using Elastic in production since 2012. Elastic is stable, but the API changes now and then. We strive for API compatibility. However, Elasticsearch sometimes introduces breaking changes and we sometimes have to adapt.

Having said that, there have been no big API changes that required you to rewrite large parts of your application. More often than not, it's a matter of renaming APIs and adding or removing features so that Elastic stays in sync with Elasticsearch.

Elastic has been used in production with the following Elasticsearch versions: 0.90, 1.0-1.7, and 2.0-2.4.1. Furthermore, we use Travis CI to test Elastic with the most recent versions of Elasticsearch and Go. See the .travis.yml file for the exact matrix and Travis for the results.

Elasticsearch has quite a few features. Most of them are implemented by Elastic. I add features and APIs as required. It's straightforward to implement missing pieces. I'm accepting pull requests :-)

Having said that, I hope you find the project useful.

Getting Started

The first thing you do is to create a Client. The client connects to Elasticsearch on http://127.0.0.1:9200 by default.

You typically create one client for your application. Here's a complete example of creating a client, creating an index, adding a document, executing a search, and so on.

An example is available here.

Here's a link to a complete working example for v6.

See the wiki for more details.

There are also some recipes for bulk indexing, scrolling through documents in indices etc.
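
For a taste of the bulk recipe, here is a minimal sketch of indexing two documents with a single bulk request (the index, type, IDs, and documents are hypothetical):

doc1 := map[string]interface{}{"user": "olivere", "message": "Take Five"}
doc2 := map[string]interface{}{"user": "olivere", "message": "It's a Raggy Waltz"}
bulk := client.Bulk()
bulk = bulk.Add(elastic.NewBulkIndexRequest().Index("twitter").Type("doc").Id("1").Doc(doc1))
bulk = bulk.Add(elastic.NewBulkIndexRequest().Index("twitter").Type("doc").Id("2").Doc(doc2))
res, err := bulk.Do(context.Background())
if err != nil {
	// Handle error
	panic(err)
}
if res.Errors {
	// At least one bulk item failed; inspect res.Items for details.
}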

API Status

Document APIs
  • Index API
  • Get API
  • Delete API
  • Delete By Query API
  • Update API
  • Update By Query API
  • Multi Get API
  • Bulk API
  • Reindex API
  • Term Vectors
  • Multi termvectors API
Search APIs
  • Search
  • Search Template
  • Multi Search Template
  • Search Shards API
  • Suggesters
    • Term Suggester
    • Phrase Suggester
    • Completion Suggester
    • Context Suggester
  • Multi Search API
  • Count API
  • Validate API
  • Explain API
  • Profile API
  • Field Capabilities API
Aggregations
  • Metrics Aggregations
    • Avg
    • Cardinality
    • Extended Stats
    • Geo Bounds
    • Geo Centroid
    • Max
    • Min
    • Percentiles
    • Percentile Ranks
    • Scripted Metric
    • Stats
    • Sum
    • Top Hits
    • Value Count
  • Bucket Aggregations
    • Adjacency Matrix
    • Children
    • Date Histogram
    • Date Range
    • Diversified Sampler
    • Filter
    • Filters
    • Geo Distance
    • GeoHash Grid
    • Global
    • Histogram
    • IP Range
    • Missing
    • Nested
    • Range
    • Reverse Nested
    • Sampler
    • Significant Terms
    • Significant Text
    • Terms
    • Composite
  • Pipeline Aggregations
    • Avg Bucket
    • Derivative
    • Max Bucket
    • Min Bucket
    • Sum Bucket
    • Stats Bucket
    • Extended Stats Bucket
    • Percentiles Bucket
    • Moving Average
    • Cumulative Sum
    • Bucket Script
    • Bucket Selector
    • Bucket Sort
    • Serial Differencing
  • Matrix Aggregations
    • Matrix Stats
  • Aggregation Metadata
Indices APIs
  • Create Index
  • Delete Index
  • Get Index
  • Indices Exists
  • Open / Close Index
  • Shrink Index
  • Rollover Index
  • Put Mapping
  • Get Mapping
  • Get Field Mapping
  • Types Exists
  • Index Aliases
  • Update Indices Settings
  • Get Settings
  • Analyze
    • Explain Analyze
  • Index Templates
  • Indices Stats
  • Indices Segments
  • Indices Recovery
  • Indices Shard Stores
  • Clear Cache
  • Flush
    • Synced Flush
  • Refresh
  • Force Merge
cat APIs

The cat APIs are not implemented as of now. We think they are better suited for operating with Elasticsearch on the command line.

  • cat aliases
  • cat allocation
  • cat count
  • cat fielddata
  • cat health
  • cat indices
  • cat master
  • cat nodeattrs
  • cat nodes
  • cat pending tasks
  • cat plugins
  • cat recovery
  • cat repositories
  • cat thread pool
  • cat shards
  • cat segments
  • cat snapshots
  • cat templates
Cluster APIs
  • Cluster Health
  • Cluster State
  • Cluster Stats
  • Pending Cluster Tasks
  • Cluster Reroute
  • Cluster Update Settings
  • Nodes Stats
  • Nodes Info
  • Nodes Feature Usage
  • Remote Cluster Info
  • Task Management API
  • Nodes hot_threads
  • Cluster Allocation Explain API
Query DSL
  • Match All Query
  • Inner hits
  • Full text queries
    • Match Query
    • Match Phrase Query
    • Match Phrase Prefix Query
    • Multi Match Query
    • Common Terms Query
    • Query String Query
    • Simple Query String Query
  • Term level queries
    • Term Query
    • Terms Query
    • Terms Set Query
    • Range Query
    • Exists Query
    • Prefix Query
    • Wildcard Query
    • Regexp Query
    • Fuzzy Query
    • Type Query
    • Ids Query
  • Compound queries
    • Constant Score Query
    • Bool Query
    • Dis Max Query
    • Function Score Query
    • Boosting Query
  • Joining queries
    • Nested Query
    • Has Child Query
    • Has Parent Query
    • Parent Id Query
  • Geo queries
    • GeoShape Query
    • Geo Bounding Box Query
    • Geo Distance Query
    • Geo Polygon Query
  • Specialized queries
    • More Like This Query
    • Script Query
    • Percolate Query
  • Span queries
    • Span Term Query
    • Span Multi Term Query
    • Span First Query
    • Span Near Query
    • Span Or Query
    • Span Not Query
    • Span Containing Query
    • Span Within Query
    • Span Field Masking Query
  • Minimum Should Match
  • Multi Term Query Rewrite
Modules
  • Snapshot and Restore
    • Repositories
    • Snapshot
    • Restore
    • Snapshot status
    • Monitoring snapshot/restore status
    • Stopping currently running snapshot and restore
Sorting
  • Sort by score
  • Sort by field
  • Sort by geo distance
  • Sort by script
  • Sort by doc
Scrolling

Scrolling is supported via a ScrollService. It supports an iterator-like interface. The ClearScroll API is implemented as well.

A pattern for efficiently scrolling in parallel is described in the Wiki.
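
A minimal sketch of the iterator-like interface (the index name is hypothetical; the loop ends when the service returns io.EOF):

svc := client.Scroll("twitter").Size(100)
for {
	res, err := svc.Do(context.Background())
	if err == io.EOF {
		break // no more documents
	}
	if err != nil {
		// Handle error
		panic(err)
	}
	for _, hit := range res.Hits.Hits {
		// Process hit.Source here.
		_ = hit
	}
}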

How to contribute

Read the contribution guidelines.

Credits

Thanks a lot for the great folks working hard on Elasticsearch and Go.

Elastic uses portions of the uritemplates library by Joshua Tacoma, backoff by Cenk Altı and leaktest by Ian Chiles.

LICENSE

MIT-LICENSE. See LICENSE or the LICENSE file provided in the repository for details.

Documentation

Overview

Package elastic provides an interface to the Elasticsearch server (https://www.elastic.co/products/elasticsearch).

The first thing you do is to create a Client. If you have Elasticsearch installed and running with its default settings (i.e. available at http://127.0.0.1:9200), all you need to do is:

client, err := elastic.NewClient()
if err != nil {
	// Handle error
}

If your Elasticsearch server is running on a different IP and/or port, just provide a URL to NewClient:

// Create a client and connect to http://192.168.2.10:9201
client, err := elastic.NewClient(elastic.SetURL("http://192.168.2.10:9201"))
if err != nil {
  // Handle error
}

You can pass many more configuration parameters to NewClient. Review the documentation of NewClient for more information.

If no Elasticsearch server is available, services will fail when creating a new request and will return ErrNoClient.

A Client provides services. The services usually come with a variety of methods to prepare the query and a Do function to execute it against the Elasticsearch REST interface and return a response. Here is an example of the IndexExists service that checks if a given index already exists.

exists, err := client.IndexExists("twitter").Do(context.Background())
if err != nil {
	// Handle error
}
if !exists {
	// Index does not exist yet.
}

Look up the documentation for Client to get an idea of the services provided and what kinds of responses you get when executing the Do function of a service. Also see the wiki on Github for more details.

Example
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"reflect"
	"time"

	elastic "github.com/olivere/elastic"
)

type Tweet struct {
	User     string                `json:"user"`
	Message  string                `json:"message"`
	Retweets int                   `json:"retweets"`
	Image    string                `json:"image,omitempty"`
	Created  time.Time             `json:"created,omitempty"`
	Tags     []string              `json:"tags,omitempty"`
	Location string                `json:"location,omitempty"`
	Suggest  *elastic.SuggestField `json:"suggest_field,omitempty"`
}

func main() {
	errorlog := log.New(os.Stdout, "APP ", log.LstdFlags)

	// Obtain a client. You can also provide your own HTTP client here.
	client, err := elastic.NewClient(elastic.SetErrorLog(errorlog))
	if err != nil {
		// Handle error
		panic(err)
	}

	// Trace request and response details like this
	//client.SetTracer(log.New(os.Stdout, "", 0))

	// Ping the Elasticsearch server to get e.g. the version number
	info, code, err := client.Ping("http://127.0.0.1:9200").Do(context.Background())
	if err != nil {
		// Handle error
		panic(err)
	}
	fmt.Printf("Elasticsearch returned with code %d and version %s\n", code, info.Version.Number)

	// Getting the ES version number is quite common, so there's a shortcut
	esversion, err := client.ElasticsearchVersion("http://127.0.0.1:9200")
	if err != nil {
		// Handle error
		panic(err)
	}
	fmt.Printf("Elasticsearch version %s\n", esversion)

	// Use the IndexExists service to check if a specified index exists.
	exists, err := client.IndexExists("twitter").Do(context.Background())
	if err != nil {
		// Handle error
		panic(err)
	}
	if !exists {
		// Create a new index.
		mapping := `
{
	"settings":{
		"number_of_shards":1,
		"number_of_replicas":0
	},
	"mappings":{
		"doc":{
			"properties":{
				"user":{
					"type":"keyword"
				},
				"message":{
					"type":"text",
					"store": true,
					"fielddata": true
				},
                "retweets":{
                    "type":"long"
                },
				"tags":{
					"type":"keyword"
				},
				"location":{
					"type":"geo_point"
				},
				"suggest_field":{
					"type":"completion"
				}
			}
		}
	}
}
`
		createIndex, err := client.CreateIndex("twitter").Body(mapping).Do(context.Background())
		if err != nil {
			// Handle error
			panic(err)
		}
		if !createIndex.Acknowledged {
			// Not acknowledged
		}
	}

	// Index a tweet (using JSON serialization)
	tweet1 := Tweet{User: "olivere", Message: "Take Five", Retweets: 0}
	put1, err := client.Index().
		Index("twitter").
		Type("doc").
		Id("1").
		BodyJson(tweet1).
		Do(context.Background())
	if err != nil {
		// Handle error
		panic(err)
	}
	fmt.Printf("Indexed tweet %s to index %s, type %s\n", put1.Id, put1.Index, put1.Type)

	// Index a second tweet (by string)
	tweet2 := `{"user" : "olivere", "message" : "It's a Raggy Waltz"}`
	put2, err := client.Index().
		Index("twitter").
		Type("doc").
		Id("2").
		BodyString(tweet2).
		Do(context.Background())
	if err != nil {
		// Handle error
		panic(err)
	}
	fmt.Printf("Indexed tweet %s to index %s, type %s\n", put2.Id, put2.Index, put2.Type)

	// Get tweet with specified ID
	get1, err := client.Get().
		Index("twitter").
		Type("doc").
		Id("1").
		Do(context.Background())
	if err != nil {
		// Handle error
		panic(err)
	}
	if get1.Found {
		fmt.Printf("Got document %s in version %d from index %s, type %s\n", get1.Id, get1.Version, get1.Index, get1.Type)
	}

	// Flush to make sure the documents got written.
	_, err = client.Flush().Index("twitter").Do(context.Background())
	if err != nil {
		panic(err)
	}

	// Search with a term query
	termQuery := elastic.NewTermQuery("user", "olivere")
	searchResult, err := client.Search().
		Index("twitter").        // search in index "twitter"
		Query(termQuery).        // specify the query
		Sort("user", true).      // sort by "user" field, ascending
		From(0).Size(10).        // take documents 0-9
		Pretty(true).            // pretty print request and response JSON
		Do(context.Background()) // execute
	if err != nil {
		// Handle error
		panic(err)
	}

	// searchResult is of type SearchResult and returns hits, suggestions,
	// and all kinds of other information from Elasticsearch.
	fmt.Printf("Query took %d milliseconds\n", searchResult.TookInMillis)

	// Each is a convenience function that iterates over hits in a search result.
	// It makes sure you don't need to check for nil values in the response.
	// However, it ignores errors in serialization. If you want full control
	// over iterating the hits, see below.
	var ttyp Tweet
	for _, item := range searchResult.Each(reflect.TypeOf(ttyp)) {
		t := item.(Tweet)
		fmt.Printf("Tweet by %s: %s\n", t.User, t.Message)
	}
	// TotalHits is another convenience function that works even when something goes wrong.
	fmt.Printf("Found a total of %d tweets\n", searchResult.TotalHits())

	// Here's how you iterate through results with full control over each step.
	if searchResult.Hits.TotalHits > 0 {
		fmt.Printf("Found a total of %d tweets\n", searchResult.Hits.TotalHits)

		// Iterate through results
		for _, hit := range searchResult.Hits.Hits {
			// hit.Index contains the name of the index

			// Deserialize hit.Source into a Tweet (could also be just a map[string]interface{}).
			var t Tweet
			err := json.Unmarshal(*hit.Source, &t)
			if err != nil {
				// Deserialization failed
			}

			// Work with tweet
			fmt.Printf("Tweet by %s: %s\n", t.User, t.Message)
		}
	} else {
		// No hits
		fmt.Print("Found no tweets\n")
	}

	// Update a tweet by the update API of Elasticsearch.
	// We just increment the number of retweets.
	script := elastic.NewScript("ctx._source.retweets += params.num").Param("num", 1)
	update, err := client.Update().Index("twitter").Type("doc").Id("1").
		Script(script).
		Upsert(map[string]interface{}{"retweets": 0}).
		Do(context.Background())
	if err != nil {
		// Handle error
		panic(err)
	}
	fmt.Printf("New version of tweet %q is now %d", update.Id, update.Version)

	// ...

	// Delete an index.
	deleteIndex, err := client.DeleteIndex("twitter").Do(context.Background())
	if err != nil {
		// Handle error
		panic(err)
	}
	if !deleteIndex.Acknowledged {
		// Not acknowledged
	}
}
Output:

Constants

const (
	// Version is the current version of Elastic.
	Version = "6.1.11"

	// DefaultURL is the default endpoint of Elasticsearch on the local machine.
	// It is used e.g. when initializing a new Client without a specific URL.
	DefaultURL = "http://127.0.0.1:9200"

	// DefaultScheme is the default protocol scheme to use when sniffing
	// the Elasticsearch cluster.
	DefaultScheme = "http"

	// DefaultHealthcheckEnabled specifies if healthchecks are enabled by default.
	DefaultHealthcheckEnabled = true

	// DefaultHealthcheckTimeoutStartup is the time the healthcheck waits
	// for a response from Elasticsearch on startup, i.e. when creating a
	// client. After the client is started, a shorter timeout is commonly used
	// (its default is specified in DefaultHealthcheckTimeout).
	DefaultHealthcheckTimeoutStartup = 5 * time.Second

	// DefaultHealthcheckTimeout specifies the time a running client waits for
	// a response from Elasticsearch. Notice that the healthcheck timeout
	// when a client is created is larger by default (see DefaultHealthcheckTimeoutStartup).
	DefaultHealthcheckTimeout = 1 * time.Second

	// DefaultHealthcheckInterval is the default interval between
	// two health checks of the nodes in the cluster.
	DefaultHealthcheckInterval = 60 * time.Second

	// DefaultSnifferEnabled specifies if the sniffer is enabled by default.
	DefaultSnifferEnabled = true

	// DefaultSnifferInterval is the interval between two sniffing procedures,
	// i.e. the lookup of all nodes in the cluster and their addition/removal
	// from the list of actual connections.
	DefaultSnifferInterval = 15 * time.Minute

	// DefaultSnifferTimeoutStartup is the default timeout for the sniffing
	// process that is initiated while creating a new client. For subsequent
	// sniffing processes, DefaultSnifferTimeout is used (by default).
	DefaultSnifferTimeoutStartup = 5 * time.Second

	// DefaultSnifferTimeout is the default timeout after which the
	// sniffing process times out. Notice that for the initial sniffing
	// process, DefaultSnifferTimeoutStartup is used.
	DefaultSnifferTimeout = 2 * time.Second

	// DefaultSendGetBodyAs is the HTTP method to use when elastic is sending
	// a GET request with a body.
	DefaultSendGetBodyAs = "GET"
)
const (
	// DefaultScrollKeepAlive is the default time a scroll cursor will be kept alive.
	DefaultScrollKeepAlive = "5m"
)
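
Most of these defaults can be overridden with option functions when creating the client. A hedged sketch (the option names are this package's client options; the values are purely illustrative):

client, err := elastic.NewClient(
	elastic.SetURL("http://127.0.0.1:9200"),
	elastic.SetSniff(false),                        // disable sniffing, e.g. behind a proxy
	elastic.SetHealthcheckInterval(30*time.Second), // instead of DefaultHealthcheckInterval
	elastic.SetSnifferInterval(5*time.Minute),      // instead of DefaultSnifferInterval
)
if err != nil {
	// Handle error
}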

Variables

var (
	// ErrNoClient is raised when no Elasticsearch node is available.
	ErrNoClient = errors.New("no Elasticsearch node available")

	// ErrRetry is raised when a request cannot be executed after the configured
	// number of retries.
	ErrRetry = errors.New("cannot connect after several retries")

	// ErrTimeout is raised when a request timed out, e.g. when WaitForStatus
	// didn't return in time.
	ErrTimeout = errors.New("timeout")
)

Functions

func IsConflict

func IsConflict(err interface{}) bool

IsConflict returns true if the given error indicates that the Elasticsearch operation resulted in a version conflict. This can occur in operations like `update` or `index` with `op_type=create`. The err parameter can be of type *elastic.Error, elastic.Error, *http.Response or int (indicating the HTTP status code).

func IsConnErr

func IsConnErr(err error) bool

IsConnErr returns true if the error indicates that Elastic could not find an Elasticsearch host to connect to.

func IsNotFound

func IsNotFound(err interface{}) bool

IsNotFound returns true if the given error indicates that Elasticsearch returned HTTP status 404. The err parameter can be of type *elastic.Error, elastic.Error, *http.Response or int (indicating the HTTP status code).

func IsStatusCode

func IsStatusCode(err interface{}, code int) bool

IsStatusCode returns true if the given error indicates that the Elasticsearch operation returned the specified HTTP status code. The err parameter can be of type *http.Response, *Error, Error, or int (indicating the HTTP status code).

func IsTimeout

func IsTimeout(err interface{}) bool

IsTimeout returns true if the given error indicates that Elasticsearch returned HTTP status 408. The err parameter can be of type *elastic.Error, elastic.Error, *http.Response or int (indicating the HTTP status code).
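
A minimal sketch of using these helpers after a request (the index, type, and ID are hypothetical):

get, err := client.Get().Index("twitter").Type("doc").Id("42").Do(context.Background())
switch {
case elastic.IsNotFound(err):
	// Document or index does not exist (HTTP 404).
case elastic.IsTimeout(err):
	// Request timed out (HTTP 408).
case err != nil:
	// Some other error.
default:
	_ = get // document found
}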

func Retry

func Retry(o Operation, b Backoff) error

Retry runs the operation o until it does not return an error or until the Backoff b stops. o is guaranteed to be run at least once. It is the caller's responsibility to reset b after Retry returns.

Retry sleeps the goroutine for the duration returned by b after a failed operation returns.
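
A hedged sketch of retrying a transient operation with an exponential backoff (NewExponentialBackoff and its bounds are assumptions about this package's backoff helpers):

op := func() error {
	_, err := client.ClusterHealth().Do(context.Background())
	return err
}
err := elastic.Retry(op, elastic.NewExponentialBackoff(100*time.Millisecond, 10*time.Second))
if err != nil {
	// Gave up after the backoff stopped.
}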

func RetryNotify

func RetryNotify(operation Operation, b Backoff, notify Notify) error

RetryNotify calls the notify function with the error and wait duration for each failed attempt before sleeping.

Types

type AcknowledgedResponse

type AcknowledgedResponse struct {
	Acknowledged       bool   `json:"acknowledged"`
	ShardsAcknowledged bool   `json:"shards_acknowledged"`
	Index              string `json:"index,omitempty"`
}

AcknowledgedResponse is returned from various APIs. It simply indicates whether the operation was acknowledged or not.

type AdjacencyMatrixAggregation

type AdjacencyMatrixAggregation struct {
	// contains filtered or unexported fields
}

AdjacencyMatrixAggregation is a bucket aggregation returning a form of adjacency matrix. The request provides a collection of named filter expressions, similar to the filters aggregation request. Each bucket in the response represents a non-empty cell in the matrix of intersecting filters.

For details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-bucket-adjacency-matrix-aggregation.html

func NewAdjacencyMatrixAggregation

func NewAdjacencyMatrixAggregation() *AdjacencyMatrixAggregation

NewAdjacencyMatrixAggregation initializes a new AdjacencyMatrixAggregation.

func (*AdjacencyMatrixAggregation) Filters

Filters adds a named filter to the aggregation.

func (*AdjacencyMatrixAggregation) Meta

func (a *AdjacencyMatrixAggregation) Meta(metaData map[string]interface{}) *AdjacencyMatrixAggregation

Meta sets the meta data to be included in the aggregation response.

func (*AdjacencyMatrixAggregation) Source

func (a *AdjacencyMatrixAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface.

func (*AdjacencyMatrixAggregation) SubAggregation

func (a *AdjacencyMatrixAggregation) SubAggregation(name string, subAggregation Aggregation) *AdjacencyMatrixAggregation

SubAggregation adds a sub-aggregation to this aggregation.

type Aggregation

type Aggregation interface {
	// Source returns a JSON-serializable aggregation that is a fragment
	// of the request sent to Elasticsearch.
	Source() (interface{}, error)
}

Aggregations can be seen as a unit of work that builds analytic information over a set of documents. It is (in many senses) the follow-up of facets in Elasticsearch. For more details about aggregations, visit: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations.html
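
Because the interface only requires Source, a custom aggregation can be sketched as any type that returns the desired JSON fragment (the type below is hypothetical):

type rawAggregation struct {
	body map[string]interface{}
}

// Source returns the JSON fragment embedded into the search request.
func (a rawAggregation) Source() (interface{}, error) {
	return a.body, nil
}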

type AggregationBucketAdjacencyMatrix

type AggregationBucketAdjacencyMatrix struct {
	Aggregations

	Buckets []*AggregationBucketKeyItem //`json:"buckets"`
	Meta    map[string]interface{}      // `json:"meta,omitempty"`
}

AggregationBucketAdjacencyMatrix is a multi-bucket aggregation that is returned with a AdjacencyMatrix aggregation.

func (*AggregationBucketAdjacencyMatrix) UnmarshalJSON

func (a *AggregationBucketAdjacencyMatrix) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationBucketAdjacencyMatrix structure.

type AggregationBucketCompositeItem

type AggregationBucketCompositeItem struct {
	Aggregations

	Key      map[string]interface{} //`json:"key"`
	DocCount int64                  //`json:"doc_count"`
}

AggregationBucketCompositeItem is a single bucket of an AggregationBucketCompositeItems structure.

func (*AggregationBucketCompositeItem) UnmarshalJSON

func (a *AggregationBucketCompositeItem) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationBucketCompositeItem structure.

type AggregationBucketCompositeItems

type AggregationBucketCompositeItems struct {
	Aggregations

	Buckets []*AggregationBucketCompositeItem //`json:"buckets"`
	Meta    map[string]interface{}            // `json:"meta,omitempty"`
}

AggregationBucketCompositeItems implements the response structure for a bucket aggregation of type composite.

func (*AggregationBucketCompositeItems) UnmarshalJSON

func (a *AggregationBucketCompositeItems) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationBucketCompositeItems structure.

type AggregationBucketFilters

type AggregationBucketFilters struct {
	Aggregations

	Buckets      []*AggregationBucketKeyItem          //`json:"buckets"`
	NamedBuckets map[string]*AggregationBucketKeyItem //`json:"buckets"`
	Meta         map[string]interface{}               // `json:"meta,omitempty"`
}

AggregationBucketFilters is a multi-bucket aggregation that is returned with a filters aggregation.

func (*AggregationBucketFilters) UnmarshalJSON

func (a *AggregationBucketFilters) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationBucketFilters structure.

type AggregationBucketHistogramItem

type AggregationBucketHistogramItem struct {
	Aggregations

	Key         float64 //`json:"key"`
	KeyAsString *string //`json:"key_as_string"`
	DocCount    int64   //`json:"doc_count"`
}

AggregationBucketHistogramItem is a single bucket of an AggregationBucketHistogramItems structure.

func (*AggregationBucketHistogramItem) UnmarshalJSON

func (a *AggregationBucketHistogramItem) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationBucketHistogramItem structure.

type AggregationBucketHistogramItems

type AggregationBucketHistogramItems struct {
	Aggregations

	Buckets []*AggregationBucketHistogramItem //`json:"buckets"`
	Meta    map[string]interface{}            // `json:"meta,omitempty"`
}

AggregationBucketHistogramItems is a bucket aggregation that is returned with a date histogram aggregation.

func (*AggregationBucketHistogramItems) UnmarshalJSON

func (a *AggregationBucketHistogramItems) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationBucketHistogramItems structure.

type AggregationBucketKeyItem

type AggregationBucketKeyItem struct {
	Aggregations

	Key         interface{} //`json:"key"`
	KeyAsString *string     //`json:"key_as_string"`
	KeyNumber   json.Number
	DocCount    int64 //`json:"doc_count"`
}

AggregationBucketKeyItem is a single bucket of an AggregationBucketKeyItems structure.

func (*AggregationBucketKeyItem) UnmarshalJSON

func (a *AggregationBucketKeyItem) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationBucketKeyItem structure.

type AggregationBucketKeyItems

type AggregationBucketKeyItems struct {
	Aggregations

	DocCountErrorUpperBound int64                       //`json:"doc_count_error_upper_bound"`
	SumOfOtherDocCount      int64                       //`json:"sum_other_doc_count"`
	Buckets                 []*AggregationBucketKeyItem //`json:"buckets"`
	Meta                    map[string]interface{}      // `json:"meta,omitempty"`
}

AggregationBucketKeyItems is a bucket aggregation that is e.g. returned with a terms aggregation.

func (*AggregationBucketKeyItems) UnmarshalJSON

func (a *AggregationBucketKeyItems) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationBucketKeyItems structure.

type AggregationBucketKeyedRangeItems

type AggregationBucketKeyedRangeItems struct {
	Aggregations

	DocCountErrorUpperBound int64                                  //`json:"doc_count_error_upper_bound"`
	SumOfOtherDocCount      int64                                  //`json:"sum_other_doc_count"`
	Buckets                 map[string]*AggregationBucketRangeItem //`json:"buckets"`
	Meta                    map[string]interface{}                 // `json:"meta,omitempty"`
}

AggregationBucketKeyedRangeItems is a bucket aggregation that is e.g. returned with a keyed range aggregation.

func (*AggregationBucketKeyedRangeItems) UnmarshalJSON

func (a *AggregationBucketKeyedRangeItems) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationBucketKeyedRangeItems structure.

type AggregationBucketRangeItem

type AggregationBucketRangeItem struct {
	Aggregations

	Key          string   //`json:"key"`
	DocCount     int64    //`json:"doc_count"`
	From         *float64 //`json:"from"`
	FromAsString string   //`json:"from_as_string"`
	To           *float64 //`json:"to"`
	ToAsString   string   //`json:"to_as_string"`
}

AggregationBucketRangeItem is a single bucket of an AggregationBucketRangeItems structure.

func (*AggregationBucketRangeItem) UnmarshalJSON

func (a *AggregationBucketRangeItem) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationBucketRangeItem structure.

type AggregationBucketRangeItems

type AggregationBucketRangeItems struct {
	Aggregations

	DocCountErrorUpperBound int64                         //`json:"doc_count_error_upper_bound"`
	SumOfOtherDocCount      int64                         //`json:"sum_other_doc_count"`
	Buckets                 []*AggregationBucketRangeItem //`json:"buckets"`
	Meta                    map[string]interface{}        // `json:"meta,omitempty"`
}

AggregationBucketRangeItems is a bucket aggregation that is e.g. returned with a range aggregation.

func (*AggregationBucketRangeItems) UnmarshalJSON

func (a *AggregationBucketRangeItems) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationBucketRangeItems structure.

type AggregationBucketSignificantTerm

type AggregationBucketSignificantTerm struct {
	Aggregations

	Key      string  //`json:"key"`
	DocCount int64   //`json:"doc_count"`
	BgCount  int64   //`json:"bg_count"`
	Score    float64 //`json:"score"`
}

AggregationBucketSignificantTerm is a single bucket of an AggregationBucketSignificantTerms structure.

func (*AggregationBucketSignificantTerm) UnmarshalJSON

func (a *AggregationBucketSignificantTerm) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationBucketSignificantTerm structure.

type AggregationBucketSignificantTerms

type AggregationBucketSignificantTerms struct {
	Aggregations

	DocCount int64                               //`json:"doc_count"`
	Buckets  []*AggregationBucketSignificantTerm //`json:"buckets"`
	Meta     map[string]interface{}              // `json:"meta,omitempty"`
}

AggregationBucketSignificantTerms is a bucket aggregation returned with a significant terms aggregation.

func (*AggregationBucketSignificantTerms) UnmarshalJSON

func (a *AggregationBucketSignificantTerms) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationBucketSignificantTerms structure.

type AggregationExtendedStatsMetric

type AggregationExtendedStatsMetric struct {
	Aggregations

	Count        int64                  // `json:"count"`
	Min          *float64               //`json:"min,omitempty"`
	Max          *float64               //`json:"max,omitempty"`
	Avg          *float64               //`json:"avg,omitempty"`
	Sum          *float64               //`json:"sum,omitempty"`
	SumOfSquares *float64               //`json:"sum_of_squares,omitempty"`
	Variance     *float64               //`json:"variance,omitempty"`
	StdDeviation *float64               //`json:"std_deviation,omitempty"`
	Meta         map[string]interface{} // `json:"meta,omitempty"`
}

AggregationExtendedStatsMetric is a multi-value metric, returned by an ExtendedStats aggregation.

func (*AggregationExtendedStatsMetric) UnmarshalJSON

func (a *AggregationExtendedStatsMetric) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationExtendedStatsMetric structure.

type AggregationGeoBoundsMetric

type AggregationGeoBoundsMetric struct {
	Aggregations

	Bounds struct {
		TopLeft struct {
			Latitude  float64 `json:"lat"`
			Longitude float64 `json:"lon"`
		} `json:"top_left"`
		BottomRight struct {
			Latitude  float64 `json:"lat"`
			Longitude float64 `json:"lon"`
		} `json:"bottom_right"`
	} `json:"bounds"`

	Meta map[string]interface{} // `json:"meta,omitempty"`
}

AggregationGeoBoundsMetric is a metric as returned by a GeoBounds aggregation.

func (*AggregationGeoBoundsMetric) UnmarshalJSON

func (a *AggregationGeoBoundsMetric) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationGeoBoundsMetric structure.

type AggregationGeoCentroidMetric

type AggregationGeoCentroidMetric struct {
	Aggregations

	Location struct {
		Latitude  float64 `json:"lat"`
		Longitude float64 `json:"lon"`
	} `json:"location"`

	Count int // `json:"count,omitempty"`

	Meta map[string]interface{} // `json:"meta,omitempty"`
}

AggregationGeoCentroidMetric is a metric as returned by a GeoCentroid aggregation.

func (*AggregationGeoCentroidMetric) UnmarshalJSON

func (a *AggregationGeoCentroidMetric) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationGeoCentroidMetric structure.

type AggregationMatrixStats

type AggregationMatrixStats struct {
	Aggregations

	Fields []*AggregationMatrixStatsField // `json:"field,omitempty"`
	Meta   map[string]interface{}         // `json:"meta,omitempty"`
}

AggregationMatrixStats is returned by a MatrixStats aggregation.

func (*AggregationMatrixStats) UnmarshalJSON

func (a *AggregationMatrixStats) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationMatrixStats structure.

type AggregationMatrixStatsField

type AggregationMatrixStatsField struct {
	Name        string             `json:"name"`
	Count       int64              `json:"count"`
	Mean        float64            `json:"mean,omitempty"`
	Variance    float64            `json:"variance,omitempty"`
	Skewness    float64            `json:"skewness,omitempty"`
	Kurtosis    float64            `json:"kurtosis,omitempty"`
	Covariance  map[string]float64 `json:"covariance,omitempty"`
	Correlation map[string]float64 `json:"correlation,omitempty"`
}

AggregationMatrixStatsField represents running stats of a single field returned from MatrixStats aggregation.

type AggregationPercentilesMetric

type AggregationPercentilesMetric struct {
	Aggregations

	Values map[string]float64     // `json:"values"`
	Meta   map[string]interface{} // `json:"meta,omitempty"`
}

AggregationPercentilesMetric is a multi-value metric, returned by a Percentiles aggregation.

func (*AggregationPercentilesMetric) UnmarshalJSON

func (a *AggregationPercentilesMetric) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationPercentilesMetric structure.

type AggregationPipelineBucketMetricValue

type AggregationPipelineBucketMetricValue struct {
	Aggregations

	Keys          []interface{}          // `json:"keys"`
	Value         *float64               // `json:"value"`
	ValueAsString string                 // `json:"value_as_string"`
	Meta          map[string]interface{} // `json:"meta,omitempty"`
}

AggregationPipelineBucketMetricValue is a value returned e.g. by a MaxBucket aggregation.

func (*AggregationPipelineBucketMetricValue) UnmarshalJSON

func (a *AggregationPipelineBucketMetricValue) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationPipelineBucketMetricValue structure.

type AggregationPipelineDerivative

type AggregationPipelineDerivative struct {
	Aggregations

	Value                   *float64               // `json:"value"`
	ValueAsString           string                 // `json:"value_as_string"`
	NormalizedValue         *float64               // `json:"normalized_value"`
	NormalizedValueAsString string                 // `json:"normalized_value_as_string"`
	Meta                    map[string]interface{} // `json:"meta,omitempty"`
}

AggregationPipelineDerivative is the value returned by a Derivative aggregation.

func (*AggregationPipelineDerivative) UnmarshalJSON

func (a *AggregationPipelineDerivative) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationPipelineDerivative structure.

type AggregationPipelinePercentilesMetric

type AggregationPipelinePercentilesMetric struct {
	Aggregations

	Values map[string]float64     // `json:"values"`
	Meta   map[string]interface{} // `json:"meta,omitempty"`
}

AggregationPipelinePercentilesMetric is the value returned by a pipeline percentiles metric aggregation.

func (*AggregationPipelinePercentilesMetric) UnmarshalJSON

func (a *AggregationPipelinePercentilesMetric) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationPipelinePercentilesMetric structure.

type AggregationPipelineSimpleValue

type AggregationPipelineSimpleValue struct {
	Aggregations

	Value         *float64               // `json:"value"`
	ValueAsString string                 // `json:"value_as_string"`
	Meta          map[string]interface{} // `json:"meta,omitempty"`
}

AggregationPipelineSimpleValue is a simple value, returned e.g. by a MovAvg aggregation.

func (*AggregationPipelineSimpleValue) UnmarshalJSON

func (a *AggregationPipelineSimpleValue) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationPipelineSimpleValue structure.

type AggregationPipelineStatsMetric

type AggregationPipelineStatsMetric struct {
	Aggregations

	Count         int64    // `json:"count"`
	CountAsString string   // `json:"count_as_string"`
	Min           *float64 // `json:"min"`
	MinAsString   string   // `json:"min_as_string"`
	Max           *float64 // `json:"max"`
	MaxAsString   string   // `json:"max_as_string"`
	Avg           *float64 // `json:"avg"`
	AvgAsString   string   // `json:"avg_as_string"`
	Sum           *float64 // `json:"sum"`
	SumAsString   string   // `json:"sum_as_string"`

	Meta map[string]interface{} // `json:"meta,omitempty"`
}

AggregationPipelineStatsMetric is a multi-value metric, returned e.g. by a StatsBucket aggregation.

func (*AggregationPipelineStatsMetric) UnmarshalJSON

func (a *AggregationPipelineStatsMetric) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationPipelineStatsMetric structure.

type AggregationSingleBucket

type AggregationSingleBucket struct {
	Aggregations

	DocCount int64                  // `json:"doc_count"`
	Meta     map[string]interface{} // `json:"meta,omitempty"`
}

AggregationSingleBucket is a single bucket, returned e.g. via an aggregation of type Global.

func (*AggregationSingleBucket) UnmarshalJSON

func (a *AggregationSingleBucket) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationSingleBucket structure.

type AggregationStatsMetric

type AggregationStatsMetric struct {
	Aggregations

	Count int64                  // `json:"count"`
	Min   *float64               //`json:"min,omitempty"`
	Max   *float64               //`json:"max,omitempty"`
	Avg   *float64               //`json:"avg,omitempty"`
	Sum   *float64               //`json:"sum,omitempty"`
	Meta  map[string]interface{} // `json:"meta,omitempty"`
}

AggregationStatsMetric is a multi-value metric, returned by a Stats aggregation.

func (*AggregationStatsMetric) UnmarshalJSON

func (a *AggregationStatsMetric) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationStatsMetric structure.

type AggregationTopHitsMetric

type AggregationTopHitsMetric struct {
	Aggregations

	Hits *SearchHits            //`json:"hits"`
	Meta map[string]interface{} // `json:"meta,omitempty"`
}

AggregationTopHitsMetric is a metric returned by a TopHits aggregation.

func (*AggregationTopHitsMetric) UnmarshalJSON

func (a *AggregationTopHitsMetric) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationTopHitsMetric structure.

type AggregationValueMetric

type AggregationValueMetric struct {
	Aggregations

	Value *float64               //`json:"value"`
	Meta  map[string]interface{} // `json:"meta,omitempty"`
}

AggregationValueMetric is a single-value metric, returned e.g. by a Min or Max aggregation.

func (*AggregationValueMetric) UnmarshalJSON

func (a *AggregationValueMetric) UnmarshalJSON(data []byte) error

UnmarshalJSON decodes JSON data and initializes an AggregationValueMetric structure.

type Aggregations

type Aggregations map[string]*json.RawMessage

Aggregations is a list of aggregations that are part of a search result.

Example
package main

import (
	"context"
	"fmt"
	"log"

	elastic "github.com/olivere/elastic"
)

func main() {
	// Get a client to the local Elasticsearch instance.
	client, err := elastic.NewClient()
	if err != nil {
		// Handle error
		panic(err)
	}

	// Create an aggregation for users and a sub-aggregation for a date histogram of tweets (per year).
	timeline := elastic.NewTermsAggregation().Field("user").Size(10).OrderByCountDesc()
	histogram := elastic.NewDateHistogramAggregation().Field("created").Interval("year")
	timeline = timeline.SubAggregation("history", histogram)

	// Search with a term query
	searchResult, err := client.Search().
		Index("twitter").                  // search in index "twitter"
		Query(elastic.NewMatchAllQuery()). // return all results, but ...
		SearchType("count").               // ... do not return hits, just the count
		Aggregation("timeline", timeline). // add our aggregation to the query
		Pretty(true).                      // pretty print request and response JSON
		Do(context.Background())           // execute
	if err != nil {
		// Handle error
		panic(err)
	}

	// Access "timeline" aggregate in search result.
	agg, found := searchResult.Aggregations.Terms("timeline")
	if !found {
		log.Fatalf("we should have a terms aggregation called %q", "timeline")
	}
	for _, userBucket := range agg.Buckets {
		// Every bucket should have the user field as key.
		user := userBucket.Key

		// The sub-aggregation history should have the number of tweets per year.
		histogram, found := userBucket.DateHistogram("history")
		if found {
			for _, year := range histogram.Buckets {
				fmt.Printf("user %q has %d tweets in %q\n", user, year.DocCount, year.KeyAsString)
			}
		}
	}
}
Output:

func (Aggregations) AvgBucket

AvgBucket returns average bucket pipeline aggregation results. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-avg-bucket-aggregation.html

func (Aggregations) BucketScript

func (a Aggregations) BucketScript(name string) (*AggregationPipelineSimpleValue, bool)

BucketScript returns bucket script pipeline aggregation results. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-bucket-script-aggregation.html

func (Aggregations) Cardinality

func (a Aggregations) Cardinality(name string) (*AggregationValueMetric, bool)

Cardinality returns cardinality aggregation results. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-cardinality-aggregation.html

func (Aggregations) Composite

Composite returns composite bucket aggregation results.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.1/search-aggregations-bucket-composite-aggregation.html for details.

func (Aggregations) CumulativeSum

func (a Aggregations) CumulativeSum(name string) (*AggregationPipelineSimpleValue, bool)

CumulativeSum returns a cumulative sum pipeline aggregation results. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-cumulative-sum-aggregation.html

func (Aggregations) DateHistogram

func (a Aggregations) DateHistogram(name string) (*AggregationBucketHistogramItems, bool)

DateHistogram returns date histogram aggregation results. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-datehistogram-aggregation.html

func (Aggregations) Derivative

func (a Aggregations) Derivative(name string) (*AggregationPipelineDerivative, bool)

Derivative returns derivative pipeline aggregation results. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-derivative-aggregation.html

func (Aggregations) DiversifiedSampler

func (a Aggregations) DiversifiedSampler(name string) (*AggregationSingleBucket, bool)

DiversifiedSampler returns diversified_sampler aggregation results. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-diversified-sampler-aggregation.html

func (Aggregations) ExtendedStats

func (a Aggregations) ExtendedStats(name string) (*AggregationExtendedStatsMetric, bool)

ExtendedStats returns extended stats aggregation results. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-extendedstats-aggregation.html

func (Aggregations) GeoDistance

func (a Aggregations) GeoDistance(name string) (*AggregationBucketRangeItems, bool)

GeoDistance returns geo distance aggregation results. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-geodistance-aggregation.html

func (Aggregations) MatrixStats

func (a Aggregations) MatrixStats(name string) (*AggregationMatrixStats, bool)

MatrixStats returns matrix stats aggregation results. https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-matrix-stats-aggregation.html

func (Aggregations) MaxBucket

MaxBucket returns maximum bucket pipeline aggregation results. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-max-bucket-aggregation.html

func (Aggregations) MinBucket

MinBucket returns minimum bucket pipeline aggregation results. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-min-bucket-aggregation.html

func (Aggregations) MovAvg

MovAvg returns moving average pipeline aggregation results. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-movavg-aggregation.html

func (Aggregations) PercentilesBucket

func (a Aggregations) PercentilesBucket(name string) (*AggregationPipelinePercentilesMetric, bool)

PercentilesBucket returns percentiles bucket pipeline aggregation results. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-percentiles-bucket-aggregation.html

func (Aggregations) SerialDiff

func (a Aggregations) SerialDiff(name string) (*AggregationPipelineSimpleValue, bool)

SerialDiff returns serial differencing pipeline aggregation results. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-serialdiff-aggregation.html

func (Aggregations) SignificantTerms

func (a Aggregations) SignificantTerms(name string) (*AggregationBucketSignificantTerms, bool)

SignificantTerms returns significant terms aggregation results. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-significantterms-aggregation.html

func (Aggregations) StatsBucket

func (a Aggregations) StatsBucket(name string) (*AggregationPipelineStatsMetric, bool)

StatsBucket returns stats bucket pipeline aggregation results. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-stats-bucket-aggregation.html

func (Aggregations) SumBucket

SumBucket returns sum bucket pipeline aggregation results. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-sum-bucket-aggregation.html

func (Aggregations) ValueCount

func (a Aggregations) ValueCount(name string) (*AggregationValueMetric, bool)

ValueCount returns value-count aggregation results. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-valuecount-aggregation.html

type AliasAction

type AliasAction interface {
	Source() (interface{}, error)
}

AliasAction is an action to apply to an alias, e.g. "add" or "remove".

type AliasAddAction

type AliasAddAction struct {
	// contains filtered or unexported fields
}

AliasAddAction is an action to add an alias.

func NewAliasAddAction

func NewAliasAddAction(alias string) *AliasAddAction

NewAliasAddAction returns an action to add an alias.

func (*AliasAddAction) Filter

func (a *AliasAddAction) Filter(filter Query) *AliasAddAction

Filter associates a filter to the alias.

func (*AliasAddAction) Index

func (a *AliasAddAction) Index(index ...string) *AliasAddAction

Index associates one or more indices to the alias.

func (*AliasAddAction) IndexRouting

func (a *AliasAddAction) IndexRouting(routing string) *AliasAddAction

IndexRouting associates an index routing value to the alias.

func (*AliasAddAction) Routing

func (a *AliasAddAction) Routing(routing string) *AliasAddAction

Routing associates a routing value to the alias. This basically sets index and search routing to the same value.

func (*AliasAddAction) SearchRouting

func (a *AliasAddAction) SearchRouting(routing ...string) *AliasAddAction

SearchRouting associates a search routing value to the alias.

func (*AliasAddAction) Source

func (a *AliasAddAction) Source() (interface{}, error)

Source returns the JSON-serializable data.

func (*AliasAddAction) Validate

func (a *AliasAddAction) Validate() error

Validate checks if the operation is valid.

type AliasRemoveAction

type AliasRemoveAction struct {
	// contains filtered or unexported fields
}

AliasRemoveAction is an action to remove an alias.

func NewAliasRemoveAction

func NewAliasRemoveAction(alias string) *AliasRemoveAction

NewAliasRemoveAction returns an action to remove an alias.

func (*AliasRemoveAction) Index

func (a *AliasRemoveAction) Index(index ...string) *AliasRemoveAction

Index associates one or more indices to the alias.

func (*AliasRemoveAction) Source

func (a *AliasRemoveAction) Source() (interface{}, error)

Source returns the JSON-serializable data.

func (*AliasRemoveAction) Validate

func (a *AliasRemoveAction) Validate() error

Validate checks if the operation is valid.

type AliasResult

type AliasResult struct {
	Acknowledged       bool   `json:"acknowledged"`
	ShardsAcknowledged bool   `json:"shards_acknowledged"`
	Index              string `json:"index,omitempty"`
}

AliasResult is the outcome of calling Do on AliasService.

type AliasService

type AliasService struct {
	// contains filtered or unexported fields
}

AliasService enables users to add or remove an alias. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-aliases.html for details.

func NewAliasService

func NewAliasService(client *Client) *AliasService

NewAliasService creates a new AliasService to manage aliases.

func (*AliasService) Action

func (s *AliasService) Action(action ...AliasAction) *AliasService

Action accepts one or more AliasAction instances which can be of type AliasAddAction or AliasRemoveAction.

func (*AliasService) Add

func (s *AliasService) Add(indexName string, aliasName string) *AliasService

Add adds an alias to an index.

func (*AliasService) AddWithFilter

func (s *AliasService) AddWithFilter(indexName string, aliasName string, filter Query) *AliasService

AddWithFilter adds an alias to an index and associates a filter with the alias.

func (*AliasService) Do

func (s *AliasService) Do(ctx context.Context) (*AliasResult, error)

Do executes the command.

func (*AliasService) Pretty

func (s *AliasService) Pretty(pretty bool) *AliasService

Pretty asks Elasticsearch to indent the HTTP response.

func (*AliasService) Remove

func (s *AliasService) Remove(indexName string, aliasName string) *AliasService

Remove removes an alias.
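
A minimal sketch of atomically switching an alias from one index to another (the index and alias names are hypothetical):

res, err := client.Alias().
	Add("twitter_v2", "twitter").
	Remove("twitter_v1", "twitter").
	Do(context.Background())
if err != nil {
	// Handle error
}
if !res.Acknowledged {
	// Not acknowledged
}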

type AliasesResult

type AliasesResult struct {
	Indices map[string]indexResult
}

func (AliasesResult) IndicesByAlias

func (ar AliasesResult) IndicesByAlias(aliasName string) []string

type AliasesService

type AliasesService struct {
	// contains filtered or unexported fields
}

AliasesService returns the aliases associated with one or more indices. See http://www.elastic.co/guide/en/elasticsearch/reference/5.2/indices-aliases.html.

func NewAliasesService

func NewAliasesService(client *Client) *AliasesService

NewAliasesService instantiates a new AliasesService.

func (*AliasesService) Do

func (*AliasesService) Index

func (s *AliasesService) Index(index ...string) *AliasesService

Index adds one or more indices.

func (*AliasesService) Pretty

func (s *AliasesService) Pretty(pretty bool) *AliasesService

Pretty asks Elasticsearch to indent the returned JSON.
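
A minimal sketch of looking up which indices an alias currently points to (the alias name is hypothetical):

res, err := client.Aliases().Index("_all").Do(context.Background())
if err != nil {
	// Handle error
}
for _, index := range res.IndicesByAlias("twitter") {
	fmt.Printf("alias twitter points to index %s\n", index)
}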

type AvgAggregation

type AvgAggregation struct {
	// contains filtered or unexported fields
}

AvgAggregation is a single-value metrics aggregation that computes the average of numeric values that are extracted from the aggregated documents. These values can be extracted either from specific numeric fields in the documents, or be generated by a provided script. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-avg-aggregation.html

func NewAvgAggregation

func NewAvgAggregation() *AvgAggregation

func (*AvgAggregation) Field

func (a *AvgAggregation) Field(field string) *AvgAggregation

func (*AvgAggregation) Format

func (a *AvgAggregation) Format(format string) *AvgAggregation

func (*AvgAggregation) Meta

func (a *AvgAggregation) Meta(metaData map[string]interface{}) *AvgAggregation

Meta sets the meta data to be included in the aggregation response.

func (*AvgAggregation) Script

func (a *AvgAggregation) Script(script *Script) *AvgAggregation

func (*AvgAggregation) Source

func (a *AvgAggregation) Source() (interface{}, error)

func (*AvgAggregation) SubAggregation

func (a *AvgAggregation) SubAggregation(name string, subAggregation Aggregation) *AvgAggregation
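
A hedged sketch of computing the average of a numeric field and reading it back via the Avg accessor on Aggregations (the field and aggregation names are hypothetical; the accessor is assumed to be available alongside the accessors listed above):

avg := elastic.NewAvgAggregation().Field("retweets")
res, err := client.Search().
	Index("twitter").
	Size(0). // we only care about the aggregation, not the hits
	Aggregation("avg_retweets", avg).
	Do(context.Background())
if err != nil {
	// Handle error
}
if metric, found := res.Aggregations.Avg("avg_retweets"); found && metric.Value != nil {
	fmt.Printf("average retweets: %f\n", *metric.Value)
}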

type AvgBucketAggregation

type AvgBucketAggregation struct {
	// contains filtered or unexported fields
}

AvgBucketAggregation is a sibling pipeline aggregation which calculates the (mean) average value of a specified metric in a sibling aggregation. The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-pipeline-avg-bucket-aggregation.html

func NewAvgBucketAggregation

func NewAvgBucketAggregation() *AvgBucketAggregation

NewAvgBucketAggregation creates and initializes a new AvgBucketAggregation.

func (*AvgBucketAggregation) BucketsPath

func (a *AvgBucketAggregation) BucketsPath(bucketsPaths ...string) *AvgBucketAggregation

BucketsPath sets the paths to the buckets to use for this pipeline aggregator.

func (*AvgBucketAggregation) Format

Format sets the format to use on the output of this aggregation.

func (*AvgBucketAggregation) GapInsertZeros

func (a *AvgBucketAggregation) GapInsertZeros() *AvgBucketAggregation

GapInsertZeros inserts zeros for gaps in the series.

func (*AvgBucketAggregation) GapPolicy

func (a *AvgBucketAggregation) GapPolicy(gapPolicy string) *AvgBucketAggregation

GapPolicy defines what should be done when a gap in the series is discovered. Valid values include "insert_zeros" or "skip". Default is "insert_zeros".

func (*AvgBucketAggregation) GapSkip

GapSkip skips gaps in the series.

func (*AvgBucketAggregation) Meta

func (a *AvgBucketAggregation) Meta(metaData map[string]interface{}) *AvgBucketAggregation

Meta sets the meta data to be included in the aggregation response.

func (*AvgBucketAggregation) Source

func (a *AvgBucketAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface.

type Backoff

type Backoff interface {
	// Next implements a BackoffFunc.
	Next(retry int) (time.Duration, bool)
}

Backoff allows callers to implement their own Backoff strategy.

type BackoffFunc

type BackoffFunc func(retry int) (time.Duration, bool)

BackoffFunc specifies the signature of a function that returns the time to wait before the next call to a resource. To stop retrying return false in the 2nd return value.
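
Example (a minimal sketch of a custom strategy that satisfies the Backoff interface above):

// constantBackoff waits 250ms between attempts and gives up after 5 retries.
type constantBackoff struct{}

func (constantBackoff) Next(retry int) (time.Duration, bool) {
  if retry >= 5 {
    return 0, false // stop retrying
  }
  return 250 * time.Millisecond, true
}

A value of this type can be wrapped with NewBackoffRetrier (below) to act as the client's retry strategy.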

type BackoffRetrier

type BackoffRetrier struct {
	// contains filtered or unexported fields
}

BackoffRetrier is a Retrier implementation that uses the given backoff strategy to compute the wait interval between retries.

func NewBackoffRetrier

func NewBackoffRetrier(backoff Backoff) *BackoffRetrier

NewBackoffRetrier returns a retrier that uses the given backoff strategy.

func (*BackoffRetrier) Retry

func (r *BackoffRetrier) Retry(ctx context.Context, retry int, req *http.Request, resp *http.Response, err error) (time.Duration, bool, error)

Retry calls into the backoff strategy and returns its wait interval.

type BoolQuery

type BoolQuery struct {
	Query
	// contains filtered or unexported fields
}

A bool query matches documents matching boolean combinations of other queries. For more details, see: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-bool-query.html
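
Example (a sketch; NewMatchQuery, NewTermQuery and NewRangeQuery are assumed from elsewhere in this package):

q := elastic.NewBoolQuery().
  Must(elastic.NewMatchQuery("message", "error")).
  Filter(elastic.NewRangeQuery("retweets").Gte(10)).
  MustNot(elastic.NewTermQuery("user", "bot")).
  Should(elastic.NewTermQuery("tags", "urgent"))
src, err := q.Source() // the JSON body that would be sent to Elasticsearch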

func NewBoolQuery

func NewBoolQuery() *BoolQuery

Creates a new bool query.

func (*BoolQuery) AdjustPureNegative

func (q *BoolQuery) AdjustPureNegative(adjustPureNegative bool) *BoolQuery

func (*BoolQuery) Boost

func (q *BoolQuery) Boost(boost float64) *BoolQuery

func (*BoolQuery) Filter

func (q *BoolQuery) Filter(filters ...Query) *BoolQuery

func (*BoolQuery) MinimumNumberShouldMatch

func (q *BoolQuery) MinimumNumberShouldMatch(minimumNumberShouldMatch int) *BoolQuery

func (*BoolQuery) MinimumShouldMatch

func (q *BoolQuery) MinimumShouldMatch(minimumShouldMatch string) *BoolQuery

func (*BoolQuery) Must

func (q *BoolQuery) Must(queries ...Query) *BoolQuery

func (*BoolQuery) MustNot

func (q *BoolQuery) MustNot(queries ...Query) *BoolQuery

func (*BoolQuery) QueryName

func (q *BoolQuery) QueryName(queryName string) *BoolQuery

func (*BoolQuery) Should

func (q *BoolQuery) Should(queries ...Query) *BoolQuery

func (*BoolQuery) Source

func (q *BoolQuery) Source() (interface{}, error)

Creates the query source for the bool query.

type BoostingQuery

type BoostingQuery struct {
	Query
	// contains filtered or unexported fields
}

A boosting query can be used to effectively demote results that match a given query. For more details, see: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-boosting-query.html

func NewBoostingQuery

func NewBoostingQuery() *BoostingQuery

Creates a new boosting query.

func (*BoostingQuery) Boost

func (q *BoostingQuery) Boost(boost float64) *BoostingQuery

func (*BoostingQuery) Negative

func (q *BoostingQuery) Negative(negative Query) *BoostingQuery

func (*BoostingQuery) NegativeBoost

func (q *BoostingQuery) NegativeBoost(negativeBoost float64) *BoostingQuery

func (*BoostingQuery) Positive

func (q *BoostingQuery) Positive(positive Query) *BoostingQuery

func (*BoostingQuery) Source

func (q *BoostingQuery) Source() (interface{}, error)

Creates the query source for the boosting query.

type BucketCountThresholds

type BucketCountThresholds struct {
	MinDocCount      *int64
	ShardMinDocCount *int64
	RequiredSize     *int
	ShardSize        *int
}

BucketCountThresholds is used in e.g. terms and significant text aggregations.

type BucketScriptAggregation

type BucketScriptAggregation struct {
	// contains filtered or unexported fields
}

BucketScriptAggregation is a parent pipeline aggregation which executes a script which can perform per bucket computations on specified metrics in the parent multi-bucket aggregation. The specified metric must be numeric and the script must return a numeric value.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-pipeline-bucket-script-aggregation.html
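
Example (a sketch; the pipeline aggregation is added as a sub-aggregation of the multi-bucket parent it operates on. NewDateHistogramAggregation, NewSumAggregation and NewScript are assumed from elsewhere in this package):

histo := elastic.NewDateHistogramAggregation().Field("date").Interval("month").
  SubAggregation("total_sales", elastic.NewSumAggregation().Field("price")).
  SubAggregation("shipping", elastic.NewSumAggregation().Field("shipping")).
  SubAggregation("shipping_share", elastic.NewBucketScriptAggregation().
    AddBucketsPath("total", "total_sales").
    AddBucketsPath("ship", "shipping").
    Script(elastic.NewScript("params.ship / params.total")))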

func NewBucketScriptAggregation

func NewBucketScriptAggregation() *BucketScriptAggregation

NewBucketScriptAggregation creates and initializes a new BucketScriptAggregation.

func (*BucketScriptAggregation) AddBucketsPath

func (a *BucketScriptAggregation) AddBucketsPath(name, path string) *BucketScriptAggregation

AddBucketsPath adds a bucket path to use for this pipeline aggregator.

func (*BucketScriptAggregation) BucketsPathsMap

func (a *BucketScriptAggregation) BucketsPathsMap(bucketsPathsMap map[string]string) *BucketScriptAggregation

BucketsPathsMap sets the paths to the buckets to use for this pipeline aggregator.

func (*BucketScriptAggregation) Format

Format to use on the output of this aggregation.

func (*BucketScriptAggregation) GapInsertZeros

func (a *BucketScriptAggregation) GapInsertZeros() *BucketScriptAggregation

GapInsertZeros inserts zeros for gaps in the series.

func (*BucketScriptAggregation) GapPolicy

func (a *BucketScriptAggregation) GapPolicy(gapPolicy string) *BucketScriptAggregation

GapPolicy defines what should be done when a gap in the series is discovered. Valid values include "insert_zeros" or "skip". Default is "insert_zeros".

func (*BucketScriptAggregation) GapSkip

GapSkip skips gaps in the series.

func (*BucketScriptAggregation) Meta

func (a *BucketScriptAggregation) Meta(metaData map[string]interface{}) *BucketScriptAggregation

Meta sets the meta data to be included in the aggregation response.

func (*BucketScriptAggregation) Script

Script is the script to run.

func (*BucketScriptAggregation) Source

func (a *BucketScriptAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface.

type BucketSelectorAggregation

type BucketSelectorAggregation struct {
	// contains filtered or unexported fields
}

BucketSelectorAggregation is a parent pipeline aggregation which determines whether the current bucket will be retained in the parent multi-bucket aggregation. The specified metric must be numeric and the script must return a boolean value. If the script language is expression then a numeric return value is permitted. In this case 0.0 will be evaluated as false and all other values will evaluate to true.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-pipeline-bucket-selector-aggregation.html

func NewBucketSelectorAggregation

func NewBucketSelectorAggregation() *BucketSelectorAggregation

NewBucketSelectorAggregation creates and initializes a new BucketSelectorAggregation.

func (*BucketSelectorAggregation) AddBucketsPath

func (a *BucketSelectorAggregation) AddBucketsPath(name, path string) *BucketSelectorAggregation

AddBucketsPath adds a bucket path to use for this pipeline aggregator.

func (*BucketSelectorAggregation) BucketsPathsMap

func (a *BucketSelectorAggregation) BucketsPathsMap(bucketsPathsMap map[string]string) *BucketSelectorAggregation

BucketsPathsMap sets the paths to the buckets to use for this pipeline aggregator.

func (*BucketSelectorAggregation) Format

Format to use on the output of this aggregation.

func (*BucketSelectorAggregation) GapInsertZeros

GapInsertZeros inserts zeros for gaps in the series.

func (*BucketSelectorAggregation) GapPolicy

GapPolicy defines what should be done when a gap in the series is discovered. Valid values include "insert_zeros" or "skip". Default is "insert_zeros".

func (*BucketSelectorAggregation) GapSkip

GapSkip skips gaps in the series.

func (*BucketSelectorAggregation) Meta

func (a *BucketSelectorAggregation) Meta(metaData map[string]interface{}) *BucketSelectorAggregation

Meta sets the meta data to be included in the aggregation response.

func (*BucketSelectorAggregation) Script

Script is the script to run.

func (*BucketSelectorAggregation) Source

func (a *BucketSelectorAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface.

type BucketSortAggregation

type BucketSortAggregation struct {
	// contains filtered or unexported fields
}

BucketSortAggregation is a parent pipeline aggregation which sorts the buckets of its parent multi-bucket aggregation. Zero or more sort fields may be specified together with the corresponding sort order. Each bucket may be sorted based on its _key, _count or its sub-aggregations. In addition, the parameters from and size may be set in order to truncate the result buckets.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-pipeline-bucket-sort-aggregation.html
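
Example (a sketch; NewDateHistogramAggregation and NewSumAggregation are assumed from elsewhere in this package, and From/Size are assumed to take an int):

histo := elastic.NewDateHistogramAggregation().Field("date").Interval("month").
  SubAggregation("total_sales", elastic.NewSumAggregation().Field("price")).
  SubAggregation("sales_bucket_sort", elastic.NewBucketSortAggregation().
    Sort("total_sales", false). // sort buckets by the "total_sales" sub-aggregation, descending
    Size(3))                    // keep only the top 3 buckets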

func NewBucketSortAggregation

func NewBucketSortAggregation() *BucketSortAggregation

NewBucketSortAggregation creates and initializes a new BucketSortAggregation.

func (*BucketSortAggregation) From

From adds the "from" parameter to the aggregation.

func (*BucketSortAggregation) GapInsertZeros

func (a *BucketSortAggregation) GapInsertZeros() *BucketSortAggregation

GapInsertZeros inserts zeros for gaps in the series.

func (*BucketSortAggregation) GapPolicy

func (a *BucketSortAggregation) GapPolicy(gapPolicy string) *BucketSortAggregation

GapPolicy defines what should be done when a gap in the series is discovered. Valid values include "insert_zeros" or "skip". Default is "skip".

func (*BucketSortAggregation) GapSkip

GapSkip skips gaps in the series.

func (*BucketSortAggregation) Meta

func (a *BucketSortAggregation) Meta(meta map[string]interface{}) *BucketSortAggregation

Meta sets the meta data in the aggregation. Although Elasticsearch supports metadata for this aggregation, it is of little use here because the aggregation does not add new data to the response; it merely reorders parent buckets.

func (*BucketSortAggregation) Size

Size adds the "size" parameter to the aggregation.

func (*BucketSortAggregation) Sort

func (a *BucketSortAggregation) Sort(field string, ascending bool) *BucketSortAggregation

Sort adds a sort order to the list of sorters.

func (*BucketSortAggregation) SortWithInfo

func (a *BucketSortAggregation) SortWithInfo(info SortInfo) *BucketSortAggregation

SortWithInfo adds a SortInfo to the list of sorters.

func (*BucketSortAggregation) Source

func (a *BucketSortAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface.

type BulkAfterFunc

type BulkAfterFunc func(executionId int64, requests []BulkableRequest, response *BulkResponse, err error)

BulkAfterFunc defines the signature of callbacks that are executed after a commit to Elasticsearch. The err parameter signals an error.

type BulkBeforeFunc

type BulkBeforeFunc func(executionId int64, requests []BulkableRequest)

BulkBeforeFunc defines the signature of callbacks that are executed before a commit to Elasticsearch.

type BulkDeleteRequest

type BulkDeleteRequest struct {
	BulkableRequest
	// contains filtered or unexported fields
}

BulkDeleteRequest is a request to remove a document from Elasticsearch.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-bulk.html for details.

func NewBulkDeleteRequest

func NewBulkDeleteRequest() *BulkDeleteRequest

NewBulkDeleteRequest returns a new BulkDeleteRequest.

func (*BulkDeleteRequest) Id

Id specifies the identifier of the document to delete.

func (*BulkDeleteRequest) Index

func (r *BulkDeleteRequest) Index(index string) *BulkDeleteRequest

Index specifies the Elasticsearch index to use for this delete request. If unspecified, the index set on the BulkService will be used.

func (*BulkDeleteRequest) Parent

func (r *BulkDeleteRequest) Parent(parent string) *BulkDeleteRequest

Parent specifies the parent of the request, which is used in parent/child mappings.

func (*BulkDeleteRequest) Routing

func (r *BulkDeleteRequest) Routing(routing string) *BulkDeleteRequest

Routing specifies a routing value for the request.

func (*BulkDeleteRequest) Source

func (r *BulkDeleteRequest) Source() ([]string, error)

Source returns the on-wire representation of the delete request, split into an action-and-meta-data line and an (optional) source line. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-bulk.html for details.

func (*BulkDeleteRequest) String

func (r *BulkDeleteRequest) String() string

String returns the on-wire representation of the delete request, concatenated as a single string.

func (*BulkDeleteRequest) Type

Type specifies the Elasticsearch type to use for this delete request. If unspecified, the type set on the BulkService will be used.

func (*BulkDeleteRequest) UseEasyJSON

func (r *BulkDeleteRequest) UseEasyJSON(enable bool) *BulkDeleteRequest

UseEasyJSON is an experimental setting that enables serialization with github.com/mailru/easyjson, which should result in faster serialization and fewer allocations, but removes compatibility with encoding/json, uses unsafe, etc. See https://github.com/mailru/easyjson#issues-notes-and-limitations for details. This setting is disabled by default.

func (*BulkDeleteRequest) Version

func (r *BulkDeleteRequest) Version(version int64) *BulkDeleteRequest

Version indicates the version to be deleted as part of an optimistic concurrency model.

func (*BulkDeleteRequest) VersionType

func (r *BulkDeleteRequest) VersionType(versionType string) *BulkDeleteRequest

VersionType can be "internal" (default), "external", "external_gte", or "external_gt".

type BulkIndexByScrollResponse

type BulkIndexByScrollResponse struct {
	Took             int64  `json:"took"`
	SliceId          *int64 `json:"slice_id,omitempty"`
	TimedOut         bool   `json:"timed_out"`
	Total            int64  `json:"total"`
	Updated          int64  `json:"updated,omitempty"`
	Created          int64  `json:"created,omitempty"`
	Deleted          int64  `json:"deleted"`
	Batches          int64  `json:"batches"`
	VersionConflicts int64  `json:"version_conflicts"`
	Noops            int64  `json:"noops"`
	Retries          struct {
		Bulk   int64 `json:"bulk"`
		Search int64 `json:"search"`
	} `json:"retries,omitempty"`
	Throttled            string                             `json:"throttled"`
	ThrottledMillis      int64                              `json:"throttled_millis"`
	RequestsPerSecond    float64                            `json:"requests_per_second"`
	Canceled             string                             `json:"canceled,omitempty"`
	ThrottledUntil       string                             `json:"throttled_until"`
	ThrottledUntilMillis int64                              `json:"throttled_until_millis"`
	Failures             []bulkIndexByScrollResponseFailure `json:"failures"`
}

BulkIndexByScrollResponse is the outcome of executing Do with DeleteByQueryService and UpdateByQueryService.

type BulkIndexRequest

type BulkIndexRequest struct {
	BulkableRequest
	// contains filtered or unexported fields
}

BulkIndexRequest is a request to add a document to Elasticsearch.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-bulk.html for details.

func NewBulkIndexRequest

func NewBulkIndexRequest() *BulkIndexRequest

NewBulkIndexRequest returns a new BulkIndexRequest. The operation type is "index" by default.

func (*BulkIndexRequest) Doc

func (r *BulkIndexRequest) Doc(doc interface{}) *BulkIndexRequest

Doc specifies the document to index.

func (*BulkIndexRequest) Id

Id specifies the identifier of the document to index.

func (*BulkIndexRequest) Index

func (r *BulkIndexRequest) Index(index string) *BulkIndexRequest

Index specifies the Elasticsearch index to use for this index request. If unspecified, the index set on the BulkService will be used.

func (*BulkIndexRequest) OpType

func (r *BulkIndexRequest) OpType(opType string) *BulkIndexRequest

OpType specifies if this request should follow create-only or upsert behavior. This follows the OpType of the standard document index API. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-index_.html#operation-type for details.

func (*BulkIndexRequest) Parent

func (r *BulkIndexRequest) Parent(parent string) *BulkIndexRequest

Parent specifies the identifier of the parent document (if available).

func (*BulkIndexRequest) Pipeline

func (r *BulkIndexRequest) Pipeline(pipeline string) *BulkIndexRequest

Pipeline to use while processing the request.

func (*BulkIndexRequest) RetryOnConflict

func (r *BulkIndexRequest) RetryOnConflict(retryOnConflict int) *BulkIndexRequest

RetryOnConflict specifies how often to retry in case of a version conflict.

func (*BulkIndexRequest) Routing

func (r *BulkIndexRequest) Routing(routing string) *BulkIndexRequest

Routing specifies a routing value for the request.

func (*BulkIndexRequest) Source

func (r *BulkIndexRequest) Source() ([]string, error)

Source returns the on-wire representation of the index request, split into an action-and-meta-data line and an (optional) source line. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-bulk.html for details.

func (*BulkIndexRequest) String

func (r *BulkIndexRequest) String() string

String returns the on-wire representation of the index request, concatenated as a single string.

func (*BulkIndexRequest) Type

Type specifies the Elasticsearch type to use for this index request. If unspecified, the type set on the BulkService will be used.

func (*BulkIndexRequest) UseEasyJSON

func (r *BulkIndexRequest) UseEasyJSON(enable bool) *BulkIndexRequest

UseEasyJSON is an experimental setting that enables serialization with github.com/mailru/easyjson, which should result in faster serialization and fewer allocations, but removes compatibility with encoding/json, uses unsafe, etc. See https://github.com/mailru/easyjson#issues-notes-and-limitations for details. This setting is disabled by default.

func (*BulkIndexRequest) Version

func (r *BulkIndexRequest) Version(version int64) *BulkIndexRequest

Version indicates the version of the document as part of an optimistic concurrency model.

func (*BulkIndexRequest) VersionType

func (r *BulkIndexRequest) VersionType(versionType string) *BulkIndexRequest

VersionType specifies how versions are created. It can be e.g. internal, external, external_gte, or force.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-index_.html#index-versioning for details.

type BulkProcessor

type BulkProcessor struct {
	// contains filtered or unexported fields
}

BulkProcessor encapsulates a task that accepts bulk requests and orchestrates committing them to Elasticsearch via one or more workers.

BulkProcessor is returned by setting up a BulkProcessorService and calling the Do method.

func (*BulkProcessor) Add

func (p *BulkProcessor) Add(request BulkableRequest)

Add adds a single request to be committed by the BulkProcessorService.

The caller is responsible for setting the index and type on the request.

func (*BulkProcessor) Close

func (p *BulkProcessor) Close() error

Close stops the bulk processor previously started with Do. If it is already stopped, this is a no-op and nil is returned.

By implementing Close, BulkProcessor implements the io.Closer interface.

func (*BulkProcessor) Flush

func (p *BulkProcessor) Flush() error

Flush manually asks all workers to commit their outstanding requests. It returns only when all workers acknowledge completion.

func (*BulkProcessor) Start

func (p *BulkProcessor) Start(ctx context.Context) error

Start starts the bulk processor. If the processor is already started, nil is returned.

func (*BulkProcessor) Stats

func (p *BulkProcessor) Stats() BulkProcessorStats

Stats returns the latest bulk processor statistics. Collecting stats must be enabled first by calling Stats(true) on the service that created this processor.

func (*BulkProcessor) Stop

func (p *BulkProcessor) Stop() error

Stop is an alias for Close.

type BulkProcessorService

type BulkProcessorService struct {
	// contains filtered or unexported fields
}

BulkProcessorService allows you to easily process bulk requests. It lets you set policies for when to flush new bulk requests, e.g. based on the number of actions, on the size of the actions, and/or periodically. It also lets you control the number of concurrent bulk requests allowed to be executed in parallel.

BulkProcessorService, by default, commits either every 1000 requests or when the (estimated) size of the bulk requests exceeds 5 MB. However, it does not commit periodically. BulkProcessorService also retries by default, using an exponential backoff algorithm.

The caller is responsible for setting the index and type on every bulk request added to BulkProcessorService.

BulkProcessorService takes ideas from the BulkProcessor of the Elasticsearch Java API as documented in https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/java-docs-bulk-processor.html.
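
Example (a sketch using the setters documented below; Do is assumed to return (*BulkProcessor, error) and After to accept a BulkAfterFunc):

p, err := client.BulkProcessor().
  Name("orders-worker").
  Workers(2).
  BulkActions(500).                // flush after 500 added requests ...
  BulkSize(2 << 20).               // ... or ~2 MB, whichever comes first
  FlushInterval(30 * time.Second). // and at least every 30 seconds
  Stats(true).
  After(func(executionId int64, requests []elastic.BulkableRequest, response *elastic.BulkResponse, err error) {
    if err != nil {
      log.Printf("bulk commit %d failed: %v", executionId, err)
    }
  }).
  Do(context.Background())
if err != nil {
  // handle error
}
defer p.Close()

p.Add(elastic.NewBulkIndexRequest().Index("orders").Type("order").Id("1").Doc(map[string]interface{}{"amount": 42}))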

func NewBulkProcessorService

func NewBulkProcessorService(client *Client) *BulkProcessorService

NewBulkProcessorService creates a new BulkProcessorService.

func (*BulkProcessorService) After

After specifies a function to be executed when bulk requests have been committed to Elasticsearch. The After callback executes both when the commit was successful and when it failed.

func (*BulkProcessorService) Backoff

func (s *BulkProcessorService) Backoff(backoff Backoff) *BulkProcessorService

Backoff sets the backoff strategy to use for errors.

func (*BulkProcessorService) Before

Before specifies a function to be executed before bulk requests get committed to Elasticsearch.

func (*BulkProcessorService) BulkActions

func (s *BulkProcessorService) BulkActions(bulkActions int) *BulkProcessorService

BulkActions specifies when to flush based on the number of actions currently added. Defaults to 1000 and can be set to -1 to be disabled.

func (*BulkProcessorService) BulkSize

func (s *BulkProcessorService) BulkSize(bulkSize int) *BulkProcessorService

BulkSize specifies when to flush based on the size (in bytes) of the actions currently added. Defaults to 5 MB and can be set to -1 to be disabled.

func (*BulkProcessorService) Do

Do creates a new BulkProcessor and starts it. Consider the BulkProcessor as a running instance that accepts bulk requests and commits them to Elasticsearch, spreading the work across one or more workers.

You can interoperate with the BulkProcessor returned by Do, e.g. Start and Stop (or Close) it.

Context is an optional context that is passed into the bulk request service calls. In contrast to other operations, this context is used in a long running process. You could use it to pass e.g. loggers, but you shouldn't use it for cancellation.

Calling Do several times returns new BulkProcessors. You probably don't want to do this. BulkProcessorService implements just a builder pattern.

func (*BulkProcessorService) FlushInterval

func (s *BulkProcessorService) FlushInterval(interval time.Duration) *BulkProcessorService

FlushInterval specifies when to flush at the end of the given interval. This is disabled by default. If you want the bulk processor to operate completely asynchronously, set both BulkActions and BulkSize to -1 and set the FlushInterval to a meaningful interval.

func (*BulkProcessorService) Name

Name is an optional name to identify this bulk processor.

func (*BulkProcessorService) Stats

func (s *BulkProcessorService) Stats(wantStats bool) *BulkProcessorService

Stats tells the bulk processor to gather stats while running. Use the Stats method on the BulkProcessor to retrieve them. This is disabled by default.

func (*BulkProcessorService) Workers

Workers sets the number of concurrent workers. Defaults to 1 and must be greater than or equal to 1.

type BulkProcessorStats

type BulkProcessorStats struct {
	Flushed   int64 // number of times the flush interval has been invoked
	Committed int64 // # of times workers committed bulk requests
	Indexed   int64 // # of requests indexed
	Created   int64 // # of requests that ES reported as creates (201)
	Updated   int64 // # of requests that ES reported as updates
	Deleted   int64 // # of requests that ES reported as deletes
	Succeeded int64 // # of requests that ES reported as successful
	Failed    int64 // # of requests that ES reported as failed

	Workers []*BulkProcessorWorkerStats // stats for each worker
}

BulkProcessorStats contains various statistics of a bulk processor while it is running. Use the Stats func to return it while running.

type BulkProcessorWorkerStats

type BulkProcessorWorkerStats struct {
	Queued       int64         // # of requests queued in this worker
	LastDuration time.Duration // duration of last commit
}

BulkProcessorWorkerStats represents per-worker statistics.

type BulkResponse

type BulkResponse struct {
	Took   int                            `json:"took,omitempty"`
	Errors bool                           `json:"errors,omitempty"`
	Items  []map[string]*BulkResponseItem `json:"items,omitempty"`
}

BulkResponse is a response to a bulk execution.

Example:

{
  "took":3,
  "errors":false,
  "items":[{
    "index":{
      "_index":"index1",
      "_type":"tweet",
      "_id":"1",
      "_version":3,
      "status":201
    }
  },{
    "index":{
      "_index":"index2",
      "_type":"tweet",
      "_id":"2",
      "_version":3,
      "status":200
    }
  },{
    "delete":{
      "_index":"index1",
      "_type":"tweet",
      "_id":"1",
      "_version":4,
      "status":200,
      "found":true
    }
  },{
    "update":{
      "_index":"index2",
      "_type":"tweet",
      "_id":"2",
      "_version":4,
      "status":200
    }
  }]
}

func (*BulkResponse) ByAction

func (r *BulkResponse) ByAction(action string) []*BulkResponseItem

ByAction returns all bulk request results of a certain action, e.g. "index" or "delete".

func (*BulkResponse) ById

func (r *BulkResponse) ById(id string) []*BulkResponseItem

ById returns all bulk request results of a given document id, regardless of the action ("index", "delete" etc.).

func (*BulkResponse) Created

func (r *BulkResponse) Created() []*BulkResponseItem

Created returns all bulk request results of "create" actions.

func (*BulkResponse) Deleted

func (r *BulkResponse) Deleted() []*BulkResponseItem

Deleted returns all bulk request results of "delete" actions.

func (*BulkResponse) Failed

func (r *BulkResponse) Failed() []*BulkResponseItem

Failed returns those items of a bulk response that have errors, i.e. those that don't have a status code between 200 and 299.

func (*BulkResponse) Indexed

func (r *BulkResponse) Indexed() []*BulkResponseItem

Indexed returns all bulk request results of "index" actions.

func (*BulkResponse) Succeeded

func (r *BulkResponse) Succeeded() []*BulkResponseItem

Succeeded returns those items of a bulk response that have no errors, i.e. those that have a status code between 200 and 299.

func (*BulkResponse) Updated

func (r *BulkResponse) Updated() []*BulkResponseItem

Updated returns all bulk request results of "update" actions.

type BulkResponseItem

type BulkResponseItem struct {
	Index         string        `json:"_index,omitempty"`
	Type          string        `json:"_type,omitempty"`
	Id            string        `json:"_id,omitempty"`
	Version       int64         `json:"_version,omitempty"`
	Result        string        `json:"result,omitempty"`
	Shards        *shardsInfo   `json:"_shards,omitempty"`
	SeqNo         int64         `json:"_seq_no,omitempty"`
	PrimaryTerm   int64         `json:"_primary_term,omitempty"`
	Status        int           `json:"status,omitempty"`
	ForcedRefresh bool          `json:"forced_refresh,omitempty"`
	Error         *ErrorDetails `json:"error,omitempty"`
	GetResult     *GetResult    `json:"get,omitempty"`
}

BulkResponseItem is the result of a single bulk request.

type BulkService

type BulkService struct {
	// contains filtered or unexported fields
}

BulkService allows for batching bulk requests and sending them to Elasticsearch in one roundtrip. Use the Add method with BulkIndexRequest, BulkUpdateRequest, and BulkDeleteRequest to add bulk requests to a batch, then use Do to send them to Elasticsearch.

BulkService will be reset after each Do call. In other words, you can reuse BulkService to send many batches. You do not have to create a new BulkService for each batch.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-bulk.html for more details.
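
Example (a minimal sketch using the request types documented in this section):

bulk := client.Bulk().Index("orders").Type("order") // defaults for the whole batch
bulk.Add(
  elastic.NewBulkIndexRequest().Id("1").Doc(map[string]interface{}{"amount": 42}),
  elastic.NewBulkUpdateRequest().Id("2").Doc(map[string]interface{}{"amount": 7}).DocAsUpsert(true),
  elastic.NewBulkDeleteRequest().Id("3"),
)
resp, err := bulk.Do(context.Background())
if err != nil {
  // transport-level failure; no response available
}
if resp.Errors {
  for _, item := range resp.Failed() {
    log.Printf("bulk item %s failed with status %d: %v", item.Id, item.Status, item.Error)
  }
}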

func NewBulkService

func NewBulkService(client *Client) *BulkService

NewBulkService initializes a new BulkService.

func (*BulkService) Add

func (s *BulkService) Add(requests ...BulkableRequest) *BulkService

Add adds bulkable requests, i.e. BulkIndexRequest, BulkUpdateRequest, and/or BulkDeleteRequest.

func (*BulkService) Do

func (s *BulkService) Do(ctx context.Context) (*BulkResponse, error)

Do sends the batched requests to Elasticsearch. Note that, when successful, you can reuse the BulkService for the next batch as the list of bulk requests is cleared on success.

func (*BulkService) EstimatedSizeInBytes

func (s *BulkService) EstimatedSizeInBytes() int64

EstimatedSizeInBytes returns the estimated size of all bulkable requests added via Add.

func (*BulkService) Index

func (s *BulkService) Index(index string) *BulkService

Index specifies the index to use for all batches. You may also leave this blank and specify the index in the individual bulk requests.

func (*BulkService) NumberOfActions

func (s *BulkService) NumberOfActions() int

NumberOfActions returns the number of bulkable requests that need to be sent to Elasticsearch on the next batch.

func (*BulkService) Pipeline

func (s *BulkService) Pipeline(pipeline string) *BulkService

Pipeline specifies the pipeline id to preprocess incoming documents with.

func (*BulkService) Pretty

func (s *BulkService) Pretty(pretty bool) *BulkService

Pretty tells Elasticsearch whether to return a formatted JSON response.

func (*BulkService) Refresh

func (s *BulkService) Refresh(refresh string) *BulkService

Refresh controls when changes made by this request are made visible to search. The allowed values are: "true" (refresh the relevant primary and replica shards immediately), "wait_for" (wait for the changes to be made visible by a refresh before applying), or "false" (no refresh related actions).

func (*BulkService) Retrier

func (s *BulkService) Retrier(retrier Retrier) *BulkService

Retrier allows setting specific retry logic for this BulkService. If not specified, the client's default retrier is used.

func (*BulkService) Routing

func (s *BulkService) Routing(routing string) *BulkService

Routing specifies the routing value.

func (*BulkService) Timeout

func (s *BulkService) Timeout(timeout string) *BulkService

Timeout is a global timeout for processing bulk requests. This is a server-side timeout, i.e. it tells Elasticsearch the time after which it should stop processing.

func (*BulkService) Type

func (s *BulkService) Type(typ string) *BulkService

Type specifies the type to use for all batches. You may also leave this blank and specify the type in the individual bulk requests.

func (*BulkService) WaitForActiveShards

func (s *BulkService) WaitForActiveShards(waitForActiveShards string) *BulkService

WaitForActiveShards sets the number of shard copies that must be active before proceeding with the bulk operation. Defaults to 1, meaning the primary shard only. Set to `all` for all shard copies, otherwise set to any non-negative value less than or equal to the total number of copies for the shard (number of replicas + 1).

type BulkUpdateRequest

type BulkUpdateRequest struct {
	BulkableRequest
	// contains filtered or unexported fields
}

BulkUpdateRequest is a request to update a document in Elasticsearch.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-bulk.html for details.

func NewBulkUpdateRequest

func NewBulkUpdateRequest() *BulkUpdateRequest

NewBulkUpdateRequest returns a new BulkUpdateRequest.

func (*BulkUpdateRequest) DetectNoop

func (r *BulkUpdateRequest) DetectNoop(detectNoop bool) *BulkUpdateRequest

DetectNoop specifies whether changes that don't affect the document should be ignored (true) or not (false). This is enabled by default in Elasticsearch.

func (*BulkUpdateRequest) Doc

func (r *BulkUpdateRequest) Doc(doc interface{}) *BulkUpdateRequest

Doc specifies the updated document.

func (*BulkUpdateRequest) DocAsUpsert

func (r *BulkUpdateRequest) DocAsUpsert(docAsUpsert bool) *BulkUpdateRequest

DocAsUpsert indicates whether the contents of Doc should be used as the Upsert value.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-update.html#_literal_doc_as_upsert_literal for details.

func (*BulkUpdateRequest) Id

Id specifies the identifier of the document to update.

func (*BulkUpdateRequest) Index

func (r *BulkUpdateRequest) Index(index string) *BulkUpdateRequest

Index specifies the Elasticsearch index to use for this update request. If unspecified, the index set on the BulkService will be used.

func (*BulkUpdateRequest) Parent

func (r *BulkUpdateRequest) Parent(parent string) *BulkUpdateRequest

Parent specifies the identifier of the parent document (if available).

func (*BulkUpdateRequest) RetryOnConflict

func (r *BulkUpdateRequest) RetryOnConflict(retryOnConflict int) *BulkUpdateRequest

RetryOnConflict specifies how often to retry in case of a version conflict.

func (*BulkUpdateRequest) ReturnSource

func (r *BulkUpdateRequest) ReturnSource(source bool) *BulkUpdateRequest

ReturnSource specifies whether Elasticsearch should return the source after the update. In the request, this corresponds to the `_source` field. It is false by default.

func (*BulkUpdateRequest) Routing

func (r *BulkUpdateRequest) Routing(routing string) *BulkUpdateRequest

Routing specifies a routing value for the request.

func (*BulkUpdateRequest) ScriptedUpsert

func (r *BulkUpdateRequest) ScriptedUpsert(upsert bool) *BulkUpdateRequest

ScriptedUpsert specifies if your script will run regardless of whether the document exists or not.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-update.html#_literal_scripted_upsert_literal

func (*BulkUpdateRequest) Source

func (r *BulkUpdateRequest) Source() ([]string, error)

Source returns the on-wire representation of the update request, split into an action-and-meta-data line and an (optional) source line. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-bulk.html for details.

func (*BulkUpdateRequest) String

func (r *BulkUpdateRequest) String() string

String returns the on-wire representation of the update request, concatenated as a single string.

func (*BulkUpdateRequest) Type

Type specifies the Elasticsearch type to use for this update request. If unspecified, the type set on the BulkService will be used.

func (*BulkUpdateRequest) Upsert

func (r *BulkUpdateRequest) Upsert(doc interface{}) *BulkUpdateRequest

Upsert specifies the document to use for upserts. It will be used for create if the original document does not exist.

func (*BulkUpdateRequest) UseEasyJSON

func (r *BulkUpdateRequest) UseEasyJSON(enable bool) *BulkUpdateRequest

UseEasyJSON is an experimental setting that enables serialization with github.com/mailru/easyjson, which should result in faster serialization and fewer allocations, but removes compatibility with encoding/json, uses unsafe, etc. See https://github.com/mailru/easyjson#issues-notes-and-limitations for details. This setting is disabled by default.

func (*BulkUpdateRequest) Version

func (r *BulkUpdateRequest) Version(version int64) *BulkUpdateRequest

Version indicates the version of the document as part of an optimistic concurrency model.

func (*BulkUpdateRequest) VersionType

func (r *BulkUpdateRequest) VersionType(versionType string) *BulkUpdateRequest

VersionType can be "internal" (default), "external", "external_gte", or "external_gt".

type BulkableRequest

type BulkableRequest interface {
	fmt.Stringer
	Source() ([]string, error)
}

BulkableRequest is a generic interface to bulkable requests.

type CandidateGenerator

type CandidateGenerator interface {
	Type() string
	Source() (interface{}, error)
}

type CardinalityAggregation

type CardinalityAggregation struct {
	// contains filtered or unexported fields
}

CardinalityAggregation is a single-value metrics aggregation that calculates an approximate count of distinct values. Values can be extracted either from specific fields in the document or generated by a script. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-cardinality-aggregation.html
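
Example (a sketch; SearchService.Aggregation and SearchResult.Aggregations.Cardinality are assumed from elsewhere in this package):

agg := elastic.NewCardinalityAggregation().
  Field("user_id").
  PrecisionThreshold(1000) // trade memory for accuracy on high-cardinality fields
res, err := client.Search().Index("events").Aggregation("unique_users", agg).Size(0).Do(context.Background())
if err == nil {
  if card, found := res.Aggregations.Cardinality("unique_users"); found && card.Value != nil {
    fmt.Printf("unique users: %.0f\n", *card.Value)
  }
}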

func NewCardinalityAggregation

func NewCardinalityAggregation() *CardinalityAggregation

func (*CardinalityAggregation) Field

func (*CardinalityAggregation) Format

func (*CardinalityAggregation) Meta

func (a *CardinalityAggregation) Meta(metaData map[string]interface{}) *CardinalityAggregation

Meta sets the meta data to be included in the aggregation response.

func (*CardinalityAggregation) PrecisionThreshold

func (a *CardinalityAggregation) PrecisionThreshold(threshold int64) *CardinalityAggregation

func (*CardinalityAggregation) Rehash

func (*CardinalityAggregation) Script

func (*CardinalityAggregation) Source

func (a *CardinalityAggregation) Source() (interface{}, error)

func (*CardinalityAggregation) SubAggregation

func (a *CardinalityAggregation) SubAggregation(name string, subAggregation Aggregation) *CardinalityAggregation

type ChiSquareSignificanceHeuristic

type ChiSquareSignificanceHeuristic struct {
	// contains filtered or unexported fields
}

ChiSquareSignificanceHeuristic implements Chi square as described in "Information Retrieval", Manning et al., Chapter 13.5.2.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-significantterms-aggregation.html#_chi_square for details.

func NewChiSquareSignificanceHeuristic

func NewChiSquareSignificanceHeuristic() *ChiSquareSignificanceHeuristic

NewChiSquareSignificanceHeuristic initializes a new ChiSquareSignificanceHeuristic.

func (*ChiSquareSignificanceHeuristic) BackgroundIsSuperset

func (sh *ChiSquareSignificanceHeuristic) BackgroundIsSuperset(backgroundIsSuperset bool) *ChiSquareSignificanceHeuristic

BackgroundIsSuperset indicates whether you defined a custom background filter that represents a different set of documents that you want to compare to.

func (*ChiSquareSignificanceHeuristic) IncludeNegatives

func (sh *ChiSquareSignificanceHeuristic) IncludeNegatives(includeNegatives bool) *ChiSquareSignificanceHeuristic

IncludeNegatives indicates whether to filter out the terms that appear much less in the subset than in the background without the subset.

func (*ChiSquareSignificanceHeuristic) Name

Name returns the name of the heuristic in the REST interface.

func (*ChiSquareSignificanceHeuristic) Source

func (sh *ChiSquareSignificanceHeuristic) Source() (interface{}, error)

Source returns the parameters that need to be added to the REST parameters.

type ChildrenAggregation

type ChildrenAggregation struct {
	// contains filtered or unexported fields
}

ChildrenAggregation is a special single bucket aggregation that enables aggregating from buckets on parent document types to buckets on child documents. It is available from 1.4.0.Beta1 upwards. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-children-aggregation.html

func NewChildrenAggregation

func NewChildrenAggregation() *ChildrenAggregation

func (*ChildrenAggregation) Meta

func (a *ChildrenAggregation) Meta(metaData map[string]interface{}) *ChildrenAggregation

Meta sets the meta data to be included in the aggregation response.

func (*ChildrenAggregation) Source

func (a *ChildrenAggregation) Source() (interface{}, error)

func (*ChildrenAggregation) SubAggregation

func (a *ChildrenAggregation) SubAggregation(name string, subAggregation Aggregation) *ChildrenAggregation

func (*ChildrenAggregation) Type

type ClearScrollResponse

type ClearScrollResponse struct {
}

ClearScrollResponse is the response of ClearScrollService.Do.

type ClearScrollService

type ClearScrollService struct {
	// contains filtered or unexported fields
}

ClearScrollService clears one or more scroll contexts by their ids.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-scroll.html#_clear_scroll_api for details.

func NewClearScrollService

func NewClearScrollService(client *Client) *ClearScrollService

NewClearScrollService creates a new ClearScrollService.

func (*ClearScrollService) Do

Do executes the operation.

func (*ClearScrollService) Pretty

func (s *ClearScrollService) Pretty(pretty bool) *ClearScrollService

Pretty indicates that the JSON response should be indented and human readable.

func (*ClearScrollService) ScrollId

func (s *ClearScrollService) ScrollId(scrollIds ...string) *ClearScrollService

ScrollId is a list of scroll IDs to clear. Use _all to clear all search contexts.

func (*ClearScrollService) Validate

func (s *ClearScrollService) Validate() error

Validate checks if the operation is valid.

type Client

type Client struct {
	// contains filtered or unexported fields
}

Client is an Elasticsearch client. Create one by calling NewClient.

func NewClient

func NewClient(options ...ClientOptionFunc) (*Client, error)

NewClient creates a new client to work with Elasticsearch.

NewClient, by default, is meant to be long-lived and shared across your application. If you need a short-lived client, e.g. for request-scope, consider using NewSimpleClient instead.

The caller can configure the new client by passing configuration options to the func.

Example:

client, err := elastic.NewClient(
  elastic.SetURL("http://127.0.0.1:9200", "http://127.0.0.1:9201"),
  elastic.SetBasicAuth("user", "secret"))

If no URL is configured, Elastic uses DefaultURL by default.

If the sniffer is enabled (the default), the new client then sniffs the cluster via the Nodes Info API (see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/cluster-nodes-info.html#cluster-nodes-info). It uses the URLs specified by the caller. The caller is responsible for passing only URLs of nodes that belong to the same cluster. This sniffing process is run on startup and periodically. Use SetSnifferInterval to set the interval between two sniffs (default is 15 minutes). In other words: By default, the client will find new nodes in the cluster and remove those that are no longer available every 15 minutes. Disable the sniffer by passing SetSniff(false) to NewClient.

The list of nodes found in the sniffing process will be used to make connections to the REST API of Elasticsearch. These nodes are also periodically checked in a shorter time frame. This process is called a health check. By default, a health check is done every 60 seconds. You can set a shorter or longer interval with SetHealthcheckInterval. Disabling health checks is not recommended, but can be done with SetHealthcheck(false).

Connections are automatically marked as dead or healthy while making requests to Elasticsearch. When a request fails, Elastic will call into the Retry strategy which can be specified with SetRetrier. The Retry strategy is also responsible for handling backoff, i.e. the time to wait before starting the next request. There are various standard backoff implementations, e.g. ExponentialBackoff or SimpleBackoff. Retries are disabled by default.

If no HttpClient is configured, then http.DefaultClient is used. You can use your own http.Client with some http.Transport for advanced scenarios.

An error is also returned when some configuration option is invalid or the new client cannot sniff the cluster (if enabled).
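
Example (a sketch combining the options described above; SetSniff, SetHealthcheckInterval and SetErrorLog are documented in this package):

client, err := elastic.NewClient(
  elastic.SetURL("http://127.0.0.1:9200"),
  elastic.SetSniff(false),                        // don't discover other nodes; use the given URL only
  elastic.SetHealthcheckInterval(30*time.Second), // check node health every 30 seconds instead of 60
  elastic.SetErrorLog(log.New(os.Stderr, "ELASTIC ", log.LstdFlags)))
if err != nil {
  // a configuration option was invalid or the cluster could not be reached
}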

func NewClientFromConfig

func NewClientFromConfig(cfg *config.Config) (*Client, error)

NewClientFromConfig initializes a client from a configuration.

func NewSimpleClient

func NewSimpleClient(options ...ClientOptionFunc) (*Client, error)

NewSimpleClient creates a new short-lived Client that can be used in use cases where you need e.g. one client per request.

While NewClient by default sets up e.g. periodic health checks and sniffing for new nodes in separate goroutines, NewSimpleClient does not and is meant as a simple replacement where you don't need all the heavy lifting of NewClient.

NewSimpleClient does the following by default: First, all health checks are disabled, including timeouts and periodic checks. Second, sniffing is disabled, including timeouts and periodic checks. The number of retries is set to 1. NewSimpleClient also does not start any goroutines.

Notice that you can still override settings by passing additional options, just like with NewClient.

func (*Client) Alias

func (c *Client) Alias() *AliasService

Alias enables the caller to add and/or remove aliases.

func (*Client) Aliases

func (c *Client) Aliases() *AliasesService

Aliases returns aliases by index name(s).

func (*Client) Bulk

func (c *Client) Bulk() *BulkService

Bulk is the entry point to mass insert/update/delete documents.

func (*Client) BulkProcessor

func (c *Client) BulkProcessor() *BulkProcessorService

BulkProcessor allows setting up a concurrent processor of bulk requests.

func (*Client) ClearScroll

func (c *Client) ClearScroll(scrollIds ...string) *ClearScrollService

ClearScroll can be used to clear search contexts manually.

func (*Client) CloseIndex

func (c *Client) CloseIndex(name string) *IndicesCloseService

CloseIndex closes an index.

func (*Client) ClusterHealth

func (c *Client) ClusterHealth() *ClusterHealthService

ClusterHealth retrieves the health of the cluster.

func (*Client) ClusterState

func (c *Client) ClusterState() *ClusterStateService

ClusterState retrieves the state of the cluster.

func (*Client) ClusterStats

func (c *Client) ClusterStats() *ClusterStatsService

ClusterStats retrieves cluster statistics.

func (*Client) Count

func (c *Client) Count(indices ...string) *CountService

Count documents.

func (*Client) CreateIndex

func (c *Client) CreateIndex(name string) *IndicesCreateService

CreateIndex returns a service to create a new index.

func (*Client) Delete

func (c *Client) Delete() *DeleteService

Delete a document.

func (*Client) DeleteByQuery

func (c *Client) DeleteByQuery(indices ...string) *DeleteByQueryService

DeleteByQuery deletes documents as found by a query.

func (*Client) DeleteIndex

func (c *Client) DeleteIndex(indices ...string) *IndicesDeleteService

DeleteIndex returns a service to delete an index.

func (*Client) ElasticsearchVersion

func (c *Client) ElasticsearchVersion(url string) (string, error)

ElasticsearchVersion returns the version number of Elasticsearch running on the given URL.

func (*Client) Exists

func (c *Client) Exists() *ExistsService

Exists checks if a document exists.

func (*Client) Explain

func (c *Client) Explain(index, typ, id string) *ExplainService

Explain computes a score explanation for a query and a specific document.

func (*Client) FieldCaps

func (c *Client) FieldCaps(indices ...string) *FieldCapsService

FieldCaps returns statistical information about fields in indices.

func (*Client) Flush

func (c *Client) Flush(indices ...string) *IndicesFlushService

Flush asks Elasticsearch to free memory from the index and flush data to disk.

func (*Client) Forcemerge

func (c *Client) Forcemerge(indices ...string) *IndicesForcemergeService

Forcemerge optimizes one or more indices. It replaces the deprecated Optimize API.

func (*Client) Get

func (c *Client) Get() *GetService

Get a document.

func (*Client) GetFieldMapping

func (c *Client) GetFieldMapping() *IndicesGetFieldMappingService

GetFieldMapping gets mapping for fields.

func (*Client) GetMapping

func (c *Client) GetMapping() *IndicesGetMappingService

GetMapping gets a mapping.

func (*Client) HasPlugin

func (c *Client) HasPlugin(name string) (bool, error)

HasPlugin indicates whether the cluster has the named plugin.

func (*Client) Index

func (c *Client) Index() *IndexService

Index a document.

func (*Client) IndexAnalyze

func (c *Client) IndexAnalyze() *IndicesAnalyzeService

IndexAnalyze performs the analysis process on a text and returns the token breakdown of the text.

func (*Client) IndexDeleteTemplate

func (c *Client) IndexDeleteTemplate(name string) *IndicesDeleteTemplateService

IndexDeleteTemplate deletes an index template. Use XXXTemplate funcs to manage search templates.

func (*Client) IndexExists

func (c *Client) IndexExists(indices ...string) *IndicesExistsService

IndexExists checks if an index exists.

func (*Client) IndexGet

func (c *Client) IndexGet(indices ...string) *IndicesGetService

IndexGet retrieves information about one or more indices. IndexGet is only available for Elasticsearch 1.4 or later.

func (*Client) IndexGetSettings

func (c *Client) IndexGetSettings(indices ...string) *IndicesGetSettingsService

IndexGetSettings retrieves settings of all, one or more indices.

func (*Client) IndexGetTemplate

func (c *Client) IndexGetTemplate(names ...string) *IndicesGetTemplateService

IndexGetTemplate gets an index template. Use XXXTemplate funcs to manage search templates.

func (*Client) IndexNames

func (c *Client) IndexNames() ([]string, error)

IndexNames returns the names of all indices in the cluster.

func (*Client) IndexPutSettings

func (c *Client) IndexPutSettings(indices ...string) *IndicesPutSettingsService

IndexPutSettings sets settings for all, one or more indices.

func (*Client) IndexPutTemplate

func (c *Client) IndexPutTemplate(name string) *IndicesPutTemplateService

IndexPutTemplate creates or updates an index template. Use XXXTemplate funcs to manage search templates.

func (*Client) IndexSegments

func (c *Client) IndexSegments(indices ...string) *IndicesSegmentsService

IndexSegments retrieves low level segment information for all, one or more indices.

func (*Client) IndexStats

func (c *Client) IndexStats(indices ...string) *IndicesStatsService

IndexStats provides statistics on different operations happening in one or more indices.

func (*Client) IndexTemplateExists

func (c *Client) IndexTemplateExists(name string) *IndicesExistsTemplateService

IndexTemplateExists checks whether an index template exists. Use XXXTemplate funcs to manage search templates.

func (*Client) IngestDeletePipeline

func (c *Client) IngestDeletePipeline(id string) *IngestDeletePipelineService

IngestDeletePipeline deletes a pipeline by ID.

func (*Client) IngestGetPipeline

func (c *Client) IngestGetPipeline(ids ...string) *IngestGetPipelineService

IngestGetPipeline returns pipelines based on ID.

func (*Client) IngestPutPipeline

func (c *Client) IngestPutPipeline(id string) *IngestPutPipelineService

IngestPutPipeline adds pipelines and updates existing pipelines in the cluster.

func (*Client) IngestSimulatePipeline

func (c *Client) IngestSimulatePipeline() *IngestSimulatePipelineService

IngestSimulatePipeline executes a specific pipeline against the set of documents provided in the body of the request.

func (*Client) IsRunning

func (c *Client) IsRunning() bool

IsRunning returns true if the background processes of the client are running, false otherwise.

func (*Client) Mget

func (c *Client) Mget() *MgetService

Mget retrieves multiple documents in one roundtrip.

func (*Client) MultiGet

func (c *Client) MultiGet() *MgetService

MultiGet retrieves multiple documents in one roundtrip.

func (*Client) MultiSearch

func (c *Client) MultiSearch() *MultiSearchService

MultiSearch is the entry point for multi searches.

func (*Client) MultiTermVectors

func (c *Client) MultiTermVectors() *MultiTermvectorService

MultiTermVectors returns information and statistics on terms in the fields of multiple documents.

func (*Client) NodesInfo

func (c *Client) NodesInfo() *NodesInfoService

NodesInfo retrieves information about one, several, or all of the cluster's nodes.

func (*Client) NodesStats

func (c *Client) NodesStats() *NodesStatsService

NodesStats retrieves statistics for one, several, or all of the cluster's nodes.

func (*Client) OpenIndex

func (c *Client) OpenIndex(name string) *IndicesOpenService

OpenIndex opens an index.

func (*Client) PerformRequest

func (c *Client) PerformRequest(ctx context.Context, opt PerformRequestOptions) (*Response, error)

PerformRequest does an HTTP request to Elasticsearch. It returns a response (which might be nil) and an error on failure.

Optionally, a list of HTTP error codes to ignore can be passed. This is necessary for services that expect e.g. HTTP status 404 as a valid outcome (Exists, IndicesExists, IndicesTypeExists).

func (*Client) Ping

func (c *Client) Ping(url string) *PingService

Ping checks if a given node in a cluster exists and (optionally) returns some basic information about the Elasticsearch server, e.g. the Elasticsearch version number.

Notice that you need to specify a URL here explicitly.

func (*Client) Plugins

func (c *Client) Plugins() ([]string, error)

Plugins returns the list of all registered plugins.

func (*Client) PutMapping

func (c *Client) PutMapping() *IndicesPutMappingService

PutMapping registers a mapping.

func (*Client) Refresh

func (c *Client) Refresh(indices ...string) *RefreshService

Refresh asks Elasticsearch to refresh one or more indices.

func (*Client) Reindex

func (c *Client) Reindex() *ReindexService

Reindex copies data from a source index into a destination index.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-reindex.html for details on the Reindex API.

func (*Client) RolloverIndex

func (c *Client) RolloverIndex(alias string) *IndicesRolloverService

RolloverIndex rolls an alias over to a new index when the existing index is considered to be too large or too old.

func (*Client) Scroll

func (c *Client) Scroll(indices ...string) *ScrollService

Scroll through documents. Use this to efficiently scroll through results while returning the results to a client.

func (*Client) Search

func (c *Client) Search(indices ...string) *SearchService

Search is the entry point for searches.

func (*Client) SearchShards

func (c *Client) SearchShards(indices ...string) *SearchShardsService

SearchShards returns statistical information about nodes and shards.

func (*Client) ShrinkIndex

func (c *Client) ShrinkIndex(source, target string) *IndicesShrinkService

ShrinkIndex returns a service to shrink one index into another.

func (*Client) SnapshotCreate

func (c *Client) SnapshotCreate(repository string, snapshot string) *SnapshotCreateService

SnapshotCreate creates a snapshot.

func (*Client) SnapshotCreateRepository

func (c *Client) SnapshotCreateRepository(repository string) *SnapshotCreateRepositoryService

SnapshotCreateRepository creates or updates a snapshot repository.

func (*Client) SnapshotDeleteRepository

func (c *Client) SnapshotDeleteRepository(repositories ...string) *SnapshotDeleteRepositoryService

SnapshotDeleteRepository deletes a snapshot repository.

func (*Client) SnapshotGetRepository

func (c *Client) SnapshotGetRepository(repositories ...string) *SnapshotGetRepositoryService

SnapshotGetRepository gets a snapshot repository.

func (*Client) SnapshotVerifyRepository

func (c *Client) SnapshotVerifyRepository(repository string) *SnapshotVerifyRepositoryService

SnapshotVerifyRepository verifies a snapshot repository.

func (*Client) Start

func (c *Client) Start()

Start starts the background processes like sniffing the cluster and periodic health checks. You don't need to run Start when creating a client with NewClient; the background processes are run by default.

If the background processes are already running, this is a no-op.

func (*Client) Stop

func (c *Client) Stop()

Stop stops the background processes that the client is running, i.e. sniffing the cluster periodically and running health checks on the nodes.

If the background processes are not running, this is a no-op.

func (*Client) String

func (c *Client) String() string

String returns a string representation of the client status.

func (*Client) TasksCancel

func (c *Client) TasksCancel() *TasksCancelService

TasksCancel cancels tasks running on the specified nodes.

func (*Client) TasksGetTask

func (c *Client) TasksGetTask() *TasksGetTaskService

TasksGetTask retrieves a task running on the cluster.

func (*Client) TasksList

func (c *Client) TasksList() *TasksListService

TasksList retrieves the list of tasks running on the specified nodes.

func (*Client) TermVectors

func (c *Client) TermVectors(index, typ string) *TermvectorsService

TermVectors returns information and statistics on terms in the fields of a particular document.

func (*Client) TypeExists

func (c *Client) TypeExists() *IndicesExistsTypeService

TypeExists checks whether one or more types exist in one or more indices.

func (*Client) Update

func (c *Client) Update() *UpdateService

Update a document.
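
A sketch of a partial document update, assuming client and ctx exist:

_, err := client.Update().
	Index("twitter").Type("tweet").Id("1").
	Doc(map[string]interface{}{"retweets": 42}).
	Do(ctx)
if err != nil {
	panic(err)
}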

func (*Client) UpdateByQuery

func (c *Client) UpdateByQuery(indices ...string) *UpdateByQueryService

UpdateByQuery performs an update on a set of documents.

func (*Client) Validate

func (c *Client) Validate(indices ...string) *ValidateService

Validate allows a user to validate a potentially expensive query without executing it.

func (*Client) WaitForGreenStatus

func (c *Client) WaitForGreenStatus(timeout string) error

WaitForGreenStatus waits for the cluster to have the "green" status. See WaitForStatus for more details.

func (*Client) WaitForStatus

func (c *Client) WaitForStatus(status string, timeout string) error

WaitForStatus waits for the cluster to have the given status. This is a shortcut method for the ClusterHealth service.

WaitForStatus waits for at most the specified timeout, e.g. "10s". If the cluster reaches the given status within the timeout, nil is returned. If the request times out, ErrTimeout is returned.
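
A sketch of waiting for the cluster during startup:

// Block until the cluster reports at least "yellow", or fail after 15s.
if err := client.WaitForStatus("yellow", "15s"); err != nil {
	panic(err)
}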

func (*Client) WaitForYellowStatus

func (c *Client) WaitForYellowStatus(timeout string) error

WaitForYellowStatus waits for the cluster to have the "yellow" status. See WaitForStatus for more details.

type ClientOptionFunc

type ClientOptionFunc func(*Client) error

ClientOptionFunc is a function that configures a Client. It is used in NewClient.
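
A sketch of passing option functions to NewClient; the URLs and credentials below are placeholders:

client, err := elastic.NewClient(
	elastic.SetURL("http://127.0.0.1:9200", "http://127.0.0.1:9201"),
	elastic.SetSniff(false),
	elastic.SetBasicAuth("user", "secret"),
)
if err != nil {
	panic(err)
}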

func SetBasicAuth

func SetBasicAuth(username, password string) ClientOptionFunc

SetBasicAuth can be used to specify the HTTP Basic Auth credentials to use when making HTTP requests to Elasticsearch.

func SetDecoder

func SetDecoder(decoder Decoder) ClientOptionFunc

SetDecoder sets the Decoder to use when decoding data from Elasticsearch. DefaultDecoder is used by default.

func SetErrorLog

func SetErrorLog(logger Logger) ClientOptionFunc

SetErrorLog sets the logger for critical messages like nodes joining or leaving the cluster or failing requests. It is nil by default.

func SetHealthcheck

func SetHealthcheck(enabled bool) ClientOptionFunc

SetHealthcheck enables or disables healthchecks (enabled by default).

func SetHealthcheckInterval

func SetHealthcheckInterval(interval time.Duration) ClientOptionFunc

SetHealthcheckInterval sets the interval between two health checks. The default interval is 60 seconds.

func SetHealthcheckTimeout

func SetHealthcheckTimeout(timeout time.Duration) ClientOptionFunc

SetHealthcheckTimeout sets the timeout for periodic health checks. The default timeout is 1 second (see DefaultHealthcheckTimeout). Notice that a different (usually larger) timeout is used for the initial healthcheck, which is initiated while creating a new client. The startup timeout can be modified with SetHealthcheckTimeoutStartup.

func SetHealthcheckTimeoutStartup

func SetHealthcheckTimeoutStartup(timeout time.Duration) ClientOptionFunc

SetHealthcheckTimeoutStartup sets the timeout for the initial health check. The default timeout is 5 seconds (see DefaultHealthcheckTimeoutStartup). Notice that timeouts for subsequent health checks can be modified with SetHealthcheckTimeout.

func SetHttpClient

func SetHttpClient(httpClient *http.Client) ClientOptionFunc

SetHttpClient can be used to specify the http.Client to use when making HTTP requests to Elasticsearch.

func SetInfoLog

func SetInfoLog(logger Logger) ClientOptionFunc

SetInfoLog sets the logger for informational messages, e.g. requests and their response times. It is nil by default.

func SetMaxRetries deprecated

func SetMaxRetries(maxRetries int) ClientOptionFunc

SetMaxRetries sets the maximum number of retries before giving up when performing an HTTP request to Elasticsearch.

Deprecated: Replace with a Retry implementation.

func SetRequiredPlugins

func SetRequiredPlugins(plugins ...string) ClientOptionFunc

SetRequiredPlugins can be used to indicate that some plugins are required before a Client will be created.

func SetRetrier

func SetRetrier(retrier Retrier) ClientOptionFunc

SetRetrier specifies the retry strategy that handles errors during HTTP request/response with Elasticsearch.
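
A sketch of a constant-delay retry strategy; it assumes the package's NewBackoffRetrier, which wraps a Backoff (such as ConstantBackoff, documented below) in a Retrier:

retrier := elastic.NewBackoffRetrier(elastic.NewConstantBackoff(100 * time.Millisecond))
client, err := elastic.NewClient(elastic.SetRetrier(retrier))
if err != nil {
	panic(err)
}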

func SetScheme

func SetScheme(scheme string) ClientOptionFunc

SetScheme sets the HTTP scheme to look for when sniffing (http or https). This is http by default.

func SetSendGetBodyAs

func SetSendGetBodyAs(httpMethod string) ClientOptionFunc

SetSendGetBodyAs specifies the HTTP method to use when sending a GET request with a body. It is GET by default.

func SetSniff

func SetSniff(enabled bool) ClientOptionFunc

SetSniff enables or disables the sniffer (enabled by default).

func SetSnifferCallback

func SetSnifferCallback(f SnifferCallback) ClientOptionFunc

SetSnifferCallback allows the caller to modify sniffer decisions. When setting the callback, the given SnifferCallback is called for each (healthy) node found during the sniffing process. If the callback returns false, the node is ignored: No requests are routed to it.

func SetSnifferInterval

func SetSnifferInterval(interval time.Duration) ClientOptionFunc

SetSnifferInterval sets the interval between two sniffing processes. The default interval is 15 minutes.

func SetSnifferTimeout

func SetSnifferTimeout(timeout time.Duration) ClientOptionFunc

SetSnifferTimeout sets the timeout for the sniffer that finds the nodes in a cluster. The default is 2 seconds. Notice that the timeout used when creating a new client on startup is usually greater and can be set with SetSnifferTimeoutStartup.

func SetSnifferTimeoutStartup

func SetSnifferTimeoutStartup(timeout time.Duration) ClientOptionFunc

SetSnifferTimeoutStartup sets the timeout for the sniffer that is used when creating a new client. The default is 5 seconds. Notice that the timeout being used for subsequent sniffing processes is set with SetSnifferTimeout.

func SetTraceLog

func SetTraceLog(logger Logger) ClientOptionFunc

SetTraceLog specifies the log.Logger to use for output of HTTP requests and responses which is helpful during debugging. It is nil by default.
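
A sketch of wiring standard-library loggers into the client; *log.Logger satisfies the Logger interface:

errorLog := log.New(os.Stderr, "ELASTIC ERROR ", log.LstdFlags)
traceLog := log.New(os.Stdout, "ELASTIC TRACE ", log.LstdFlags)
client, err := elastic.NewClient(
	elastic.SetErrorLog(errorLog),
	elastic.SetTraceLog(traceLog),
)
if err != nil {
	panic(err)
}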

func SetURL

func SetURL(urls ...string) ClientOptionFunc

SetURL defines the URL endpoints of the Elasticsearch nodes. Notice that when sniffing is enabled, these URLs are used to initially sniff the cluster on startup.

type ClusterHealthResponse

type ClusterHealthResponse struct {
	ClusterName                    string  `json:"cluster_name"`
	Status                         string  `json:"status"`
	TimedOut                       bool    `json:"timed_out"`
	NumberOfNodes                  int     `json:"number_of_nodes"`
	NumberOfDataNodes              int     `json:"number_of_data_nodes"`
	ActivePrimaryShards            int     `json:"active_primary_shards"`
	ActiveShards                   int     `json:"active_shards"`
	RelocatingShards               int     `json:"relocating_shards"`
	InitializingShards             int     `json:"initializing_shards"`
	UnassignedShards               int     `json:"unassigned_shards"`
	DelayedUnassignedShards        int     `json:"delayed_unassigned_shards"`
	NumberOfPendingTasks           int     `json:"number_of_pending_tasks"`
	NumberOfInFlightFetch          int     `json:"number_of_in_flight_fetch"`
	TaskMaxWaitTimeInQueueInMillis int     `json:"task_max_waiting_in_queue_millis"`
	ActiveShardsPercentAsNumber    float64 `json:"active_shards_percent_as_number"`

	// Validation failures -> index name -> array of validation failures
	ValidationFailures []map[string][]string `json:"validation_failures"`

	// Index name -> index health
	Indices map[string]*ClusterIndexHealth `json:"indices"`
}

ClusterHealthResponse is the response of ClusterHealthService.Do.

type ClusterHealthService

type ClusterHealthService struct {
	// contains filtered or unexported fields
}

ClusterHealthService allows getting a very simple status of the health of the cluster.

See http://www.elastic.co/guide/en/elasticsearch/reference/5.2/cluster-health.html for details.

Example
package main

import (
	"context"
	"fmt"

	elastic "github.com/olivere/elastic"
)

func main() {
	client, err := elastic.NewClient()
	if err != nil {
		panic(err)
	}

	// Get cluster health
	res, err := client.ClusterHealth().Index("twitter").Do(context.Background())
	if err != nil {
		panic(err)
	}
	if res == nil {
		panic("expected a cluster health response; got nil")
	}
	fmt.Printf("Cluster status is %q\n", res.Status)
}
Output:

func NewClusterHealthService

func NewClusterHealthService(client *Client) *ClusterHealthService

NewClusterHealthService creates a new ClusterHealthService.

func (*ClusterHealthService) Do

Do executes the operation.

func (*ClusterHealthService) Index

func (s *ClusterHealthService) Index(indices ...string) *ClusterHealthService

Index limits the information returned to specific indices.

func (*ClusterHealthService) Level

Level specifies the level of detail for returned information.

func (*ClusterHealthService) Local

Local indicates whether to return local information. If true, the state is not retrieved from the master node (default: false).

func (*ClusterHealthService) MasterTimeout

func (s *ClusterHealthService) MasterTimeout(masterTimeout string) *ClusterHealthService

MasterTimeout specifies an explicit operation timeout for connection to master node.

func (*ClusterHealthService) Pretty

func (s *ClusterHealthService) Pretty(pretty bool) *ClusterHealthService

Pretty indicates that the JSON response should be indented and human readable.

func (*ClusterHealthService) Timeout

func (s *ClusterHealthService) Timeout(timeout string) *ClusterHealthService

Timeout specifies an explicit operation timeout.

func (*ClusterHealthService) Validate

func (s *ClusterHealthService) Validate() error

Validate checks if the operation is valid.

func (*ClusterHealthService) WaitForActiveShards

func (s *ClusterHealthService) WaitForActiveShards(waitForActiveShards int) *ClusterHealthService

WaitForActiveShards can be used to wait until the specified number of shards are active.

func (*ClusterHealthService) WaitForGreenStatus

func (s *ClusterHealthService) WaitForGreenStatus() *ClusterHealthService

WaitForGreenStatus will wait for the "green" state.

func (*ClusterHealthService) WaitForNoRelocatingShards

func (s *ClusterHealthService) WaitForNoRelocatingShards(waitForNoRelocatingShards bool) *ClusterHealthService

WaitForNoRelocatingShards can be used to wait until all shard relocations are finished.

func (*ClusterHealthService) WaitForNodes

func (s *ClusterHealthService) WaitForNodes(waitForNodes string) *ClusterHealthService

WaitForNodes can be used to wait until the specified number of nodes are available. Example: "12" to wait for exact values, ">12" and "<12" for ranges.

func (*ClusterHealthService) WaitForStatus

func (s *ClusterHealthService) WaitForStatus(waitForStatus string) *ClusterHealthService

WaitForStatus can be used to wait until the cluster is in a specific state. Valid values are: green, yellow, or red.

func (*ClusterHealthService) WaitForYellowStatus

func (s *ClusterHealthService) WaitForYellowStatus() *ClusterHealthService

WaitForYellowStatus will wait for the "yellow" state.

type ClusterIndexHealth

type ClusterIndexHealth struct {
	Status              string `json:"status"`
	NumberOfShards      int    `json:"number_of_shards"`
	NumberOfReplicas    int    `json:"number_of_replicas"`
	ActivePrimaryShards int    `json:"active_primary_shards"`
	ActiveShards        int    `json:"active_shards"`
	RelocatingShards    int    `json:"relocating_shards"`
	InitializingShards  int    `json:"initializing_shards"`
	UnassignedShards    int    `json:"unassigned_shards"`
	// Validation failures
	ValidationFailures []string `json:"validation_failures"`
	// Shards by id, e.g. "0" or "1"
	Shards map[string]*ClusterShardHealth `json:"shards"`
}

ClusterIndexHealth will be returned as part of ClusterHealthResponse.

type ClusterShardHealth

type ClusterShardHealth struct {
	Status             string `json:"status"`
	PrimaryActive      bool   `json:"primary_active"`
	ActiveShards       int    `json:"active_shards"`
	RelocatingShards   int    `json:"relocating_shards"`
	InitializingShards int    `json:"initializing_shards"`
	UnassignedShards   int    `json:"unassigned_shards"`
}

ClusterShardHealth will be returned as part of ClusterHealthResponse.

type ClusterStateResponse

type ClusterStateResponse struct {
	ClusterName  string                               `json:"cluster_name"`
	Version      int64                                `json:"version"`
	StateUUID    string                               `json:"state_uuid"`
	MasterNode   string                               `json:"master_node"`
	Blocks       map[string]*clusterBlocks            `json:"blocks"`
	Nodes        map[string]*discoveryNode            `json:"nodes"`
	Metadata     *clusterStateMetadata                `json:"metadata"`
	RoutingTable map[string]*clusterStateRoutingTable `json:"routing_table"`
	RoutingNodes *clusterStateRoutingNode             `json:"routing_nodes"`
	Customs      map[string]interface{}               `json:"customs"`
}

ClusterStateResponse is the response of ClusterStateService.Do.

type ClusterStateService

type ClusterStateService struct {
	// contains filtered or unexported fields
}

ClusterStateService allows getting comprehensive state information about the whole cluster.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/cluster-state.html for details.

Example
package main

import (
	"context"
	"fmt"

	elastic "github.com/olivere/elastic"
)

func main() {
	client, err := elastic.NewClient()
	if err != nil {
		panic(err)
	}

	// Get cluster state
	res, err := client.ClusterState().Metric("version").Do(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("Cluster %q has version %d", res.ClusterName, res.Version)
}
Output:

func NewClusterStateService

func NewClusterStateService(client *Client) *ClusterStateService

NewClusterStateService creates a new ClusterStateService.

func (*ClusterStateService) AllowNoIndices

func (s *ClusterStateService) AllowNoIndices(allowNoIndices bool) *ClusterStateService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes the `_all` string or when no indices have been specified.)

func (*ClusterStateService) Do

Do executes the operation.

func (*ClusterStateService) ExpandWildcards

func (s *ClusterStateService) ExpandWildcards(expandWildcards string) *ClusterStateService

ExpandWildcards indicates whether to expand wildcard expressions to concrete indices that are open, closed, or both.

func (*ClusterStateService) FlatSettings

func (s *ClusterStateService) FlatSettings(flatSettings bool) *ClusterStateService

FlatSettings, when set, returns settings in flat format (default: false).

func (*ClusterStateService) IgnoreUnavailable

func (s *ClusterStateService) IgnoreUnavailable(ignoreUnavailable bool) *ClusterStateService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*ClusterStateService) Index

func (s *ClusterStateService) Index(indices ...string) *ClusterStateService

Index is a list of index names. Use _all or an empty string to perform the operation on all indices.

func (*ClusterStateService) Local

func (s *ClusterStateService) Local(local bool) *ClusterStateService

Local indicates whether to return local information. When true, the state is not retrieved from the master node (default: false).

func (*ClusterStateService) MasterTimeout

func (s *ClusterStateService) MasterTimeout(masterTimeout string) *ClusterStateService

MasterTimeout specifies timeout for connection to master.

func (*ClusterStateService) Metric

func (s *ClusterStateService) Metric(metrics ...string) *ClusterStateService

Metric limits the information returned to the specified metric. It can be one of: version, master_node, nodes, routing_table, metadata, blocks, or customs.

func (*ClusterStateService) Pretty

func (s *ClusterStateService) Pretty(pretty bool) *ClusterStateService

Pretty indicates that the JSON response should be indented and human readable.

func (*ClusterStateService) Validate

func (s *ClusterStateService) Validate() error

Validate checks if the operation is valid.

type ClusterStatsIndices

type ClusterStatsIndices struct {
	Count       int                             `json:"count"`
	Shards      *ClusterStatsIndicesShards      `json:"shards"`
	Docs        *ClusterStatsIndicesDocs        `json:"docs"`
	Store       *ClusterStatsIndicesStore       `json:"store"`
	FieldData   *ClusterStatsIndicesFieldData   `json:"fielddata"`
	FilterCache *ClusterStatsIndicesFilterCache `json:"filter_cache"`
	IdCache     *ClusterStatsIndicesIdCache     `json:"id_cache"`
	Completion  *ClusterStatsIndicesCompletion  `json:"completion"`
	Segments    *ClusterStatsIndicesSegments    `json:"segments"`
	Percolate   *ClusterStatsIndicesPercolate   `json:"percolate"`
}

type ClusterStatsIndicesCompletion

type ClusterStatsIndicesCompletion struct {
	Size        string `json:"size"` // e.g. "61.3kb"
	SizeInBytes int64  `json:"size_in_bytes"`
	Fields      map[string]struct {
		Size        string `json:"size"` // e.g. "61.3kb"
		SizeInBytes int64  `json:"size_in_bytes"`
	} `json:"fields"`
}

type ClusterStatsIndicesDocs

type ClusterStatsIndicesDocs struct {
	Count   int `json:"count"`
	Deleted int `json:"deleted"`
}

type ClusterStatsIndicesFieldData

type ClusterStatsIndicesFieldData struct {
	MemorySize        string `json:"memory_size"` // e.g. "61.3kb"
	MemorySizeInBytes int64  `json:"memory_size_in_bytes"`
	Evictions         int64  `json:"evictions"`
	Fields            map[string]struct {
		MemorySize        string `json:"memory_size"` // e.g. "61.3kb"
		MemorySizeInBytes int64  `json:"memory_size_in_bytes"`
	} `json:"fields"`
}

type ClusterStatsIndicesFilterCache

type ClusterStatsIndicesFilterCache struct {
	MemorySize        string `json:"memory_size"` // e.g. "61.3kb"
	MemorySizeInBytes int64  `json:"memory_size_in_bytes"`
	Evictions         int64  `json:"evictions"`
}

type ClusterStatsIndicesIdCache

type ClusterStatsIndicesIdCache struct {
	MemorySize        string `json:"memory_size"` // e.g. "61.3kb"
	MemorySizeInBytes int64  `json:"memory_size_in_bytes"`
}

type ClusterStatsIndicesPercolate

type ClusterStatsIndicesPercolate struct {
	Total int64 `json:"total"`
	// TODO(oe) The JSON tag here is wrong as of ES 1.5.2 it seems
	Time              string `json:"get_time"` // e.g. "1s"
	TimeInBytes       int64  `json:"time_in_millis"`
	Current           int64  `json:"current"`
	MemorySize        string `json:"memory_size"` // e.g. "61.3kb"
	MemorySizeInBytes int64  `json:"memory_sitze_in_bytes"`
	Queries           int64  `json:"queries"`
}

type ClusterStatsIndicesSegments

type ClusterStatsIndicesSegments struct {
	Count                       int64  `json:"count"`
	Memory                      string `json:"memory"` // e.g. "61.3kb"
	MemoryInBytes               int64  `json:"memory_in_bytes"`
	IndexWriterMemory           string `json:"index_writer_memory"` // e.g. "61.3kb"
	IndexWriterMemoryInBytes    int64  `json:"index_writer_memory_in_bytes"`
	IndexWriterMaxMemory        string `json:"index_writer_max_memory"` // e.g. "61.3kb"
	IndexWriterMaxMemoryInBytes int64  `json:"index_writer_max_memory_in_bytes"`
	VersionMapMemory            string `json:"version_map_memory"` // e.g. "61.3kb"
	VersionMapMemoryInBytes     int64  `json:"version_map_memory_in_bytes"`
	FixedBitSet                 string `json:"fixed_bit_set"` // e.g. "61.3kb"
	FixedBitSetInBytes          int64  `json:"fixed_bit_set_memory_in_bytes"`
}

type ClusterStatsIndicesShards

type ClusterStatsIndicesShards struct {
	Total       int                             `json:"total"`
	Primaries   int                             `json:"primaries"`
	Replication float64                         `json:"replication"`
	Index       *ClusterStatsIndicesShardsIndex `json:"index"`
}

type ClusterStatsIndicesShardsIndex

type ClusterStatsIndicesShardsIndex struct {
	Shards      *ClusterStatsIndicesShardsIndexIntMinMax     `json:"shards"`
	Primaries   *ClusterStatsIndicesShardsIndexIntMinMax     `json:"primaries"`
	Replication *ClusterStatsIndicesShardsIndexFloat64MinMax `json:"replication"`
}

type ClusterStatsIndicesShardsIndexFloat64MinMax

type ClusterStatsIndicesShardsIndexFloat64MinMax struct {
	Min float64 `json:"min"`
	Max float64 `json:"max"`
	Avg float64 `json:"avg"`
}

type ClusterStatsIndicesShardsIndexIntMinMax

type ClusterStatsIndicesShardsIndexIntMinMax struct {
	Min int     `json:"min"`
	Max int     `json:"max"`
	Avg float64 `json:"avg"`
}

type ClusterStatsIndicesStore

type ClusterStatsIndicesStore struct {
	Size        string `json:"size"` // e.g. "5.3gb"
	SizeInBytes int64  `json:"size_in_bytes"`
}

type ClusterStatsNodes

type ClusterStatsNodes struct {
	Count    *ClusterStatsNodesCount        `json:"count"`
	Versions []string                       `json:"versions"`
	OS       *ClusterStatsNodesOsStats      `json:"os"`
	Process  *ClusterStatsNodesProcessStats `json:"process"`
	JVM      *ClusterStatsNodesJvmStats     `json:"jvm"`
	FS       *ClusterStatsNodesFsStats      `json:"fs"`
	Plugins  []*ClusterStatsNodesPlugin     `json:"plugins"`
}

type ClusterStatsNodesCount

type ClusterStatsNodesCount struct {
	Total            int `json:"total"`
	Data             int `json:"data"`
	CoordinatingOnly int `json:"coordinating_only"`
	Master           int `json:"master"`
	Ingest           int `json:"ingest"`
}

type ClusterStatsNodesFsStats

type ClusterStatsNodesFsStats struct {
	Path                 string `json:"path"`
	Mount                string `json:"mount"`
	Dev                  string `json:"dev"`
	Total                string `json:"total"` // e.g. "930.7gb"
	TotalInBytes         int64  `json:"total_in_bytes"`
	Free                 string `json:"free"` // e.g. "930.7gb"
	FreeInBytes          int64  `json:"free_in_bytes"`
	Available            string `json:"available"` // e.g. "930.7gb"
	AvailableInBytes     int64  `json:"available_in_bytes"`
	DiskReads            int64  `json:"disk_reads"`
	DiskWrites           int64  `json:"disk_writes"`
	DiskIOOp             int64  `json:"disk_io_op"`
	DiskReadSize         string `json:"disk_read_size"` // e.g. "0b"
	DiskReadSizeInBytes  int64  `json:"disk_read_size_in_bytes"`
	DiskWriteSize        string `json:"disk_write_size"` // e.g. "0b"
	DiskWriteSizeInBytes int64  `json:"disk_write_size_in_bytes"`
	DiskIOSize           string `json:"disk_io_size"` // e.g. "0b"
	DiskIOSizeInBytes    int64  `json:"disk_io_size_in_bytes"`
	DiskQueue            string `json:"disk_queue"`
	DiskServiceTime      string `json:"disk_service_time"`
}

type ClusterStatsNodesJvmStats

type ClusterStatsNodesJvmStats struct {
	MaxUptime         string                              `json:"max_uptime"` // e.g. "5h"
	MaxUptimeInMillis int64                               `json:"max_uptime_in_millis"`
	Versions          []*ClusterStatsNodesJvmStatsVersion `json:"versions"`
	Mem               *ClusterStatsNodesJvmStatsMem       `json:"mem"`
	Threads           int64                               `json:"threads"`
}

type ClusterStatsNodesJvmStatsMem

type ClusterStatsNodesJvmStatsMem struct {
	HeapUsed        string `json:"heap_used"`
	HeapUsedInBytes int64  `json:"heap_used_in_bytes"`
	HeapMax         string `json:"heap_max"`
	HeapMaxInBytes  int64  `json:"heap_max_in_bytes"`
}

type ClusterStatsNodesJvmStatsVersion

type ClusterStatsNodesJvmStatsVersion struct {
	Version   string `json:"version"`    // e.g. "1.8.0_45"
	VMName    string `json:"vm_name"`    // e.g. "Java HotSpot(TM) 64-Bit Server VM"
	VMVersion string `json:"vm_version"` // e.g. "25.45-b02"
	VMVendor  string `json:"vm_vendor"`  // e.g. "Oracle Corporation"
	Count     int    `json:"count"`
}

type ClusterStatsNodesOsStats

type ClusterStatsNodesOsStats struct {
	AvailableProcessors int                            `json:"available_processors"`
	Mem                 *ClusterStatsNodesOsStatsMem   `json:"mem"`
	CPU                 []*ClusterStatsNodesOsStatsCPU `json:"cpu"`
}

type ClusterStatsNodesOsStatsCPU

type ClusterStatsNodesOsStatsCPU struct {
	Vendor           string `json:"vendor"`
	Model            string `json:"model"`
	MHz              int    `json:"mhz"`
	TotalCores       int    `json:"total_cores"`
	TotalSockets     int    `json:"total_sockets"`
	CoresPerSocket   int    `json:"cores_per_socket"`
	CacheSize        string `json:"cache_size"` // e.g. "256b"
	CacheSizeInBytes int64  `json:"cache_size_in_bytes"`
	Count            int    `json:"count"`
}

type ClusterStatsNodesOsStatsMem

type ClusterStatsNodesOsStatsMem struct {
	Total        string `json:"total"` // e.g. "16gb"
	TotalInBytes int64  `json:"total_in_bytes"`
}

type ClusterStatsNodesPlugin

type ClusterStatsNodesPlugin struct {
	Name        string `json:"name"`
	Version     string `json:"version"`
	Description string `json:"description"`
	URL         string `json:"url"`
	JVM         bool   `json:"jvm"`
	Site        bool   `json:"site"`
}

type ClusterStatsNodesProcessStats

type ClusterStatsNodesProcessStats struct {
	CPU                 *ClusterStatsNodesProcessStatsCPU                 `json:"cpu"`
	OpenFileDescriptors *ClusterStatsNodesProcessStatsOpenFileDescriptors `json:"open_file_descriptors"`
}

type ClusterStatsNodesProcessStatsCPU

type ClusterStatsNodesProcessStatsCPU struct {
	Percent float64 `json:"percent"`
}

type ClusterStatsNodesProcessStatsOpenFileDescriptors

type ClusterStatsNodesProcessStatsOpenFileDescriptors struct {
	Min int64 `json:"min"`
	Max int64 `json:"max"`
	Avg int64 `json:"avg"`
}

type ClusterStatsResponse

type ClusterStatsResponse struct {
	Timestamp   int64                `json:"timestamp"`
	ClusterName string               `json:"cluster_name"`
	ClusterUUID string               `json:"uuid"`
	Status      string               `json:"status"`
	Indices     *ClusterStatsIndices `json:"indices"`
	Nodes       *ClusterStatsNodes   `json:"nodes"`
}

ClusterStatsResponse is the response of ClusterStatsService.Do.

type ClusterStatsService

type ClusterStatsService struct {
	// contains filtered or unexported fields
}

ClusterStatsService is documented at https://www.elastic.co/guide/en/elasticsearch/reference/6.0/cluster-stats.html.
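
A sketch of fetching cluster statistics, assuming client and ctx exist:

res, err := client.ClusterStats().Do(ctx)
if err != nil {
	panic(err)
}
fmt.Printf("cluster %q has %d nodes\n", res.ClusterName, res.Nodes.Count.Total)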

func NewClusterStatsService

func NewClusterStatsService(client *Client) *ClusterStatsService

NewClusterStatsService creates a new ClusterStatsService.

func (*ClusterStatsService) Do

Do executes the operation.

func (*ClusterStatsService) FlatSettings

func (s *ClusterStatsService) FlatSettings(flatSettings bool) *ClusterStatsService

FlatSettings is documented as: Return settings in flat format (default: false).

func (*ClusterStatsService) Human

func (s *ClusterStatsService) Human(human bool) *ClusterStatsService

Human is documented as: Whether to return time and byte values in human-readable format.

func (*ClusterStatsService) NodeId

func (s *ClusterStatsService) NodeId(nodeId []string) *ClusterStatsService

NodeId is documented as: A comma-separated list of node IDs or names to limit the returned information; use `_local` to return information from the node you're connecting to, leave empty to get information from all nodes.

func (*ClusterStatsService) Pretty

func (s *ClusterStatsService) Pretty(pretty bool) *ClusterStatsService

Pretty indicates that the JSON response should be indented and human readable.

func (*ClusterStatsService) Validate

func (s *ClusterStatsService) Validate() error

Validate checks if the operation is valid.

type CollapseBuilder

type CollapseBuilder struct {
	// contains filtered or unexported fields
}

CollapseBuilder enables field collapsing on a search request. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-collapse.html for details.

func NewCollapseBuilder

func NewCollapseBuilder(field string) *CollapseBuilder

NewCollapseBuilder creates a new CollapseBuilder.

func (*CollapseBuilder) Field

func (b *CollapseBuilder) Field(field string) *CollapseBuilder

Field to collapse.

func (*CollapseBuilder) InnerHit

func (b *CollapseBuilder) InnerHit(innerHit *InnerHit) *CollapseBuilder

InnerHit option to expand the collapsed results.

func (*CollapseBuilder) MaxConcurrentGroupRequests

func (b *CollapseBuilder) MaxConcurrentGroupRequests(max int) *CollapseBuilder

MaxConcurrentGroupRequests is the maximum number of group requests that are allowed to run concurrently in the inner_hits phase.

func (*CollapseBuilder) Source

func (b *CollapseBuilder) Source() (interface{}, error)

Source generates the JSON serializable fragment for the CollapseBuilder.
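
A sketch of field collapsing in a search request, assuming client and ctx exist and that SearchService exposes a Collapse option:

collapse := elastic.NewCollapseBuilder("user").
	InnerHit(elastic.NewInnerHit().Name("last_tweets").Size(5)).
	MaxConcurrentGroupRequests(4)
res, err := client.Search("twitter").
	Query(elastic.NewMatchAllQuery()).
	Collapse(collapse).
	Do(ctx)
if err != nil {
	panic(err)
}
_ = res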

type CollectorResult

type CollectorResult struct {
	Name      string            `json:"name,omitempty"`
	Reason    string            `json:"reason,omitempty"`
	Time      string            `json:"time,omitempty"`
	TimeNanos int64             `json:"time_in_nanos,omitempty"`
	Children  []CollectorResult `json:"children,omitempty"`
}

CollectorResult holds the profile timings of the collectors used in the search. Children's CollectorResults may be embedded inside of a parent CollectorResult.

type CommonTermsQuery

type CommonTermsQuery struct {
	Query
	// contains filtered or unexported fields
}

CommonTermsQuery is a modern alternative to stopwords which improves the precision and recall of search results (by taking stopwords into account), without sacrificing performance. For more details, see: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-common-terms-query.html
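
A sketch of a common terms query with a cutoff frequency, assuming client and ctx exist:

q := elastic.NewCommonTermsQuery("message", "the quick brown fox").
	CutoffFrequency(0.001)
res, err := client.Search("twitter").Query(q).Do(ctx)
if err != nil {
	panic(err)
}
fmt.Printf("found %d documents\n", res.TotalHits())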

func NewCommonTermsQuery

func NewCommonTermsQuery(name string, text interface{}) *CommonTermsQuery

NewCommonTermsQuery creates and initializes a new common terms query.

func (*CommonTermsQuery) Analyzer

func (q *CommonTermsQuery) Analyzer(analyzer string) *CommonTermsQuery

func (*CommonTermsQuery) Boost

func (q *CommonTermsQuery) Boost(boost float64) *CommonTermsQuery

func (*CommonTermsQuery) CutoffFrequency

func (q *CommonTermsQuery) CutoffFrequency(f float64) *CommonTermsQuery

func (*CommonTermsQuery) HighFreq

func (q *CommonTermsQuery) HighFreq(f float64) *CommonTermsQuery

func (*CommonTermsQuery) HighFreqMinimumShouldMatch

func (q *CommonTermsQuery) HighFreqMinimumShouldMatch(minShouldMatch string) *CommonTermsQuery

func (*CommonTermsQuery) HighFreqOperator

func (q *CommonTermsQuery) HighFreqOperator(op string) *CommonTermsQuery

func (*CommonTermsQuery) LowFreq

func (q *CommonTermsQuery) LowFreq(f float64) *CommonTermsQuery

func (*CommonTermsQuery) LowFreqMinimumShouldMatch

func (q *CommonTermsQuery) LowFreqMinimumShouldMatch(minShouldMatch string) *CommonTermsQuery

func (*CommonTermsQuery) LowFreqOperator

func (q *CommonTermsQuery) LowFreqOperator(op string) *CommonTermsQuery

func (*CommonTermsQuery) QueryName

func (q *CommonTermsQuery) QueryName(queryName string) *CommonTermsQuery

func (*CommonTermsQuery) Source

func (q *CommonTermsQuery) Source() (interface{}, error)

Source creates the query source for the common terms query.

type CompletionSuggester

type CompletionSuggester struct {
	Suggester
	// contains filtered or unexported fields
}

CompletionSuggester is a fast suggester for e.g. type-ahead completion. See https://www.elastic.co/guide/en/elasticsearch/reference/5.2/search-suggesters-completion.html for more details.
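
A sketch of a type-ahead lookup, assuming client and ctx exist and that "suggest" is a completion-mapped field:

suggester := elastic.NewCompletionSuggester("song-suggest").
	Field("suggest").
	Prefix("nir")
res, err := client.Search("music").Suggester(suggester).Size(0).Do(ctx)
if err != nil {
	panic(err)
}
for _, suggestion := range res.Suggest["song-suggest"] {
	for _, option := range suggestion.Options {
		fmt.Println(option.Text)
	}
}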

func NewCompletionSuggester

func NewCompletionSuggester(name string) *CompletionSuggester

NewCompletionSuggester creates a new completion suggester.

func (*CompletionSuggester) Analyzer

func (q *CompletionSuggester) Analyzer(analyzer string) *CompletionSuggester

func (*CompletionSuggester) ContextQueries

func (q *CompletionSuggester) ContextQueries(queries ...SuggesterContextQuery) *CompletionSuggester

func (*CompletionSuggester) ContextQuery

func (*CompletionSuggester) Field

func (*CompletionSuggester) Fuzziness

func (q *CompletionSuggester) Fuzziness(fuzziness interface{}) *CompletionSuggester

func (*CompletionSuggester) FuzzyOptions

func (*CompletionSuggester) Name

func (q *CompletionSuggester) Name() string

func (*CompletionSuggester) Prefix

func (q *CompletionSuggester) Prefix(prefix string) *CompletionSuggester

func (*CompletionSuggester) PrefixWithEditDistance

func (q *CompletionSuggester) PrefixWithEditDistance(prefix string, editDistance interface{}) *CompletionSuggester

func (*CompletionSuggester) PrefixWithOptions

func (q *CompletionSuggester) PrefixWithOptions(prefix string, options *FuzzyCompletionSuggesterOptions) *CompletionSuggester

func (*CompletionSuggester) Regex

func (*CompletionSuggester) RegexOptions

func (*CompletionSuggester) RegexWithOptions

func (q *CompletionSuggester) RegexWithOptions(regex string, options *RegexCompletionSuggesterOptions) *CompletionSuggester

func (*CompletionSuggester) ShardSize

func (q *CompletionSuggester) ShardSize(shardSize int) *CompletionSuggester

func (*CompletionSuggester) Size

func (*CompletionSuggester) SkipDuplicates

func (q *CompletionSuggester) SkipDuplicates(skipDuplicates bool) *CompletionSuggester

func (*CompletionSuggester) Source

func (q *CompletionSuggester) Source(includeName bool) (interface{}, error)

Source creates the JSON data for the completion suggester.

func (*CompletionSuggester) Text

type CompositeAggregation

type CompositeAggregation struct {
	// contains filtered or unexported fields
}

CompositeAggregation is a multi-bucket values source based aggregation that can be used to calculate unique composite values from source documents.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.1/search-aggregations-bucket-composite-aggregation.html for details.
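
A sketch of paging over composite buckets, assuming client and ctx exist:

agg := elastic.NewCompositeAggregation().
	Sources(
		elastic.NewCompositeAggregationDateHistogramValuesSource("day", "1d").Field("created"),
		elastic.NewCompositeAggregationTermsValuesSource("user").Field("user"),
	).
	Size(100)
res, err := client.Search("twitter").Size(0).Aggregation("by_user_and_day", agg).Do(ctx)
if err != nil {
	panic(err)
}
_ = res // use AggregateAfter with the last bucket's key to fetch the next page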

func NewCompositeAggregation

func NewCompositeAggregation() *CompositeAggregation

NewCompositeAggregation creates a new CompositeAggregation.

func (*CompositeAggregation) AggregateAfter

func (a *CompositeAggregation) AggregateAfter(after map[string]interface{}) *CompositeAggregation

AggregateAfter sets the values that indicate which composite bucket this request should "aggregate after".

func (*CompositeAggregation) Meta

func (a *CompositeAggregation) Meta(metaData map[string]interface{}) *CompositeAggregation

Meta sets the meta data to be included in the aggregation response.

func (*CompositeAggregation) Size

Size represents the number of composite buckets to return. Defaults to 10 as of Elasticsearch 6.1.

func (*CompositeAggregation) Source

func (a *CompositeAggregation) Source() (interface{}, error)

Source returns the serializable JSON for this aggregation.

func (*CompositeAggregation) Sources

Sources specifies the list of CompositeAggregationValuesSource instances to use in the aggregation.

func (*CompositeAggregation) SubAggregation

func (a *CompositeAggregation) SubAggregation(name string, subAggregation Aggregation) *CompositeAggregation

SubAggregations of this aggregation.

type CompositeAggregationDateHistogramValuesSource

type CompositeAggregationDateHistogramValuesSource struct {
	// contains filtered or unexported fields
}

CompositeAggregationDateHistogramValuesSource is a source for the CompositeAggregation that handles date histograms. It works very similarly to a date histogram aggregation, with slightly different syntax.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.1/search-aggregations-bucket-composite-aggregation.html#_date_histogram for details.

func NewCompositeAggregationDateHistogramValuesSource

func NewCompositeAggregationDateHistogramValuesSource(name string, interval interface{}) *CompositeAggregationDateHistogramValuesSource

NewCompositeAggregationDateHistogramValuesSource creates and initializes a new CompositeAggregationDateHistogramValuesSource.

func (*CompositeAggregationDateHistogramValuesSource) Asc

Asc ensures the order of the values produced is ascending.

func (*CompositeAggregationDateHistogramValuesSource) Desc

Desc ensures the order of the values produced is descending.

func (*CompositeAggregationDateHistogramValuesSource) Field

Field to use for this source.

func (*CompositeAggregationDateHistogramValuesSource) Interval

Interval to use for the date histogram, e.g. "1d" or a numeric value like "60".

func (*CompositeAggregationDateHistogramValuesSource) Missing

Missing specifies the value to use when the source finds a missing value in a document.

func (*CompositeAggregationDateHistogramValuesSource) Order

Order specifies the order in the values produced by this source. It can be either "asc" or "desc".

func (*CompositeAggregationDateHistogramValuesSource) Script

Script to use for this source.

func (*CompositeAggregationDateHistogramValuesSource) Source

func (a *CompositeAggregationDateHistogramValuesSource) Source() (interface{}, error)

Source returns the serializable JSON for this values source.

func (*CompositeAggregationDateHistogramValuesSource) TimeZone

TimeZone to use for the dates.

func (*CompositeAggregationDateHistogramValuesSource) ValueType

ValueType specifies the type of values produced by this source, e.g. "string" or "date".

type CompositeAggregationHistogramValuesSource

type CompositeAggregationHistogramValuesSource struct {
	// contains filtered or unexported fields
}

CompositeAggregationHistogramValuesSource is a source for the CompositeAggregation that handles histograms. It works very similarly to a histogram aggregation, with slightly different syntax.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.1/search-aggregations-bucket-composite-aggregation.html#_histogram for details.

func NewCompositeAggregationHistogramValuesSource

func NewCompositeAggregationHistogramValuesSource(name string, interval float64) *CompositeAggregationHistogramValuesSource

NewCompositeAggregationHistogramValuesSource creates and initializes a new CompositeAggregationHistogramValuesSource.

func (*CompositeAggregationHistogramValuesSource) Asc

Asc ensures the order of the values produced is ascending.

func (*CompositeAggregationHistogramValuesSource) Desc

Desc ensures the order of the values produced is descending.

func (*CompositeAggregationHistogramValuesSource) Field

Field to use for this source.

func (*CompositeAggregationHistogramValuesSource) Interval

Interval specifies the interval to use.

func (*CompositeAggregationHistogramValuesSource) Missing

Missing specifies the value to use when the source finds a missing value in a document.

func (*CompositeAggregationHistogramValuesSource) Order

Order specifies the order in the values produced by this source. It can be either "asc" or "desc".

func (*CompositeAggregationHistogramValuesSource) Script

Script to use for this source.

func (*CompositeAggregationHistogramValuesSource) Source

func (a *CompositeAggregationHistogramValuesSource) Source() (interface{}, error)

Source returns the serializable JSON for this values source.

func (*CompositeAggregationHistogramValuesSource) ValueType

ValueType specifies the type of values produced by this source, e.g. "string" or "date".

type CompositeAggregationTermsValuesSource

type CompositeAggregationTermsValuesSource struct {
	// contains filtered or unexported fields
}

CompositeAggregationTermsValuesSource is a source for the CompositeAggregation that handles terms. It works very similarly to a terms aggregation, with slightly different syntax.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.1/search-aggregations-bucket-composite-aggregation.html#_terms for details.

func NewCompositeAggregationTermsValuesSource

func NewCompositeAggregationTermsValuesSource(name string) *CompositeAggregationTermsValuesSource

NewCompositeAggregationTermsValuesSource creates and initializes a new CompositeAggregationTermsValuesSource.

func (*CompositeAggregationTermsValuesSource) Asc

Asc ensures the order of the values produced is ascending.

func (*CompositeAggregationTermsValuesSource) Desc

Desc ensures the order of the values produced is descending.

func (*CompositeAggregationTermsValuesSource) Field

Field to use for this source.

func (*CompositeAggregationTermsValuesSource) Missing

Missing specifies the value to use when the source finds a missing value in a document.

func (*CompositeAggregationTermsValuesSource) Order

Order specifies the order in the values produced by this source. It can be either "asc" or "desc".

func (*CompositeAggregationTermsValuesSource) Script

Script to use for this source.

func (*CompositeAggregationTermsValuesSource) Source

func (a *CompositeAggregationTermsValuesSource) Source() (interface{}, error)

Source returns the serializable JSON for this values source.

func (*CompositeAggregationTermsValuesSource) ValueType

ValueType specifies the type of values produced by this source, e.g. "string" or "date".

type CompositeAggregationValuesSource

type CompositeAggregationValuesSource interface {
	Source() (interface{}, error)
}

CompositeAggregationValuesSource specifies the interface that all implementations for CompositeAggregation's Sources method need to implement.

The different implementations are described in https://www.elastic.co/guide/en/elasticsearch/reference/6.1/search-aggregations-bucket-composite-aggregation.html#_values_source_2.

type ConstantBackoff

type ConstantBackoff struct {
	// contains filtered or unexported fields
}

ConstantBackoff is a backoff policy that always returns the same delay.

func NewConstantBackoff

func NewConstantBackoff(interval time.Duration) *ConstantBackoff

NewConstantBackoff returns a new ConstantBackoff.

func (*ConstantBackoff) Next

func (b *ConstantBackoff) Next(retry int) (time.Duration, bool)

Next implements BackoffFunc for ConstantBackoff.

type ConstantScoreQuery

type ConstantScoreQuery struct {
	// contains filtered or unexported fields
}

ConstantScoreQuery is a query that wraps a filter and simply returns a constant score equal to the query boost for every document in the filter.

For more details, see: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-constant-score-query.html
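
A sketch of wrapping a filter in a constant score query, assuming client and ctx exist:

q := elastic.NewConstantScoreQuery(elastic.NewTermQuery("user", "olivere")).Boost(1.2)
res, err := client.Search("twitter").Query(q).Do(ctx)
if err != nil {
	panic(err)
}
_ = res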

func NewConstantScoreQuery

func NewConstantScoreQuery(filter Query) *ConstantScoreQuery

NewConstantScoreQuery creates and initializes a new constant score query.

func (*ConstantScoreQuery) Boost

Boost sets the boost for this query. Documents matching this query will (in addition to the normal weightings) have their score multiplied by the boost provided.

func (*ConstantScoreQuery) Source

func (q *ConstantScoreQuery) Source() (interface{}, error)

Source returns the query source.

type ContextSuggester

type ContextSuggester struct {
	Suggester
	// contains filtered or unexported fields
}

ContextSuggester is a fast suggester for e.g. type-ahead completion that supports filtering and boosting based on contexts. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/suggester-context.html for more details.

func NewContextSuggester

func NewContextSuggester(name string) *ContextSuggester

NewContextSuggester creates a new context suggester.

func (*ContextSuggester) ContextQueries

func (q *ContextSuggester) ContextQueries(queries ...SuggesterContextQuery) *ContextSuggester

func (*ContextSuggester) ContextQuery

func (*ContextSuggester) Field

func (q *ContextSuggester) Field(field string) *ContextSuggester

func (*ContextSuggester) Name

func (q *ContextSuggester) Name() string

func (*ContextSuggester) Prefix

func (q *ContextSuggester) Prefix(prefix string) *ContextSuggester

func (*ContextSuggester) Size

func (q *ContextSuggester) Size(size int) *ContextSuggester

func (*ContextSuggester) Source

func (q *ContextSuggester) Source(includeName bool) (interface{}, error)

Source creates the JSON data for the context suggester.

type CountResponse

type CountResponse struct {
	Count  int64      `json:"count"`
	Shards shardsInfo `json:"_shards,omitempty"`
}

CountResponse is the response of using the Count API.

type CountService

type CountService struct {
	// contains filtered or unexported fields
}

CountService is a convenient service for determining the number of documents in an index. Use SearchService with a SearchType of count for counting with queries etc.
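
A sketch of counting documents that match a query, assuming client and ctx exist:

count, err := client.Count("twitter").
	Query(elastic.NewTermQuery("user", "olivere")).
	Do(ctx)
if err != nil {
	panic(err)
}
fmt.Printf("found %d matching documents\n", count)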

func NewCountService

func NewCountService(client *Client) *CountService

NewCountService creates a new CountService.

func (*CountService) AllowNoIndices

func (s *CountService) AllowNoIndices(allowNoIndices bool) *CountService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes the "_all" string or when no indices have been specified.)

func (*CountService) AnalyzeWildcard

func (s *CountService) AnalyzeWildcard(analyzeWildcard bool) *CountService

AnalyzeWildcard specifies whether wildcard and prefix queries should be analyzed (default: false).

func (*CountService) Analyzer

func (s *CountService) Analyzer(analyzer string) *CountService

Analyzer specifies the analyzer to use for the query string.

func (*CountService) BodyJson

func (s *CountService) BodyJson(body interface{}) *CountService

BodyJson specifies the query to restrict the results specified with the Query DSL (optional). The interface{} will be serialized to a JSON document, so use a map[string]interface{}.

func (*CountService) BodyString

func (s *CountService) BodyString(body string) *CountService

BodyString specifies a query to restrict the results, specified with the Query DSL (optional).

func (*CountService) DefaultOperator

func (s *CountService) DefaultOperator(defaultOperator string) *CountService

DefaultOperator specifies the default operator for query string query (AND or OR).

func (*CountService) Df

func (s *CountService) Df(df string) *CountService

Df specifies the field to use as default where no field prefix is given in the query string.

func (*CountService) Do

func (s *CountService) Do(ctx context.Context) (int64, error)

Do executes the operation.

func (*CountService) ExpandWildcards

func (s *CountService) ExpandWildcards(expandWildcards string) *CountService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*CountService) IgnoreUnavailable

func (s *CountService) IgnoreUnavailable(ignoreUnavailable bool) *CountService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*CountService) Index

func (s *CountService) Index(index ...string) *CountService

Index sets the names of the indices to restrict the results.

func (*CountService) Lenient

func (s *CountService) Lenient(lenient bool) *CountService

Lenient specifies whether format-based query failures (such as providing text to a numeric field) should be ignored.

func (*CountService) LowercaseExpandedTerms

func (s *CountService) LowercaseExpandedTerms(lowercaseExpandedTerms bool) *CountService

LowercaseExpandedTerms specifies whether query terms should be lowercased.

func (*CountService) MinScore

func (s *CountService) MinScore(minScore interface{}) *CountService

MinScore includes only documents with a `_score` greater than or equal to the specified minimum value in the result.

func (*CountService) Preference

func (s *CountService) Preference(preference string) *CountService

Preference specifies the node or shard the operation should be performed on (default: random).

func (*CountService) Pretty

func (s *CountService) Pretty(pretty bool) *CountService

Pretty indicates that the JSON response should be indented and human readable.

func (*CountService) Q

func (s *CountService) Q(q string) *CountService

Q specifies the query in the Lucene query string syntax. You can also use Query to pass a Query struct.

func (*CountService) Query

func (s *CountService) Query(query Query) *CountService

Query specifies the query to pass. You can also pass a query string with Q.

func (*CountService) Routing

func (s *CountService) Routing(routing string) *CountService

Routing specifies the routing value.

func (*CountService) TerminateAfter

func (s *CountService) TerminateAfter(terminateAfter int) *CountService

TerminateAfter indicates the maximum count for each shard, upon reaching which the query execution will terminate early.

func (*CountService) Type

func (s *CountService) Type(typ ...string) *CountService

Type sets the types to use to restrict the results.

func (*CountService) Validate

func (s *CountService) Validate() error

Validate checks if the operation is valid.

type CumulativeSumAggregation

type CumulativeSumAggregation struct {
	// contains filtered or unexported fields
}

CumulativeSumAggregation is a parent pipeline aggregation which calculates the cumulative sum of a specified metric in a parent histogram (or date_histogram) aggregation. The specified metric must be numeric and the enclosing histogram must have min_doc_count set to 0 (default for histogram aggregations).

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-pipeline-cumulative-sum-aggregation.html

func NewCumulativeSumAggregation

func NewCumulativeSumAggregation() *CumulativeSumAggregation

NewCumulativeSumAggregation creates and initializes a new CumulativeSumAggregation.

func (*CumulativeSumAggregation) BucketsPath

func (a *CumulativeSumAggregation) BucketsPath(bucketsPaths ...string) *CumulativeSumAggregation

BucketsPath sets the paths to the buckets to use for this pipeline aggregator.

func (*CumulativeSumAggregation) Format

Format to use on the output of this aggregation.

func (*CumulativeSumAggregation) Meta

func (a *CumulativeSumAggregation) Meta(metaData map[string]interface{}) *CumulativeSumAggregation

Meta sets the meta data to be included in the aggregation response.

func (*CumulativeSumAggregation) Source

func (a *CumulativeSumAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface.

type DateHistogramAggregation

type DateHistogramAggregation struct {
	// contains filtered or unexported fields
}

DateHistogramAggregation is a multi-bucket aggregation similar to the histogram except it can only be applied on date values. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-datehistogram-aggregation.html
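
A sketch of bucketing documents per day, assuming client and ctx exist:

agg := elastic.NewDateHistogramAggregation().
	Field("created").
	Interval("day").
	Format("yyyy-MM-dd").
	MinDocCount(0)
res, err := client.Search("twitter").Size(0).Aggregation("tweets_per_day", agg).Do(ctx)
if err != nil {
	panic(err)
}
_ = res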

func NewDateHistogramAggregation

func NewDateHistogramAggregation() *DateHistogramAggregation

NewDateHistogramAggregation creates a new DateHistogramAggregation.

func (*DateHistogramAggregation) ExtendedBounds

func (a *DateHistogramAggregation) ExtendedBounds(min, max interface{}) *DateHistogramAggregation

ExtendedBounds accepts int, int64, string, or time.Time values. In case the lower value in the histogram would be greater than min or the upper value would be less than max, empty buckets will be generated.

func (*DateHistogramAggregation) ExtendedBoundsMax

func (a *DateHistogramAggregation) ExtendedBoundsMax(max interface{}) *DateHistogramAggregation

ExtendedBoundsMax accepts int, int64, string, or time.Time values.

func (*DateHistogramAggregation) ExtendedBoundsMin

func (a *DateHistogramAggregation) ExtendedBoundsMin(min interface{}) *DateHistogramAggregation

ExtendedBoundsMin accepts int, int64, string, or time.Time values.

func (*DateHistogramAggregation) Field

Field on which the aggregation is processed.

func (*DateHistogramAggregation) Format

Format sets the format to use for dates.

func (*DateHistogramAggregation) Interval

Interval by which the aggregation gets processed. Allowed values are: "year", "quarter", "month", "week", "day", "hour", "minute". It also supports time settings like "1.5h" (up to "w" for weeks).

func (*DateHistogramAggregation) Meta

func (a *DateHistogramAggregation) Meta(metaData map[string]interface{}) *DateHistogramAggregation

Meta sets the meta data to be included in the aggregation response.

func (*DateHistogramAggregation) MinDocCount

func (a *DateHistogramAggregation) MinDocCount(minDocCount int64) *DateHistogramAggregation

MinDocCount sets the minimum document count per bucket. Buckets with fewer documents than this minimum value will not be returned.

func (*DateHistogramAggregation) Missing

func (a *DateHistogramAggregation) Missing(missing interface{}) *DateHistogramAggregation

Missing configures the value to use when documents miss a value.

func (*DateHistogramAggregation) Offset

Offset sets the offset of time intervals in the histogram, e.g. "+6h".

func (*DateHistogramAggregation) Order

Order specifies the sort order. Valid values for order are: "_key", "_count", a sub-aggregation name, or a sub-aggregation name with a metric.

func (*DateHistogramAggregation) OrderByAggregation

func (a *DateHistogramAggregation) OrderByAggregation(aggName string, asc bool) *DateHistogramAggregation

OrderByAggregation creates a bucket ordering strategy which sorts buckets based on a single-valued sub-aggregation.

func (*DateHistogramAggregation) OrderByAggregationAndMetric

func (a *DateHistogramAggregation) OrderByAggregationAndMetric(aggName, metric string, asc bool) *DateHistogramAggregation

OrderByAggregationAndMetric creates a bucket ordering strategy which sorts buckets based on a metric of a multi-valued sub-aggregation.

func (*DateHistogramAggregation) OrderByCount

func (*DateHistogramAggregation) OrderByCountAsc

func (a *DateHistogramAggregation) OrderByCountAsc() *DateHistogramAggregation

func (*DateHistogramAggregation) OrderByCountDesc

func (a *DateHistogramAggregation) OrderByCountDesc() *DateHistogramAggregation

func (*DateHistogramAggregation) OrderByKey

func (*DateHistogramAggregation) OrderByKeyAsc

func (*DateHistogramAggregation) OrderByKeyDesc

func (*DateHistogramAggregation) Script

func (*DateHistogramAggregation) Source

func (a *DateHistogramAggregation) Source() (interface{}, error)

func (*DateHistogramAggregation) SubAggregation

func (a *DateHistogramAggregation) SubAggregation(name string, subAggregation Aggregation) *DateHistogramAggregation

func (*DateHistogramAggregation) TimeZone

TimeZone sets the timezone in which to translate dates before computing buckets.

type DateRangeAggregation

type DateRangeAggregation struct {
	// contains filtered or unexported fields
}

DateRangeAggregation is a range aggregation that is dedicated for date values. The main difference between this aggregation and the normal range aggregation is that the from and to values can be expressed in Date Math expressions, and it is also possible to specify a date format by which the from and to response fields will be returned. Note that this aggregation includes the from value and excludes the to value for each range. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-daterange-aggregation.html
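
A sketch of date ranges relative to now, assuming client and ctx exist:

agg := elastic.NewDateRangeAggregation().
	Field("created").
	AddUnboundedFromWithKey("older_than_a_year", "now-1y"). // only a "to" bound
	AddRangeWithKey("within_last_year", "now-1y", "now-1M").
	AddUnboundedToWithKey("last_month", "now-1M") // only a "from" bound
res, err := client.Search("twitter").Size(0).Aggregation("by_age", agg).Do(ctx)
if err != nil {
	panic(err)
}
_ = res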

func NewDateRangeAggregation

func NewDateRangeAggregation() *DateRangeAggregation

func (*DateRangeAggregation) AddRange

func (a *DateRangeAggregation) AddRange(from, to interface{}) *DateRangeAggregation

func (*DateRangeAggregation) AddRangeWithKey

func (a *DateRangeAggregation) AddRangeWithKey(key string, from, to interface{}) *DateRangeAggregation

func (*DateRangeAggregation) AddUnboundedFrom

func (a *DateRangeAggregation) AddUnboundedFrom(to interface{}) *DateRangeAggregation

func (*DateRangeAggregation) AddUnboundedFromWithKey

func (a *DateRangeAggregation) AddUnboundedFromWithKey(key string, to interface{}) *DateRangeAggregation

func (*DateRangeAggregation) AddUnboundedTo

func (a *DateRangeAggregation) AddUnboundedTo(from interface{}) *DateRangeAggregation

func (*DateRangeAggregation) AddUnboundedToWithKey

func (a *DateRangeAggregation) AddUnboundedToWithKey(key string, from interface{}) *DateRangeAggregation

func (*DateRangeAggregation) Between

func (a *DateRangeAggregation) Between(from, to interface{}) *DateRangeAggregation

func (*DateRangeAggregation) BetweenWithKey

func (a *DateRangeAggregation) BetweenWithKey(key string, from, to interface{}) *DateRangeAggregation

func (*DateRangeAggregation) Field

func (*DateRangeAggregation) Format

func (*DateRangeAggregation) Gt

func (a *DateRangeAggregation) Gt(from interface{}) *DateRangeAggregation

func (*DateRangeAggregation) GtWithKey

func (a *DateRangeAggregation) GtWithKey(key string, from interface{}) *DateRangeAggregation

func (*DateRangeAggregation) Keyed

func (*DateRangeAggregation) Lt

func (a *DateRangeAggregation) Lt(to interface{}) *DateRangeAggregation

func (*DateRangeAggregation) LtWithKey

func (a *DateRangeAggregation) LtWithKey(key string, to interface{}) *DateRangeAggregation

func (*DateRangeAggregation) Meta

func (a *DateRangeAggregation) Meta(metaData map[string]interface{}) *DateRangeAggregation

Meta sets the meta data to be included in the aggregation response.

func (*DateRangeAggregation) Script

func (a *DateRangeAggregation) Script(script *Script) *DateRangeAggregation

func (*DateRangeAggregation) Source

func (a *DateRangeAggregation) Source() (interface{}, error)

func (*DateRangeAggregation) SubAggregation

func (a *DateRangeAggregation) SubAggregation(name string, subAggregation Aggregation) *DateRangeAggregation

func (*DateRangeAggregation) TimeZone

func (a *DateRangeAggregation) TimeZone(timeZone string) *DateRangeAggregation

func (*DateRangeAggregation) Unmapped

func (a *DateRangeAggregation) Unmapped(unmapped bool) *DateRangeAggregation

type DateRangeAggregationEntry

type DateRangeAggregationEntry struct {
	Key  string
	From interface{}
	To   interface{}
}

type Decoder

type Decoder interface {
	Decode(data []byte, v interface{}) error
}

Decoder is used to decode responses from Elasticsearch. Users of elastic can implement their own decoder for advanced purposes and set it per Client (see SetDecoder). If none is specified, DefaultDecoder is used.
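
Example (a minimal sketch): a custom Decoder that simply delegates to encoding/json; a real implementation might swap in a different JSON library. The type name MyDecoder and the helper function are hypothetical.

import (
	"encoding/json"

	"github.com/olivere/elastic"
)

// MyDecoder is a hypothetical Decoder that delegates to encoding/json.
type MyDecoder struct{}

func (d *MyDecoder) Decode(data []byte, v interface{}) error {
	return json.Unmarshal(data, v)
}

func newClientWithDecoder() (*elastic.Client, error) {
	// SetDecoder installs the custom decoder on the client.
	return elastic.NewClient(elastic.SetDecoder(&MyDecoder{}))
}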

type DefaultDecoder

type DefaultDecoder struct{}

DefaultDecoder uses json.Unmarshal from the Go standard library to decode JSON data.

func (*DefaultDecoder) Decode

func (u *DefaultDecoder) Decode(data []byte, v interface{}) error

Decode decodes with json.Unmarshal from the Go standard library.

type DeleteByQueryService

type DeleteByQueryService struct {
	// contains filtered or unexported fields
}

DeleteByQueryService deletes documents that match a query. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-delete-by-query.html.

func NewDeleteByQueryService

func NewDeleteByQueryService(client *Client) *DeleteByQueryService

NewDeleteByQueryService creates a new DeleteByQueryService. You typically use the client's DeleteByQuery to get a reference to the service.
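
Example (a minimal sketch): delete all documents in a hypothetical "tweets" index that match a term query, proceeding on version conflicts. The index and field names as well as client and ctx are assumptions.

res, err := client.DeleteByQuery("tweets").
	Query(elastic.NewTermQuery("user", "olivere")).
	ProceedOnVersionConflict().
	Refresh("true").
	Do(ctx)
if err != nil {
	// handle error
}
_ = res // inspect e.g. the number of deleted documents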

func (*DeleteByQueryService) AbortOnVersionConflict

func (s *DeleteByQueryService) AbortOnVersionConflict() *DeleteByQueryService

AbortOnVersionConflict aborts the request on version conflicts. It is an alias to setting Conflicts("abort").

func (*DeleteByQueryService) AllowNoIndices

func (s *DeleteByQueryService) AllowNoIndices(allow bool) *DeleteByQueryService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices (including the _all string or when no indices have been specified).

func (*DeleteByQueryService) AnalyzeWildcard

func (s *DeleteByQueryService) AnalyzeWildcard(analyzeWildcard bool) *DeleteByQueryService

AnalyzeWildcard specifies whether wildcard and prefix queries should be analyzed (default: false).

func (*DeleteByQueryService) Analyzer

func (s *DeleteByQueryService) Analyzer(analyzer string) *DeleteByQueryService

Analyzer to use for the query string.

func (*DeleteByQueryService) Body

Body specifies the body of the request. It overrides data being specified via SearchService.

func (*DeleteByQueryService) Conflicts

func (s *DeleteByQueryService) Conflicts(conflicts string) *DeleteByQueryService

Conflicts indicates what to do when the process detects version conflicts. Possible values are "proceed" and "abort".

func (*DeleteByQueryService) DF

func (s *DeleteByQueryService) DF(defaultField string) *DeleteByQueryService

DF is the field to use as default where no field prefix is given in the query string.

func (*DeleteByQueryService) DefaultField

func (s *DeleteByQueryService) DefaultField(defaultField string) *DeleteByQueryService

DefaultField is the field to use as default where no field prefix is given in the query string. It is an alias to the DF func.

func (*DeleteByQueryService) DefaultOperator

func (s *DeleteByQueryService) DefaultOperator(defaultOperator string) *DeleteByQueryService

DefaultOperator for query string query (AND or OR).

func (*DeleteByQueryService) Do

Do executes the delete-by-query operation.

func (*DeleteByQueryService) DocvalueFields

func (s *DeleteByQueryService) DocvalueFields(docvalueFields ...string) *DeleteByQueryService

DocvalueFields specifies the list of fields to return as the docvalue representation of a field for each hit.

func (*DeleteByQueryService) ExpandWildcards

func (s *DeleteByQueryService) ExpandWildcards(expand string) *DeleteByQueryService

ExpandWildcards indicates whether to expand wildcard expressions to concrete indices that are open, closed, or both. It can be "open" or "closed".

func (*DeleteByQueryService) Explain

func (s *DeleteByQueryService) Explain(explain bool) *DeleteByQueryService

Explain specifies whether to return detailed information about score computation as part of a hit.

func (*DeleteByQueryService) From

From is the starting offset (default: 0).

func (*DeleteByQueryService) IgnoreUnavailable

func (s *DeleteByQueryService) IgnoreUnavailable(ignore bool) *DeleteByQueryService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*DeleteByQueryService) Index

func (s *DeleteByQueryService) Index(index ...string) *DeleteByQueryService

Index sets the indices on which to perform the delete operation.

func (*DeleteByQueryService) Lenient

func (s *DeleteByQueryService) Lenient(lenient bool) *DeleteByQueryService

Lenient specifies whether format-based query failures (such as providing text to a numeric field) should be ignored.

func (*DeleteByQueryService) LowercaseExpandedTerms

func (s *DeleteByQueryService) LowercaseExpandedTerms(lowercaseExpandedTerms bool) *DeleteByQueryService

LowercaseExpandedTerms specifies whether query terms should be lowercased.

func (*DeleteByQueryService) Preference

func (s *DeleteByQueryService) Preference(preference string) *DeleteByQueryService

Preference specifies the node or shard the operation should be performed on (default: random).

func (*DeleteByQueryService) Pretty

func (s *DeleteByQueryService) Pretty(pretty bool) *DeleteByQueryService

Pretty indents the JSON output from Elasticsearch.

func (*DeleteByQueryService) ProceedOnVersionConflict

func (s *DeleteByQueryService) ProceedOnVersionConflict() *DeleteByQueryService

ProceedOnVersionConflict tells the request to proceed on version conflicts. It is an alias to setting Conflicts("proceed").

func (*DeleteByQueryService) Q

Q specifies the query in Lucene query string syntax. You can also use Query to programmatically specify the query.

func (*DeleteByQueryService) Query

Query sets the query programmatically.

func (*DeleteByQueryService) QueryString

func (s *DeleteByQueryService) QueryString(query string) *DeleteByQueryService

QueryString is an alias to Q. Notice that you can also use Query to programmatically set the query.

func (*DeleteByQueryService) Refresh

func (s *DeleteByQueryService) Refresh(refresh string) *DeleteByQueryService

Refresh indicates whether the affected indices should be refreshed.

func (*DeleteByQueryService) RequestCache

func (s *DeleteByQueryService) RequestCache(requestCache bool) *DeleteByQueryService

RequestCache specifies whether the request cache should be used for this request; defaults to the index-level setting.

func (*DeleteByQueryService) RequestsPerSecond

func (s *DeleteByQueryService) RequestsPerSecond(requestsPerSecond int) *DeleteByQueryService

RequestsPerSecond sets the throttle for this request in sub-requests per second. -1 means no throttle; "unlimited" is the only non-numeric value accepted.

func (*DeleteByQueryService) Routing

func (s *DeleteByQueryService) Routing(routing ...string) *DeleteByQueryService

Routing is a list of specific routing values.

func (*DeleteByQueryService) Scroll

Scroll specifies how long a consistent view of the index should be maintained for scrolled search.

func (*DeleteByQueryService) ScrollSize

func (s *DeleteByQueryService) ScrollSize(scrollSize int) *DeleteByQueryService

ScrollSize is the size of the scroll request powering the delete-by-query.

func (*DeleteByQueryService) SearchTimeout

func (s *DeleteByQueryService) SearchTimeout(searchTimeout string) *DeleteByQueryService

SearchTimeout defines an explicit timeout for each search request. Defaults to no timeout.

func (*DeleteByQueryService) SearchType

func (s *DeleteByQueryService) SearchType(searchType string) *DeleteByQueryService

SearchType is the search operation type. Possible values are "query_then_fetch" and "dfs_query_then_fetch".

func (*DeleteByQueryService) Size

Size represents the number of hits to return (default: 10).

func (*DeleteByQueryService) Sort

Sort is a list of <field>:<direction> pairs.

func (*DeleteByQueryService) SortByField

func (s *DeleteByQueryService) SortByField(field string, ascending bool) *DeleteByQueryService

SortByField adds a sort order.

func (*DeleteByQueryService) Stats

func (s *DeleteByQueryService) Stats(stats ...string) *DeleteByQueryService

Stats specifies specific tag(s) of the request for logging and statistical purposes.

func (*DeleteByQueryService) StoredFields

func (s *DeleteByQueryService) StoredFields(storedFields ...string) *DeleteByQueryService

StoredFields specifies the list of stored fields to return as part of a hit.

func (*DeleteByQueryService) SuggestField

func (s *DeleteByQueryService) SuggestField(suggestField string) *DeleteByQueryService

SuggestField specifies which field to use for suggestions.

func (*DeleteByQueryService) SuggestMode

func (s *DeleteByQueryService) SuggestMode(suggestMode string) *DeleteByQueryService

SuggestMode specifies the suggest mode. Possible values are "missing", "popular", and "always".

func (*DeleteByQueryService) SuggestSize

func (s *DeleteByQueryService) SuggestSize(suggestSize int) *DeleteByQueryService

SuggestSize specifies how many suggestions to return in response.

func (*DeleteByQueryService) SuggestText

func (s *DeleteByQueryService) SuggestText(suggestText string) *DeleteByQueryService

SuggestText specifies the source text for which the suggestions should be returned.

func (*DeleteByQueryService) TerminateAfter

func (s *DeleteByQueryService) TerminateAfter(terminateAfter int) *DeleteByQueryService

TerminateAfter indicates the maximum number of documents to collect for each shard, upon reaching which the query execution will terminate early.

func (*DeleteByQueryService) Timeout

func (s *DeleteByQueryService) Timeout(timeout string) *DeleteByQueryService

Timeout is the time each individual bulk request should wait for shards that are unavailable.

func (*DeleteByQueryService) TimeoutInMillis

func (s *DeleteByQueryService) TimeoutInMillis(timeoutInMillis int) *DeleteByQueryService

TimeoutInMillis sets the timeout in milliseconds.

func (*DeleteByQueryService) TrackScores

func (s *DeleteByQueryService) TrackScores(trackScores bool) *DeleteByQueryService

TrackScores indicates whether to calculate and return scores even if they are not used for sorting.

func (*DeleteByQueryService) Type

Type limits the delete operation to the given types.

func (*DeleteByQueryService) Validate

func (s *DeleteByQueryService) Validate() error

Validate checks if the operation is valid.

func (*DeleteByQueryService) Version

func (s *DeleteByQueryService) Version(version bool) *DeleteByQueryService

Version specifies whether to return document version as part of a hit.

func (*DeleteByQueryService) WaitForActiveShards

func (s *DeleteByQueryService) WaitForActiveShards(waitForActiveShards string) *DeleteByQueryService

WaitForActiveShards sets the number of shard copies that must be active before proceeding with the delete-by-query operation. Defaults to 1, meaning the primary shard only. Set to `all` for all shard copies, otherwise set to any non-negative value less than or equal to the total number of copies for the shard (number of replicas + 1).

func (*DeleteByQueryService) WaitForCompletion

func (s *DeleteByQueryService) WaitForCompletion(waitForCompletion bool) *DeleteByQueryService

WaitForCompletion indicates if the request should block until the delete-by-query operation is complete.

func (*DeleteByQueryService) XSource

func (s *DeleteByQueryService) XSource(xSource ...string) *DeleteByQueryService

XSource accepts true or false to control whether the _source field is returned, or a list of fields to return.

func (*DeleteByQueryService) XSourceExclude

func (s *DeleteByQueryService) XSourceExclude(xSourceExclude ...string) *DeleteByQueryService

XSourceExclude represents a list of fields to exclude from the returned _source field.

func (*DeleteByQueryService) XSourceInclude

func (s *DeleteByQueryService) XSourceInclude(xSourceInclude ...string) *DeleteByQueryService

XSourceInclude represents a list of fields to extract and return from the _source field.

type DeleteResponse

type DeleteResponse struct {
	Index         string      `json:"_index,omitempty"`
	Type          string      `json:"_type,omitempty"`
	Id            string      `json:"_id,omitempty"`
	Version       int64       `json:"_version,omitempty"`
	Result        string      `json:"result,omitempty"`
	Shards        *shardsInfo `json:"_shards,omitempty"`
	SeqNo         int64       `json:"_seq_no,omitempty"`
	PrimaryTerm   int64       `json:"_primary_term,omitempty"`
	Status        int         `json:"status,omitempty"`
	ForcedRefresh bool        `json:"forced_refresh,omitempty"`
}

DeleteResponse is the outcome of running DeleteService.Do.

type DeleteService

type DeleteService struct {
	// contains filtered or unexported fields
}

DeleteService allows deleting a typed JSON document from a specified index based on its ID.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-delete.html for details.

func NewDeleteService

func NewDeleteService(client *Client) *DeleteService

NewDeleteService creates a new DeleteService.

func (*DeleteService) Do

Do executes the operation. If the document is not found (404), Elasticsearch will still return a response. This response is serialized and returned as well. In other words, for HTTP status code 404, both an error and a response might be returned.
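
Example (a minimal sketch): delete a document and distinguish the not-found case via IsNotFound. Index, type, and id are placeholders; client and ctx are assumed.

res, err := client.Delete().
	Index("tweets").
	Type("tweet").
	Id("1").
	Do(ctx)
switch {
case elastic.IsNotFound(err):
	// the document (or index) did not exist
case err != nil:
	// some other error
default:
	_ = res.Result // e.g. "deleted"
}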

func (*DeleteService) Id

func (s *DeleteService) Id(id string) *DeleteService

Id is the document ID.

func (*DeleteService) Index

func (s *DeleteService) Index(index string) *DeleteService

Index is the name of the index.

func (*DeleteService) Parent

func (s *DeleteService) Parent(parent string) *DeleteService

Parent is the ID of parent document.

func (*DeleteService) Pretty

func (s *DeleteService) Pretty(pretty bool) *DeleteService

Pretty indicates that the JSON response be indented and human readable.

func (*DeleteService) Refresh

func (s *DeleteService) Refresh(refresh string) *DeleteService

Refresh the index after performing the operation.

func (*DeleteService) Routing

func (s *DeleteService) Routing(routing string) *DeleteService

Routing is a specific routing value.

func (*DeleteService) Timeout

func (s *DeleteService) Timeout(timeout string) *DeleteService

Timeout is an explicit operation timeout.

func (*DeleteService) Type

func (s *DeleteService) Type(typ string) *DeleteService

Type is the type of the document.

func (*DeleteService) Validate

func (s *DeleteService) Validate() error

Validate checks if the operation is valid.

func (*DeleteService) Version

func (s *DeleteService) Version(version interface{}) *DeleteService

Version is an explicit version number for concurrency control.

func (*DeleteService) VersionType

func (s *DeleteService) VersionType(versionType string) *DeleteService

VersionType is a specific version type.

func (*DeleteService) WaitForActiveShards

func (s *DeleteService) WaitForActiveShards(waitForActiveShards string) *DeleteService

WaitForActiveShards sets the number of shard copies that must be active before proceeding with the delete operation. Defaults to 1, meaning the primary shard only. Set to `all` for all shard copies, otherwise set to any non-negative value less than or equal to the total number of copies for the shard (number of replicas + 1).

type DerivativeAggregation

type DerivativeAggregation struct {
	// contains filtered or unexported fields
}

DerivativeAggregation is a parent pipeline aggregation which calculates the derivative of a specified metric in a parent histogram (or date_histogram) aggregation. The specified metric must be numeric and the enclosing histogram must have min_doc_count set to 0 (default for histogram aggregations).

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-pipeline-derivative-aggregation.html
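
Example (a minimal sketch): a monthly date histogram with a sum sub-aggregation and a derivative over those sums. Field and aggregation names are placeholders; SumAggregation and the date histogram's Interval are documented elsewhere in this package.

histo := elastic.NewDateHistogramAggregation().
	Field("date").
	Interval("month").
	SubAggregation("sales", elastic.NewSumAggregation().Field("price")).
	SubAggregation("sales_deriv",
		elastic.NewDerivativeAggregation().BucketsPath("sales"))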

func NewDerivativeAggregation

func NewDerivativeAggregation() *DerivativeAggregation

NewDerivativeAggregation creates and initializes a new DerivativeAggregation.

func (*DerivativeAggregation) BucketsPath

func (a *DerivativeAggregation) BucketsPath(bucketsPaths ...string) *DerivativeAggregation

BucketsPath sets the paths to the buckets to use for this pipeline aggregator.

func (*DerivativeAggregation) Format

Format to use on the output of this aggregation.

func (*DerivativeAggregation) GapInsertZeros

func (a *DerivativeAggregation) GapInsertZeros() *DerivativeAggregation

GapInsertZeros inserts zeros for gaps in the series.

func (*DerivativeAggregation) GapPolicy

func (a *DerivativeAggregation) GapPolicy(gapPolicy string) *DerivativeAggregation

GapPolicy defines what should be done when a gap in the series is discovered. Valid values include "insert_zeros" or "skip". Default is "insert_zeros".

func (*DerivativeAggregation) GapSkip

GapSkip skips gaps in the series.

func (*DerivativeAggregation) Meta

func (a *DerivativeAggregation) Meta(metaData map[string]interface{}) *DerivativeAggregation

Meta sets the meta data to be included in the aggregation response.

func (*DerivativeAggregation) Source

func (a *DerivativeAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface.

func (*DerivativeAggregation) Unit

Unit sets the unit provided, e.g. "1d" or "1y". It is only useful when calculating the derivative using a date_histogram.

type DirectCandidateGenerator

type DirectCandidateGenerator struct {
	// contains filtered or unexported fields
}

DirectCandidateGenerator implements a direct candidate generator. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-suggesters-phrase.html#_smoothing_models for details about smoothing models.

func NewDirectCandidateGenerator

func NewDirectCandidateGenerator(field string) *DirectCandidateGenerator

func (*DirectCandidateGenerator) Accuracy

func (*DirectCandidateGenerator) Field

func (*DirectCandidateGenerator) MaxEdits

func (g *DirectCandidateGenerator) MaxEdits(maxEdits int) *DirectCandidateGenerator

func (*DirectCandidateGenerator) MaxInspections

func (g *DirectCandidateGenerator) MaxInspections(maxInspections int) *DirectCandidateGenerator

func (*DirectCandidateGenerator) MaxTermFreq

func (g *DirectCandidateGenerator) MaxTermFreq(maxTermFreq float64) *DirectCandidateGenerator

func (*DirectCandidateGenerator) MinDocFreq

func (g *DirectCandidateGenerator) MinDocFreq(minDocFreq float64) *DirectCandidateGenerator

func (*DirectCandidateGenerator) MinWordLength

func (g *DirectCandidateGenerator) MinWordLength(minWordLength int) *DirectCandidateGenerator

func (*DirectCandidateGenerator) PostFilter

func (g *DirectCandidateGenerator) PostFilter(postFilter string) *DirectCandidateGenerator

func (*DirectCandidateGenerator) PreFilter

func (g *DirectCandidateGenerator) PreFilter(preFilter string) *DirectCandidateGenerator

func (*DirectCandidateGenerator) PrefixLength

func (g *DirectCandidateGenerator) PrefixLength(prefixLength int) *DirectCandidateGenerator

func (*DirectCandidateGenerator) Size

func (*DirectCandidateGenerator) Sort

func (*DirectCandidateGenerator) Source

func (g *DirectCandidateGenerator) Source() (interface{}, error)

func (*DirectCandidateGenerator) StringDistance

func (g *DirectCandidateGenerator) StringDistance(stringDistance string) *DirectCandidateGenerator

func (*DirectCandidateGenerator) SuggestMode

func (g *DirectCandidateGenerator) SuggestMode(suggestMode string) *DirectCandidateGenerator

func (*DirectCandidateGenerator) Type

func (g *DirectCandidateGenerator) Type() string

type DisMaxQuery

type DisMaxQuery struct {
	// contains filtered or unexported fields
}

DisMaxQuery is a query that generates the union of documents produced by its subqueries, and that scores each document with the maximum score for that document as produced by any subquery, plus a tie breaking increment for any additional matching subqueries.

For more details, see: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-dis-max-query.html
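
Example (a minimal sketch): take the best score of a match on either of two hypothetical fields, with a small tie breaker.

q := elastic.NewDisMaxQuery().
	Query(
		elastic.NewMatchQuery("title", "quick brown fox"),
		elastic.NewMatchQuery("body", "quick brown fox"),
	).
	TieBreaker(0.3).
	Boost(1.2)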

func NewDisMaxQuery

func NewDisMaxQuery() *DisMaxQuery

NewDisMaxQuery creates and initializes a new dis max query.

func (*DisMaxQuery) Boost

func (q *DisMaxQuery) Boost(boost float64) *DisMaxQuery

Boost sets the boost for this query. Documents matching this query will (in addition to the normal weightings) have their score multiplied by the boost provided.

func (*DisMaxQuery) Query

func (q *DisMaxQuery) Query(queries ...Query) *DisMaxQuery

Query adds one or more queries to the dis max query.

func (*DisMaxQuery) QueryName

func (q *DisMaxQuery) QueryName(queryName string) *DisMaxQuery

QueryName sets the query name for the filter that can be used when searching for matched filters per hit.

func (*DisMaxQuery) Source

func (q *DisMaxQuery) Source() (interface{}, error)

Source returns the JSON serializable content for this query.

func (*DisMaxQuery) TieBreaker

func (q *DisMaxQuery) TieBreaker(tieBreaker float64) *DisMaxQuery

TieBreaker is the factor by which the score of each non-maximum disjunct for a document is multiplied with and added into the final score.

If non-zero, the value should be small, on the order of 0.1, which says that 10 occurrences of word in a lower-scored field that is also in a higher scored field is just as good as a unique word in the lower scored field (i.e., one that is not in any higher scored field).

type DiscoveryNode

type DiscoveryNode struct {
	Name             string                 `json:"name"`
	TransportAddress string                 `json:"transport_address"`
	Host             string                 `json:"host"`
	IP               string                 `json:"ip"`
	Roles            []string               `json:"roles"` // "master", "data", or "ingest"
	Attributes       map[string]interface{} `json:"attributes"`
	// Tasks returns the tasks by its id (as a string).
	Tasks map[string]*TaskInfo `json:"tasks"`
}

type DiversifiedSamplerAggregation

type DiversifiedSamplerAggregation struct {
	// contains filtered or unexported fields
}

DiversifiedSamplerAggregation is, like the `sampler` aggregation, a filtering aggregation used to limit any sub-aggregations' processing to a sample of the top-scoring documents. The diversified_sampler aggregation adds the ability to limit the number of matches that share a common value such as an "author".

See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-diversified-sampler-aggregation.html

func NewDiversifiedSamplerAggregation

func NewDiversifiedSamplerAggregation() *DiversifiedSamplerAggregation

func (*DiversifiedSamplerAggregation) ExecutionHint

func (*DiversifiedSamplerAggregation) Field

Field on which the aggregation is processed.

func (*DiversifiedSamplerAggregation) MaxDocsPerValue

func (a *DiversifiedSamplerAggregation) MaxDocsPerValue(maxDocsPerValue int) *DiversifiedSamplerAggregation

func (*DiversifiedSamplerAggregation) Meta

func (a *DiversifiedSamplerAggregation) Meta(metaData map[string]interface{}) *DiversifiedSamplerAggregation

Meta sets the meta data to be included in the aggregation response.

func (*DiversifiedSamplerAggregation) Script

func (*DiversifiedSamplerAggregation) ShardSize

ShardSize sets the maximum number of docs returned from each shard.

func (*DiversifiedSamplerAggregation) Source

func (a *DiversifiedSamplerAggregation) Source() (interface{}, error)

func (*DiversifiedSamplerAggregation) SubAggregation

func (a *DiversifiedSamplerAggregation) SubAggregation(name string, subAggregation Aggregation) *DiversifiedSamplerAggregation

type EWMAMovAvgModel

type EWMAMovAvgModel struct {
	// contains filtered or unexported fields
}

EWMAMovAvgModel calculates an exponentially weighted moving average.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-movavg-aggregation.html#_ewma_exponentially_weighted

func NewEWMAMovAvgModel

func NewEWMAMovAvgModel() *EWMAMovAvgModel

NewEWMAMovAvgModel creates and initializes a new EWMAMovAvgModel.

func (*EWMAMovAvgModel) Alpha

func (m *EWMAMovAvgModel) Alpha(alpha float64) *EWMAMovAvgModel

Alpha controls the smoothing of the data. Alpha = 1 retains no memory of past values (e.g. a random walk), while alpha = 0 retains infinite memory of past values (e.g. the series mean). Useful values are somewhere in between. Defaults to 0.5.

func (*EWMAMovAvgModel) Name

func (m *EWMAMovAvgModel) Name() string

Name of the model.

func (*EWMAMovAvgModel) Settings

func (m *EWMAMovAvgModel) Settings() map[string]interface{}

Settings of the model.

type Error

type Error struct {
	Status  int           `json:"status"`
	Details *ErrorDetails `json:"error,omitempty"`
}

Error encapsulates error details as returned from Elasticsearch.

func (*Error) Error

func (e *Error) Error() string

Error returns a string representation of the error.
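
Example (a minimal sketch): inspect an *Error returned by a request. The index, type, and id are placeholders; client and ctx are assumed.

_, err := client.Get().Index("tweets").Type("tweet").Id("42").Do(ctx)
if err != nil {
	if e, ok := err.(*elastic.Error); ok {
		log.Printf("Elasticsearch returned status %d", e.Status)
		if e.Details != nil {
			log.Printf("type=%s reason=%s", e.Details.Type, e.Details.Reason)
		}
	} else {
		log.Printf("other error: %v", err)
	}
}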

type ErrorDetails

type ErrorDetails struct {
	Type         string                   `json:"type"`
	Reason       string                   `json:"reason"`
	ResourceType string                   `json:"resource.type,omitempty"`
	ResourceId   string                   `json:"resource.id,omitempty"`
	Index        string                   `json:"index,omitempty"`
	Phase        string                   `json:"phase,omitempty"`
	Grouped      bool                     `json:"grouped,omitempty"`
	CausedBy     map[string]interface{}   `json:"caused_by,omitempty"`
	RootCause    []*ErrorDetails          `json:"root_cause,omitempty"`
	FailedShards []map[string]interface{} `json:"failed_shards,omitempty"`
}

ErrorDetails encapsulates error details from Elasticsearch. It is used in e.g. elastic.Error and elastic.BulkResponseItem.

type ExistsQuery

type ExistsQuery struct {
	// contains filtered or unexported fields
}

ExistsQuery is a query that matches only documents in which the given field has a value.

For more details, see: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-exists-query.html
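
Example (a minimal sketch): find documents that have a value in a hypothetical "user" field by wrapping the exists query in a bool filter; client and ctx are assumed.

q := elastic.NewBoolQuery().Filter(elastic.NewExistsQuery("user"))
res, err := client.Search().Index("tweets").Query(q).Do(ctx)
if err != nil {
	// handle error
}
_ = res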

func NewExistsQuery

func NewExistsQuery(name string) *ExistsQuery

NewExistsQuery creates and initializes a new exists query.

func (*ExistsQuery) QueryName

func (q *ExistsQuery) QueryName(queryName string) *ExistsQuery

QueryName sets the query name for the filter that can be used when searching for matched queries per hit.

func (*ExistsQuery) Source

func (q *ExistsQuery) Source() (interface{}, error)

Source returns the JSON serializable content for this query.

type ExistsService

type ExistsService struct {
	// contains filtered or unexported fields
}

ExistsService checks for the existence of a document using HEAD.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-get.html for details.
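
Example (a minimal sketch): check whether a document exists. Index, type, and id are placeholders; client and ctx are assumed.

found, err := client.Exists().Index("tweets").Type("tweet").Id("1").Do(ctx)
if err != nil {
	// handle error
}
if found {
	// the document is present
}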

func NewExistsService

func NewExistsService(client *Client) *ExistsService

NewExistsService creates a new ExistsService.

func (*ExistsService) Do

func (s *ExistsService) Do(ctx context.Context) (bool, error)

Do executes the operation.

func (*ExistsService) Id

func (s *ExistsService) Id(id string) *ExistsService

Id is the document ID.

func (*ExistsService) Index

func (s *ExistsService) Index(index string) *ExistsService

Index is the name of the index.

func (*ExistsService) Parent

func (s *ExistsService) Parent(parent string) *ExistsService

Parent is the ID of the parent document.

func (*ExistsService) Preference

func (s *ExistsService) Preference(preference string) *ExistsService

Preference specifies the node or shard the operation should be performed on (default: random).

func (*ExistsService) Pretty

func (s *ExistsService) Pretty(pretty bool) *ExistsService

Pretty indicates that the JSON response be indented and human readable.

func (*ExistsService) Realtime

func (s *ExistsService) Realtime(realtime bool) *ExistsService

Realtime specifies whether to perform the operation in realtime or search mode.

func (*ExistsService) Refresh

func (s *ExistsService) Refresh(refresh string) *ExistsService

Refresh the shard containing the document before performing the operation.

func (*ExistsService) Routing

func (s *ExistsService) Routing(routing string) *ExistsService

Routing is a specific routing value.

func (*ExistsService) Type

func (s *ExistsService) Type(typ string) *ExistsService

Type is the type of the document (use `_all` to fetch the first document matching the ID across all types).

func (*ExistsService) Validate

func (s *ExistsService) Validate() error

Validate checks if the operation is valid.

type ExplainResponse

type ExplainResponse struct {
	Index       string                 `json:"_index"`
	Type        string                 `json:"_type"`
	Id          string                 `json:"_id"`
	Matched     bool                   `json:"matched"`
	Explanation map[string]interface{} `json:"explanation"`
}

ExplainResponse is the response of ExplainService.Do.

type ExplainService

type ExplainService struct {
	// contains filtered or unexported fields
}

ExplainService computes a score explanation for a query and a specific document. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-explain.html.
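
Example (a minimal sketch, assuming the service is obtained via the client's Explain method): explain how a specific document scores against a term query. Index, type, id, and field name are placeholders.

res, err := client.Explain("tweets", "tweet", "1").
	Query(elastic.NewTermQuery("user", "olivere")).
	Do(ctx)
if err != nil {
	// handle error
}
if res.Matched {
	_ = res.Explanation // score breakdown as returned by Elasticsearch
}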

func NewExplainService

func NewExplainService(client *Client) *ExplainService

NewExplainService creates a new ExplainService.

func (*ExplainService) AnalyzeWildcard

func (s *ExplainService) AnalyzeWildcard(analyzeWildcard bool) *ExplainService

AnalyzeWildcard specifies whether wildcards and prefix queries in the query string query should be analyzed (default: false).

func (*ExplainService) Analyzer

func (s *ExplainService) Analyzer(analyzer string) *ExplainService

Analyzer is the analyzer for the query string query.

func (*ExplainService) BodyJson

func (s *ExplainService) BodyJson(body interface{}) *ExplainService

BodyJson sets the query definition using the Query DSL.

func (*ExplainService) BodyString

func (s *ExplainService) BodyString(body string) *ExplainService

BodyString sets the query definition using the Query DSL as a string.

func (*ExplainService) DefaultOperator

func (s *ExplainService) DefaultOperator(defaultOperator string) *ExplainService

DefaultOperator is the default operator for query string query (AND or OR).

func (*ExplainService) Df

Df is the default field for query string query (default: _all).

func (*ExplainService) Do

Do executes the operation.

func (*ExplainService) Fields

func (s *ExplainService) Fields(fields ...string) *ExplainService

Fields is a list of fields to return in the response.

func (*ExplainService) Id

Id is the document ID.

func (*ExplainService) Index

func (s *ExplainService) Index(index string) *ExplainService

Index is the name of the index.

func (*ExplainService) Lenient

func (s *ExplainService) Lenient(lenient bool) *ExplainService

Lenient specifies whether format-based query failures (such as providing text to a numeric field) should be ignored.

func (*ExplainService) LowercaseExpandedTerms

func (s *ExplainService) LowercaseExpandedTerms(lowercaseExpandedTerms bool) *ExplainService

LowercaseExpandedTerms specifies whether query terms should be lowercased.

func (*ExplainService) Parent

func (s *ExplainService) Parent(parent string) *ExplainService

Parent is the ID of the parent document.

func (*ExplainService) Preference

func (s *ExplainService) Preference(preference string) *ExplainService

Preference specifies the node or shard the operation should be performed on (default: random).

func (*ExplainService) Pretty

func (s *ExplainService) Pretty(pretty bool) *ExplainService

Pretty indicates that the JSON response be indented and human readable.

func (*ExplainService) Q

Q specifies the query in the Lucene query string syntax.

func (*ExplainService) Query

func (s *ExplainService) Query(query Query) *ExplainService

Query sets a query definition using the Query DSL.

func (*ExplainService) Routing

func (s *ExplainService) Routing(routing string) *ExplainService

Routing sets a specific routing value.

func (*ExplainService) Source

func (s *ExplainService) Source(source string) *ExplainService

Source is the URL-encoded query definition (instead of using the request body).

func (*ExplainService) Type

func (s *ExplainService) Type(typ string) *ExplainService

Type is the type of the document.

func (*ExplainService) Validate

func (s *ExplainService) Validate() error

Validate checks if the operation is valid.

func (*ExplainService) XSource

func (s *ExplainService) XSource(xSource ...string) *ExplainService

XSource accepts true or false to control whether the _source field is returned, or a list of fields to return.

func (*ExplainService) XSourceExclude

func (s *ExplainService) XSourceExclude(xSourceExclude ...string) *ExplainService

XSourceExclude is a list of fields to exclude from the returned _source field.

func (*ExplainService) XSourceInclude

func (s *ExplainService) XSourceInclude(xSourceInclude ...string) *ExplainService

XSourceInclude is a list of fields to extract and return from the _source field.

type ExponentialBackoff

type ExponentialBackoff struct {
	// contains filtered or unexported fields
}

ExponentialBackoff implements the simple exponential backoff described by Douglas Thain at http://dthain.blogspot.de/2009/02/exponential-backoff-in-distributed.html.

func NewExponentialBackoff

func NewExponentialBackoff(initialTimeout, maxTimeout time.Duration) *ExponentialBackoff

NewExponentialBackoff returns an ExponentialBackoff backoff policy. Use initialTimeout to set the first/minimal interval and maxTimeout to set the maximum wait interval.

func (*ExponentialBackoff) Next

func (b *ExponentialBackoff) Next(retry int) (time.Duration, bool)

Next implements BackoffFunc for ExponentialBackoff.
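
Example (a minimal sketch): a hand-rolled retry loop driven by ExponentialBackoff. The operation being retried (doRequest) is a placeholder.

backoff := elastic.NewExponentialBackoff(10*time.Millisecond, 8*time.Second)
for retry := 0; ; retry++ {
	err := doRequest() // placeholder for the operation being retried
	if err == nil {
		break
	}
	wait, ok := backoff.Next(retry)
	if !ok {
		break // backoff policy gave up
	}
	time.Sleep(wait)
}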

type ExponentialDecayFunction

type ExponentialDecayFunction struct {
	// contains filtered or unexported fields
}

ExponentialDecayFunction builds an exponential decay score function. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html for details.
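
Example (a minimal sketch): decay scores with the distance of a hypothetical geo-point field "location" from a given origin, wired into a function score query.

fn := elastic.NewExponentialDecayFunction().
	FieldName("location").
	Origin("52.52,13.40").
	Scale("2km").
	Offset("500m").
	Decay(0.33)

q := elastic.NewFunctionScoreQuery().
	Query(elastic.NewMatchAllQuery()).
	AddScoreFunc(fn)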

func NewExponentialDecayFunction

func NewExponentialDecayFunction() *ExponentialDecayFunction

NewExponentialDecayFunction creates a new ExponentialDecayFunction.

func (*ExponentialDecayFunction) Decay

Decay defines how documents are scored at the distance given a Scale. If no decay is defined, documents at the distance Scale will be scored 0.5.

func (*ExponentialDecayFunction) FieldName

func (fn *ExponentialDecayFunction) FieldName(fieldName string) *ExponentialDecayFunction

FieldName specifies the name of the field to which this decay function is applied.

func (*ExponentialDecayFunction) GetWeight

func (fn *ExponentialDecayFunction) GetWeight() *float64

GetWeight returns the adjusted score. It is part of the ScoreFunction interface. Returns nil if weight is not specified.

func (*ExponentialDecayFunction) MultiValueMode

func (fn *ExponentialDecayFunction) MultiValueMode(mode string) *ExponentialDecayFunction

MultiValueMode specifies how the decay function should be calculated on a field that has multiple values. Valid modes are: min, max, avg, and sum.

func (*ExponentialDecayFunction) Name

func (fn *ExponentialDecayFunction) Name() string

Name represents the JSON field name under which the output of Source needs to be serialized by FunctionScoreQuery (see FunctionScoreQuery.Source).

func (*ExponentialDecayFunction) Offset

func (fn *ExponentialDecayFunction) Offset(offset interface{}) *ExponentialDecayFunction

Offset, if defined, computes the decay function only for a distance greater than the defined offset.

func (*ExponentialDecayFunction) Origin

func (fn *ExponentialDecayFunction) Origin(origin interface{}) *ExponentialDecayFunction

Origin defines the "central point" by which the decay function calculates "distance".

func (*ExponentialDecayFunction) Scale

func (fn *ExponentialDecayFunction) Scale(scale interface{}) *ExponentialDecayFunction

Scale defines the scale to be used with Decay.

func (*ExponentialDecayFunction) Source

func (fn *ExponentialDecayFunction) Source() (interface{}, error)

Source returns the serializable JSON data of this score function.

func (*ExponentialDecayFunction) Weight

Weight adjusts the score of the score function. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html#_using_function_score for details.

type ExtendedStatsAggregation

type ExtendedStatsAggregation struct {
	// contains filtered or unexported fields
}

ExtendedStatsAggregation is a multi-value metrics aggregation that computes stats over numeric values extracted from the aggregated documents. These values can be extracted either from specific numeric fields in the documents, or be generated by a provided script. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-extendedstats-aggregation.html

func NewExtendedStatsAggregation

func NewExtendedStatsAggregation() *ExtendedStatsAggregation

func (*ExtendedStatsAggregation) Field

func (*ExtendedStatsAggregation) Format

func (*ExtendedStatsAggregation) Meta

func (a *ExtendedStatsAggregation) Meta(metaData map[string]interface{}) *ExtendedStatsAggregation

Meta sets the meta data to be included in the aggregation response.

func (*ExtendedStatsAggregation) Script

func (*ExtendedStatsAggregation) Source

func (a *ExtendedStatsAggregation) Source() (interface{}, error)

func (*ExtendedStatsAggregation) SubAggregation

func (a *ExtendedStatsAggregation) SubAggregation(name string, subAggregation Aggregation) *ExtendedStatsAggregation

type FailedNodeException

type FailedNodeException struct {
	*ErrorDetails
	NodeId string `json:"node_id"`
}

type FetchSourceContext

type FetchSourceContext struct {
	// contains filtered or unexported fields
}

FetchSourceContext enables source filtering, i.e. it allows control over how the _source field is returned with every hit. It is used with various endpoints, e.g. when searching for documents, retrieving individual documents, or even updating documents.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-source-filtering.html for details.
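
Example (a minimal sketch): return only selected parts of _source for each hit. Field patterns and the index name are placeholders; client and ctx are assumed.

fsc := elastic.NewFetchSourceContext(true).
	Include("title", "user.*").
	Exclude("content")

res, err := client.Search().
	Index("tweets").
	Query(elastic.NewMatchAllQuery()).
	FetchSourceContext(fsc).
	Do(ctx)
if err != nil {
	// handle error
}
_ = res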

func NewFetchSourceContext

func NewFetchSourceContext(fetchSource bool) *FetchSourceContext

NewFetchSourceContext returns a new FetchSourceContext.

func (*FetchSourceContext) Exclude

func (fsc *FetchSourceContext) Exclude(excludes ...string) *FetchSourceContext

Exclude indicates to exclude specific parts of the _source. Wildcards are allowed here.

func (*FetchSourceContext) FetchSource

func (fsc *FetchSourceContext) FetchSource() bool

FetchSource indicates whether to return the _source.

func (*FetchSourceContext) Include

func (fsc *FetchSourceContext) Include(includes ...string) *FetchSourceContext

Include indicates to return specific parts of the _source. Wildcards are allowed here.

func (*FetchSourceContext) Query

func (fsc *FetchSourceContext) Query() url.Values

Query returns the parameters in a form suitable for a URL query string.

func (*FetchSourceContext) SetFetchSource

func (fsc *FetchSourceContext) SetFetchSource(fetchSource bool)

SetFetchSource specifies whether to return the _source.

func (*FetchSourceContext) Source

func (fsc *FetchSourceContext) Source() (interface{}, error)

Source returns the JSON-serializable data to be used in a body.

type FieldCaps

type FieldCaps struct {
	Type                   string   `json:"type"`
	Searchable             bool     `json:"searchable"`
	Aggregatable           bool     `json:"aggregatable"`
	Indices                []string `json:"indices,omitempty"`
	NonSearchableIndices   []string `json:"non_searchable_indices,omitempty"`
	NonAggregatableIndices []string `json:"non_aggregatable_indices,omitempty"`
}

FieldCaps contains capabilities of an individual field.

type FieldCapsRequest

type FieldCapsRequest struct {
	Fields []string `json:"fields"`
}

FieldCapsRequest can be used to set up the body to be used in the Field Capabilities API.

type FieldCapsResponse

type FieldCapsResponse struct {
	Fields map[string]FieldCaps `json:"fields,omitempty"`
}

FieldCapsResponse contains field capabilities.

type FieldCapsService

type FieldCapsService struct {
	// contains filtered or unexported fields
}

FieldCapsService allows retrieving the capabilities of fields among multiple indices.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.1/search-field-caps.html for details

func NewFieldCapsService

func NewFieldCapsService(client *Client) *FieldCapsService

NewFieldCapsService creates a new FieldCapsService
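
Example (a minimal sketch, assuming the service is obtained via the client's FieldCaps method): fetch the capabilities of two placeholder fields across two placeholder indices.

res, err := client.FieldCaps("logs-2018.01", "logs-2018.02").
	Fields("rating", "title").
	Do(ctx)
if err != nil {
	// handle error
}
for field, caps := range res.Fields {
	_ = field // e.g. "rating"
	_ = caps  // searchable/aggregatable information for that field
}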

func (*FieldCapsService) AllowNoIndices

func (s *FieldCapsService) AllowNoIndices(allowNoIndices bool) *FieldCapsService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes the `_all` string or when no indices have been specified.)

func (*FieldCapsService) BodyJson

func (s *FieldCapsService) BodyJson(body interface{}) *FieldCapsService

BodyJson sets the request body as JSON: field objects containing the field name and, optionally, a range used to filter out indices whose results fall outside the defined bounds.

func (*FieldCapsService) BodyString

func (s *FieldCapsService) BodyString(body string) *FieldCapsService

BodyString sets the request body as a string: field objects containing the field name and, optionally, a range used to filter out indices whose results fall outside the defined bounds.

func (*FieldCapsService) Do

Do executes the operation.

func (*FieldCapsService) ExpandWildcards

func (s *FieldCapsService) ExpandWildcards(expandWildcards string) *FieldCapsService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*FieldCapsService) Fields

func (s *FieldCapsService) Fields(fields ...string) *FieldCapsService

Fields is a list of fields for which to get field capabilities.

func (*FieldCapsService) IgnoreUnavailable

func (s *FieldCapsService) IgnoreUnavailable(ignoreUnavailable bool) *FieldCapsService

IgnoreUnavailable is documented as: Whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*FieldCapsService) Index

func (s *FieldCapsService) Index(index ...string) *FieldCapsService

Index is a list of index names; use `_all` or empty string to perform the operation on all indices.

func (*FieldCapsService) Pretty

func (s *FieldCapsService) Pretty(pretty bool) *FieldCapsService

Pretty indicates that the JSON response be indented and human readable.

func (*FieldCapsService) Validate

func (s *FieldCapsService) Validate() error

Validate checks if the operation is valid.

type FieldSort

type FieldSort struct {
	Sorter
	// contains filtered or unexported fields
}

FieldSort sorts by a given field.

func NewFieldSort

func NewFieldSort(fieldName string) *FieldSort

NewFieldSort creates a new FieldSort.
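
Example (a minimal sketch): sort by a hypothetical "created" field, newest first, placing documents without the field last. Index and field names are placeholders; client and ctx are assumed.

sorter := elastic.NewFieldSort("created").
	Desc().
	Missing("_last").
	UnmappedType("date")

res, err := client.Search().
	Index("tweets").
	Query(elastic.NewMatchAllQuery()).
	SortBy(sorter).
	Do(ctx)
if err != nil {
	// handle error
}
_ = res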

func (*FieldSort) Asc

func (s *FieldSort) Asc() *FieldSort

Asc sets ascending sort order.

func (*FieldSort) Desc

func (s *FieldSort) Desc() *FieldSort

Desc sets descending sort order.

func (*FieldSort) FieldName

func (s *FieldSort) FieldName(fieldName string) *FieldSort

FieldName specifies the name of the field to be used for sorting.

func (*FieldSort) Missing

func (s *FieldSort) Missing(missing interface{}) *FieldSort

Missing sets the value to be used when a field is missing in a document. You can also use "_last" or "_first" to sort missing last or first respectively.

func (*FieldSort) NestedFilter

func (s *FieldSort) NestedFilter(nestedFilter Query) *FieldSort

NestedFilter sets a filter that nested objects should match with in order to be taken into account for sorting.

func (*FieldSort) NestedPath

func (s *FieldSort) NestedPath(nestedPath string) *FieldSort

NestedPath is used if sorting occurs on a field that is inside a nested object.

func (*FieldSort) NestedSort

func (s *FieldSort) NestedSort(nestedSort *NestedSort) *FieldSort

NestedSort is available starting with 6.1 and will replace NestedFilter and NestedPath.

func (*FieldSort) Order

func (s *FieldSort) Order(ascending bool) *FieldSort

Order defines whether to sort ascending (default) or descending.

func (*FieldSort) SortMode

func (s *FieldSort) SortMode(sortMode string) *FieldSort

SortMode specifies what values to pick in case a document contains multiple values for the targeted sort field. Possible values are: min, max, sum, and avg.

func (*FieldSort) Source

func (s *FieldSort) Source() (interface{}, error)

Source returns the JSON-serializable data.

func (*FieldSort) UnmappedType

func (s *FieldSort) UnmappedType(typ string) *FieldSort

UnmappedType sets the type to use when the current field is not mapped in an index.

type FieldStatistics

type FieldStatistics struct {
	DocCount   int64 `json:"doc_count"`
	SumDocFreq int64 `json:"sum_doc_freq"`
	SumTtf     int64 `json:"sum_ttf"`
}

type FieldValueFactorFunction

type FieldValueFactorFunction struct {
	// contains filtered or unexported fields
}

FieldValueFactorFunction is a function score function that allows you to use a field from a document to influence the score. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html#_field_value_factor.
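
Example (a minimal sketch): boost documents by the square root of a hypothetical "likes" field, treating missing values as 1, and multiply the result into the query score.

fn := elastic.NewFieldValueFactorFunction().
	Field("likes").
	Factor(1.2).
	Modifier("sqrt").
	Missing(1)

q := elastic.NewFunctionScoreQuery().
	Query(elastic.NewMatchQuery("title", "elasticsearch")).
	AddScoreFunc(fn).
	BoostMode("multiply")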

func NewFieldValueFactorFunction

func NewFieldValueFactorFunction() *FieldValueFactorFunction

NewFieldValueFactorFunction initializes and returns a new FieldValueFactorFunction.

func (*FieldValueFactorFunction) Factor

Factor is the (optional) factor to multiply the field with. If you do not specify a factor, the default is 1.

func (*FieldValueFactorFunction) Field

Field is the field to be extracted from the document.

func (*FieldValueFactorFunction) GetWeight

func (fn *FieldValueFactorFunction) GetWeight() *float64

GetWeight returns the adjusted score. It is part of the ScoreFunction interface. Returns nil if weight is not specified.

func (*FieldValueFactorFunction) Missing

Missing is used if a document does not have that field.

func (*FieldValueFactorFunction) Modifier

Modifier to apply to the field value. It can be one of: none, log, log1p, log2p, ln, ln1p, ln2p, square, sqrt, or reciprocal. Defaults to: none.

func (*FieldValueFactorFunction) Name

func (fn *FieldValueFactorFunction) Name() string

Name represents the JSON field name under which the output of Source needs to be serialized by FunctionScoreQuery (see FunctionScoreQuery.Source).

func (*FieldValueFactorFunction) Source

func (fn *FieldValueFactorFunction) Source() (interface{}, error)

Source returns the serializable JSON data of this score function.

func (*FieldValueFactorFunction) Weight

Weight adjusts the score of the score function. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html#_using_function_score for details.

type FilterAggregation

type FilterAggregation struct {
	// contains filtered or unexported fields
}

FilterAggregation defines a single bucket of all the documents in the current document set context that match a specified filter. Often this will be used to narrow down the current aggregation context to a specific set of documents. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-filter-aggregation.html

func NewFilterAggregation

func NewFilterAggregation() *FilterAggregation

func (*FilterAggregation) Filter

func (a *FilterAggregation) Filter(filter Query) *FilterAggregation

func (*FilterAggregation) Meta

func (a *FilterAggregation) Meta(metaData map[string]interface{}) *FilterAggregation

Meta sets the meta data to be included in the aggregation response.

func (*FilterAggregation) Source

func (a *FilterAggregation) Source() (interface{}, error)

func (*FilterAggregation) SubAggregation

func (a *FilterAggregation) SubAggregation(name string, subAggregation Aggregation) *FilterAggregation

type FiltersAggregation

type FiltersAggregation struct {
	// contains filtered or unexported fields
}

FiltersAggregation defines a multi-bucket aggregation where each bucket is associated with a filter. Each bucket collects all documents that match its associated filter.

Notice that the caller has to decide whether to add filters by name (using FilterWithName) or unnamed filters (using Filter or Filters). One cannot use both named and unnamed filters.

For details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-filters-aggregation.html
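
Example (a minimal sketch): two named filters over a hypothetical "body" field of a "logs" index; client and ctx are assumed.

agg := elastic.NewFiltersAggregation().
	FilterWithName("errors", elastic.NewTermQuery("body", "error")).
	FilterWithName("warnings", elastic.NewTermQuery("body", "warning"))

res, err := client.Search().
	Index("logs").
	Size(0).
	Aggregation("messages", agg).
	Do(ctx)
if err != nil {
	// handle error
}
_ = res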

func NewFiltersAggregation

func NewFiltersAggregation() *FiltersAggregation

NewFiltersAggregation initializes a new FiltersAggregation.

func (*FiltersAggregation) Filter

func (a *FiltersAggregation) Filter(filter Query) *FiltersAggregation

Filter adds an unnamed filter. Notice that you can either use named or unnamed filters, but not both.

func (*FiltersAggregation) FilterWithName

func (a *FiltersAggregation) FilterWithName(name string, filter Query) *FiltersAggregation

FilterWithName adds a filter with a specific name. Notice that you can either use named or unnamed filters, but not both.

func (*FiltersAggregation) Filters

func (a *FiltersAggregation) Filters(filters ...Query) *FiltersAggregation

Filters adds one or more unnamed filters. Notice that you can either use named or unnamed filters, but not both.

func (*FiltersAggregation) Meta

func (a *FiltersAggregation) Meta(metaData map[string]interface{}) *FiltersAggregation

Meta sets the meta data to be included in the aggregation response.

func (*FiltersAggregation) Source

func (a *FiltersAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface. If the aggregation is invalid, an error is returned. This may e.g. happen if you mixed named and unnamed filters.

func (*FiltersAggregation) SubAggregation

func (a *FiltersAggregation) SubAggregation(name string, subAggregation Aggregation) *FiltersAggregation

SubAggregation adds a sub-aggregation to this aggregation.

type FunctionScoreQuery

type FunctionScoreQuery struct {
	// contains filtered or unexported fields
}

FunctionScoreQuery allows you to modify the score of documents that are retrieved by a query. This can be useful if, for example, a score function is computationally expensive and it is sufficient to compute the score on a filtered set of documents.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html

func NewFunctionScoreQuery

func NewFunctionScoreQuery() *FunctionScoreQuery

NewFunctionScoreQuery creates and initializes a new function score query.

func (*FunctionScoreQuery) Add

func (q *FunctionScoreQuery) Add(filter Query, scoreFunc ScoreFunction) *FunctionScoreQuery

Add adds a score function that will execute on all the documents matching the filter.

func (*FunctionScoreQuery) AddScoreFunc

func (q *FunctionScoreQuery) AddScoreFunc(scoreFunc ScoreFunction) *FunctionScoreQuery

AddScoreFunc adds a score function that will execute the function on all documents.

func (*FunctionScoreQuery) Boost

Boost sets the boost for this query. Documents matching this query will (in addition to the normal weightings) have their score multiplied by the boost provided.

func (*FunctionScoreQuery) BoostMode

func (q *FunctionScoreQuery) BoostMode(boostMode string) *FunctionScoreQuery

BoostMode defines how the combined result of score functions will influence the final score together with the sub query score.

func (*FunctionScoreQuery) Filter

func (q *FunctionScoreQuery) Filter(filter Query) *FunctionScoreQuery

Filter sets the filter for the function score query.

func (*FunctionScoreQuery) MaxBoost

func (q *FunctionScoreQuery) MaxBoost(maxBoost float64) *FunctionScoreQuery

MaxBoost is the maximum boost that will be applied by function score.

func (*FunctionScoreQuery) MinScore

func (q *FunctionScoreQuery) MinScore(minScore float64) *FunctionScoreQuery

MinScore sets the minimum score.

func (*FunctionScoreQuery) Query

func (q *FunctionScoreQuery) Query(query Query) *FunctionScoreQuery

Query sets the query for the function score query.

func (*FunctionScoreQuery) ScoreMode

func (q *FunctionScoreQuery) ScoreMode(scoreMode string) *FunctionScoreQuery

ScoreMode defines how results of individual score functions will be aggregated. Can be first, avg, max, sum, min, or multiply.

func (*FunctionScoreQuery) Source

func (q *FunctionScoreQuery) Source() (interface{}, error)

Source returns JSON for the function score query.

type FuzzyCompletionSuggesterOptions

type FuzzyCompletionSuggesterOptions struct {
	// contains filtered or unexported fields
}

FuzzyCompletionSuggesterOptions represents the options for fuzzy completion suggester.

func NewFuzzyCompletionSuggesterOptions

func NewFuzzyCompletionSuggesterOptions() *FuzzyCompletionSuggesterOptions

NewFuzzyCompletionSuggesterOptions initializes a new FuzzyCompletionSuggesterOptions instance.

func (*FuzzyCompletionSuggesterOptions) EditDistance

func (o *FuzzyCompletionSuggesterOptions) EditDistance(editDistance interface{}) *FuzzyCompletionSuggesterOptions

EditDistance specifies the maximum number of edits, e.g. a number like "1" or "2" or a string like "0..2" or ">5". See https://www.elastic.co/guide/en/elasticsearch/reference/5.6/common-options.html#fuzziness for details.

func (*FuzzyCompletionSuggesterOptions) MaxDeterminizedStates

MaxDeterminizedStates is currently undocumented in Elasticsearch. It represents the maximum automaton states allowed for fuzzy expansion.

func (*FuzzyCompletionSuggesterOptions) MinLength

MinLength represents the minimum length of the input before fuzzy suggestions are returned (defaults to 3).

func (*FuzzyCompletionSuggesterOptions) PrefixLength

PrefixLength represents the minimum length of the input, which is not checked for fuzzy alternatives (defaults to 1).

func (*FuzzyCompletionSuggesterOptions) Source

func (o *FuzzyCompletionSuggesterOptions) Source() (interface{}, error)

Source creates the JSON data.

func (*FuzzyCompletionSuggesterOptions) Transpositions

func (o *FuzzyCompletionSuggesterOptions) Transpositions(transpositions bool) *FuzzyCompletionSuggesterOptions

Transpositions, if set to true, counts a transposition as a single change instead of two (defaults to true).

func (*FuzzyCompletionSuggesterOptions) UnicodeAware

UnicodeAware, if set to true, measures all lengths and distances (like fuzzy edit distance and transpositions) in Unicode code points instead of bytes. This is slightly slower than using raw bytes, so it defaults to false.

type FuzzyQuery

type FuzzyQuery struct {
	// contains filtered or unexported fields
}

FuzzyQuery uses similarity based on Levenshtein edit distance for string fields, and a +/- margin on numeric and date fields.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-fuzzy-query.html
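
Example (a minimal sketch): fuzzy matching on a hypothetical "user" field; client and ctx are assumed.

q := elastic.NewFuzzyQuery("user", "ki").
	Fuzziness(2).
	PrefixLength(0).
	MaxExpansions(50)

res, err := client.Search().Index("tweets").Query(q).Do(ctx)
if err != nil {
	// handle error
}
_ = res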

func NewFuzzyQuery

func NewFuzzyQuery(name string, value interface{}) *FuzzyQuery

NewFuzzyQuery creates a new fuzzy query.

func (*FuzzyQuery) Boost

func (q *FuzzyQuery) Boost(boost float64) *FuzzyQuery

Boost sets the boost for this query. Documents matching this query will (in addition to the normal weightings) have their score multiplied by the boost provided.

func (*FuzzyQuery) Fuzziness

func (q *FuzzyQuery) Fuzziness(fuzziness interface{}) *FuzzyQuery

Fuzziness can be an integer/long like 0, 1 or 2 as well as strings like "auto", "0..1", "1..4" or "0.0..1.0".

func (*FuzzyQuery) MaxExpansions

func (q *FuzzyQuery) MaxExpansions(maxExpansions int) *FuzzyQuery

func (*FuzzyQuery) PrefixLength

func (q *FuzzyQuery) PrefixLength(prefixLength int) *FuzzyQuery

func (*FuzzyQuery) QueryName

func (q *FuzzyQuery) QueryName(queryName string) *FuzzyQuery

QueryName sets the query name for the filter that can be used when searching for matched filters per hit.

func (*FuzzyQuery) Rewrite

func (q *FuzzyQuery) Rewrite(rewrite string) *FuzzyQuery

func (*FuzzyQuery) Source

func (q *FuzzyQuery) Source() (interface{}, error)

Source returns JSON for the fuzzy query.

func (*FuzzyQuery) Transpositions

func (q *FuzzyQuery) Transpositions(transpositions bool) *FuzzyQuery

type GNDSignificanceHeuristic

type GNDSignificanceHeuristic struct {
	// contains filtered or unexported fields
}

GNDSignificanceHeuristic implements the "Google Normalized Distance" as described in "The Google Similarity Distance", Cilibrasi and Vitanyi, 2007.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-significantterms-aggregation.html#_google_normalized_distance for details.

func NewGNDSignificanceHeuristic

func NewGNDSignificanceHeuristic() *GNDSignificanceHeuristic

NewGNDSignificanceHeuristic initializes a new GNDSignificanceHeuristic.

func (*GNDSignificanceHeuristic) BackgroundIsSuperset

func (sh *GNDSignificanceHeuristic) BackgroundIsSuperset(backgroundIsSuperset bool) *GNDSignificanceHeuristic

BackgroundIsSuperset indicates whether you defined a custom background filter that represents a different set of documents that you want to compare to.

func (*GNDSignificanceHeuristic) Name

func (sh *GNDSignificanceHeuristic) Name() string

Name returns the name of the heuristic in the REST interface.

func (*GNDSignificanceHeuristic) Source

func (sh *GNDSignificanceHeuristic) Source() (interface{}, error)

Source returns the parameters that need to be added to the REST parameters.

type GaussDecayFunction

type GaussDecayFunction struct {
	// contains filtered or unexported fields
}

GaussDecayFunction builds a gauss decay score function. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html for details.
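
A minimal sketch of configuring a gauss decay function on a geo field; the field name and values are illustrative, and the function is typically attached to a function score query (FunctionScoreQuery, documented elsewhere in the package):

// Decay the score with distance from a central point (illustrative values).
fn := elastic.NewGaussDecayFunction().
	FieldName("location").
	Origin(elastic.GeoPointFromLatLon(52.3760, 4.894)).
	Scale("2km").
	Offset("500m").
	Decay(0.5).
	Weight(2.0)
_ = fn // typically added to a FunctionScoreQuery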

func NewGaussDecayFunction

func NewGaussDecayFunction() *GaussDecayFunction

NewGaussDecayFunction returns a new GaussDecayFunction.

func (*GaussDecayFunction) Decay

func (fn *GaussDecayFunction) Decay(decay float64) *GaussDecayFunction

Decay defines how documents are scored at the distance given a Scale. If no decay is defined, documents at the distance Scale will be scored 0.5.

func (*GaussDecayFunction) FieldName

func (fn *GaussDecayFunction) FieldName(fieldName string) *GaussDecayFunction

FieldName specifies the name of the field to which this decay function is applied.

func (*GaussDecayFunction) GetWeight

func (fn *GaussDecayFunction) GetWeight() *float64

GetWeight returns the adjusted score. It is part of the ScoreFunction interface. Returns nil if weight is not specified.

func (*GaussDecayFunction) MultiValueMode

func (fn *GaussDecayFunction) MultiValueMode(mode string) *GaussDecayFunction

MultiValueMode specifies how the decay function should be calculated on a field that has multiple values. Valid modes are: min, max, avg, and sum.

func (*GaussDecayFunction) Name

func (fn *GaussDecayFunction) Name() string

Name represents the JSON field name under which the output of Source needs to be serialized by FunctionScoreQuery (see FunctionScoreQuery.Source).

func (*GaussDecayFunction) Offset

func (fn *GaussDecayFunction) Offset(offset interface{}) *GaussDecayFunction

Offset, if defined, computes the decay function only for a distance greater than the defined offset.

func (*GaussDecayFunction) Origin

func (fn *GaussDecayFunction) Origin(origin interface{}) *GaussDecayFunction

Origin defines the "central point" by which the decay function calculates "distance".

func (*GaussDecayFunction) Scale

func (fn *GaussDecayFunction) Scale(scale interface{}) *GaussDecayFunction

Scale defines the scale to be used with Decay.

func (*GaussDecayFunction) Source

func (fn *GaussDecayFunction) Source() (interface{}, error)

Source returns the serializable JSON data of this score function.

func (*GaussDecayFunction) Weight

func (fn *GaussDecayFunction) Weight(weight float64) *GaussDecayFunction

Weight adjusts the score of the score function. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html#_using_function_score for details.

type GeoBoundingBoxQuery

type GeoBoundingBoxQuery struct {
	// contains filtered or unexported fields
}

GeoBoundingBoxQuery filters hits based on a point location, using a bounding box.

For more details, see: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-geo-bounding-box-query.html
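
A minimal sketch, with an illustrative field name and coordinates:

// Match documents whose "pin.location" lies inside the box.
q := elastic.NewGeoBoundingBoxQuery("pin.location").
	TopLeft(40.73, -74.1).
	BottomRight(40.01, -71.12)
_ = q // typically passed to a search request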

func NewGeoBoundingBoxQuery

func NewGeoBoundingBoxQuery(name string) *GeoBoundingBoxQuery

NewGeoBoundingBoxQuery creates and initializes a new GeoBoundingBoxQuery.

func (*GeoBoundingBoxQuery) BottomLeft

func (q *GeoBoundingBoxQuery) BottomLeft(bottom, left float64) *GeoBoundingBoxQuery

func (*GeoBoundingBoxQuery) BottomLeftFromGeoPoint

func (q *GeoBoundingBoxQuery) BottomLeftFromGeoPoint(point *GeoPoint) *GeoBoundingBoxQuery

func (*GeoBoundingBoxQuery) BottomRight

func (q *GeoBoundingBoxQuery) BottomRight(bottom, right float64) *GeoBoundingBoxQuery

func (*GeoBoundingBoxQuery) BottomRightFromGeoPoint

func (q *GeoBoundingBoxQuery) BottomRightFromGeoPoint(point *GeoPoint) *GeoBoundingBoxQuery

func (*GeoBoundingBoxQuery) QueryName

func (q *GeoBoundingBoxQuery) QueryName(queryName string) *GeoBoundingBoxQuery

func (*GeoBoundingBoxQuery) Source

func (q *GeoBoundingBoxQuery) Source() (interface{}, error)

Source returns JSON for the query.

func (*GeoBoundingBoxQuery) TopLeft

func (q *GeoBoundingBoxQuery) TopLeft(top, left float64) *GeoBoundingBoxQuery

func (*GeoBoundingBoxQuery) TopLeftFromGeoPoint

func (q *GeoBoundingBoxQuery) TopLeftFromGeoPoint(point *GeoPoint) *GeoBoundingBoxQuery

func (*GeoBoundingBoxQuery) TopRight

func (q *GeoBoundingBoxQuery) TopRight(top, right float64) *GeoBoundingBoxQuery

func (*GeoBoundingBoxQuery) TopRightFromGeoPoint

func (q *GeoBoundingBoxQuery) TopRightFromGeoPoint(point *GeoPoint) *GeoBoundingBoxQuery

func (*GeoBoundingBoxQuery) Type

Type sets the execution type of the geo bounding box query. It can be either memory or indexed. It defaults to memory.

type GeoBoundsAggregation

type GeoBoundsAggregation struct {
	// contains filtered or unexported fields
}

GeoBoundsAggregation is a metric aggregation that computes the bounding box containing all geo_point values for a field. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-geobounds-aggregation.html
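
A minimal sketch, assuming a geo_point field named "location":

agg := elastic.NewGeoBoundsAggregation().
	Field("location").
	WrapLongitude(true)
_ = agg // typically added to a search request as a named aggregation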

func NewGeoBoundsAggregation

func NewGeoBoundsAggregation() *GeoBoundsAggregation

func (*GeoBoundsAggregation) Field

func (*GeoBoundsAggregation) Meta

func (a *GeoBoundsAggregation) Meta(metaData map[string]interface{}) *GeoBoundsAggregation

Meta sets the meta data to be included in the aggregation response.

func (*GeoBoundsAggregation) Script

func (a *GeoBoundsAggregation) Script(script *Script) *GeoBoundsAggregation

func (*GeoBoundsAggregation) Source

func (a *GeoBoundsAggregation) Source() (interface{}, error)

func (*GeoBoundsAggregation) SubAggregation

func (a *GeoBoundsAggregation) SubAggregation(name string, subAggregation Aggregation) *GeoBoundsAggregation

func (*GeoBoundsAggregation) WrapLongitude

func (a *GeoBoundsAggregation) WrapLongitude(wrapLongitude bool) *GeoBoundsAggregation

type GeoCentroidAggregation

type GeoCentroidAggregation struct {
	// contains filtered or unexported fields
}

GeoCentroidAggregation is a metric aggregation that computes the weighted centroid from all coordinate values for a Geo-point datatype field. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-geocentroid-aggregation.html

func NewGeoCentroidAggregation

func NewGeoCentroidAggregation() *GeoCentroidAggregation

func (*GeoCentroidAggregation) Field

func (*GeoCentroidAggregation) Meta

func (a *GeoCentroidAggregation) Meta(metaData map[string]interface{}) *GeoCentroidAggregation

Meta sets the meta data to be included in the aggregation response.

func (*GeoCentroidAggregation) Script

func (*GeoCentroidAggregation) Source

func (a *GeoCentroidAggregation) Source() (interface{}, error)

func (*GeoCentroidAggregation) SubAggregation

func (a *GeoCentroidAggregation) SubAggregation(name string, subAggregation Aggregation) *GeoCentroidAggregation

type GeoDistanceAggregation

type GeoDistanceAggregation struct {
	// contains filtered or unexported fields
}

GeoDistanceAggregation is a multi-bucket aggregation that works on geo_point fields and conceptually works very similarly to the range aggregation. The user can define a point of origin and a set of distance range buckets. The aggregation evaluates the distance of each document value from the origin point and determines the buckets it belongs to based on the ranges (a document belongs to a bucket if the distance between the document and the origin falls within the distance range of the bucket). See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-geodistance-aggregation.html
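
A minimal sketch defining an origin and three distance buckets; the field name, origin, and ranges are illustrative, and this sketch assumes that the Point and Unit setters (whose signatures are elided above) accept a "lat,lon" string and a distance unit string respectively:

agg := elastic.NewGeoDistanceAggregation().
	Field("location").
	Point("52.3760,4.894").
	Unit("km").
	AddUnboundedFrom(100). // everything up to 100
	AddRange(100, 300).    // between 100 and 300
	AddUnboundedTo(300)    // everything from 300 upwards
_ = agg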

func NewGeoDistanceAggregation

func NewGeoDistanceAggregation() *GeoDistanceAggregation

func (*GeoDistanceAggregation) AddRange

func (a *GeoDistanceAggregation) AddRange(from, to interface{}) *GeoDistanceAggregation

func (*GeoDistanceAggregation) AddRangeWithKey

func (a *GeoDistanceAggregation) AddRangeWithKey(key string, from, to interface{}) *GeoDistanceAggregation

func (*GeoDistanceAggregation) AddUnboundedFrom

func (a *GeoDistanceAggregation) AddUnboundedFrom(to float64) *GeoDistanceAggregation

func (*GeoDistanceAggregation) AddUnboundedFromWithKey

func (a *GeoDistanceAggregation) AddUnboundedFromWithKey(key string, to float64) *GeoDistanceAggregation

func (*GeoDistanceAggregation) AddUnboundedTo

func (a *GeoDistanceAggregation) AddUnboundedTo(from float64) *GeoDistanceAggregation

func (*GeoDistanceAggregation) AddUnboundedToWithKey

func (a *GeoDistanceAggregation) AddUnboundedToWithKey(key string, from float64) *GeoDistanceAggregation

func (*GeoDistanceAggregation) Between

func (a *GeoDistanceAggregation) Between(from, to interface{}) *GeoDistanceAggregation

func (*GeoDistanceAggregation) BetweenWithKey

func (a *GeoDistanceAggregation) BetweenWithKey(key string, from, to interface{}) *GeoDistanceAggregation

func (*GeoDistanceAggregation) DistanceType

func (a *GeoDistanceAggregation) DistanceType(distanceType string) *GeoDistanceAggregation

func (*GeoDistanceAggregation) Field

func (*GeoDistanceAggregation) Meta

func (a *GeoDistanceAggregation) Meta(metaData map[string]interface{}) *GeoDistanceAggregation

Meta sets the meta data to be included in the aggregation response.

func (*GeoDistanceAggregation) Point

func (*GeoDistanceAggregation) Source

func (a *GeoDistanceAggregation) Source() (interface{}, error)

func (*GeoDistanceAggregation) SubAggregation

func (a *GeoDistanceAggregation) SubAggregation(name string, subAggregation Aggregation) *GeoDistanceAggregation

func (*GeoDistanceAggregation) Unit

type GeoDistanceQuery

type GeoDistanceQuery struct {
	// contains filtered or unexported fields
}

GeoDistanceQuery filters documents to include only hits that exist within a specific distance from a geo point.

For more details, see: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-geo-distance-query.html
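
A minimal sketch with an illustrative field name, point, and distance:

q := elastic.NewGeoDistanceQuery("pin.location").
	Point(40.0, -70.0).
	Distance("200km").
	DistanceType("plane")
_ = q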

func NewGeoDistanceQuery

func NewGeoDistanceQuery(name string) *GeoDistanceQuery

NewGeoDistanceQuery creates and initializes a new GeoDistanceQuery.

func (*GeoDistanceQuery) Distance

func (q *GeoDistanceQuery) Distance(distance string) *GeoDistanceQuery

func (*GeoDistanceQuery) DistanceType

func (q *GeoDistanceQuery) DistanceType(distanceType string) *GeoDistanceQuery

func (*GeoDistanceQuery) GeoHash

func (q *GeoDistanceQuery) GeoHash(geohash string) *GeoDistanceQuery

func (*GeoDistanceQuery) GeoPoint

func (q *GeoDistanceQuery) GeoPoint(point *GeoPoint) *GeoDistanceQuery

func (*GeoDistanceQuery) Lat

func (*GeoDistanceQuery) Lon

func (*GeoDistanceQuery) Point

func (q *GeoDistanceQuery) Point(lat, lon float64) *GeoDistanceQuery

func (*GeoDistanceQuery) QueryName

func (q *GeoDistanceQuery) QueryName(queryName string) *GeoDistanceQuery

func (*GeoDistanceQuery) Source

func (q *GeoDistanceQuery) Source() (interface{}, error)

Source returns JSON for the query.

type GeoDistanceSort

type GeoDistanceSort struct {
	Sorter
	// contains filtered or unexported fields
}

GeoDistanceSort allows for sorting by geographic distance. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-sort.html#_geo_distance_sorting.
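
A minimal sketch that sorts hits by their distance from a point, nearest first; the field name and coordinates are illustrative:

sorter := elastic.NewGeoDistanceSort("pin.location").
	Point(40.0, -70.0).
	Unit("km").
	Asc()
_ = sorter // typically passed to a search request via its sort options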

func NewGeoDistanceSort

func NewGeoDistanceSort(fieldName string) *GeoDistanceSort

NewGeoDistanceSort creates a new sorter for geo distances.

func (*GeoDistanceSort) Asc

func (s *GeoDistanceSort) Asc() *GeoDistanceSort

Asc sets ascending sort order.

func (*GeoDistanceSort) Desc

func (s *GeoDistanceSort) Desc() *GeoDistanceSort

Desc sets descending sort order.

func (*GeoDistanceSort) DistanceType

func (s *GeoDistanceSort) DistanceType(distanceType string) *GeoDistanceSort

DistanceType describes how to compute the distance, e.g. "arc" or "plane". See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-sort.html#geo-sorting for details.

func (*GeoDistanceSort) FieldName

func (s *GeoDistanceSort) FieldName(fieldName string) *GeoDistanceSort

FieldName specifies the name of the (geo) field to use for sorting.

func (*GeoDistanceSort) GeoDistance

func (s *GeoDistanceSort) GeoDistance(geoDistance string) *GeoDistanceSort

GeoDistance is an alias for DistanceType.

func (*GeoDistanceSort) GeoHashes

func (s *GeoDistanceSort) GeoHashes(geohashes ...string) *GeoDistanceSort

GeoHashes specifies the point(s), as geohashes, to create the range distance aggregations from.

func (*GeoDistanceSort) NestedFilter

func (s *GeoDistanceSort) NestedFilter(nestedFilter Query) *GeoDistanceSort

NestedFilter sets a filter that nested objects should match with in order to be taken into account for sorting.

func (*GeoDistanceSort) NestedPath

func (s *GeoDistanceSort) NestedPath(nestedPath string) *GeoDistanceSort

NestedPath is used if sorting occurs on a field that is inside a nested object.

func (*GeoDistanceSort) NestedSort

func (s *GeoDistanceSort) NestedSort(nestedSort *NestedSort) *GeoDistanceSort

NestedSort is available starting with 6.1 and will replace NestedFilter and NestedPath.

func (*GeoDistanceSort) Order

func (s *GeoDistanceSort) Order(ascending bool) *GeoDistanceSort

Order defines whether sorting ascending (default) or descending.

func (*GeoDistanceSort) Point

func (s *GeoDistanceSort) Point(lat, lon float64) *GeoDistanceSort

Point specifies a point to create the range distance aggregations from.

func (*GeoDistanceSort) Points

func (s *GeoDistanceSort) Points(points ...*GeoPoint) *GeoDistanceSort

Points specifies the geo point(s) to create the range distance aggregations from.

func (*GeoDistanceSort) SortMode

func (s *GeoDistanceSort) SortMode(sortMode string) *GeoDistanceSort

SortMode specifies what values to pick in case a document contains multiple values for the targeted sort field. Possible values are: min, max, sum, and avg.

func (*GeoDistanceSort) Source

func (s *GeoDistanceSort) Source() (interface{}, error)

Source returns the JSON-serializable data.

func (*GeoDistanceSort) Unit

func (s *GeoDistanceSort) Unit(unit string) *GeoDistanceSort

Unit specifies the distance unit to use. It defaults to km. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/common-options.html#distance-units for details.

type GeoHashGridAggregation

type GeoHashGridAggregation struct {
	// contains filtered or unexported fields
}

func NewGeoHashGridAggregation

func NewGeoHashGridAggregation() *GeoHashGridAggregation

func (*GeoHashGridAggregation) Field

func (*GeoHashGridAggregation) Meta

func (a *GeoHashGridAggregation) Meta(metaData map[string]interface{}) *GeoHashGridAggregation

func (*GeoHashGridAggregation) Precision

func (a *GeoHashGridAggregation) Precision(precision interface{}) *GeoHashGridAggregation

Precision accepts the level as an int value between 1 and 12 or as a distance unit like "2km" or "5mi", as described at https://www.elastic.co/guide/en/elasticsearch/reference/6.2/common-options.html#distance-units and https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-bucket-geohashgrid-aggregation.html
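
For illustration, both forms of precision (the field name is a placeholder):

// Geohash level between 1 and 12 ...
agg := elastic.NewGeoHashGridAggregation().Field("location").Precision(5)
// ... or a distance unit that is translated to a level.
agg = elastic.NewGeoHashGridAggregation().Field("location").Precision("2km")
_ = agg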

func (*GeoHashGridAggregation) ShardSize

func (a *GeoHashGridAggregation) ShardSize(shardSize int) *GeoHashGridAggregation

func (*GeoHashGridAggregation) Size

func (*GeoHashGridAggregation) Source

func (a *GeoHashGridAggregation) Source() (interface{}, error)

func (*GeoHashGridAggregation) SubAggregation

func (a *GeoHashGridAggregation) SubAggregation(name string, subAggregation Aggregation) *GeoHashGridAggregation

type GeoPoint

type GeoPoint struct {
	Lat float64 `json:"lat"`
	Lon float64 `json:"lon"`
}

GeoPoint is a geographic position described via latitude and longitude.

func GeoPointFromLatLon

func GeoPointFromLatLon(lat, lon float64) *GeoPoint

GeoPointFromLatLon initializes a new GeoPoint by latitude and longitude.

func GeoPointFromString

func GeoPointFromString(latLon string) (*GeoPoint, error)

GeoPointFromString initializes a new GeoPoint by a string that is formatted as "{latitude},{longitude}", e.g. "40.10210,-70.12091".
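
For illustration:

// From latitude and longitude:
pt := elastic.GeoPointFromLatLon(40.10210, -70.12091)

// From a "{latitude},{longitude}" string:
pt2, err := elastic.GeoPointFromString("40.10210,-70.12091")
if err != nil {
	// handle malformed input
}
_, _ = pt, pt2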

func (*GeoPoint) Source

func (pt *GeoPoint) Source() map[string]float64

Source returns the object to be serialized in Elasticsearch DSL.

type GeoPolygonQuery

type GeoPolygonQuery struct {
	// contains filtered or unexported fields
}

GeoPolygonQuery includes only hits that fall within a polygon of points.

For more details, see: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-geo-polygon-query.html
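
A minimal sketch with an illustrative field name and triangle:

q := elastic.NewGeoPolygonQuery("person.location").
	AddPoint(40, -70).
	AddPoint(30, -80).
	AddPoint(20, -90)
_ = q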

func NewGeoPolygonQuery

func NewGeoPolygonQuery(name string) *GeoPolygonQuery

NewGeoPolygonQuery creates and initializes a new GeoPolygonQuery.

func (*GeoPolygonQuery) AddGeoPoint

func (q *GeoPolygonQuery) AddGeoPoint(point *GeoPoint) *GeoPolygonQuery

AddGeoPoint adds a GeoPoint.

func (*GeoPolygonQuery) AddPoint

func (q *GeoPolygonQuery) AddPoint(lat, lon float64) *GeoPolygonQuery

AddPoint adds a point from latitude and longitude.

func (*GeoPolygonQuery) QueryName

func (q *GeoPolygonQuery) QueryName(queryName string) *GeoPolygonQuery

func (*GeoPolygonQuery) Source

func (q *GeoPolygonQuery) Source() (interface{}, error)

Source returns JSON for the query.

type GetResult

type GetResult struct {
	Index   string                 `json:"_index"`   // index meta field
	Type    string                 `json:"_type"`    // type meta field
	Id      string                 `json:"_id"`      // id meta field
	Uid     string                 `json:"_uid"`     // uid meta field (see MapperService.java for all meta fields)
	Routing string                 `json:"_routing"` // routing meta field
	Parent  string                 `json:"_parent"`  // parent meta field
	Version *int64                 `json:"_version"` // version number, when Version is set to true in SearchService
	Source  *json.RawMessage       `json:"_source,omitempty"`
	Found   bool                   `json:"found,omitempty"`
	Fields  map[string]interface{} `json:"fields,omitempty"`
	//Error     string                 `json:"error,omitempty"` // used only in MultiGet
	// TODO double-check that MultiGet now returns details error information
	Error *ErrorDetails `json:"error,omitempty"` // only used in MultiGet
}

GetResult is the outcome of GetService.Do.

type GetService

type GetService struct {
	// contains filtered or unexported fields
}

GetService retrieves a typed JSON document from the index based on its id.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-get.html for details.
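
A minimal sketch of fetching a document by id; the index, type, and id are placeholders, and ctx, client, and the fmt package are assumed to be set up elsewhere:

res, err := elastic.NewGetService(client).
	Index("twitter").
	Type("doc").
	Id("1").
	Do(ctx)
if err != nil {
	// handle error
}
if res.Found {
	fmt.Printf("got document %s from index %s\n", res.Id, res.Index)
}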

func NewGetService

func NewGetService(client *Client) *GetService

NewGetService creates a new GetService.

func (*GetService) Do

func (s *GetService) Do(ctx context.Context) (*GetResult, error)

Do executes the operation.

func (*GetService) FetchSource

func (s *GetService) FetchSource(fetchSource bool) *GetService

func (*GetService) FetchSourceContext

func (s *GetService) FetchSourceContext(fetchSourceContext *FetchSourceContext) *GetService

func (*GetService) Id

func (s *GetService) Id(id string) *GetService

Id is the document ID.

func (*GetService) IgnoreErrorsOnGeneratedFields

func (s *GetService) IgnoreErrorsOnGeneratedFields(ignore bool) *GetService

IgnoreErrorsOnGeneratedFields indicates whether to ignore fields that are generated if the transaction log is accessed.

func (*GetService) Index

func (s *GetService) Index(index string) *GetService

Index is the name of the index.

func (*GetService) Parent

func (s *GetService) Parent(parent string) *GetService

Parent is the ID of the parent document.

func (*GetService) Preference

func (s *GetService) Preference(preference string) *GetService

Preference specifies the node or shard the operation should be performed on (default: random).

func (*GetService) Pretty

func (s *GetService) Pretty(pretty bool) *GetService

Pretty indicates whether the JSON response should be indented and human readable.

func (*GetService) Realtime

func (s *GetService) Realtime(realtime bool) *GetService

Realtime specifies whether to perform the operation in realtime or search mode.

func (*GetService) Refresh

func (s *GetService) Refresh(refresh string) *GetService

Refresh the shard containing the document before performing the operation.

func (*GetService) Routing

func (s *GetService) Routing(routing string) *GetService

Routing is the specific routing value.

func (*GetService) StoredFields

func (s *GetService) StoredFields(storedFields ...string) *GetService

StoredFields is a list of fields to return in the response.

func (*GetService) Type

func (s *GetService) Type(typ string) *GetService

Type is the type of the document (use `_all` to fetch the first document matching the ID across all types).

func (*GetService) Validate

func (s *GetService) Validate() error

Validate checks if the operation is valid.

func (*GetService) Version

func (s *GetService) Version(version interface{}) *GetService

Version is an explicit version number for concurrency control.

func (*GetService) VersionType

func (s *GetService) VersionType(versionType string) *GetService

VersionType is the specific version type.

type GlobalAggregation

type GlobalAggregation struct {
	// contains filtered or unexported fields
}

GlobalAggregation defines a single bucket of all the documents within the search execution context. This context is defined by the indices and the document types you’re searching on, but is not influenced by the search query itself. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-global-aggregation.html

func NewGlobalAggregation

func NewGlobalAggregation() *GlobalAggregation

func (*GlobalAggregation) Meta

func (a *GlobalAggregation) Meta(metaData map[string]interface{}) *GlobalAggregation

Meta sets the meta data to be included in the aggregation response.

func (*GlobalAggregation) Source

func (a *GlobalAggregation) Source() (interface{}, error)

func (*GlobalAggregation) SubAggregation

func (a *GlobalAggregation) SubAggregation(name string, subAggregation Aggregation) *GlobalAggregation

type HasChildQuery

type HasChildQuery struct {
	// contains filtered or unexported fields
}

HasChildQuery accepts a query and the child type to run against, and results in parent documents that have child docs matching the query.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-has-child-query.html
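
A minimal sketch that matches parents having at least one matching "comment" child; the child type and the inner term query are illustrative (NewTermQuery is documented elsewhere in the package):

q := elastic.NewHasChildQuery("comment", elastic.NewTermQuery("user", "olivere")).
	ScoreMode("max").
	MinChildren(1)
_ = q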

func NewHasChildQuery

func NewHasChildQuery(childType string, query Query) *HasChildQuery

NewHasChildQuery creates and initializes a new has_child query.

func (*HasChildQuery) Boost

func (q *HasChildQuery) Boost(boost float64) *HasChildQuery

Boost sets the boost for this query.

func (*HasChildQuery) InnerHit

func (q *HasChildQuery) InnerHit(innerHit *InnerHit) *HasChildQuery

InnerHit sets the inner hit definition in the scope of this query, reusing the defined type and query.

func (*HasChildQuery) MaxChildren

func (q *HasChildQuery) MaxChildren(maxChildren int) *HasChildQuery

MaxChildren defines the maximum number of children that are required to match for the parent to be considered a match.

func (*HasChildQuery) MinChildren

func (q *HasChildQuery) MinChildren(minChildren int) *HasChildQuery

MinChildren defines the minimum number of children that are required to match for the parent to be considered a match.

func (*HasChildQuery) QueryName

func (q *HasChildQuery) QueryName(queryName string) *HasChildQuery

QueryName specifies the query name for the filter that can be used when searching for matched filters per hit.

func (*HasChildQuery) ScoreMode

func (q *HasChildQuery) ScoreMode(scoreMode string) *HasChildQuery

ScoreMode defines how the scores from the matching child documents are mapped into the parent document. Allowed values are: min, max, avg, or none.

func (*HasChildQuery) ShortCircuitCutoff

func (q *HasChildQuery) ShortCircuitCutoff(shortCircuitCutoff int) *HasChildQuery

ShortCircuitCutoff configures the cut-off point at which only parent documents containing the matching parent id terms are evaluated, instead of evaluating all parent docs.

func (*HasChildQuery) Source

func (q *HasChildQuery) Source() (interface{}, error)

Source returns JSON for the query.

type HasParentQuery

type HasParentQuery struct {
	// contains filtered or unexported fields
}

HasParentQuery accepts a query and a parent type. The query is executed in the parent document space, which is specified by the parent type. This query returns child documents whose associated parents have matched. Otherwise, the has_parent query has the same options and works in the same manner as the has_child query.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-has-parent-query.html
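
A minimal sketch; the parent type and the inner term query are illustrative (NewTermQuery is documented elsewhere in the package):

q := elastic.NewHasParentQuery("blog", elastic.NewTermQuery("tag", "something")).
	Score(true)
_ = q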

func NewHasParentQuery

func NewHasParentQuery(parentType string, query Query) *HasParentQuery

NewHasParentQuery creates and initializes a new has_parent query.

func (*HasParentQuery) Boost

func (q *HasParentQuery) Boost(boost float64) *HasParentQuery

Boost sets the boost for this query.

func (*HasParentQuery) InnerHit

func (q *HasParentQuery) InnerHit(innerHit *InnerHit) *HasParentQuery

InnerHit sets the inner hit definition in the scope of this query, reusing the defined type and query.

func (*HasParentQuery) QueryName

func (q *HasParentQuery) QueryName(queryName string) *HasParentQuery

QueryName specifies the query name for the filter that can be used when searching for matched filters per hit.

func (*HasParentQuery) Score

func (q *HasParentQuery) Score(score bool) *HasParentQuery

Score defines if the parent score is mapped into the child documents.

func (*HasParentQuery) Source

func (q *HasParentQuery) Source() (interface{}, error)

Source returns JSON for the query.

type Highlight

type Highlight struct {
	// contains filtered or unexported fields
}

Highlight allows highlighting search results on one or more fields. For details, see: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-highlighting.html
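
A minimal sketch that highlights matches in the "message" field and wraps them in <em> tags (all values illustrative); the highlight is typically attached to a search request:

hl := elastic.NewHighlight().
	Field("message").
	PreTags("<em>").
	PostTags("</em>").
	FragmentSize(150).
	NumOfFragments(3)
_ = hl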

func NewHighlight

func NewHighlight() *Highlight

func (*Highlight) BoundaryChars

func (hl *Highlight) BoundaryChars(boundaryChars string) *Highlight

func (*Highlight) BoundaryMaxScan

func (hl *Highlight) BoundaryMaxScan(boundaryMaxScan int) *Highlight

func (*Highlight) BoundaryScannerLocale

func (hl *Highlight) BoundaryScannerLocale(boundaryScannerLocale string) *Highlight

func (*Highlight) BoundaryScannerType

func (hl *Highlight) BoundaryScannerType(boundaryScannerType string) *Highlight

func (*Highlight) Encoder

func (hl *Highlight) Encoder(encoder string) *Highlight

func (*Highlight) Field

func (hl *Highlight) Field(name string) *Highlight

func (*Highlight) Fields

func (hl *Highlight) Fields(fields ...*HighlighterField) *Highlight

func (*Highlight) ForceSource

func (hl *Highlight) ForceSource(forceSource bool) *Highlight

func (*Highlight) FragmentSize

func (hl *Highlight) FragmentSize(fragmentSize int) *Highlight

func (*Highlight) Fragmenter

func (hl *Highlight) Fragmenter(fragmenter string) *Highlight

func (*Highlight) HighlighQuery

func (hl *Highlight) HighlighQuery(highlightQuery Query) *Highlight

func (*Highlight) HighlightFilter

func (hl *Highlight) HighlightFilter(highlightFilter bool) *Highlight

func (*Highlight) HighlighterType

func (hl *Highlight) HighlighterType(highlighterType string) *Highlight

func (*Highlight) NoMatchSize

func (hl *Highlight) NoMatchSize(noMatchSize int) *Highlight

func (*Highlight) NumOfFragments

func (hl *Highlight) NumOfFragments(numOfFragments int) *Highlight

func (*Highlight) Options

func (hl *Highlight) Options(options map[string]interface{}) *Highlight

func (*Highlight) Order

func (hl *Highlight) Order(order string) *Highlight

func (*Highlight) PostTags

func (hl *Highlight) PostTags(postTags ...string) *Highlight

func (*Highlight) PreTags

func (hl *Highlight) PreTags(preTags ...string) *Highlight

func (*Highlight) RequireFieldMatch

func (hl *Highlight) RequireFieldMatch(requireFieldMatch bool) *Highlight

func (*Highlight) Source

func (hl *Highlight) Source() (interface{}, error)

Source returns the JSON-serializable data for the highlight.

func (*Highlight) TagsSchema

func (hl *Highlight) TagsSchema(schemaName string) *Highlight

func (*Highlight) UseExplicitFieldOrder

func (hl *Highlight) UseExplicitFieldOrder(useExplicitFieldOrder bool) *Highlight

type HighlighterField

type HighlighterField struct {
	Name string
	// contains filtered or unexported fields
}

HighlighterField specifies a highlighted field.

func NewHighlighterField

func NewHighlighterField(name string) *HighlighterField

func (*HighlighterField) BoundaryChars

func (f *HighlighterField) BoundaryChars(boundaryChars ...rune) *HighlighterField

func (*HighlighterField) BoundaryMaxScan

func (f *HighlighterField) BoundaryMaxScan(boundaryMaxScan int) *HighlighterField

func (*HighlighterField) ForceSource

func (f *HighlighterField) ForceSource(forceSource bool) *HighlighterField

func (*HighlighterField) FragmentOffset

func (f *HighlighterField) FragmentOffset(fragmentOffset int) *HighlighterField

func (*HighlighterField) FragmentSize

func (f *HighlighterField) FragmentSize(fragmentSize int) *HighlighterField

func (*HighlighterField) Fragmenter

func (f *HighlighterField) Fragmenter(fragmenter string) *HighlighterField

func (*HighlighterField) HighlightFilter

func (f *HighlighterField) HighlightFilter(highlightFilter bool) *HighlighterField

func (*HighlighterField) HighlightQuery

func (f *HighlighterField) HighlightQuery(highlightQuery Query) *HighlighterField

func (*HighlighterField) HighlighterType

func (f *HighlighterField) HighlighterType(highlighterType string) *HighlighterField

func (*HighlighterField) MatchedFields

func (f *HighlighterField) MatchedFields(matchedFields ...string) *HighlighterField

func (*HighlighterField) NoMatchSize

func (f *HighlighterField) NoMatchSize(noMatchSize int) *HighlighterField

func (*HighlighterField) NumOfFragments

func (f *HighlighterField) NumOfFragments(numOfFragments int) *HighlighterField

func (*HighlighterField) Options

func (f *HighlighterField) Options(options map[string]interface{}) *HighlighterField

func (*HighlighterField) Order

func (f *HighlighterField) Order(order string) *HighlighterField

func (*HighlighterField) PhraseLimit

func (f *HighlighterField) PhraseLimit(phraseLimit int) *HighlighterField

func (*HighlighterField) PostTags

func (f *HighlighterField) PostTags(postTags ...string) *HighlighterField

func (*HighlighterField) PreTags

func (f *HighlighterField) PreTags(preTags ...string) *HighlighterField

func (*HighlighterField) RequireFieldMatch

func (f *HighlighterField) RequireFieldMatch(requireFieldMatch bool) *HighlighterField

func (*HighlighterField) Source

func (f *HighlighterField) Source() (interface{}, error)

type HistogramAggregation

type HistogramAggregation struct {
	// contains filtered or unexported fields
}

HistogramAggregation is a multi-bucket values source based aggregation that can be applied on numeric values extracted from the documents. It dynamically builds fixed size (a.k.a. interval) buckets over the values. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-histogram-aggregation.html
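
A minimal sketch that buckets documents by "price" in steps of 50 (illustrative values):

agg := elastic.NewHistogramAggregation().
	Field("price").
	Interval(50).
	MinDocCount(1)
_ = agg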

func NewHistogramAggregation

func NewHistogramAggregation() *HistogramAggregation

func (*HistogramAggregation) ExtendedBounds

func (a *HistogramAggregation) ExtendedBounds(min, max float64) *HistogramAggregation

func (*HistogramAggregation) ExtendedBoundsMax

func (a *HistogramAggregation) ExtendedBoundsMax(max float64) *HistogramAggregation

func (*HistogramAggregation) ExtendedBoundsMin

func (a *HistogramAggregation) ExtendedBoundsMin(min float64) *HistogramAggregation

func (*HistogramAggregation) Field

func (*HistogramAggregation) Interval

func (a *HistogramAggregation) Interval(interval float64) *HistogramAggregation

Interval for this builder; it must be greater than 0.

func (*HistogramAggregation) MaxBounds

func (*HistogramAggregation) Meta

func (a *HistogramAggregation) Meta(metaData map[string]interface{}) *HistogramAggregation

Meta sets the meta data to be included in the aggregation response.

func (*HistogramAggregation) MinBounds

func (*HistogramAggregation) MinDocCount

func (a *HistogramAggregation) MinDocCount(minDocCount int64) *HistogramAggregation

func (*HistogramAggregation) Missing

func (a *HistogramAggregation) Missing(missing interface{}) *HistogramAggregation

Missing configures the value to use when documents miss a value.

func (*HistogramAggregation) Offset

Offset into the histogram.

func (*HistogramAggregation) Order

func (a *HistogramAggregation) Order(order string, asc bool) *HistogramAggregation

Order specifies the sort order. Valid values for order are: "_key", "_count", a sub-aggregation name, or a sub-aggregation name with a metric.

func (*HistogramAggregation) OrderByAggregation

func (a *HistogramAggregation) OrderByAggregation(aggName string, asc bool) *HistogramAggregation

OrderByAggregation creates a bucket ordering strategy that sorts buckets based on a single-valued sub-aggregation.

func (*HistogramAggregation) OrderByAggregationAndMetric

func (a *HistogramAggregation) OrderByAggregationAndMetric(aggName, metric string, asc bool) *HistogramAggregation

OrderByAggregationAndMetric creates a bucket ordering strategy that sorts buckets based on a specific metric of a multi-valued sub-aggregation.

func (*HistogramAggregation) OrderByCount

func (a *HistogramAggregation) OrderByCount(asc bool) *HistogramAggregation

func (*HistogramAggregation) OrderByCountAsc

func (a *HistogramAggregation) OrderByCountAsc() *HistogramAggregation

func (*HistogramAggregation) OrderByCountDesc

func (a *HistogramAggregation) OrderByCountDesc() *HistogramAggregation

func (*HistogramAggregation) OrderByKey

func (a *HistogramAggregation) OrderByKey(asc bool) *HistogramAggregation

func (*HistogramAggregation) OrderByKeyAsc

func (a *HistogramAggregation) OrderByKeyAsc() *HistogramAggregation

func (*HistogramAggregation) OrderByKeyDesc

func (a *HistogramAggregation) OrderByKeyDesc() *HistogramAggregation

func (*HistogramAggregation) Script

func (a *HistogramAggregation) Script(script *Script) *HistogramAggregation

func (*HistogramAggregation) Source

func (a *HistogramAggregation) Source() (interface{}, error)

func (*HistogramAggregation) SubAggregation

func (a *HistogramAggregation) SubAggregation(name string, subAggregation Aggregation) *HistogramAggregation

type HoltLinearMovAvgModel

type HoltLinearMovAvgModel struct {
	// contains filtered or unexported fields
}

HoltLinearMovAvgModel calculates a doubly exponential weighted moving average.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-movavg-aggregation.html#_holt_linear
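
A minimal sketch, assuming the model is used with the pipeline moving-average aggregation (MovAvgAggregation, documented elsewhere in the package), that "the_sum" is the buckets path of a sibling metric, and that the Alpha and Beta setters (whose signatures are elided above) accept float64 values:

model := elastic.NewHoltLinearMovAvgModel().Alpha(0.5).Beta(0.3)
movAvg := elastic.NewMovAvgAggregation().
	BucketsPath("the_sum").
	Model(model)
_ = movAvg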

func NewHoltLinearMovAvgModel

func NewHoltLinearMovAvgModel() *HoltLinearMovAvgModel

NewHoltLinearMovAvgModel creates and initializes a new HoltLinearMovAvgModel.

func (*HoltLinearMovAvgModel) Alpha

Alpha controls the smoothing of the data. Alpha = 1 retains no memory of past values (e.g. a random walk), while alpha = 0 retains infinite memory of past values (e.g. the series mean). Useful values are somewhere in between. Defaults to 0.5.

func (*HoltLinearMovAvgModel) Beta

Beta is equivalent to Alpha but controls the smoothing of the trend instead of the data.

func (*HoltLinearMovAvgModel) Name

func (m *HoltLinearMovAvgModel) Name() string

Name of the model.

func (*HoltLinearMovAvgModel) Settings

func (m *HoltLinearMovAvgModel) Settings() map[string]interface{}

Settings of the model.

type HoltWintersMovAvgModel

type HoltWintersMovAvgModel struct {
	// contains filtered or unexported fields
}

HoltWintersMovAvgModel calculates a triple exponential weighted moving average.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-movavg-aggregation.html#_holt_winters

func NewHoltWintersMovAvgModel

func NewHoltWintersMovAvgModel() *HoltWintersMovAvgModel

NewHoltWintersMovAvgModel creates and initializes a new HoltWintersMovAvgModel.

func (*HoltWintersMovAvgModel) Alpha

Alpha controls the smoothing of the data. Alpha = 1 retains no memory of past values (e.g. a random walk), while alpha = 0 retains infinite memory of past values (e.g. the series mean). Useful values are somewhere in between. Defaults to 0.5.

func (*HoltWintersMovAvgModel) Beta

Beta is equivalent to Alpha but controls the smoothing of the trend instead of the data.

func (*HoltWintersMovAvgModel) Gamma

func (*HoltWintersMovAvgModel) Name

func (m *HoltWintersMovAvgModel) Name() string

Name of the model.

func (*HoltWintersMovAvgModel) Pad

func (*HoltWintersMovAvgModel) Period

func (*HoltWintersMovAvgModel) SeasonalityType

func (m *HoltWintersMovAvgModel) SeasonalityType(typ string) *HoltWintersMovAvgModel

func (*HoltWintersMovAvgModel) Settings

func (m *HoltWintersMovAvgModel) Settings() map[string]interface{}

Settings of the model.

type IPRangeAggregation

type IPRangeAggregation struct {
	// contains filtered or unexported fields
}

IPRangeAggregation is a range aggregation that is dedicated to IP addresses.

See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-iprange-aggregation.html
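
A minimal sketch over an illustrative "ip" field, mixing an explicit range and a CIDR mask:

agg := elastic.NewIPRangeAggregation().
	Field("ip").
	AddRange("10.0.0.5", "10.0.0.10").
	AddMaskRange("10.0.0.0/25")
_ = agg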

func NewIPRangeAggregation

func NewIPRangeAggregation() *IPRangeAggregation

func (*IPRangeAggregation) AddMaskRange

func (a *IPRangeAggregation) AddMaskRange(mask string) *IPRangeAggregation

func (*IPRangeAggregation) AddMaskRangeWithKey

func (a *IPRangeAggregation) AddMaskRangeWithKey(key, mask string) *IPRangeAggregation

func (*IPRangeAggregation) AddRange

func (a *IPRangeAggregation) AddRange(from, to string) *IPRangeAggregation

func (*IPRangeAggregation) AddRangeWithKey

func (a *IPRangeAggregation) AddRangeWithKey(key, from, to string) *IPRangeAggregation

func (*IPRangeAggregation) AddUnboundedFrom

func (a *IPRangeAggregation) AddUnboundedFrom(to string) *IPRangeAggregation

func (*IPRangeAggregation) AddUnboundedFromWithKey

func (a *IPRangeAggregation) AddUnboundedFromWithKey(key, to string) *IPRangeAggregation

func (*IPRangeAggregation) AddUnboundedTo

func (a *IPRangeAggregation) AddUnboundedTo(from string) *IPRangeAggregation

func (*IPRangeAggregation) AddUnboundedToWithKey

func (a *IPRangeAggregation) AddUnboundedToWithKey(key, from string) *IPRangeAggregation

func (*IPRangeAggregation) Between

func (a *IPRangeAggregation) Between(from, to string) *IPRangeAggregation

func (*IPRangeAggregation) BetweenWithKey

func (a *IPRangeAggregation) BetweenWithKey(key, from, to string) *IPRangeAggregation

func (*IPRangeAggregation) Field

func (a *IPRangeAggregation) Field(field string) *IPRangeAggregation

func (*IPRangeAggregation) Gt

func (*IPRangeAggregation) GtWithKey

func (a *IPRangeAggregation) GtWithKey(key, from string) *IPRangeAggregation

func (*IPRangeAggregation) Keyed

func (a *IPRangeAggregation) Keyed(keyed bool) *IPRangeAggregation

func (*IPRangeAggregation) Lt

func (*IPRangeAggregation) LtWithKey

func (a *IPRangeAggregation) LtWithKey(key, to string) *IPRangeAggregation

func (*IPRangeAggregation) Meta

func (a *IPRangeAggregation) Meta(metaData map[string]interface{}) *IPRangeAggregation

Meta sets the meta data to be included in the aggregation response.

func (*IPRangeAggregation) Source

func (a *IPRangeAggregation) Source() (interface{}, error)

func (*IPRangeAggregation) SubAggregation

func (a *IPRangeAggregation) SubAggregation(name string, subAggregation Aggregation) *IPRangeAggregation

type IPRangeAggregationEntry

type IPRangeAggregationEntry struct {
	Key  string
	Mask string
	From string
	To   string
}

type IdsQuery

type IdsQuery struct {
	// contains filtered or unexported fields
}

IdsQuery filters documents to only those with the provided ids. Note that this query uses the _uid field.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-ids-query.html
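
A minimal sketch; the type and ids are placeholders:

q := elastic.NewIdsQuery("doc").Ids("1", "4", "100")
_ = q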

func NewIdsQuery

func NewIdsQuery(types ...string) *IdsQuery

NewIdsQuery creates and initializes a new ids query.

func (*IdsQuery) Boost

func (q *IdsQuery) Boost(boost float64) *IdsQuery

Boost sets the boost for this query.

func (*IdsQuery) Ids

func (q *IdsQuery) Ids(ids ...string) *IdsQuery

Ids adds ids to the filter.

func (*IdsQuery) QueryName

func (q *IdsQuery) QueryName(queryName string) *IdsQuery

QueryName sets the query name for the filter.

func (*IdsQuery) Source

func (q *IdsQuery) Source() (interface{}, error)

Source returns JSON for the query.

type IndexResponse

type IndexResponse struct {
	Index         string      `json:"_index,omitempty"`
	Type          string      `json:"_type,omitempty"`
	Id            string      `json:"_id,omitempty"`
	Version       int64       `json:"_version,omitempty"`
	Result        string      `json:"result,omitempty"`
	Shards        *shardsInfo `json:"_shards,omitempty"`
	SeqNo         int64       `json:"_seq_no,omitempty"`
	PrimaryTerm   int64       `json:"_primary_term,omitempty"`
	Status        int         `json:"status,omitempty"`
	ForcedRefresh bool        `json:"forced_refresh,omitempty"`
}

IndexResponse is the result of indexing a document in Elasticsearch.

type IndexSegments

type IndexSegments struct {
	// Shards provides a map into the shard related information of an index.
	// The key of the map is the number of a specific shard.
	Shards map[string][]*IndexSegmentsShards `json:"shards,omitempty"`
}

type IndexSegmentsDetails

type IndexSegmentsDetails struct {
	Generation    int64                   `json:"generation,omitempty"`
	NumDocs       int64                   `json:"num_docs,omitempty"`
	DeletedDocs   int64                   `json:"deleted_docs,omitempty"`
	Size          string                  `json:"size,omitempty"`
	SizeInBytes   int64                   `json:"size_in_bytes,omitempty"`
	Memory        string                  `json:"memory,omitempty"`
	MemoryInBytes int64                   `json:"memory_in_bytes,omitempty"`
	Committed     bool                    `json:"committed,omitempty"`
	Search        bool                    `json:"search,omitempty"`
	Version       string                  `json:"version,omitempty"`
	Compound      bool                    `json:"compound,omitempty"`
	MergeId       string                  `json:"merge_id,omitempty"`
	Sort          []*IndexSegmentsSort    `json:"sort,omitempty"`
	RAMTree       []*IndexSegmentsRamTree `json:"ram_tree,omitempty"`
	Attributes    map[string]string       `json:"attributes,omitempty"`
}

type IndexSegmentsRamTree

type IndexSegmentsRamTree struct {
	Description string                  `json:"description,omitempty"`
	Size        string                  `json:"size,omitempty"`
	SizeInBytes int64                   `json:"size_in_bytes,omitempty"`
	Children    []*IndexSegmentsRamTree `json:"children,omitempty"`
}

type IndexSegmentsRouting

type IndexSegmentsRouting struct {
	State          string `json:"state,omitempty"`
	Primary        bool   `json:"primary,omitempty"`
	Node           string `json:"node,omitempty"`
	RelocatingNode string `json:"relocating_node,omitempty"`
}

type IndexSegmentsShards

type IndexSegmentsShards struct {
	Routing              *IndexSegmentsRouting `json:"routing,omitempty"`
	NumCommittedSegments int64                 `json:"num_committed_segments,omitempty"`
	NumSearchSegments    int64                 `json:"num_search_segments"`

	// Segments provides a map into the segment related information of a shard.
	// The key of the map is the specific lucene segment id.
	Segments map[string]*IndexSegmentsDetails `json:"segments,omitempty"`
}

type IndexSegmentsSort

type IndexSegmentsSort struct {
	Field   string      `json:"field,omitempty"`
	Mode    string      `json:"mode,omitempty"`
	Missing interface{} `json:"missing,omitempty"`
	Reverse bool        `json:"reverse,omitempty"`
}

type IndexService

type IndexService struct {
	// contains filtered or unexported fields
}

IndexService adds or updates a typed JSON document in a specified index, making it searchable.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-index_.html for details.
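
A minimal sketch of indexing a document; the index, type, id, and document body are placeholders, ctx, client, and the fmt package are assumed to be set up elsewhere, and Do (whose signature is elided above) is assumed to take a context and return an *IndexResponse:

doc := map[string]interface{}{"user": "olivere", "message": "Take Five"}
res, err := elastic.NewIndexService(client).
	Index("twitter").
	Type("doc").
	Id("1").
	BodyJson(doc).
	Do(ctx)
if err != nil {
	// handle error
}
fmt.Printf("indexed %s into %s (result: %s)\n", res.Id, res.Index, res.Result)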

func NewIndexService

func NewIndexService(client *Client) *IndexService

NewIndexService creates a new IndexService.

func (*IndexService) BodyJson

func (s *IndexService) BodyJson(body interface{}) *IndexService

BodyJson is the document as a serializable JSON interface.

func (*IndexService) BodyString

func (s *IndexService) BodyString(body string) *IndexService

BodyString is the document encoded as a string.

func (*IndexService) Do

Do executes the operation.

func (*IndexService) Id

func (s *IndexService) Id(id string) *IndexService

Id is the document ID.

func (*IndexService) Index

func (s *IndexService) Index(index string) *IndexService

Index is the name of the index.

func (*IndexService) OpType

func (s *IndexService) OpType(opType string) *IndexService

OpType is an explicit operation type, i.e. "create" or "index" (default).

func (*IndexService) Parent

func (s *IndexService) Parent(parent string) *IndexService

Parent is the ID of the parent document.

func (*IndexService) Pipeline

func (s *IndexService) Pipeline(pipeline string) *IndexService

Pipeline specifies the pipeline id to preprocess incoming documents with.

func (*IndexService) Pretty

func (s *IndexService) Pretty(pretty bool) *IndexService

Pretty indicates whether the JSON response should be indented and human readable.

func (*IndexService) Refresh

func (s *IndexService) Refresh(refresh string) *IndexService

Refresh the index after performing the operation.

func (*IndexService) Routing

func (s *IndexService) Routing(routing string) *IndexService

Routing is a specific routing value.

func (*IndexService) TTL

func (s *IndexService) TTL(ttl string) *IndexService

TTL is an expiration time for the document (alias for Ttl).

func (*IndexService) Timeout

func (s *IndexService) Timeout(timeout string) *IndexService

Timeout is an explicit operation timeout.

func (*IndexService) Timestamp

func (s *IndexService) Timestamp(timestamp string) *IndexService

Timestamp is an explicit timestamp for the document.

func (*IndexService) Ttl

func (s *IndexService) Ttl(ttl string) *IndexService

Ttl is an expiration time for the document.

func (*IndexService) Type

func (s *IndexService) Type(typ string) *IndexService

Type is the type of the document.

func (*IndexService) Validate

func (s *IndexService) Validate() error

Validate checks if the operation is valid.

func (*IndexService) Version

func (s *IndexService) Version(version interface{}) *IndexService

Version is an explicit version number for concurrency control.

func (*IndexService) VersionType

func (s *IndexService) VersionType(versionType string) *IndexService

VersionType is a specific version type.

func (*IndexService) WaitForActiveShards

func (s *IndexService) WaitForActiveShards(waitForActiveShards string) *IndexService

WaitForActiveShards sets the number of shard copies that must be active before proceeding with the index operation. Defaults to 1, meaning the primary shard only. Set to `all` for all shard copies, otherwise set to any non-negative value less than or equal to the total number of copies for the shard (number of replicas + 1).

type IndexStats

type IndexStats struct {
	Primaries *IndexStatsDetails `json:"primaries,omitempty"`
	Total     *IndexStatsDetails `json:"total,omitempty"`
}

IndexStats is index stats for a specific index.

type IndexStatsCompletion

type IndexStatsCompletion struct {
	Size        string `json:"size,omitempty"`
	SizeInBytes int64  `json:"size_in_bytes,omitempty"`
}

type IndexStatsDetails

type IndexStatsDetails struct {
	Docs        *IndexStatsDocs        `json:"docs,omitempty"`
	Store       *IndexStatsStore       `json:"store,omitempty"`
	Indexing    *IndexStatsIndexing    `json:"indexing,omitempty"`
	Get         *IndexStatsGet         `json:"get,omitempty"`
	Search      *IndexStatsSearch      `json:"search,omitempty"`
	Merges      *IndexStatsMerges      `json:"merges,omitempty"`
	Refresh     *IndexStatsRefresh     `json:"refresh,omitempty"`
	Flush       *IndexStatsFlush       `json:"flush,omitempty"`
	Warmer      *IndexStatsWarmer      `json:"warmer,omitempty"`
	FilterCache *IndexStatsFilterCache `json:"filter_cache,omitempty"`
	IdCache     *IndexStatsIdCache     `json:"id_cache,omitempty"`
	Fielddata   *IndexStatsFielddata   `json:"fielddata,omitempty"`
	Percolate   *IndexStatsPercolate   `json:"percolate,omitempty"`
	Completion  *IndexStatsCompletion  `json:"completion,omitempty"`
	Segments    *IndexStatsSegments    `json:"segments,omitempty"`
	Translog    *IndexStatsTranslog    `json:"translog,omitempty"`
	Suggest     *IndexStatsSuggest     `json:"suggest,omitempty"`
	QueryCache  *IndexStatsQueryCache  `json:"query_cache,omitempty"`
}

type IndexStatsDocs

type IndexStatsDocs struct {
	Count   int64 `json:"count,omitempty"`
	Deleted int64 `json:"deleted,omitempty"`
}

type IndexStatsFielddata

type IndexStatsFielddata struct {
	MemorySize        string `json:"memory_size,omitempty"`
	MemorySizeInBytes int64  `json:"memory_size_in_bytes,omitempty"`
	Evictions         int64  `json:"evictions,omitempty"`
}

type IndexStatsFilterCache

type IndexStatsFilterCache struct {
	MemorySize        string `json:"memory_size,omitempty"`
	MemorySizeInBytes int64  `json:"memory_size_in_bytes,omitempty"`
	Evictions         int64  `json:"evictions,omitempty"`
}

type IndexStatsFlush

type IndexStatsFlush struct {
	Total             int64  `json:"total,omitempty"`
	TotalTime         string `json:"total_time,omitempty"`
	TotalTimeInMillis int64  `json:"total_time_in_millis,omitempty"`
}

type IndexStatsGet

type IndexStatsGet struct {
	Total               int64  `json:"total,omitempty"`
	GetTime             string `json:"get_time,omitempty"`
	TimeInMillis        int64  `json:"time_in_millis,omitempty"`
	ExistsTotal         int64  `json:"exists_total,omitempty"`
	ExistsTime          string `json:"exists_time,omitempty"`
	ExistsTimeInMillis  int64  `json:"exists_time_in_millis,omitempty"`
	MissingTotal        int64  `json:"missing_total,omitempty"`
	MissingTime         string `json:"missing_time,omitempty"`
	MissingTimeInMillis int64  `json:"missing_time_in_millis,omitempty"`
	Current             int64  `json:"current,omitempty"`
}

type IndexStatsIdCache

type IndexStatsIdCache struct {
	MemorySize        string `json:"memory_size,omitempty"`
	MemorySizeInBytes int64  `json:"memory_size_in_bytes,omitempty"`
}

type IndexStatsIndexing

type IndexStatsIndexing struct {
	IndexTotal         int64  `json:"index_total,omitempty"`
	IndexTime          string `json:"index_time,omitempty"`
	IndexTimeInMillis  int64  `json:"index_time_in_millis,omitempty"`
	IndexCurrent       int64  `json:"index_current,omitempty"`
	DeleteTotal        int64  `json:"delete_total,omitempty"`
	DeleteTime         string `json:"delete_time,omitempty"`
	DeleteTimeInMillis int64  `json:"delete_time_in_millis,omitempty"`
	DeleteCurrent      int64  `json:"delete_current,omitempty"`
	NoopUpdateTotal    int64  `json:"noop_update_total,omitempty"`
}

type IndexStatsMerges

type IndexStatsMerges struct {
	Current            int64  `json:"current,omitempty"`
	CurrentDocs        int64  `json:"current_docs,omitempty"`
	CurrentSize        string `json:"current_size,omitempty"`
	CurrentSizeInBytes int64  `json:"current_size_in_bytes,omitempty"`
	Total              int64  `json:"total,omitempty"`
	TotalTime          string `json:"total_time,omitempty"`
	TotalTimeInMillis  int64  `json:"total_time_in_millis,omitempty"`
	TotalDocs          int64  `json:"total_docs,omitempty"`
	TotalSize          string `json:"total_size,omitempty"`
	TotalSizeInBytes   int64  `json:"total_size_in_bytes,omitempty"`
}

type IndexStatsPercolate

type IndexStatsPercolate struct {
	Total             int64  `json:"total,omitempty"`
	GetTime           string `json:"get_time,omitempty"`
	TimeInMillis      int64  `json:"time_in_millis,omitempty"`
	Current           int64  `json:"current,omitempty"`
	MemorySize        string `json:"memory_size,omitempty"`
	MemorySizeInBytes int64  `json:"memory_size_in_bytes,omitempty"`
	Queries           int64  `json:"queries,omitempty"`
}

type IndexStatsQueryCache

type IndexStatsQueryCache struct {
	MemorySize        string `json:"memory_size,omitempty"`
	MemorySizeInBytes int64  `json:"memory_size_in_bytes,omitempty"`
	Evictions         int64  `json:"evictions,omitempty"`
	HitCount          int64  `json:"hit_count,omitempty"`
	MissCount         int64  `json:"miss_count,omitempty"`
}

type IndexStatsRefresh

type IndexStatsRefresh struct {
	Total             int64  `json:"total,omitempty"`
	TotalTime         string `json:"total_time,omitempty"`
	TotalTimeInMillis int64  `json:"total_time_in_millis,omitempty"`
}

type IndexStatsSearch

type IndexStatsSearch struct {
	OpenContexts      int64  `json:"open_contexts,omitempty"`
	QueryTotal        int64  `json:"query_total,omitempty"`
	QueryTime         string `json:"query_time,omitempty"`
	QueryTimeInMillis int64  `json:"query_time_in_millis,omitempty"`
	QueryCurrent      int64  `json:"query_current,omitempty"`
	FetchTotal        int64  `json:"fetch_total,omitempty"`
	FetchTime         string `json:"fetch_time,omitempty"`
	FetchTimeInMillis int64  `json:"fetch_time_in_millis,omitempty"`
	FetchCurrent      int64  `json:"fetch_current,omitempty"`
}

type IndexStatsSegments

type IndexStatsSegments struct {
	Count                       int64  `json:"count,omitempty"`
	Memory                      string `json:"memory,omitempty"`
	MemoryInBytes               int64  `json:"memory_in_bytes,omitempty"`
	IndexWriterMemory           string `json:"index_writer_memory,omitempty"`
	IndexWriterMemoryInBytes    int64  `json:"index_writer_memory_in_bytes,omitempty"`
	IndexWriterMaxMemory        string `json:"index_writer_max_memory,omitempty"`
	IndexWriterMaxMemoryInBytes int64  `json:"index_writer_max_memory_in_bytes,omitempty"`
	VersionMapMemory            string `json:"version_map_memory,omitempty"`
	VersionMapMemoryInBytes     int64  `json:"version_map_memory_in_bytes,omitempty"`
	FixedBitSetMemory           string `json:"fixed_bit_set,omitempty"`
	FixedBitSetMemoryInBytes    int64  `json:"fixed_bit_set_memory_in_bytes,omitempty"`
}

type IndexStatsStore

type IndexStatsStore struct {
	Size        string `json:"size,omitempty"` // human size, e.g. 119.3mb
	SizeInBytes int64  `json:"size_in_bytes,omitempty"`
}

type IndexStatsSuggest

type IndexStatsSuggest struct {
	Total        int64  `json:"total,omitempty"`
	Time         string `json:"time,omitempty"`
	TimeInMillis int64  `json:"time_in_millis,omitempty"`
	Current      int64  `json:"current,omitempty"`
}

type IndexStatsTranslog

type IndexStatsTranslog struct {
	Operations  int64  `json:"operations,omitempty"`
	Size        string `json:"size,omitempty"`
	SizeInBytes int64  `json:"size_in_bytes,omitempty"`
}

type IndexStatsWarmer

type IndexStatsWarmer struct {
	Current           int64  `json:"current,omitempty"`
	Total             int64  `json:"total,omitempty"`
	TotalTime         string `json:"total_time,omitempty"`
	TotalTimeInMillis int64  `json:"total_time_in_millis,omitempty"`
}

type IndicesAnalyzeRequest

type IndicesAnalyzeRequest struct {
	Text       []string `json:"text,omitempty"`
	Analyzer   string   `json:"analyzer,omitempty"`
	Tokenizer  string   `json:"tokenizer,omitempty"`
	Filter     []string `json:"filter,omitempty"`
	CharFilter []string `json:"char_filter,omitempty"`
	Field      string   `json:"field,omitempty"`
	Explain    bool     `json:"explain,omitempty"`
	Attributes []string `json:"attributes,omitempty"`
}

IndicesAnalyzeRequest specifies the parameters of the analyze request.

type IndicesAnalyzeResponse

type IndicesAnalyzeResponse struct {
	Tokens []IndicesAnalyzeResponseToken `json:"tokens"` // json part for normal message
	Detail IndicesAnalyzeResponseDetail  `json:"detail"` // json part for verbose message of explain request
}

type IndicesAnalyzeResponseDetail

type IndicesAnalyzeResponseDetail struct {
	CustomAnalyzer bool          `json:"custom_analyzer"`
	Charfilters    []interface{} `json:"charfilters"`
	Analyzer       struct {
		Name   string `json:"name"`
		Tokens []struct {
			Token          string `json:"token"`
			StartOffset    int    `json:"start_offset"`
			EndOffset      int    `json:"end_offset"`
			Type           string `json:"type"`
			Position       int    `json:"position"`
			Bytes          string `json:"bytes"`
			PositionLength int    `json:"positionLength"`
		} `json:"tokens"`
	} `json:"analyzer"`
	Tokenizer struct {
		Name   string `json:"name"`
		Tokens []struct {
			Token       string `json:"token"`
			StartOffset int    `json:"start_offset"`
			EndOffset   int    `json:"end_offset"`
			Type        string `json:"type"`
			Position    int    `json:"position"`
		} `json:"tokens"`
	} `json:"tokenizer"`
	Tokenfilters []struct {
		Name   string `json:"name"`
		Tokens []struct {
			Token       string `json:"token"`
			StartOffset int    `json:"start_offset"`
			EndOffset   int    `json:"end_offset"`
			Type        string `json:"type"`
			Position    int    `json:"position"`
			Keyword     bool   `json:"keyword"`
		} `json:"tokens"`
	} `json:"tokenfilters"`
}

type IndicesAnalyzeResponseToken

type IndicesAnalyzeResponseToken struct {
	Token       string `json:"token"`
	StartOffset int    `json:"start_offset"`
	EndOffset   int    `json:"end_offset"`
	Type        string `json:"type"`
	Position    int    `json:"position"`
}

type IndicesAnalyzeService

type IndicesAnalyzeService struct {
	// contains filtered or unexported fields
}

IndicesAnalyzeService performs the analysis process on a text and returns the token breakdown of the text.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-analyze.html for detail.

func NewIndicesAnalyzeService

func NewIndicesAnalyzeService(client *Client) *IndicesAnalyzeService

NewIndicesAnalyzeService creates a new IndicesAnalyzeService.

func (*IndicesAnalyzeService) Analyzer

func (s *IndicesAnalyzeService) Analyzer(analyzer string) *IndicesAnalyzeService

Analyzer is the name of the analyzer to use.

func (*IndicesAnalyzeService) Attributes

func (s *IndicesAnalyzeService) Attributes(attributes ...string) *IndicesAnalyzeService

Attributes is a list of token attributes to output; this parameter works only with explain=true.

func (*IndicesAnalyzeService) BodyJson

func (s *IndicesAnalyzeService) BodyJson(body interface{}) *IndicesAnalyzeService

BodyJson is the text on which the analysis should be performed.

func (*IndicesAnalyzeService) BodyString

func (s *IndicesAnalyzeService) BodyString(body string) *IndicesAnalyzeService

BodyString is the text on which the analysis should be performed.

func (*IndicesAnalyzeService) CharFilter

func (s *IndicesAnalyzeService) CharFilter(charFilter ...string) *IndicesAnalyzeService

CharFilter is a list of character filters to use for the analysis.

func (*IndicesAnalyzeService) Do

Do will execute the request with the given context.

func (*IndicesAnalyzeService) Explain

func (s *IndicesAnalyzeService) Explain(explain bool) *IndicesAnalyzeService

Explain, when true, outputs more advanced details (default: false).

func (*IndicesAnalyzeService) Field

Field specifies to use a specific analyzer configured for this field (instead of passing the analyzer name).

func (*IndicesAnalyzeService) Filter

func (s *IndicesAnalyzeService) Filter(filter ...string) *IndicesAnalyzeService

Filter is a list of filters to use for the analysis.

func (*IndicesAnalyzeService) Format

Format of the output.

func (*IndicesAnalyzeService) Index

Index is the name of the index to scope the operation.

func (*IndicesAnalyzeService) PreferLocal

func (s *IndicesAnalyzeService) PreferLocal(preferLocal bool) *IndicesAnalyzeService

PreferLocal, when true, specifies that a local shard should be used if available. When false, a random shard is used (default: true).

func (*IndicesAnalyzeService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesAnalyzeService) Request

Request passes the analyze request to use.

func (*IndicesAnalyzeService) Text

Text is the text on which the analysis should be performed (when request body is not used).

func (*IndicesAnalyzeService) Tokenizer

func (s *IndicesAnalyzeService) Tokenizer(tokenizer string) *IndicesAnalyzeService

Tokenizer is the name of the tokenizer to use for the analysis.

func (*IndicesAnalyzeService) Validate

func (s *IndicesAnalyzeService) Validate() error
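
For illustration, here is a minimal sketch of running an analyze request through this service. It is not taken from the package's examples: the analyzer name and sample text are made up, a locally running Elasticsearch node is assumed, and Do is assumed to return *IndicesAnalyzeResponse as documented above.

package main

import (
	"context"
	"log"

	"github.com/olivere/elastic"
)

func main() {
	// Connect to a locally running Elasticsearch node (default http://127.0.0.1:9200).
	client, err := elastic.NewClient()
	if err != nil {
		log.Fatal(err)
	}

	// Analyze a piece of text with the standard analyzer.
	res, err := elastic.NewIndicesAnalyzeService(client).
		Analyzer("standard").
		Text("Elastic is an Elasticsearch client for Go").
		Do(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	for _, tok := range res.Tokens {
		log.Printf("token=%q start=%d end=%d", tok.Token, tok.StartOffset, tok.EndOffset)
	}
}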

type IndicesCloseResponse

type IndicesCloseResponse struct {
	Acknowledged       bool   `json:"acknowledged"`
	ShardsAcknowledged bool   `json:"shards_acknowledged"`
	Index              string `json:"index,omitempty"`
}

IndicesCloseResponse is the response of IndicesCloseService.Do.

type IndicesCloseService

type IndicesCloseService struct {
	// contains filtered or unexported fields
}

IndicesCloseService closes an index.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-open-close.html for details.

func NewIndicesCloseService

func NewIndicesCloseService(client *Client) *IndicesCloseService

NewIndicesCloseService creates and initializes a new IndicesCloseService.

func (*IndicesCloseService) AllowNoIndices

func (s *IndicesCloseService) AllowNoIndices(allowNoIndices bool) *IndicesCloseService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified).

func (*IndicesCloseService) Do

Do executes the operation.

func (*IndicesCloseService) ExpandWildcards

func (s *IndicesCloseService) ExpandWildcards(expandWildcards string) *IndicesCloseService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*IndicesCloseService) IgnoreUnavailable

func (s *IndicesCloseService) IgnoreUnavailable(ignoreUnavailable bool) *IndicesCloseService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*IndicesCloseService) Index

Index is the name of the index to close.

func (*IndicesCloseService) MasterTimeout

func (s *IndicesCloseService) MasterTimeout(masterTimeout string) *IndicesCloseService

MasterTimeout specifies the timeout for connection to master.

func (*IndicesCloseService) Pretty

func (s *IndicesCloseService) Pretty(pretty bool) *IndicesCloseService

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesCloseService) Timeout

func (s *IndicesCloseService) Timeout(timeout string) *IndicesCloseService

Timeout is an explicit operation timeout.

func (*IndicesCloseService) Validate

func (s *IndicesCloseService) Validate() error

Validate checks if the operation is valid.
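
As a hedged sketch, closing a single index might look like the helper below. The helper itself is hypothetical rather than part of the package, and the timeout value is illustrative; Do is assumed to return *IndicesCloseResponse as documented above.

import (
	"context"

	"github.com/olivere/elastic"
)

// closeIndex is a hypothetical helper: it closes one index and reports
// whether the cluster acknowledged the request.
func closeIndex(ctx context.Context, client *elastic.Client, index string) (bool, error) {
	res, err := elastic.NewIndicesCloseService(client).
		Index(index).
		Timeout("30s"). // explicit operation timeout, as documented above
		Do(ctx)
	if err != nil {
		return false, err
	}
	return res.Acknowledged, nil
}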

type IndicesCreateResult

type IndicesCreateResult struct {
	Acknowledged       bool   `json:"acknowledged"`
	ShardsAcknowledged bool   `json:"shards_acknowledged"`
	Index              string `json:"index,omitempty"`
}

IndicesCreateResult is the outcome of creating a new index.

type IndicesCreateService

type IndicesCreateService struct {
	// contains filtered or unexported fields
}

IndicesCreateService creates a new index.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-create-index.html for details.

func NewIndicesCreateService

func NewIndicesCreateService(client *Client) *IndicesCreateService

NewIndicesCreateService returns a new IndicesCreateService.

func (*IndicesCreateService) Body

Body specifies the configuration of the index as a string. It is an alias for BodyString.

func (*IndicesCreateService) BodyJson

func (b *IndicesCreateService) BodyJson(body interface{}) *IndicesCreateService

BodyJson specifies the configuration of the index. The interface{} will be serialized as a JSON document, so use e.g. a map[string]interface{}.

func (*IndicesCreateService) BodyString

func (b *IndicesCreateService) BodyString(body string) *IndicesCreateService

BodyString specifies the configuration of the index as a string.

func (*IndicesCreateService) Do

Do executes the operation.

func (*IndicesCreateService) Index

Index is the name of the index to create.

func (*IndicesCreateService) MasterTimeout

func (s *IndicesCreateService) MasterTimeout(masterTimeout string) *IndicesCreateService

MasterTimeout specifies the timeout for connection to master.

func (*IndicesCreateService) Pretty

func (b *IndicesCreateService) Pretty(pretty bool) *IndicesCreateService

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesCreateService) Timeout

func (s *IndicesCreateService) Timeout(timeout string) *IndicesCreateService

Timeout sets the explicit operation timeout, e.g. "5s".
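
A minimal sketch of creating an index with this service, assuming Do returns *IndicesCreateResult as documented above; the helper, index name and settings body are illustrative rather than part of the package.

import (
	"context"
	"errors"

	"github.com/olivere/elastic"
)

// createIndex is a hypothetical helper: it creates an index from a JSON
// body (settings and mappings) and fails if the request was not acknowledged.
func createIndex(ctx context.Context, client *elastic.Client, name, body string) error {
	res, err := elastic.NewIndicesCreateService(client).
		Index(name).
		BodyString(body).
		Do(ctx)
	if err != nil {
		return err
	}
	if !res.Acknowledged {
		return errors.New("create index " + name + ": not acknowledged")
	}
	return nil
}

A call such as createIndex(ctx, client, "tweets", `{"settings":{"number_of_shards":1}}`) would then create a single-shard index.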

type IndicesDeleteResponse

type IndicesDeleteResponse struct {
	Acknowledged bool `json:"acknowledged"`
}

IndicesDeleteResponse is the response of IndicesDeleteService.Do.

type IndicesDeleteService

type IndicesDeleteService struct {
	// contains filtered or unexported fields
}

IndicesDeleteService allows deleting existing indices.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-delete-index.html for details.

func NewIndicesDeleteService

func NewIndicesDeleteService(client *Client) *IndicesDeleteService

NewIndicesDeleteService creates and initializes a new IndicesDeleteService.

func (*IndicesDeleteService) Do

Do executes the operation.

func (*IndicesDeleteService) Index

Index adds the list of indices to delete. Use `_all` or `*` string to delete all indices.

func (*IndicesDeleteService) MasterTimeout

func (s *IndicesDeleteService) MasterTimeout(masterTimeout string) *IndicesDeleteService

MasterTimeout specifies the timeout for connection to master.

func (*IndicesDeleteService) Pretty

func (s *IndicesDeleteService) Pretty(pretty bool) *IndicesDeleteService

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesDeleteService) Timeout

func (s *IndicesDeleteService) Timeout(timeout string) *IndicesDeleteService

Timeout is an explicit operation timeout.

func (*IndicesDeleteService) Validate

func (s *IndicesDeleteService) Validate() error

Validate checks if the operation is valid.
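
A hedged sketch of deleting indices follows. The helper is hypothetical, and Index is assumed to accept a slice of names here, matching the "list of indices" wording above.

import (
	"context"

	"github.com/olivere/elastic"
)

// deleteIndices is a hypothetical helper: it deletes the given indices.
// Passing `_all` or `*` would delete every index, so use it with care.
func deleteIndices(ctx context.Context, client *elastic.Client, indices []string) (bool, error) {
	res, err := elastic.NewIndicesDeleteService(client).
		Index(indices).
		Do(ctx)
	if err != nil {
		return false, err
	}
	return res.Acknowledged, nil
}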

type IndicesDeleteTemplateResponse

type IndicesDeleteTemplateResponse struct {
	Acknowledged       bool   `json:"acknowledged"`
	ShardsAcknowledged bool   `json:"shards_acknowledged"`
	Index              string `json:"index,omitempty"`
}

IndicesDeleteTemplateResponse is the response of IndicesDeleteTemplateService.Do.

type IndicesDeleteTemplateService

type IndicesDeleteTemplateService struct {
	// contains filtered or unexported fields
}

IndicesDeleteTemplateService deletes index templates. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-templates.html.

func NewIndicesDeleteTemplateService

func NewIndicesDeleteTemplateService(client *Client) *IndicesDeleteTemplateService

NewIndicesDeleteTemplateService creates a new IndicesDeleteTemplateService.

func (*IndicesDeleteTemplateService) Do

Do executes the operation.

func (*IndicesDeleteTemplateService) MasterTimeout

func (s *IndicesDeleteTemplateService) MasterTimeout(masterTimeout string) *IndicesDeleteTemplateService

MasterTimeout specifies the timeout for connection to master.

func (*IndicesDeleteTemplateService) Name

Name is the name of the template.

func (*IndicesDeleteTemplateService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesDeleteTemplateService) Timeout

Timeout is an explicit operation timeout.

func (*IndicesDeleteTemplateService) Validate

func (s *IndicesDeleteTemplateService) Validate() error

Validate checks if the operation is valid.

type IndicesExistsService

type IndicesExistsService struct {
	// contains filtered or unexported fields
}

IndicesExistsService checks if an index or indices exist or not.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-exists.html for details.

func NewIndicesExistsService

func NewIndicesExistsService(client *Client) *IndicesExistsService

NewIndicesExistsService creates and initializes a new IndicesExistsService.

func (*IndicesExistsService) AllowNoIndices

func (s *IndicesExistsService) AllowNoIndices(allowNoIndices bool) *IndicesExistsService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified).

func (*IndicesExistsService) Do

Do executes the operation.

func (*IndicesExistsService) ExpandWildcards

func (s *IndicesExistsService) ExpandWildcards(expandWildcards string) *IndicesExistsService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*IndicesExistsService) IgnoreUnavailable

func (s *IndicesExistsService) IgnoreUnavailable(ignoreUnavailable bool) *IndicesExistsService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*IndicesExistsService) Index

Index is a list of one or more indices to check.

func (*IndicesExistsService) Local

Local, when set, returns local information and does not retrieve the state from master node (default: false).

func (*IndicesExistsService) Pretty

func (s *IndicesExistsService) Pretty(pretty bool) *IndicesExistsService

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesExistsService) Validate

func (s *IndicesExistsService) Validate() error

Validate checks if the operation is valid.
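
As a sketch, checking for an index could look like this. Do is assumed to return a plain (bool, error) since no dedicated response type is documented, and Index is assumed to take a slice of names; the helper itself is hypothetical.

import (
	"context"

	"github.com/olivere/elastic"
)

// indexExists is a hypothetical helper: it reports whether the given index exists.
func indexExists(ctx context.Context, client *elastic.Client, index string) (bool, error) {
	return elastic.NewIndicesExistsService(client).
		Index([]string{index}).
		Do(ctx)
}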

type IndicesExistsTemplateService

type IndicesExistsTemplateService struct {
	// contains filtered or unexported fields
}

IndicesExistsTemplateService checks if a given template exists. See http://www.elastic.co/guide/en/elasticsearch/reference/5.2/indices-templates.html#indices-templates-exists for documentation.

func NewIndicesExistsTemplateService

func NewIndicesExistsTemplateService(client *Client) *IndicesExistsTemplateService

NewIndicesExistsTemplateService creates a new IndicesExistsTemplateService.

func (*IndicesExistsTemplateService) Do

Do executes the operation.

func (*IndicesExistsTemplateService) Local

Local indicates whether to return local information, i.e. do not retrieve the state from master node (default: false).

func (*IndicesExistsTemplateService) Name

Name is the name of the template.

func (*IndicesExistsTemplateService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesExistsTemplateService) Validate

func (s *IndicesExistsTemplateService) Validate() error

Validate checks if the operation is valid.

type IndicesExistsTypeService

type IndicesExistsTypeService struct {
	// contains filtered or unexported fields
}

IndicesExistsTypeService checks if one or more types exist in one or more indices.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-types-exists.html for details.

func NewIndicesExistsTypeService

func NewIndicesExistsTypeService(client *Client) *IndicesExistsTypeService

NewIndicesExistsTypeService creates a new IndicesExistsTypeService.

func (*IndicesExistsTypeService) AllowNoIndices

func (s *IndicesExistsTypeService) AllowNoIndices(allowNoIndices bool) *IndicesExistsTypeService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified).

func (*IndicesExistsTypeService) Do

Do executes the operation.

func (*IndicesExistsTypeService) ExpandWildcards

func (s *IndicesExistsTypeService) ExpandWildcards(expandWildcards string) *IndicesExistsTypeService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*IndicesExistsTypeService) IgnoreUnavailable

func (s *IndicesExistsTypeService) IgnoreUnavailable(ignoreUnavailable bool) *IndicesExistsTypeService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*IndicesExistsTypeService) Index

Index is a list of index names; use `_all` to check the types across all indices.

func (*IndicesExistsTypeService) Local

Local specifies whether to return local information, i.e. do not retrieve the state from master node (default: false).

func (*IndicesExistsTypeService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesExistsTypeService) Type

Type is a list of document types to check.

func (*IndicesExistsTypeService) Validate

func (s *IndicesExistsTypeService) Validate() error

Validate checks if the operation is valid.

type IndicesFlushResponse

type IndicesFlushResponse struct {
	Shards shardsInfo `json:"_shards"`
}

type IndicesFlushService

type IndicesFlushService struct {
	// contains filtered or unexported fields
}

IndicesFlushService allows flushing one or more indices. The flush process of an index frees memory by flushing data to the index storage and clearing the internal transaction log.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-flush.html for details.

func NewIndicesFlushService

func NewIndicesFlushService(client *Client) *IndicesFlushService

NewIndicesFlushService creates a new IndicesFlushService.

func (*IndicesFlushService) AllowNoIndices

func (s *IndicesFlushService) AllowNoIndices(allowNoIndices bool) *IndicesFlushService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified).

func (*IndicesFlushService) Do

Do executes the service.

func (*IndicesFlushService) ExpandWildcards

func (s *IndicesFlushService) ExpandWildcards(expandWildcards string) *IndicesFlushService

ExpandWildcards specifies whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*IndicesFlushService) Force

func (s *IndicesFlushService) Force(force bool) *IndicesFlushService

Force indicates whether a flush should be forced even if it is not necessarily needed, i.e. if no changes will be committed to the index. This is useful if transaction log IDs should be incremented even if no uncommitted changes are present. (This setting can be considered internal.)

func (*IndicesFlushService) IgnoreUnavailable

func (s *IndicesFlushService) IgnoreUnavailable(ignoreUnavailable bool) *IndicesFlushService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*IndicesFlushService) Index

func (s *IndicesFlushService) Index(indices ...string) *IndicesFlushService

Index is a list of index names; use `_all` or empty string for all indices.

func (*IndicesFlushService) Pretty

func (s *IndicesFlushService) Pretty(pretty bool) *IndicesFlushService

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesFlushService) Validate

func (s *IndicesFlushService) Validate() error

Validate checks if the operation is valid.

func (*IndicesFlushService) WaitIfOngoing

func (s *IndicesFlushService) WaitIfOngoing(waitIfOngoing bool) *IndicesFlushService

WaitIfOngoing, if set to true, indicates that the flush operation will block until the flush can be executed if another flush operation is already executing. The default is false and will cause an exception to be thrown on the shard level if another flush operation is already running.
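
A short sketch of flushing indices with this service; the helper is hypothetical and simply waits for any ongoing flush instead of failing.

import (
	"context"

	"github.com/olivere/elastic"
)

// flushIndices is a hypothetical helper: it flushes the given indices and
// blocks if another flush is already running rather than returning an error.
func flushIndices(ctx context.Context, client *elastic.Client, indices ...string) error {
	_, err := elastic.NewIndicesFlushService(client).
		Index(indices...).
		WaitIfOngoing(true).
		Do(ctx)
	return err
}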

type IndicesForcemergeResponse

type IndicesForcemergeResponse struct {
	Shards shardsInfo `json:"_shards"`
}

IndicesForcemergeResponse is the response of IndicesForcemergeService.Do.

type IndicesForcemergeService

type IndicesForcemergeService struct {
	// contains filtered or unexported fields
}

IndicesForcemergeService allows force-merging one or more indices. The merge relates to the number of segments a Lucene index holds within each shard. The force merge operation allows reducing the number of segments by merging them.

See http://www.elastic.co/guide/en/elasticsearch/reference/5.2/indices-forcemerge.html for more information.

func NewIndicesForcemergeService

func NewIndicesForcemergeService(client *Client) *IndicesForcemergeService

NewIndicesForcemergeService creates a new IndicesForcemergeService.

func (*IndicesForcemergeService) AllowNoIndices

func (s *IndicesForcemergeService) AllowNoIndices(allowNoIndices bool) *IndicesForcemergeService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified).

func (*IndicesForcemergeService) Do

Do executes the operation.

func (*IndicesForcemergeService) ExpandWildcards

func (s *IndicesForcemergeService) ExpandWildcards(expandWildcards string) *IndicesForcemergeService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*IndicesForcemergeService) Flush

Flush specifies whether the index should be flushed after performing the operation (default: true).

func (*IndicesForcemergeService) IgnoreUnavailable

func (s *IndicesForcemergeService) IgnoreUnavailable(ignoreUnavailable bool) *IndicesForcemergeService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*IndicesForcemergeService) Index

Index is a list of index names; use `_all` or empty string to perform the operation on all indices.

func (*IndicesForcemergeService) MaxNumSegments

func (s *IndicesForcemergeService) MaxNumSegments(maxNumSegments interface{}) *IndicesForcemergeService

MaxNumSegments specifies the number of segments the index should be merged into (default: dynamic).

func (*IndicesForcemergeService) OnlyExpungeDeletes

func (s *IndicesForcemergeService) OnlyExpungeDeletes(onlyExpungeDeletes bool) *IndicesForcemergeService

OnlyExpungeDeletes specifies whether the operation should only expunge deleted documents.

func (*IndicesForcemergeService) OperationThreading

func (s *IndicesForcemergeService) OperationThreading(operationThreading interface{}) *IndicesForcemergeService

func (*IndicesForcemergeService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesForcemergeService) Validate

func (s *IndicesForcemergeService) Validate() error

Validate checks if the operation is valid.
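
For illustration, a force merge down to a single segment might look like the sketch below. The index name is made up, and Index is assumed to accept names as variadic arguments like the flush service above.

import (
	"context"

	"github.com/olivere/elastic"
)

// forcemergeIndex is a hypothetical helper: it merges an index down to a
// single segment, which can be useful after a large one-off import.
func forcemergeIndex(ctx context.Context, client *elastic.Client, index string) error {
	_, err := elastic.NewIndicesForcemergeService(client).
		Index(index).
		MaxNumSegments(1).
		Do(ctx)
	return err
}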

type IndicesGetFieldMappingService

type IndicesGetFieldMappingService struct {
	// contains filtered or unexported fields
}

IndicesGetFieldMappingService retrieves the mapping definitions for the fields in an index or index/type.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-get-field-mapping.html for details.

func NewGetFieldMappingService

func NewGetFieldMappingService(client *Client) *IndicesGetFieldMappingService

NewGetFieldMappingService is an alias for NewIndicesGetFieldMappingService. Use NewIndicesGetFieldMappingService.

func NewIndicesGetFieldMappingService

func NewIndicesGetFieldMappingService(client *Client) *IndicesGetFieldMappingService

NewIndicesGetFieldMappingService creates a new IndicesGetFieldMappingService.

func (*IndicesGetFieldMappingService) AllowNoIndices

func (s *IndicesGetFieldMappingService) AllowNoIndices(allowNoIndices bool) *IndicesGetFieldMappingService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. This includes `_all` string or when no indices have been specified.

func (*IndicesGetFieldMappingService) Do

func (s *IndicesGetFieldMappingService) Do(ctx context.Context) (map[string]interface{}, error)

Do executes the operation. It returns mapping definitions for an index or index/type.

func (*IndicesGetFieldMappingService) ExpandWildcards

func (s *IndicesGetFieldMappingService) ExpandWildcards(expandWildcards string) *IndicesGetFieldMappingService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*IndicesGetFieldMappingService) Field

Field is a list of fields.

func (*IndicesGetFieldMappingService) IgnoreUnavailable

func (s *IndicesGetFieldMappingService) IgnoreUnavailable(ignoreUnavailable bool) *IndicesGetFieldMappingService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*IndicesGetFieldMappingService) Index

Index is a list of index names.

func (*IndicesGetFieldMappingService) Local

Local indicates whether to return local information, do not retrieve the state from master node (default: false).

func (*IndicesGetFieldMappingService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesGetFieldMappingService) Type

Type is a list of document types.

func (*IndicesGetFieldMappingService) Validate

func (s *IndicesGetFieldMappingService) Validate() error

Validate checks if the operation is valid.

type IndicesGetMappingService

type IndicesGetMappingService struct {
	// contains filtered or unexported fields
}

IndicesGetMappingService retrieves the mapping definitions for an index or index/type.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-get-mapping.html for details.

func NewGetMappingService

func NewGetMappingService(client *Client) *IndicesGetMappingService

NewGetMappingService is an alias for NewIndicesGetMappingService. Use NewIndicesGetMappingService.

func NewIndicesGetMappingService

func NewIndicesGetMappingService(client *Client) *IndicesGetMappingService

NewIndicesGetMappingService creates a new IndicesGetMappingService.

func (*IndicesGetMappingService) AllowNoIndices

func (s *IndicesGetMappingService) AllowNoIndices(allowNoIndices bool) *IndicesGetMappingService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. This includes `_all` string or when no indices have been specified.

func (*IndicesGetMappingService) Do

func (s *IndicesGetMappingService) Do(ctx context.Context) (map[string]interface{}, error)

Do executes the operation. It returns mapping definitions for an index or index/type.

func (*IndicesGetMappingService) ExpandWildcards

func (s *IndicesGetMappingService) ExpandWildcards(expandWildcards string) *IndicesGetMappingService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*IndicesGetMappingService) IgnoreUnavailable

func (s *IndicesGetMappingService) IgnoreUnavailable(ignoreUnavailable bool) *IndicesGetMappingService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*IndicesGetMappingService) Index

Index is a list of index names.

func (*IndicesGetMappingService) Local

Local indicates whether to return local information, do not retrieve the state from master node (default: false).

func (*IndicesGetMappingService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesGetMappingService) Type

Type is a list of document types.

func (*IndicesGetMappingService) Validate

func (s *IndicesGetMappingService) Validate() error

Validate checks if the operation is valid.
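
Since Do is documented above to return a map[string]interface{}, a sketch for dumping the mappings of an index could look like this. The helper is hypothetical, the index name is illustrative, and Index is assumed to be variadic.

import (
	"context"
	"fmt"

	"github.com/olivere/elastic"
)

// printMappings is a hypothetical helper: it fetches and prints the raw
// mapping definitions, keyed by index name in the returned map.
func printMappings(ctx context.Context, client *elastic.Client, index string) error {
	mappings, err := elastic.NewIndicesGetMappingService(client).
		Index(index).
		Do(ctx)
	if err != nil {
		return err
	}
	for name, mapping := range mappings {
		fmt.Printf("%s: %v\n", name, mapping)
	}
	return nil
}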

type IndicesGetResponse

type IndicesGetResponse struct {
	Aliases  map[string]interface{} `json:"aliases"`
	Mappings map[string]interface{} `json:"mappings"`
	Settings map[string]interface{} `json:"settings"`
	Warmers  map[string]interface{} `json:"warmers"`
}

IndicesGetResponse is part of the response of IndicesGetService.Do.

type IndicesGetService

type IndicesGetService struct {
	// contains filtered or unexported fields
}

IndicesGetService retrieves information about one or more indices.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-get-index.html for more details.

func NewIndicesGetService

func NewIndicesGetService(client *Client) *IndicesGetService

NewIndicesGetService creates a new IndicesGetService.

func (*IndicesGetService) AllowNoIndices

func (s *IndicesGetService) AllowNoIndices(allowNoIndices bool) *IndicesGetService

AllowNoIndices indicates whether to ignore if a wildcard expression resolves to no concrete indices (default: false).

func (*IndicesGetService) Do

Do executes the operation.

func (*IndicesGetService) ExpandWildcards

func (s *IndicesGetService) ExpandWildcards(expandWildcards string) *IndicesGetService

ExpandWildcards indicates whether wildcard expressions should get expanded to open or closed indices (default: open).

func (*IndicesGetService) Feature

func (s *IndicesGetService) Feature(features ...string) *IndicesGetService

Feature is a list of features.

func (*IndicesGetService) Human

func (s *IndicesGetService) Human(human bool) *IndicesGetService

Human indicates whether to return version and creation date values in human-readable format (default: false).

func (*IndicesGetService) IgnoreUnavailable

func (s *IndicesGetService) IgnoreUnavailable(ignoreUnavailable bool) *IndicesGetService

IgnoreUnavailable indicates whether to ignore unavailable indexes (default: false).

func (*IndicesGetService) Index

func (s *IndicesGetService) Index(indices ...string) *IndicesGetService

Index is a list of index names.

func (*IndicesGetService) Local

func (s *IndicesGetService) Local(local bool) *IndicesGetService

Local indicates whether to return local information, i.e. do not retrieve the state from master node (default: false).

func (*IndicesGetService) Pretty

func (s *IndicesGetService) Pretty(pretty bool) *IndicesGetService

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesGetService) Validate

func (s *IndicesGetService) Validate() error

Validate checks if the operation is valid.

type IndicesGetSettingsResponse

type IndicesGetSettingsResponse struct {
	Settings map[string]interface{} `json:"settings"`
}

IndicesGetSettingsResponse is the response of IndicesGetSettingsService.Do.

type IndicesGetSettingsService

type IndicesGetSettingsService struct {
	// contains filtered or unexported fields
}

IndicesGetSettingsService allows retrieving the settings of one or more indices.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-get-settings.html for more details.

func NewIndicesGetSettingsService

func NewIndicesGetSettingsService(client *Client) *IndicesGetSettingsService

NewIndicesGetSettingsService creates a new IndicesGetSettingsService.

func (*IndicesGetSettingsService) AllowNoIndices

func (s *IndicesGetSettingsService) AllowNoIndices(allowNoIndices bool) *IndicesGetSettingsService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified).

func (*IndicesGetSettingsService) Do

Do executes the operation.

func (*IndicesGetSettingsService) ExpandWildcards

func (s *IndicesGetSettingsService) ExpandWildcards(expandWildcards string) *IndicesGetSettingsService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both. Options: open, closed, none, all. Default: open,closed.

func (*IndicesGetSettingsService) FlatSettings

func (s *IndicesGetSettingsService) FlatSettings(flatSettings bool) *IndicesGetSettingsService

FlatSettings indicates whether to return settings in flat format (default: false).

func (*IndicesGetSettingsService) IgnoreUnavailable

func (s *IndicesGetSettingsService) IgnoreUnavailable(ignoreUnavailable bool) *IndicesGetSettingsService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*IndicesGetSettingsService) Index

Index is a list of index names; use `_all` or empty string to perform the operation on all indices.

func (*IndicesGetSettingsService) Local

Local indicates whether to return local information, do not retrieve the state from master node (default: false).

func (*IndicesGetSettingsService) Name

Name are the names of the settings that should be included.

func (*IndicesGetSettingsService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesGetSettingsService) Validate

func (s *IndicesGetSettingsService) Validate() error

Validate checks if the operation is valid.
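
A hedged sketch of reading index settings in flat format follows. The helper and index name are illustrative, Index is assumed to be variadic, and the exact shape of the Do result is not asserted beyond printing it.

import (
	"context"
	"fmt"

	"github.com/olivere/elastic"
)

// printSettings is a hypothetical helper: it fetches the settings of an
// index as flat dotted keys and prints the raw result.
func printSettings(ctx context.Context, client *elastic.Client, index string) error {
	res, err := elastic.NewIndicesGetSettingsService(client).
		Index(index).
		FlatSettings(true). // e.g. "index.number_of_shards" instead of nested maps
		Do(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("%+v\n", res)
	return nil
}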

type IndicesGetTemplateResponse

type IndicesGetTemplateResponse struct {
	Order    int                    `json:"order,omitempty"`
	Version  int                    `json:"version,omitempty"`
	Template string                 `json:"template,omitempty"`
	Settings map[string]interface{} `json:"settings,omitempty"`
	Mappings map[string]interface{} `json:"mappings,omitempty"`
	Aliases  map[string]interface{} `json:"aliases,omitempty"`
}

IndicesGetTemplateResponse is the response of IndicesGetTemplateService.Do.

type IndicesGetTemplateService

type IndicesGetTemplateService struct {
	// contains filtered or unexported fields
}

IndicesGetTemplateService returns an index template. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-templates.html.

func NewIndicesGetTemplateService

func NewIndicesGetTemplateService(client *Client) *IndicesGetTemplateService

NewIndicesGetTemplateService creates a new IndicesGetTemplateService.

func (*IndicesGetTemplateService) Do

Do executes the operation.

func (*IndicesGetTemplateService) FlatSettings

func (s *IndicesGetTemplateService) FlatSettings(flatSettings bool) *IndicesGetTemplateService

FlatSettings, when true, returns settings in flat format (default: false).

func (*IndicesGetTemplateService) Local

Local indicates whether to return local information, i.e. do not retrieve the state from master node (default: false).

func (*IndicesGetTemplateService) Name

Name is the name of the index template.

func (*IndicesGetTemplateService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesGetTemplateService) Validate

func (s *IndicesGetTemplateService) Validate() error

Validate checks if the operation is valid.

type IndicesOpenResponse

type IndicesOpenResponse struct {
	Acknowledged       bool   `json:"acknowledged"`
	ShardsAcknowledged bool   `json:"shards_acknowledged"`
	Index              string `json:"index,omitempty"`
}

IndicesOpenResponse is the response of IndicesOpenService.Do.

type IndicesOpenService

type IndicesOpenService struct {
	// contains filtered or unexported fields
}

IndicesOpenService opens an index.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-open-close.html for details.

func NewIndicesOpenService

func NewIndicesOpenService(client *Client) *IndicesOpenService

NewIndicesOpenService creates and initializes a new IndicesOpenService.

func (*IndicesOpenService) AllowNoIndices

func (s *IndicesOpenService) AllowNoIndices(allowNoIndices bool) *IndicesOpenService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified).

func (*IndicesOpenService) Do

Do executes the operation.

func (*IndicesOpenService) ExpandWildcards

func (s *IndicesOpenService) ExpandWildcards(expandWildcards string) *IndicesOpenService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*IndicesOpenService) IgnoreUnavailable

func (s *IndicesOpenService) IgnoreUnavailable(ignoreUnavailable bool) *IndicesOpenService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*IndicesOpenService) Index

func (s *IndicesOpenService) Index(index string) *IndicesOpenService

Index is the name of the index to open.

func (*IndicesOpenService) MasterTimeout

func (s *IndicesOpenService) MasterTimeout(masterTimeout string) *IndicesOpenService

MasterTimeout specifies the timeout for connection to master.

func (*IndicesOpenService) Pretty

func (s *IndicesOpenService) Pretty(pretty bool) *IndicesOpenService

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesOpenService) Timeout

func (s *IndicesOpenService) Timeout(timeout string) *IndicesOpenService

Timeout is an explicit operation timeout.

func (*IndicesOpenService) Validate

func (s *IndicesOpenService) Validate() error

Validate checks if the operation is valid.

type IndicesPutMappingService

type IndicesPutMappingService struct {
	// contains filtered or unexported fields
}

IndicesPutMappingService allows registering a specific mapping definition for a specific type.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-put-mapping.html for details.

func NewIndicesPutMappingService

func NewIndicesPutMappingService(client *Client) *IndicesPutMappingService

NewIndicesPutMappingService creates a new IndicesPutMappingService.

func NewPutMappingService

func NewPutMappingService(client *Client) *IndicesPutMappingService

NewPutMappingService is an alias for NewIndicesPutMappingService. Use NewIndicesPutMappingService.

func (*IndicesPutMappingService) AllowNoIndices

func (s *IndicesPutMappingService) AllowNoIndices(allowNoIndices bool) *IndicesPutMappingService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. This includes `_all` string or when no indices have been specified.

func (*IndicesPutMappingService) BodyJson

func (s *IndicesPutMappingService) BodyJson(mapping map[string]interface{}) *IndicesPutMappingService

BodyJson contains the mapping definition.

func (*IndicesPutMappingService) BodyString

BodyString is the mapping definition serialized as a string.

func (*IndicesPutMappingService) Do

Do executes the operation.

func (*IndicesPutMappingService) ExpandWildcards

func (s *IndicesPutMappingService) ExpandWildcards(expandWildcards string) *IndicesPutMappingService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*IndicesPutMappingService) IgnoreUnavailable

func (s *IndicesPutMappingService) IgnoreUnavailable(ignoreUnavailable bool) *IndicesPutMappingService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*IndicesPutMappingService) Index

Index is a list of index names the mapping should be added to (supports wildcards); use `_all` or omit to add the mapping on all indices.

func (*IndicesPutMappingService) MasterTimeout

func (s *IndicesPutMappingService) MasterTimeout(masterTimeout string) *IndicesPutMappingService

MasterTimeout specifies the timeout for connection to master.

func (*IndicesPutMappingService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesPutMappingService) Timeout

Timeout is an explicit operation timeout.

func (*IndicesPutMappingService) Type

Type is the name of the document type.

func (*IndicesPutMappingService) UpdateAllTypes

func (s *IndicesPutMappingService) UpdateAllTypes(updateAllTypes bool) *IndicesPutMappingService

UpdateAllTypes, if true, indicates that all fields that span multiple indices should be updated (default: false).

func (*IndicesPutMappingService) Validate

func (s *IndicesPutMappingService) Validate() error

Validate checks if the operation is valid.
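
A minimal sketch of adding a field to a type's mapping; the helper, index, type and field names are made up, and Index is assumed to be variadic.

import (
	"context"

	"github.com/olivere/elastic"
)

// addTagsField is a hypothetical helper: it adds a keyword field named
// "tags" to the mapping of the "doc" type of an index.
func addTagsField(ctx context.Context, client *elastic.Client, index string) error {
	mapping := map[string]interface{}{
		"properties": map[string]interface{}{
			"tags": map[string]interface{}{"type": "keyword"},
		},
	}
	_, err := elastic.NewIndicesPutMappingService(client).
		Index(index).
		Type("doc").
		BodyJson(mapping).
		Do(ctx)
	return err
}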

type IndicesPutSettingsResponse

type IndicesPutSettingsResponse struct {
	Acknowledged       bool   `json:"acknowledged"`
	ShardsAcknowledged bool   `json:"shards_acknowledged"`
	Index              string `json:"index,omitempty"`
}

IndicesPutSettingsResponse is the response of IndicesPutSettingsService.Do.

type IndicesPutSettingsService

type IndicesPutSettingsService struct {
	// contains filtered or unexported fields
}

IndicesPutSettingsService changes specific index level settings in real time.

See the documentation at https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-update-settings.html.

func NewIndicesPutSettingsService

func NewIndicesPutSettingsService(client *Client) *IndicesPutSettingsService

NewIndicesPutSettingsService creates a new IndicesPutSettingsService.

func (*IndicesPutSettingsService) AllowNoIndices

func (s *IndicesPutSettingsService) AllowNoIndices(allowNoIndices bool) *IndicesPutSettingsService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified).

func (*IndicesPutSettingsService) BodyJson

func (s *IndicesPutSettingsService) BodyJson(body interface{}) *IndicesPutSettingsService

BodyJson is documented as: The index settings to be updated.

func (*IndicesPutSettingsService) BodyString

BodyString is documented as: The index settings to be updated.

func (*IndicesPutSettingsService) Do

Do executes the operation.

func (*IndicesPutSettingsService) ExpandWildcards

func (s *IndicesPutSettingsService) ExpandWildcards(expandWildcards string) *IndicesPutSettingsService

ExpandWildcards specifies whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*IndicesPutSettingsService) FlatSettings

func (s *IndicesPutSettingsService) FlatSettings(flatSettings bool) *IndicesPutSettingsService

FlatSettings indicates whether to return settings in flat format (default: false).

func (*IndicesPutSettingsService) IgnoreUnavailable

func (s *IndicesPutSettingsService) IgnoreUnavailable(ignoreUnavailable bool) *IndicesPutSettingsService

IgnoreUnavailable specifies whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*IndicesPutSettingsService) Index

Index is a list of index names the settings should be applied to (supports wildcards); use `_all` or omit to update the settings on all indices.

func (*IndicesPutSettingsService) MasterTimeout

func (s *IndicesPutSettingsService) MasterTimeout(masterTimeout string) *IndicesPutSettingsService

MasterTimeout is the timeout for connection to master.

func (*IndicesPutSettingsService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesPutSettingsService) Validate

func (s *IndicesPutSettingsService) Validate() error

Validate checks if the operation is valid.

type IndicesPutTemplateResponse

type IndicesPutTemplateResponse struct {
	Acknowledged       bool   `json:"acknowledged"`
	ShardsAcknowledged bool   `json:"shards_acknowledged"`
	Index              string `json:"index,omitempty"`
}

IndicesPutTemplateResponse is the response of IndicesPutTemplateService.Do.

type IndicesPutTemplateService

type IndicesPutTemplateService struct {
	// contains filtered or unexported fields
}

IndicesPutTemplateService creates or updates index templates. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-templates.html.

func NewIndicesPutTemplateService

func NewIndicesPutTemplateService(client *Client) *IndicesPutTemplateService

NewIndicesPutTemplateService creates a new IndicesPutTemplateService.

func (*IndicesPutTemplateService) BodyJson

func (s *IndicesPutTemplateService) BodyJson(body interface{}) *IndicesPutTemplateService

BodyJson is documented as: The template definition.

func (*IndicesPutTemplateService) BodyString

BodyString is documented as: The template definition.

func (*IndicesPutTemplateService) Cause

Cause describes the cause for this index template creation. This is currently undocumented, but part of the Java source.

func (*IndicesPutTemplateService) Create

Create indicates whether the index template should only be added if new or can also replace an existing one.

func (*IndicesPutTemplateService) Do

Do executes the operation.

func (*IndicesPutTemplateService) FlatSettings

func (s *IndicesPutTemplateService) FlatSettings(flatSettings bool) *IndicesPutTemplateService

FlatSettings indicates whether to return settings in flat format (default: false).

func (*IndicesPutTemplateService) MasterTimeout

func (s *IndicesPutTemplateService) MasterTimeout(masterTimeout string) *IndicesPutTemplateService

MasterTimeout specifies the timeout for connection to master.

func (*IndicesPutTemplateService) Name

Name is the name of the index template.

func (*IndicesPutTemplateService) Order

func (s *IndicesPutTemplateService) Order(order interface{}) *IndicesPutTemplateService

Order is the order for this template when merging multiple matching ones (higher numbers are merged later, overriding the lower numbers).

func (*IndicesPutTemplateService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesPutTemplateService) Timeout

Timeout is an explicit operation timeout.

func (*IndicesPutTemplateService) Validate

func (s *IndicesPutTemplateService) Validate() error

Validate checks if the operation is valid.

func (*IndicesPutTemplateService) Version

Version sets the version number for this template.
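
As a sketch, registering an index template for time-based indices might look like this; the helper, template name, pattern and settings are illustrative.

import (
	"context"

	"github.com/olivere/elastic"
)

// putLogsTemplate is a hypothetical helper: it registers a template that
// applies a single-shard setting to any index matching "logs-*".
func putLogsTemplate(ctx context.Context, client *elastic.Client) error {
	const body = `{
		"index_patterns": ["logs-*"],
		"settings": { "number_of_shards": 1 }
	}`
	_, err := elastic.NewIndicesPutTemplateService(client).
		Name("logs-template").
		BodyString(body).
		Order(0).
		Do(ctx)
	return err
}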

type IndicesRolloverResponse

type IndicesRolloverResponse struct {
	OldIndex           string          `json:"old_index"`
	NewIndex           string          `json:"new_index"`
	RolledOver         bool            `json:"rolled_over"`
	DryRun             bool            `json:"dry_run"`
	Acknowledged       bool            `json:"acknowledged"`
	ShardsAcknowledged bool            `json:"shards_acknowledged"`
	Conditions         map[string]bool `json:"conditions"`
}

IndicesRolloverResponse is the response of IndicesRolloverService.Do.

type IndicesRolloverService

type IndicesRolloverService struct {
	// contains filtered or unexported fields
}

IndicesRolloverService rolls an alias over to a new index when the existing index is considered to be too large or too old.

It is documented at https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-rollover-index.html.

func NewIndicesRolloverService

func NewIndicesRolloverService(client *Client) *IndicesRolloverService

NewIndicesRolloverService creates a new IndicesRolloverService.

func (*IndicesRolloverService) AddCondition

func (s *IndicesRolloverService) AddCondition(name string, value interface{}) *IndicesRolloverService

AddCondition adds a condition to the rollover decision.

func (*IndicesRolloverService) AddMapping

func (s *IndicesRolloverService) AddMapping(typ string, mapping interface{}) *IndicesRolloverService

AddMapping adds a mapping for the given type.

func (*IndicesRolloverService) AddMaxIndexAgeCondition

func (s *IndicesRolloverService) AddMaxIndexAgeCondition(time string) *IndicesRolloverService

AddMaxIndexAgeCondition adds a condition to set the max index age.

func (*IndicesRolloverService) AddMaxIndexDocsCondition

func (s *IndicesRolloverService) AddMaxIndexDocsCondition(docs int64) *IndicesRolloverService

AddMaxIndexDocsCondition adds a condition to set the max documents in the index.

func (*IndicesRolloverService) AddSetting

func (s *IndicesRolloverService) AddSetting(name string, value interface{}) *IndicesRolloverService

AddSetting adds an index setting.

func (*IndicesRolloverService) Alias

Alias is the name of the alias to rollover.

func (*IndicesRolloverService) BodyJson

func (s *IndicesRolloverService) BodyJson(body interface{}) *IndicesRolloverService

BodyJson sets the conditions that need to be met for executing rollover, specified as a serializable JSON instance which is sent as the body of the request.

func (*IndicesRolloverService) BodyString

BodyString sets the conditions that need to be met for executing rollover, specified as a string which is sent as the body of the request.

func (*IndicesRolloverService) Conditions

func (s *IndicesRolloverService) Conditions(conditions map[string]interface{}) *IndicesRolloverService

Conditions allows to specify all conditions as a dictionary.

func (*IndicesRolloverService) Do

Do executes the operation.

func (*IndicesRolloverService) DryRun

DryRun, when set, specifies that only conditions are checked without performing the actual rollover.

func (*IndicesRolloverService) Mappings

func (s *IndicesRolloverService) Mappings(mappings map[string]interface{}) *IndicesRolloverService

Mappings adds the index mappings.

func (*IndicesRolloverService) MasterTimeout

func (s *IndicesRolloverService) MasterTimeout(masterTimeout string) *IndicesRolloverService

MasterTimeout specifies the timeout for connection to master.

func (*IndicesRolloverService) NewIndex

func (s *IndicesRolloverService) NewIndex(newIndex string) *IndicesRolloverService

NewIndex is the name of the rollover index.

func (*IndicesRolloverService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesRolloverService) Settings

func (s *IndicesRolloverService) Settings(settings map[string]interface{}) *IndicesRolloverService

Settings adds the index settings.

func (*IndicesRolloverService) Timeout

Timeout sets an explicit operation timeout.

func (*IndicesRolloverService) Validate

func (s *IndicesRolloverService) Validate() error

Validate checks if the operation is valid.

func (*IndicesRolloverService) WaitForActiveShards

func (s *IndicesRolloverService) WaitForActiveShards(waitForActiveShards string) *IndicesRolloverService

WaitForActiveShards sets the number of active shards to wait for on the newly created rollover index before the operation returns.
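
A hedged sketch of a rollover driven by age and document count; the helper, alias name and thresholds are made up, and Do is assumed to return *IndicesRolloverResponse as documented above.

import (
	"context"
	"fmt"

	"github.com/olivere/elastic"
)

// rolloverLogs is a hypothetical helper: it rolls the "logs-write" alias
// over to a new index once the current one is 7 days old or holds more
// than one million documents.
func rolloverLogs(ctx context.Context, client *elastic.Client) error {
	res, err := elastic.NewIndicesRolloverService(client).
		Alias("logs-write").
		AddMaxIndexAgeCondition("7d").
		AddMaxIndexDocsCondition(1000000).
		Do(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("rolled over: %v (old=%s, new=%s)\n", res.RolledOver, res.OldIndex, res.NewIndex)
	return nil
}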

type IndicesSegmentsResponse

type IndicesSegmentsResponse struct {
	// Shards provides information returned from shards.
	Shards shardsInfo `json:"_shards"`

	// Indices provides a map into the stats of an index.
	// The key of the map is the index name.
	Indices map[string]*IndexSegments `json:"indices,omitempty"`
}

IndicesSegmentsResponse is the response of IndicesSegmentsService.Do.

type IndicesSegmentsService

type IndicesSegmentsService struct {
	// contains filtered or unexported fields
}

IndicesSegmentsService provides low-level segment information that a Lucene index (shard level) is built with. It can be used to provide more information on the state of a shard and an index, possibly optimization information, data "wasted" on deletes, and so on.

Find further documentation at https://www.elastic.co/guide/en/elasticsearch/reference/6.1/indices-segments.html.

func NewIndicesSegmentsService

func NewIndicesSegmentsService(client *Client) *IndicesSegmentsService

NewIndicesSegmentsService creates a new IndicesSegmentsService.

func (*IndicesSegmentsService) AllowNoIndices

func (s *IndicesSegmentsService) AllowNoIndices(allowNoIndices bool) *IndicesSegmentsService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified).

func (*IndicesSegmentsService) Do

Do executes the operation.

func (*IndicesSegmentsService) ExpandWildcards

func (s *IndicesSegmentsService) ExpandWildcards(expandWildcards string) *IndicesSegmentsService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*IndicesSegmentsService) Human

Human, when set to true, returns time and byte-values in human-readable format.

func (*IndicesSegmentsService) IgnoreUnavailable

func (s *IndicesSegmentsService) IgnoreUnavailable(ignoreUnavailable bool) *IndicesSegmentsService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*IndicesSegmentsService) Index

Index is a comma-separated list of index names; use `_all` or empty string to perform the operation on all indices.

func (*IndicesSegmentsService) OperationThreading

func (s *IndicesSegmentsService) OperationThreading(operationThreading interface{}) *IndicesSegmentsService

OperationThreading is undocumented in Elasticsearch as of now.

func (*IndicesSegmentsService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesSegmentsService) Validate

func (s *IndicesSegmentsService) Validate() error

Validate checks if the operation is valid.

func (*IndicesSegmentsService) Verbose

func (s *IndicesSegmentsService) Verbose(verbose bool) *IndicesSegmentsService

Verbose, when set to true, includes detailed memory usage by Lucene.

type IndicesShrinkResponse

type IndicesShrinkResponse struct {
	Acknowledged       bool   `json:"acknowledged"`
	ShardsAcknowledged bool   `json:"shards_acknowledged"`
	Index              string `json:"index,omitempty"`
}

IndicesShrinkResponse is the response of IndicesShrinkService.Do.

type IndicesShrinkService

type IndicesShrinkService struct {
	// contains filtered or unexported fields
}

IndicesShrinkService allows you to shrink an existing index into a new index with fewer primary shards.

For further details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-shrink-index.html.

func NewIndicesShrinkService

func NewIndicesShrinkService(client *Client) *IndicesShrinkService

NewIndicesShrinkService creates a new IndicesShrinkService.

func (*IndicesShrinkService) BodyJson

func (s *IndicesShrinkService) BodyJson(body interface{}) *IndicesShrinkService

BodyJson is the configuration for the target index (`settings` and `aliases`) defined as a JSON-serializable instance to be sent as the request body.

func (*IndicesShrinkService) BodyString

func (s *IndicesShrinkService) BodyString(body string) *IndicesShrinkService

BodyString is the configuration for the target index (`settings` and `aliases`) defined as a string to send as the request body.

func (*IndicesShrinkService) Do

Do executes the operation.

func (*IndicesShrinkService) MasterTimeout

func (s *IndicesShrinkService) MasterTimeout(masterTimeout string) *IndicesShrinkService

MasterTimeout specifies the timeout for connection to master.

func (*IndicesShrinkService) Pretty

func (s *IndicesShrinkService) Pretty(pretty bool) *IndicesShrinkService

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesShrinkService) Source

Source is the name of the source index to shrink.

func (*IndicesShrinkService) Target

Target is the name of the target index to shrink into.

func (*IndicesShrinkService) Timeout

func (s *IndicesShrinkService) Timeout(timeout string) *IndicesShrinkService

Timeout is an explicit operation timeout.

func (*IndicesShrinkService) Validate

func (s *IndicesShrinkService) Validate() error

Validate checks if the operation is valid.

func (*IndicesShrinkService) WaitForActiveShards

func (s *IndicesShrinkService) WaitForActiveShards(waitForActiveShards string) *IndicesShrinkService

WaitForActiveShards sets the number of active shards to wait for on the shrunken index before the operation returns.
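
A minimal sketch of shrinking an index into a single-shard target; the helper, index names and settings are illustrative. Note that Elasticsearch requires the source index to be read-only and fully allocated on one node before shrinking.

import (
	"context"

	"github.com/olivere/elastic"
)

// shrinkIndex is a hypothetical helper: it shrinks a source index into a
// new target index with a single primary shard.
func shrinkIndex(ctx context.Context, client *elastic.Client, source, target string) error {
	body := map[string]interface{}{
		"settings": map[string]interface{}{
			"index.number_of_shards": 1,
		},
	}
	_, err := elastic.NewIndicesShrinkService(client).
		Source(source).
		Target(target).
		BodyJson(body).
		Do(ctx)
	return err
}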

type IndicesStatsResponse

type IndicesStatsResponse struct {
	// Shards provides information returned from shards.
	Shards shardsInfo `json:"_shards"`

	// All provides summary stats about all indices.
	All *IndexStats `json:"_all,omitempty"`

	// Indices provides a map into the stats of an index. The key of the
	// map is the index name.
	Indices map[string]*IndexStats `json:"indices,omitempty"`
}

IndicesStatsResponse is the response of IndicesStatsService.Do.

type IndicesStatsService

type IndicesStatsService struct {
	// contains filtered or unexported fields
}

IndicesStatsService provides stats on various metrics of one or more indices. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-stats.html.

func NewIndicesStatsService

func NewIndicesStatsService(client *Client) *IndicesStatsService

NewIndicesStatsService creates a new IndicesStatsService.

func (*IndicesStatsService) CompletionFields

func (s *IndicesStatsService) CompletionFields(completionFields ...string) *IndicesStatsService

CompletionFields is a list of fields for `fielddata` and `suggest` index metric (supports wildcards).

func (*IndicesStatsService) Do

Do executes the operation.

func (*IndicesStatsService) FielddataFields

func (s *IndicesStatsService) FielddataFields(fielddataFields ...string) *IndicesStatsService

FielddataFields is a list of fields for `fielddata` index metric (supports wildcards).

func (*IndicesStatsService) Fields

func (s *IndicesStatsService) Fields(fields ...string) *IndicesStatsService

Fields is a list of fields for `fielddata` and `completion` index metric (supports wildcards).

func (*IndicesStatsService) Groups

func (s *IndicesStatsService) Groups(groups ...string) *IndicesStatsService

Groups is a list of search groups for `search` index metric.

func (*IndicesStatsService) Human

func (s *IndicesStatsService) Human(human bool) *IndicesStatsService

Human indicates whether to return time and byte values in human-readable format.


func (*IndicesStatsService) Index

func (s *IndicesStatsService) Index(indices ...string) *IndicesStatsService

Index is the list of index names; use `_all` or empty string to perform the operation on all indices.

func (*IndicesStatsService) Level

Level returns stats aggregated at cluster, index or shard level.

func (*IndicesStatsService) Metric

func (s *IndicesStatsService) Metric(metric ...string) *IndicesStatsService

Metric limits the information returned to the specific metrics. Options are: docs, store, indexing, get, search, completion, fielddata, flush, merge, query_cache, refresh, suggest, and warmer.

func (*IndicesStatsService) Pretty

func (s *IndicesStatsService) Pretty(pretty bool) *IndicesStatsService

Pretty indicates that the JSON response be indented and human readable.

func (*IndicesStatsService) Type

func (s *IndicesStatsService) Type(types ...string) *IndicesStatsService

Type is a list of document types for the `indexing` index metric.

func (*IndicesStatsService) Validate

func (s *IndicesStatsService) Validate() error

Validate checks if the operation is valid.

type IngestDeletePipelineResponse

type IngestDeletePipelineResponse struct {
	Acknowledged       bool   `json:"acknowledged"`
	ShardsAcknowledged bool   `json:"shards_acknowledged"`
	Index              string `json:"index,omitempty"`
}

IngestDeletePipelineResponse is the response of IngestDeletePipelineService.Do.

type IngestDeletePipelineService

type IngestDeletePipelineService struct {
	// contains filtered or unexported fields
}

IngestDeletePipelineService deletes pipelines by ID. It is documented at https://www.elastic.co/guide/en/elasticsearch/reference/6.0/delete-pipeline-api.html.

func NewIngestDeletePipelineService

func NewIngestDeletePipelineService(client *Client) *IngestDeletePipelineService

NewIngestDeletePipelineService creates a new IngestDeletePipelineService.

func (*IngestDeletePipelineService) Do

Do executes the operation.

func (*IngestDeletePipelineService) Id

Id is documented as: Pipeline ID.

func (*IngestDeletePipelineService) MasterTimeout

func (s *IngestDeletePipelineService) MasterTimeout(masterTimeout string) *IngestDeletePipelineService

MasterTimeout is documented as: Explicit operation timeout for connection to master node.

func (*IngestDeletePipelineService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IngestDeletePipelineService) Timeout

Timeout is documented as: Explicit operation timeout.

func (*IngestDeletePipelineService) Validate

func (s *IngestDeletePipelineService) Validate() error

Validate checks if the operation is valid.

type IngestGetPipeline

type IngestGetPipeline struct {
	ID     string                 `json:"id"`
	Config map[string]interface{} `json:"config"`
}

type IngestGetPipelineResponse

type IngestGetPipelineResponse map[string]*IngestGetPipeline

IngestGetPipelineResponse is the response of IngestGetPipelineService.Do.

type IngestGetPipelineService

type IngestGetPipelineService struct {
	// contains filtered or unexported fields
}

IngestGetPipelineService returns pipelines based on ID. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/get-pipeline-api.html for documentation.

func NewIngestGetPipelineService

func NewIngestGetPipelineService(client *Client) *IngestGetPipelineService

NewIngestGetPipelineService creates a new IngestGetPipelineService.

func (*IngestGetPipelineService) Do

Do executes the operation.

func (*IngestGetPipelineService) Id

Id is a list of pipeline ids. Wildcards supported.

func (*IngestGetPipelineService) MasterTimeout

func (s *IngestGetPipelineService) MasterTimeout(masterTimeout string) *IngestGetPipelineService

MasterTimeout is an explicit operation timeout for connection to master node.

func (*IngestGetPipelineService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IngestGetPipelineService) Validate

func (s *IngestGetPipelineService) Validate() error

Validate checks if the operation is valid.

type IngestPutPipelineResponse

type IngestPutPipelineResponse struct {
	Acknowledged       bool   `json:"acknowledged"`
	ShardsAcknowledged bool   `json:"shards_acknowledged"`
	Index              string `json:"index,omitempty"`
}

IngestPutPipelineResponse is the response of IngestPutPipelineService.Do.

type IngestPutPipelineService

type IngestPutPipelineService struct {
	// contains filtered or unexported fields
}

IngestPutPipelineService adds pipelines and updates existing pipelines in the cluster.

It is documented at https://www.elastic.co/guide/en/elasticsearch/reference/6.0/put-pipeline-api.html.
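
A minimal sketch of registering a pipeline, assuming a *Client created elsewhere. The pipeline ID and the rename processor are illustrative, and the Id and BodyString setters (whose signatures are not shown above) are assumed to take plain strings:

import (
	"context"
	"log"

	"github.com/olivere/elastic"
)

// putRenamePipeline creates (or updates) a hypothetical "rename-user" pipeline
// with a single rename processor.
func putRenamePipeline(ctx context.Context, client *elastic.Client) {
	pipeline := `{
		"description": "rename the user field",
		"processors": [
			{ "rename": { "field": "user", "target_field": "user.name" } }
		]
	}`
	res, err := elastic.NewIngestPutPipelineService(client).
		Id("rename-user").
		BodyString(pipeline).
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if !res.Acknowledged {
		log.Println("pipeline creation was not acknowledged")
	}
}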

func NewIngestPutPipelineService

func NewIngestPutPipelineService(client *Client) *IngestPutPipelineService

NewIngestPutPipelineService creates a new IngestPutPipelineService.

func (*IngestPutPipelineService) BodyJson

func (s *IngestPutPipelineService) BodyJson(body interface{}) *IngestPutPipelineService

BodyJson is the ingest definition, defined as a JSON-serializable document. Use e.g. a map[string]interface{} here.

func (*IngestPutPipelineService) BodyString

BodyString is the ingest definition, specified as a string.

func (*IngestPutPipelineService) Do

Do executes the operation.

func (*IngestPutPipelineService) Id

Id is the pipeline ID.

func (*IngestPutPipelineService) MasterTimeout

func (s *IngestPutPipelineService) MasterTimeout(masterTimeout string) *IngestPutPipelineService

MasterTimeout is an explicit operation timeout for connection to master node.

func (*IngestPutPipelineService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IngestPutPipelineService) Timeout

Timeout specifies an explicit operation timeout.

func (*IngestPutPipelineService) Validate

func (s *IngestPutPipelineService) Validate() error

Validate checks if the operation is valid.

type IngestSimulateDocumentResult

type IngestSimulateDocumentResult struct {
	Doc              map[string]interface{}           `json:"doc"`
	ProcessorResults []*IngestSimulateProcessorResult `json:"processor_results"`
}

type IngestSimulatePipelineResponse

type IngestSimulatePipelineResponse struct {
	Docs []*IngestSimulateDocumentResult `json:"docs"`
}

IngestSimulatePipelineResponse is the response of IngestSimulatePipeline.Do.

type IngestSimulatePipelineService

type IngestSimulatePipelineService struct {
	// contains filtered or unexported fields
}

IngestSimulatePipelineService executes a specific pipeline against the set of documents provided in the body of the request.

The API is documented at https://www.elastic.co/guide/en/elasticsearch/reference/6.0/simulate-pipeline-api.html.
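
A minimal sketch of simulating an ad-hoc pipeline against a test document, assuming a *Client created elsewhere. The processor and document are illustrative, and BodyString is assumed to take the simulate definition as a plain string:

import (
	"context"
	"fmt"
	"log"

	"github.com/olivere/elastic"
)

// simulateLowercase runs a one-processor pipeline against a single test
// document and prints each resulting document.
func simulateLowercase(ctx context.Context, client *elastic.Client) {
	body := `{
		"pipeline": {
			"processors": [ { "lowercase": { "field": "message" } } ]
		},
		"docs": [
			{ "_source": { "message": "HELLO World" } }
		]
	}`
	res, err := elastic.NewIngestSimulatePipelineService(client).
		BodyString(body).
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, doc := range res.Docs {
		fmt.Printf("resulting doc: %v\n", doc.Doc)
	}
}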

func NewIngestSimulatePipelineService

func NewIngestSimulatePipelineService(client *Client) *IngestSimulatePipelineService

NewIngestSimulatePipelineService creates a new IngestSimulatePipeline.

func (*IngestSimulatePipelineService) BodyJson

func (s *IngestSimulatePipelineService) BodyJson(body interface{}) *IngestSimulatePipelineService

BodyJson is the ingest definition, defined as a JSON-serializable simulate definition. Use e.g. a map[string]interface{} here.

func (*IngestSimulatePipelineService) BodyString

BodyString is the simulate definition, defined as a string.

func (*IngestSimulatePipelineService) Do

Do executes the operation.

func (*IngestSimulatePipelineService) Id

Id specifies the pipeline ID.

func (*IngestSimulatePipelineService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*IngestSimulatePipelineService) Validate

func (s *IngestSimulatePipelineService) Validate() error

Validate checks if the operation is valid.

func (*IngestSimulatePipelineService) Verbose

Verbose mode displays data output for each processor in the executed pipeline.

type IngestSimulateProcessorResult

type IngestSimulateProcessorResult struct {
	ProcessorTag string                 `json:"tag"`
	Doc          map[string]interface{} `json:"doc"`
}

type InnerHit

type InnerHit struct {
	// contains filtered or unexported fields
}

InnerHit implements a simple join for parent/child, nested, and even top-level documents in Elasticsearch. It is an experimental feature for Elasticsearch versions 1.5 (or greater). See http://www.elastic.co/guide/en/elasticsearch/reference/5.2/search-request-inner-hits.html for documentation.

See the tests for SearchSource, HasChildFilter, HasChildQuery, HasParentFilter, HasParentQuery, NestedFilter, and NestedQuery for usage examples.
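
A minimal sketch of building an inner hit; the name, sort field, and size are illustrative. The resulting definition would typically be attached to a nested or has_child/has_parent query, and Source() returns the JSON fragment that is embedded into such a query:

import "github.com/olivere/elastic"

// recentCommentsInnerHit returns up to three child/nested hits per outer hit,
// sorted by a hypothetical "created" field in descending order.
func recentCommentsInnerHit() (interface{}, error) {
	hit := elastic.NewInnerHit().
		Name("recent-comments").
		Size(3).
		Sort("created", false). // false = descending
		FetchSource(true)
	return hit.Source() // JSON fragment to embed into the enclosing query
}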

func NewInnerHit

func NewInnerHit() *InnerHit

NewInnerHit creates a new InnerHit.

func (*InnerHit) DocvalueField

func (hit *InnerHit) DocvalueField(docvalueField string) *InnerHit

func (*InnerHit) DocvalueFields

func (hit *InnerHit) DocvalueFields(docvalueFields ...string) *InnerHit

func (*InnerHit) Explain

func (hit *InnerHit) Explain(explain bool) *InnerHit

func (*InnerHit) FetchSource

func (hit *InnerHit) FetchSource(fetchSource bool) *InnerHit

func (*InnerHit) FetchSourceContext

func (hit *InnerHit) FetchSourceContext(fetchSourceContext *FetchSourceContext) *InnerHit

func (*InnerHit) From

func (hit *InnerHit) From(from int) *InnerHit

func (*InnerHit) Highlight

func (hit *InnerHit) Highlight(highlight *Highlight) *InnerHit

func (*InnerHit) Highlighter

func (hit *InnerHit) Highlighter() *Highlight

func (*InnerHit) Name

func (hit *InnerHit) Name(name string) *InnerHit

func (*InnerHit) NoStoredFields

func (hit *InnerHit) NoStoredFields() *InnerHit

func (*InnerHit) Path

func (hit *InnerHit) Path(path string) *InnerHit

func (*InnerHit) Query

func (hit *InnerHit) Query(query Query) *InnerHit

func (*InnerHit) ScriptField

func (hit *InnerHit) ScriptField(scriptField *ScriptField) *InnerHit

func (*InnerHit) ScriptFields

func (hit *InnerHit) ScriptFields(scriptFields ...*ScriptField) *InnerHit

func (*InnerHit) Size

func (hit *InnerHit) Size(size int) *InnerHit

func (*InnerHit) Sort

func (hit *InnerHit) Sort(field string, ascending bool) *InnerHit

func (*InnerHit) SortBy

func (hit *InnerHit) SortBy(sorter ...Sorter) *InnerHit

func (*InnerHit) SortWithInfo

func (hit *InnerHit) SortWithInfo(info SortInfo) *InnerHit

func (*InnerHit) Source

func (hit *InnerHit) Source() (interface{}, error)

func (*InnerHit) StoredField

func (hit *InnerHit) StoredField(storedFieldName string) *InnerHit

func (*InnerHit) StoredFields

func (hit *InnerHit) StoredFields(storedFieldNames ...string) *InnerHit

func (*InnerHit) TrackScores

func (hit *InnerHit) TrackScores(trackScores bool) *InnerHit

func (*InnerHit) Type

func (hit *InnerHit) Type(typ string) *InnerHit

func (*InnerHit) Version

func (hit *InnerHit) Version(version bool) *InnerHit

type JLHScoreSignificanceHeuristic

type JLHScoreSignificanceHeuristic struct{}

JLHScoreSignificanceHeuristic implements the JLH score as described in https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-significantterms-aggregation.html#_jlh_score.

func NewJLHScoreSignificanceHeuristic

func NewJLHScoreSignificanceHeuristic() *JLHScoreSignificanceHeuristic

NewJLHScoreSignificanceHeuristic initializes a new JLHScoreSignificanceHeuristic.

func (*JLHScoreSignificanceHeuristic) Name

Name returns the name of the heuristic in the REST interface.

func (*JLHScoreSignificanceHeuristic) Source

func (sh *JLHScoreSignificanceHeuristic) Source() (interface{}, error)

Source returns the parameters that need to be added to the REST parameters.

type LaplaceSmoothingModel

type LaplaceSmoothingModel struct {
	// contains filtered or unexported fields
}

LaplaceSmoothingModel implements a laplace smoothing model. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-suggesters-phrase.html#_smoothing_models for details about smoothing models.

func NewLaplaceSmoothingModel

func NewLaplaceSmoothingModel(alpha float64) *LaplaceSmoothingModel

func (*LaplaceSmoothingModel) Source

func (sm *LaplaceSmoothingModel) Source() (interface{}, error)

func (*LaplaceSmoothingModel) Type

func (sm *LaplaceSmoothingModel) Type() string

type LinearDecayFunction

type LinearDecayFunction struct {
	// contains filtered or unexported fields
}

LinearDecayFunction builds a linear decay score function. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html for details.
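
A minimal sketch of a linear decay on a hypothetical geo-point field. The coordinates and distances are illustrative, and the Decay setter (whose signature is not shown above) is assumed to take a float64; the resulting function is meant to be attached to a FunctionScoreQuery:

import "github.com/olivere/elastic"

// nearbyBoost scores documents highest near the given origin, decaying
// linearly to 0.3 at 10km, with the first 2km not penalized at all.
func nearbyBoost() *elastic.LinearDecayFunction {
	return elastic.NewLinearDecayFunction().
		FieldName("location").
		Origin("52.52,13.40").
		Scale("10km").
		Offset("2km").
		Decay(0.3).
		Weight(2.0)
}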

func NewLinearDecayFunction

func NewLinearDecayFunction() *LinearDecayFunction

NewLinearDecayFunction initializes and returns a new LinearDecayFunction.

func (*LinearDecayFunction) Decay

Decay defines how documents are scored at the distance given a Scale. If no decay is defined, documents at the distance Scale will be scored 0.5.

func (*LinearDecayFunction) FieldName

func (fn *LinearDecayFunction) FieldName(fieldName string) *LinearDecayFunction

FieldName specifies the name of the field to which this decay function is applied.

func (*LinearDecayFunction) GetMultiValueMode

func (fn *LinearDecayFunction) GetMultiValueMode() string

GetMultiValueMode returns how the decay function should be calculated on a field that has multiple values. Valid modes are: min, max, avg, and sum.

func (*LinearDecayFunction) GetWeight

func (fn *LinearDecayFunction) GetWeight() *float64

GetWeight returns the adjusted score. It is part of the ScoreFunction interface. Returns nil if weight is not specified.

func (*LinearDecayFunction) MultiValueMode

func (fn *LinearDecayFunction) MultiValueMode(mode string) *LinearDecayFunction

MultiValueMode specifies how the decay function should be calculated on a field that has multiple values. Valid modes are: min, max, avg, and sum.

func (*LinearDecayFunction) Name

func (fn *LinearDecayFunction) Name() string

Name represents the JSON field name under which the output of Source needs to be serialized by FunctionScoreQuery (see FunctionScoreQuery.Source).

func (*LinearDecayFunction) Offset

func (fn *LinearDecayFunction) Offset(offset interface{}) *LinearDecayFunction

Offset, if defined, computes the decay function only for a distance greater than the defined offset.

func (*LinearDecayFunction) Origin

func (fn *LinearDecayFunction) Origin(origin interface{}) *LinearDecayFunction

Origin defines the "central point" by which the decay function calculates "distance".

func (*LinearDecayFunction) Scale

func (fn *LinearDecayFunction) Scale(scale interface{}) *LinearDecayFunction

Scale defines the scale to be used with Decay.

func (*LinearDecayFunction) Source

func (fn *LinearDecayFunction) Source() (interface{}, error)

Source returns the serializable JSON data of this score function.

func (*LinearDecayFunction) Weight

func (fn *LinearDecayFunction) Weight(weight float64) *LinearDecayFunction

Weight adjusts the score of the score function. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html#_using_function_score for details.

type LinearInterpolationSmoothingModel

type LinearInterpolationSmoothingModel struct {
	// contains filtered or unexported fields
}

LinearInterpolationSmoothingModel implements a linear interpolation smoothing model. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-suggesters-phrase.html#_smoothing_models for details about smoothing models.

func NewLinearInterpolationSmoothingModel

func NewLinearInterpolationSmoothingModel(trigramLamda, bigramLambda, unigramLambda float64) *LinearInterpolationSmoothingModel

func (*LinearInterpolationSmoothingModel) Source

func (sm *LinearInterpolationSmoothingModel) Source() (interface{}, error)

func (*LinearInterpolationSmoothingModel) Type

type LinearMovAvgModel

type LinearMovAvgModel struct {
}

LinearMovAvgModel calculates a linearly weighted moving average, such that older values are linearly less important. "Time" is determined by position in collection.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-movavg-aggregation.html#_linear

func NewLinearMovAvgModel

func NewLinearMovAvgModel() *LinearMovAvgModel

NewLinearMovAvgModel creates and initializes a new LinearMovAvgModel.

func (*LinearMovAvgModel) Name

func (m *LinearMovAvgModel) Name() string

Name of the model.

func (*LinearMovAvgModel) Settings

func (m *LinearMovAvgModel) Settings() map[string]interface{}

Settings of the model.

type Logger

type Logger interface {
	Printf(format string, v ...interface{})
}

Logger specifies the interface for all log operations.
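
Because the standard library's *log.Logger already provides Printf(format string, v ...interface{}), it satisfies this interface directly. A minimal sketch follows; wiring the logger into the client via its logging options is assumed and not shown here:

import (
	"log"
	"os"

	"github.com/olivere/elastic"
)

// stderrLogger adapts the standard library logger to the elastic.Logger interface.
func stderrLogger() elastic.Logger {
	return log.New(os.Stderr, "elastic: ", log.LstdFlags)
}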

type MatchAllQuery

type MatchAllQuery struct {
	// contains filtered or unexported fields
}

MatchAllQuery is the simplest query; it matches all documents, giving them all a _score of 1.0.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-match-all-query.html

func NewMatchAllQuery

func NewMatchAllQuery() *MatchAllQuery

NewMatchAllQuery creates and initializes a new match all query.

func (*MatchAllQuery) Boost

func (q *MatchAllQuery) Boost(boost float64) *MatchAllQuery

Boost sets the boost for this query. Documents matching this query will (in addition to the normal weightings) have their score multiplied by the boost provided.

func (*MatchAllQuery) QueryName

func (q *MatchAllQuery) QueryName(name string) *MatchAllQuery

QueryName sets the query name.

func (MatchAllQuery) Source

func (q MatchAllQuery) Source() (interface{}, error)

Source returns JSON for the match all query.

type MatchNoneQuery

type MatchNoneQuery struct {
	// contains filtered or unexported fields
}

MatchNoneQuery returns no documents. It is the inverse of MatchAllQuery.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-match-all-query.html

func NewMatchNoneQuery

func NewMatchNoneQuery() *MatchNoneQuery

NewMatchNoneQuery creates and initializes a new match none query.

func (*MatchNoneQuery) QueryName

func (q *MatchNoneQuery) QueryName(name string) *MatchNoneQuery

QueryName sets the query name.

func (MatchNoneQuery) Source

func (q MatchNoneQuery) Source() (interface{}, error)

Source returns JSON for the match none query.

type MatchPhrasePrefixQuery

type MatchPhrasePrefixQuery struct {
	// contains filtered or unexported fields
}

MatchPhrasePrefixQuery is the same as match_phrase, except that it allows for prefix matches on the last term in the text.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-match-query-phrase-prefix.html

func NewMatchPhrasePrefixQuery

func NewMatchPhrasePrefixQuery(name string, value interface{}) *MatchPhrasePrefixQuery

NewMatchPhrasePrefixQuery creates and initializes a new MatchPhrasePrefixQuery.

func (*MatchPhrasePrefixQuery) Analyzer

func (q *MatchPhrasePrefixQuery) Analyzer(analyzer string) *MatchPhrasePrefixQuery

Analyzer explicitly sets the analyzer to use. It defaults to use explicit mapping config for the field, or, if not set, the default search analyzer.

func (*MatchPhrasePrefixQuery) Boost

Boost sets the boost to apply to this query.

func (*MatchPhrasePrefixQuery) MaxExpansions

func (q *MatchPhrasePrefixQuery) MaxExpansions(n int) *MatchPhrasePrefixQuery

MaxExpansions sets the number of term expansions to use.

func (*MatchPhrasePrefixQuery) QueryName

func (q *MatchPhrasePrefixQuery) QueryName(queryName string) *MatchPhrasePrefixQuery

QueryName sets the query name for the filter that can be used when searching for matched filters per hit.

func (*MatchPhrasePrefixQuery) Slop

Slop sets the phrase slop if evaluated to a phrase query type.

func (*MatchPhrasePrefixQuery) Source

func (q *MatchPhrasePrefixQuery) Source() (interface{}, error)

Source returns JSON for the match phrase prefix query.

type MatchPhraseQuery

type MatchPhraseQuery struct {
	// contains filtered or unexported fields
}

MatchPhraseQuery analyzes the text and creates a phrase query out of the analyzed text.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-match-query-phrase.html

func NewMatchPhraseQuery

func NewMatchPhraseQuery(name string, value interface{}) *MatchPhraseQuery

NewMatchPhraseQuery creates and initializes a new MatchPhraseQuery.

func (*MatchPhraseQuery) Analyzer

func (q *MatchPhraseQuery) Analyzer(analyzer string) *MatchPhraseQuery

Analyzer explicitly sets the analyzer to use. It defaults to use explicit mapping config for the field, or, if not set, the default search analyzer.

func (*MatchPhraseQuery) Boost

func (q *MatchPhraseQuery) Boost(boost float64) *MatchPhraseQuery

Boost sets the boost to apply to this query.

func (*MatchPhraseQuery) QueryName

func (q *MatchPhraseQuery) QueryName(queryName string) *MatchPhraseQuery

QueryName sets the query name for the filter that can be used when searching for matched filters per hit.

func (*MatchPhraseQuery) Slop

func (q *MatchPhraseQuery) Slop(slop int) *MatchPhraseQuery

Slop sets the phrase slop if evaluated to a phrase query type.

func (*MatchPhraseQuery) Source

func (q *MatchPhraseQuery) Source() (interface{}, error)

Source returns JSON for the match phrase query.

type MatchQuery

type MatchQuery struct {
	// contains filtered or unexported fields
}

MatchQuery is a family of queries that accepts text/numerics/dates, analyzes them, and constructs a query.

To create a new MatchQuery, use NewMatchQuery. To create specific types of queries, e.g. a match_phrase query, use one of the shortcuts such as NewMatchPhraseQuery(...).

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-match-query.html
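
A minimal sketch of a match query on a hypothetical "title" field; the field name and options are illustrative, and Source() returns the JSON fragment that is embedded into a search request:

import "github.com/olivere/elastic"

// fuzzyTitleMatch requires all terms to match and tolerates small typos.
func fuzzyTitleMatch() (interface{}, error) {
	q := elastic.NewMatchQuery("title", "golang client").
		Operator("AND").
		Fuzziness("AUTO")
	return q.Source()
}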

func NewMatchQuery

func NewMatchQuery(name string, text interface{}) *MatchQuery

NewMatchQuery creates and initializes a new MatchQuery.

func (*MatchQuery) Analyzer

func (q *MatchQuery) Analyzer(analyzer string) *MatchQuery

Analyzer explicitly sets the analyzer to use. It defaults to use explicit mapping config for the field, or, if not set, the default search analyzer.

func (*MatchQuery) Boost

func (q *MatchQuery) Boost(boost float64) *MatchQuery

Boost sets the boost to apply to this query.

func (*MatchQuery) CutoffFrequency

func (q *MatchQuery) CutoffFrequency(cutoff float64) *MatchQuery

CutoffFrequency can be a value in [0..1] (or an absolute number >=1). It represents the maximum threshold of a term's document frequency to be considered a low frequency term.

func (*MatchQuery) Fuzziness

func (q *MatchQuery) Fuzziness(fuzziness string) *MatchQuery

Fuzziness sets the fuzziness when evaluated to a fuzzy query type. Defaults to "AUTO".

func (*MatchQuery) FuzzyRewrite

func (q *MatchQuery) FuzzyRewrite(fuzzyRewrite string) *MatchQuery

FuzzyRewrite sets the fuzzy_rewrite parameter controlling how the fuzzy query will get rewritten.

func (*MatchQuery) FuzzyTranspositions

func (q *MatchQuery) FuzzyTranspositions(fuzzyTranspositions bool) *MatchQuery

FuzzyTranspositions sets whether transpositions are supported in fuzzy queries.

The default metric used by fuzzy queries to determine a match is the Damerau-Levenshtein distance formula, which supports transpositions. Setting transpositions to false switches to the classic Levenshtein distance. If not set, the Damerau-Levenshtein distance metric is used.

func (*MatchQuery) Lenient

func (q *MatchQuery) Lenient(lenient bool) *MatchQuery

Lenient specifies whether format based failures will be ignored.

func (*MatchQuery) MaxExpansions

func (q *MatchQuery) MaxExpansions(maxExpansions int) *MatchQuery

MaxExpansions is used with fuzzy or prefix type queries. It specifies the number of term expansions to use. It defaults to unbounded, so it is recommended to set it to a reasonable value for faster execution.

func (*MatchQuery) MinimumShouldMatch

func (q *MatchQuery) MinimumShouldMatch(minimumShouldMatch string) *MatchQuery

MinimumShouldMatch sets the optional minimumShouldMatch value to apply to the query.

func (*MatchQuery) Operator

func (q *MatchQuery) Operator(operator string) *MatchQuery

Operator sets the operator to use when using a boolean query. Can be "AND" or "OR" (default).

func (*MatchQuery) PrefixLength

func (q *MatchQuery) PrefixLength(prefixLength int) *MatchQuery

PrefixLength sets the length of the common (non-fuzzy) prefix for fuzzy match queries. It must be non-negative.

func (*MatchQuery) QueryName

func (q *MatchQuery) QueryName(queryName string) *MatchQuery

QueryName sets the query name for the filter that can be used when searching for matched filters per hit.

func (*MatchQuery) Source

func (q *MatchQuery) Source() (interface{}, error)

Source returns JSON for the match query.

func (*MatchQuery) ZeroTermsQuery

func (q *MatchQuery) ZeroTermsQuery(zeroTermsQuery string) *MatchQuery

ZeroTermsQuery can be "all" or "none".

type MatrixStatsAggregation

type MatrixStatsAggregation struct {
	// contains filtered or unexported fields
}

MatrixStatsAggregation ... See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-stats-aggregation.html for details.

func NewMatrixStatsAggregation

func NewMatrixStatsAggregation() *MatrixStatsAggregation

NewMatrixStatsAggregation initializes a new MatrixStatsAggregation.

func (*MatrixStatsAggregation) Fields

func (*MatrixStatsAggregation) Format

func (*MatrixStatsAggregation) Meta

func (a *MatrixStatsAggregation) Meta(metaData map[string]interface{}) *MatrixStatsAggregation

Meta sets the meta data to be included in the aggregation response.

func (*MatrixStatsAggregation) Missing

func (a *MatrixStatsAggregation) Missing(missing interface{}) *MatrixStatsAggregation

Missing configures the value to use when documents miss a value.

func (*MatrixStatsAggregation) Mode

Mode specifies how to operate. Valid values are: sum, avg, median, min, or max.

func (*MatrixStatsAggregation) Source

func (a *MatrixStatsAggregation) Source() (interface{}, error)

Source returns the JSON to serialize into the request, or an error.

func (*MatrixStatsAggregation) SubAggregation

func (a *MatrixStatsAggregation) SubAggregation(name string, subAggregation Aggregation) *MatrixStatsAggregation

func (*MatrixStatsAggregation) ValueType

func (a *MatrixStatsAggregation) ValueType(valueType interface{}) *MatrixStatsAggregation

type MaxAggregation

type MaxAggregation struct {
	// contains filtered or unexported fields
}

MaxAggregation is a single-value metrics aggregation that keeps track and returns the maximum value among the numeric values extracted from the aggregated documents. These values can be extracted either from specific numeric fields in the documents, or be generated by a provided script. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-max-aggregation.html

func NewMaxAggregation

func NewMaxAggregation() *MaxAggregation

func (*MaxAggregation) Field

func (a *MaxAggregation) Field(field string) *MaxAggregation

func (*MaxAggregation) Format

func (a *MaxAggregation) Format(format string) *MaxAggregation

func (*MaxAggregation) Meta

func (a *MaxAggregation) Meta(metaData map[string]interface{}) *MaxAggregation

Meta sets the meta data to be included in the aggregation response.

func (*MaxAggregation) Script

func (a *MaxAggregation) Script(script *Script) *MaxAggregation

func (*MaxAggregation) Source

func (a *MaxAggregation) Source() (interface{}, error)

func (*MaxAggregation) SubAggregation

func (a *MaxAggregation) SubAggregation(name string, subAggregation Aggregation) *MaxAggregation

type MaxBucketAggregation

type MaxBucketAggregation struct {
	// contains filtered or unexported fields
}

MaxBucketAggregation is a sibling pipeline aggregation which identifies the bucket(s) with the maximum value of a specified metric in a sibling aggregation and outputs both the value and the key(s) of the bucket(s). The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-pipeline-max-bucket-aggregation.html
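
A minimal sketch; the buckets path assumes a sibling date-histogram aggregation named "sales_per_month" with a "sales" sum sub-aggregation, both of which are hypothetical and built elsewhere:

import "github.com/olivere/elastic"

// bestMonth identifies the month bucket with the highest total sales.
func bestMonth() *elastic.MaxBucketAggregation {
	return elastic.NewMaxBucketAggregation().
		BucketsPath("sales_per_month>sales").
		GapPolicy("skip")
}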

func NewMaxBucketAggregation

func NewMaxBucketAggregation() *MaxBucketAggregation

NewMaxBucketAggregation creates and initializes a new MaxBucketAggregation.

func (*MaxBucketAggregation) BucketsPath

func (a *MaxBucketAggregation) BucketsPath(bucketsPaths ...string) *MaxBucketAggregation

BucketsPath sets the paths to the buckets to use for this pipeline aggregator.

func (*MaxBucketAggregation) Format

Format to use on the output of this aggregation.

func (*MaxBucketAggregation) GapInsertZeros

func (a *MaxBucketAggregation) GapInsertZeros() *MaxBucketAggregation

GapInsertZeros inserts zeros for gaps in the series.

func (*MaxBucketAggregation) GapPolicy

func (a *MaxBucketAggregation) GapPolicy(gapPolicy string) *MaxBucketAggregation

GapPolicy defines what should be done when a gap in the series is discovered. Valid values include "insert_zeros" or "skip". Default is "insert_zeros".

func (*MaxBucketAggregation) GapSkip

GapSkip skips gaps in the series.

func (*MaxBucketAggregation) Meta

func (a *MaxBucketAggregation) Meta(metaData map[string]interface{}) *MaxBucketAggregation

Meta sets the meta data to be included in the aggregation response.

func (*MaxBucketAggregation) Source

func (a *MaxBucketAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface.

type MgetResponse

type MgetResponse struct {
	Docs []*GetResult `json:"docs,omitempty"`
}

MgetResponse is the outcome of a Multi GET API request.

type MgetService

type MgetService struct {
	// contains filtered or unexported fields
}

MgetService allows retrieving multiple documents based on an index, type (optional) and id (and possibly routing). The response includes a docs array with all the fetched documents, each element similar in structure to a document provided by the Get API.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-multi-get.html for details.
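
A minimal sketch of fetching two documents in one round trip, assuming a *Client created elsewhere; the index, type, and ids are illustrative:

import (
	"context"
	"fmt"
	"log"

	"github.com/olivere/elastic"
)

// fetchTwoOrders retrieves two documents from a hypothetical "orders" index.
func fetchTwoOrders(ctx context.Context, client *elastic.Client) {
	res, err := elastic.NewMgetService(client).
		Add(
			elastic.NewMultiGetItem().Index("orders").Type("doc").Id("1"),
			elastic.NewMultiGetItem().Index("orders").Type("doc").Id("2"),
		).
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("fetched %d documents\n", len(res.Docs))
}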

func NewMgetService

func NewMgetService(client *Client) *MgetService

NewMgetService initializes a new Multi GET API request call.

func (*MgetService) Add

func (s *MgetService) Add(items ...*MultiGetItem) *MgetService

Add an item to the request.

func (*MgetService) Do

func (s *MgetService) Do(ctx context.Context) (*MgetResponse, error)

Do executes the request.

func (*MgetService) Preference

func (s *MgetService) Preference(preference string) *MgetService

Preference specifies the node or shard the operation should be performed on (default: random).

func (*MgetService) Pretty

func (s *MgetService) Pretty(pretty bool) *MgetService

Pretty indicates that the JSON response be indented and human readable.

func (*MgetService) Realtime

func (s *MgetService) Realtime(realtime bool) *MgetService

Realtime specifies whether to perform the operation in realtime or search mode.

func (*MgetService) Refresh

func (s *MgetService) Refresh(refresh string) *MgetService

Refresh the shard containing the document before performing the operation.

func (*MgetService) Routing

func (s *MgetService) Routing(routing string) *MgetService

Routing is the specific routing value.

func (*MgetService) Source

func (s *MgetService) Source() (interface{}, error)

Source returns the request body, which will be serialized into JSON.

func (*MgetService) StoredFields

func (s *MgetService) StoredFields(storedFields ...string) *MgetService

StoredFields is a list of fields to return in the response.

type MinAggregation

type MinAggregation struct {
	// contains filtered or unexported fields
}

MinAggregation is a single-value metrics aggregation that keeps track and returns the minimum value among numeric values extracted from the aggregated documents. These values can be extracted either from specific numeric fields in the documents, or be generated by a provided script. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-min-aggregation.html

func NewMinAggregation

func NewMinAggregation() *MinAggregation

func (*MinAggregation) Field

func (a *MinAggregation) Field(field string) *MinAggregation

func (*MinAggregation) Format

func (a *MinAggregation) Format(format string) *MinAggregation

func (*MinAggregation) Meta

func (a *MinAggregation) Meta(metaData map[string]interface{}) *MinAggregation

Meta sets the meta data to be included in the aggregation response.

func (*MinAggregation) Script

func (a *MinAggregation) Script(script *Script) *MinAggregation

func (*MinAggregation) Source

func (a *MinAggregation) Source() (interface{}, error)

func (*MinAggregation) SubAggregation

func (a *MinAggregation) SubAggregation(name string, subAggregation Aggregation) *MinAggregation

type MinBucketAggregation

type MinBucketAggregation struct {
	// contains filtered or unexported fields
}

MinBucketAggregation is a sibling pipeline aggregation which identifies the bucket(s) with the minimum value of a specified metric in a sibling aggregation and outputs both the value and the key(s) of the bucket(s). The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-pipeline-min-bucket-aggregation.html

func NewMinBucketAggregation

func NewMinBucketAggregation() *MinBucketAggregation

NewMinBucketAggregation creates and initializes a new MinBucketAggregation.

func (*MinBucketAggregation) BucketsPath

func (a *MinBucketAggregation) BucketsPath(bucketsPaths ...string) *MinBucketAggregation

BucketsPath sets the paths to the buckets to use for this pipeline aggregator.

func (*MinBucketAggregation) Format

Format to use on the output of this aggregation.

func (*MinBucketAggregation) GapInsertZeros

func (a *MinBucketAggregation) GapInsertZeros() *MinBucketAggregation

GapInsertZeros inserts zeros for gaps in the series.

func (*MinBucketAggregation) GapPolicy

func (a *MinBucketAggregation) GapPolicy(gapPolicy string) *MinBucketAggregation

GapPolicy defines what should be done when a gap in the series is discovered. Valid values include "insert_zeros" or "skip". Default is "insert_zeros".

func (*MinBucketAggregation) GapSkip

GapSkip skips gaps in the series.

func (*MinBucketAggregation) Meta

func (a *MinBucketAggregation) Meta(metaData map[string]interface{}) *MinBucketAggregation

Meta sets the meta data to be included in the aggregation response.

func (*MinBucketAggregation) Source

func (a *MinBucketAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface.

type MissingAggregation

type MissingAggregation struct {
	// contains filtered or unexported fields
}

MissingAggregation is a field data based single bucket aggregation, that creates a bucket of all documents in the current document set context that are missing a field value (effectively, missing a field or having the configured NULL value set). This aggregator will often be used in conjunction with other field data bucket aggregators (such as ranges) to return information for all the documents that could not be placed in any of the other buckets due to missing field data values. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-missing-aggregation.html

func NewMissingAggregation

func NewMissingAggregation() *MissingAggregation

func (*MissingAggregation) Field

func (a *MissingAggregation) Field(field string) *MissingAggregation

func (*MissingAggregation) Meta

func (a *MissingAggregation) Meta(metaData map[string]interface{}) *MissingAggregation

Meta sets the meta data to be included in the aggregation response.

func (*MissingAggregation) Source

func (a *MissingAggregation) Source() (interface{}, error)

func (*MissingAggregation) SubAggregation

func (a *MissingAggregation) SubAggregation(name string, subAggregation Aggregation) *MissingAggregation

type MoreLikeThisQuery

type MoreLikeThisQuery struct {
	// contains filtered or unexported fields
}

MoreLikeThis query (MLT Query) finds documents that are "like" a given set of documents. In order to do so, MLT selects a set of representative terms of these input documents, forms a query using these terms, executes the query and returns the results. The user controls the input documents, how the terms should be selected and how the query is formed.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-mlt-query.html
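
A minimal sketch of an MLT query that finds documents similar to one stored document. The index, type, id, and fields are illustrative, and the exact signatures of LikeItems and of the item's Index/Type/Id setters (not shown in this reference) are assumed:

import "github.com/olivere/elastic"

// similarArticles builds a query for documents that resemble article "1",
// comparing the "title" and "body" fields.
func similarArticles() *elastic.MoreLikeThisQuery {
	like := elastic.NewMoreLikeThisQueryItem().
		Index("articles").
		Type("doc").
		Id("1")
	return elastic.NewMoreLikeThisQuery().
		Field("title", "body").
		LikeItems(like).
		MinTermFreq(1).
		MaxQueryTerms(12)
}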

func NewMoreLikeThisQuery

func NewMoreLikeThisQuery() *MoreLikeThisQuery

NewMoreLikeThisQuery creates and initializes a new MoreLikeThisQuery.

func (*MoreLikeThisQuery) Analyzer

func (q *MoreLikeThisQuery) Analyzer(analyzer string) *MoreLikeThisQuery

Analyzer specifies the analyzer that will be used to analyze the text. Defaults to the analyzer associated with the field.

func (*MoreLikeThisQuery) Boost

func (q *MoreLikeThisQuery) Boost(boost float64) *MoreLikeThisQuery

Boost sets the boost for this query.

func (*MoreLikeThisQuery) BoostTerms

func (q *MoreLikeThisQuery) BoostTerms(boostTerms float64) *MoreLikeThisQuery

BoostTerms sets the boost factor to use when boosting terms. It defaults to 1.

func (*MoreLikeThisQuery) FailOnUnsupportedField

func (q *MoreLikeThisQuery) FailOnUnsupportedField(fail bool) *MoreLikeThisQuery

FailOnUnsupportedField indicates whether to fail or return no result when this query is run against a field which is not supported such as a binary/numeric field.

func (*MoreLikeThisQuery) Field

func (q *MoreLikeThisQuery) Field(fields ...string) *MoreLikeThisQuery

Field adds one or more field names to the query.

func (*MoreLikeThisQuery) Ids

func (q *MoreLikeThisQuery) Ids(ids ...string) *MoreLikeThisQuery

Ids sets the document ids to use in order to find documents that are "like" this.

func (*MoreLikeThisQuery) IgnoreLikeItems

func (q *MoreLikeThisQuery) IgnoreLikeItems(ignoreDocs ...*MoreLikeThisQueryItem) *MoreLikeThisQuery

IgnoreLikeItems sets the documents from which the terms should not be selected.

func (*MoreLikeThisQuery) IgnoreLikeText

func (q *MoreLikeThisQuery) IgnoreLikeText(ignoreLikeText ...string) *MoreLikeThisQuery

IgnoreLikeText sets the text from which the terms should not be selected.

func (*MoreLikeThisQuery) Include

func (q *MoreLikeThisQuery) Include(include bool) *MoreLikeThisQuery

Include specifies whether the input documents should also be included in the results returned. Defaults to false.

func (*MoreLikeThisQuery) LikeItems

LikeItems sets the documents to use in order to find documents that are "like" this.

func (*MoreLikeThisQuery) LikeText

func (q *MoreLikeThisQuery) LikeText(likeTexts ...string) *MoreLikeThisQuery

LikeText sets the text to use in order to find documents that are "like" this.

func (*MoreLikeThisQuery) MaxDocFreq

func (q *MoreLikeThisQuery) MaxDocFreq(maxDocFreq int) *MoreLikeThisQuery

MaxDocFreq sets the maximum frequency for which words may still appear. Words that appear in more than this many docs will be ignored. It defaults to unbounded.

func (*MoreLikeThisQuery) MaxQueryTerms

func (q *MoreLikeThisQuery) MaxQueryTerms(maxQueryTerms int) *MoreLikeThisQuery

MaxQueryTerms sets the maximum number of query terms that will be included in any generated query. It defaults to 25.

func (*MoreLikeThisQuery) MaxWordLength

func (q *MoreLikeThisQuery) MaxWordLength(maxWordLength int) *MoreLikeThisQuery

MaxWordLength sets the maximum word length above which words will be ignored. Defaults to unbounded (0).

func (*MoreLikeThisQuery) MinDocFreq

func (q *MoreLikeThisQuery) MinDocFreq(minDocFreq int) *MoreLikeThisQuery

MinDocFreq sets the minimum number of documents in which a word must occur; words appearing in fewer documents are ignored. The default is 5.

func (*MoreLikeThisQuery) MinTermFreq

func (q *MoreLikeThisQuery) MinTermFreq(minTermFreq int) *MoreLikeThisQuery

MinTermFreq is the frequency below which terms will be ignored in the source doc. The default frequency is 2.

func (*MoreLikeThisQuery) MinWordLength

func (q *MoreLikeThisQuery) MinWordLength(minWordLength int) *MoreLikeThisQuery

MinWordLength sets the minimum word length below which words will be ignored. It defaults to 0.

func (*MoreLikeThisQuery) MinimumShouldMatch

func (q *MoreLikeThisQuery) MinimumShouldMatch(minimumShouldMatch string) *MoreLikeThisQuery

MinimumShouldMatch sets the number of terms that must match the generated query expressed in the common syntax for minimum should match. The default value is "30%".

This used to be "PercentTermsToMatch" in Elasticsearch versions before 2.0.

func (*MoreLikeThisQuery) QueryName

func (q *MoreLikeThisQuery) QueryName(queryName string) *MoreLikeThisQuery

QueryName sets the query name for the filter that can be used when searching for matched_filters per hit.

func (*MoreLikeThisQuery) Source

func (q *MoreLikeThisQuery) Source() (interface{}, error)

Source creates the source for the MLT query. It may return an error if the caller forgot to specify any documents to be "liked" in the MoreLikeThisQuery.

func (*MoreLikeThisQuery) StopWord

func (q *MoreLikeThisQuery) StopWord(stopWords ...string) *MoreLikeThisQuery

StopWord sets the stopwords. Any word in this set is considered "uninteresting" and ignored. Even if your Analyzer allows stopwords, you might want to tell the MoreLikeThis code to ignore them, as for the purposes of document similarity it seems reasonable to assume that "a stop word is never interesting".

type MoreLikeThisQueryItem

type MoreLikeThisQueryItem struct {
	// contains filtered or unexported fields
}

MoreLikeThisQueryItem represents a single item of a MoreLikeThisQuery to be "liked" or "unliked".

func NewMoreLikeThisQueryItem

func NewMoreLikeThisQueryItem() *MoreLikeThisQueryItem

NewMoreLikeThisQueryItem creates and initializes a MoreLikeThisQueryItem.

func (*MoreLikeThisQueryItem) Doc

func (item *MoreLikeThisQueryItem) Doc(doc interface{}) *MoreLikeThisQueryItem

Doc represents a raw document template for the item.

func (*MoreLikeThisQueryItem) FetchSourceContext

func (item *MoreLikeThisQueryItem) FetchSourceContext(fsc *FetchSourceContext) *MoreLikeThisQueryItem

FetchSourceContext represents the fetch source of the item which controls if and how _source should be returned.

func (*MoreLikeThisQueryItem) Fields

func (item *MoreLikeThisQueryItem) Fields(fields ...string) *MoreLikeThisQueryItem

Fields represents the list of fields of the item.

func (*MoreLikeThisQueryItem) Id

Id represents the document id of the item.

func (*MoreLikeThisQueryItem) Index

Index represents the index of the item.

func (*MoreLikeThisQueryItem) LikeText

func (item *MoreLikeThisQueryItem) LikeText(likeText string) *MoreLikeThisQueryItem

LikeText represents a text to be "liked".

func (*MoreLikeThisQueryItem) Routing

func (item *MoreLikeThisQueryItem) Routing(routing string) *MoreLikeThisQueryItem

Routing sets the routing associated with the item.

func (*MoreLikeThisQueryItem) Source

func (item *MoreLikeThisQueryItem) Source() (interface{}, error)

Source returns the JSON-serializable fragment of the entity.

func (*MoreLikeThisQueryItem) Type

Type represents the document type of the item.

func (*MoreLikeThisQueryItem) Version

func (item *MoreLikeThisQueryItem) Version(version int64) *MoreLikeThisQueryItem

Version specifies the version of the item.

func (*MoreLikeThisQueryItem) VersionType

func (item *MoreLikeThisQueryItem) VersionType(versionType string) *MoreLikeThisQueryItem

VersionType represents the version type of the item.

type MovAvgAggregation

type MovAvgAggregation struct {
	// contains filtered or unexported fields
}

MovAvgAggregation operates on a series of data. It will slide a window across the data and emit the average value of that window.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-pipeline-movavg-aggregation.html
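
A minimal sketch of a moving average over a sibling "sales" metric, using a linearly weighted model. The buckets path is illustrative, the aggregation would normally be nested inside a histogram or date-histogram aggregation, and the Model setter (whose signature is not shown above) is assumed to accept a MovAvgModel:

import "github.com/olivere/elastic"

// salesMovingAvg smooths a hypothetical "sales" metric over a window of
// five buckets, skipping gaps in the series.
func salesMovingAvg() *elastic.MovAvgAggregation {
	return elastic.NewMovAvgAggregation().
		BucketsPath("sales").
		Model(elastic.NewLinearMovAvgModel()).
		Window(5).
		GapPolicy("skip")
}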

func NewMovAvgAggregation

func NewMovAvgAggregation() *MovAvgAggregation

NewMovAvgAggregation creates and initializes a new MovAvgAggregation.

func (*MovAvgAggregation) BucketsPath

func (a *MovAvgAggregation) BucketsPath(bucketsPaths ...string) *MovAvgAggregation

BucketsPath sets the paths to the buckets to use for this pipeline aggregator.

func (*MovAvgAggregation) Format

func (a *MovAvgAggregation) Format(format string) *MovAvgAggregation

Format to use on the output of this aggregation.

func (*MovAvgAggregation) GapInsertZeros

func (a *MovAvgAggregation) GapInsertZeros() *MovAvgAggregation

GapInsertZeros inserts zeros for gaps in the series.

func (*MovAvgAggregation) GapPolicy

func (a *MovAvgAggregation) GapPolicy(gapPolicy string) *MovAvgAggregation

GapPolicy defines what should be done when a gap in the series is discovered. Valid values include "insert_zeros" or "skip". Default is "insert_zeros".

func (*MovAvgAggregation) GapSkip

func (a *MovAvgAggregation) GapSkip() *MovAvgAggregation

GapSkip skips gaps in the series.

func (*MovAvgAggregation) Meta

func (a *MovAvgAggregation) Meta(metaData map[string]interface{}) *MovAvgAggregation

Meta sets the meta data to be included in the aggregation response.

func (*MovAvgAggregation) Minimize

func (a *MovAvgAggregation) Minimize(minimize bool) *MovAvgAggregation

Minimize determines if the model should be fit to the data using a cost minimizing algorithm.

func (*MovAvgAggregation) Model

Model is used to define what type of moving average you want to use in the series.

func (*MovAvgAggregation) Predict

func (a *MovAvgAggregation) Predict(numPredictions int) *MovAvgAggregation

Predict sets the number of predictions that should be returned. Each prediction will be spaced at the intervals in the histogram. E.g. a predict of 2 will return two new buckets at the end of the histogram with the predicted values.

func (*MovAvgAggregation) Source

func (a *MovAvgAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface.

func (*MovAvgAggregation) Window

func (a *MovAvgAggregation) Window(window int) *MovAvgAggregation

Window sets the window size for the moving average. This window will "slide" across the series, and the values inside that window will be used to calculate the moving avg value.

type MovAvgModel

type MovAvgModel interface {
	Name() string
	Settings() map[string]interface{}
}

MovAvgModel specifies the model to use with the MovAvgAggregation.

type MultiGetItem

type MultiGetItem struct {
	// contains filtered or unexported fields
}

MultiGetItem is a single document to retrieve via the MgetService.

func NewMultiGetItem

func NewMultiGetItem() *MultiGetItem

NewMultiGetItem initializes a new, single item for a Multi GET request.

func (*MultiGetItem) FetchSource

func (item *MultiGetItem) FetchSource(fetchSourceContext *FetchSourceContext) *MultiGetItem

FetchSource allows specifying source filtering.

func (*MultiGetItem) Id

func (item *MultiGetItem) Id(id string) *MultiGetItem

Id specifies the identifier of the document.

func (*MultiGetItem) Index

func (item *MultiGetItem) Index(index string) *MultiGetItem

Index specifies the index name.

func (*MultiGetItem) Routing

func (item *MultiGetItem) Routing(routing string) *MultiGetItem

Routing is the specific routing value.

func (*MultiGetItem) Source

func (item *MultiGetItem) Source() (interface{}, error)

Source returns the serialized JSON to be sent to Elasticsearch as part of a MultiGet search.

func (*MultiGetItem) StoredFields

func (item *MultiGetItem) StoredFields(storedFields ...string) *MultiGetItem

StoredFields is a list of fields to return in the response.

func (*MultiGetItem) Type

func (item *MultiGetItem) Type(typ string) *MultiGetItem

Type specifies the type name.

func (*MultiGetItem) Version

func (item *MultiGetItem) Version(version int64) *MultiGetItem

Version can be MatchAny (-3), MatchAnyPre120 (0), NotFound (-1), or NotSet (-2). These are specified in org.elasticsearch.common.lucene.uid.Versions. The default in Elasticsearch is MatchAny (-3).

func (*MultiGetItem) VersionType

func (item *MultiGetItem) VersionType(versionType string) *MultiGetItem

VersionType can be "internal", "external", "external_gt", or "external_gte". See org.elasticsearch.index.VersionType in Elasticsearch source. It is "internal" by default.

type MultiMatchQuery

type MultiMatchQuery struct {
	// contains filtered or unexported fields
}

MultiMatchQuery builds on the MatchQuery to allow multi-field queries.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-multi-match-query.html
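
A minimal sketch of a multi-field match across two hypothetical fields, boosting title matches; the field names and parameters are illustrative:

import "github.com/olivere/elastic"

// titleOrBody searches "quick brown fox" in "title" (boosted) and "body",
// using best_fields scoring with a tie breaker.
func titleOrBody() *elastic.MultiMatchQuery {
	return elastic.NewMultiMatchQuery("quick brown fox").
		FieldWithBoost("title", 2.0).
		Field("body").
		Type("best_fields").
		TieBreaker(0.3)
}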

func NewMultiMatchQuery

func NewMultiMatchQuery(text interface{}, fields ...string) *MultiMatchQuery

NewMultiMatchQuery creates and initializes a new MultiMatchQuery.

func (*MultiMatchQuery) Analyzer

func (q *MultiMatchQuery) Analyzer(analyzer string) *MultiMatchQuery

Analyzer sets the analyzer to use explicitly. It defaults to use explicit mapping config for the field, or, if not set, the default search analyzer.

func (*MultiMatchQuery) Boost

func (q *MultiMatchQuery) Boost(boost float64) *MultiMatchQuery

Boost sets the boost for this query.

func (*MultiMatchQuery) CutoffFrequency

func (q *MultiMatchQuery) CutoffFrequency(cutoff float64) *MultiMatchQuery

CutoffFrequency sets a cutoff value in [0..1] (or absolute number >=1) representing the maximum threshold of a terms document frequency to be considered a low frequency term.

func (*MultiMatchQuery) Field

func (q *MultiMatchQuery) Field(field string) *MultiMatchQuery

Field adds a field to run the multi match against.

func (*MultiMatchQuery) FieldWithBoost

func (q *MultiMatchQuery) FieldWithBoost(field string, boost float64) *MultiMatchQuery

FieldWithBoost adds a field to run the multi match against with a specific boost.

func (*MultiMatchQuery) Fuzziness

func (q *MultiMatchQuery) Fuzziness(fuzziness string) *MultiMatchQuery

Fuzziness sets the fuzziness used when evaluated to a fuzzy query type. It defaults to "AUTO".

func (*MultiMatchQuery) FuzzyRewrite

func (q *MultiMatchQuery) FuzzyRewrite(fuzzyRewrite string) *MultiMatchQuery

func (*MultiMatchQuery) Lenient

func (q *MultiMatchQuery) Lenient(lenient bool) *MultiMatchQuery

Lenient indicates whether format based failures will be ignored.

func (*MultiMatchQuery) MaxExpansions

func (q *MultiMatchQuery) MaxExpansions(maxExpansions int) *MultiMatchQuery

MaxExpansions is the number of term expansions to use when using fuzzy or prefix type query. It defaults to unbounded so it's recommended to set it to a reasonable value for faster execution.

func (*MultiMatchQuery) MinimumShouldMatch

func (q *MultiMatchQuery) MinimumShouldMatch(minimumShouldMatch string) *MultiMatchQuery

MinimumShouldMatch represents the minimum number of optional should clauses to match.

func (*MultiMatchQuery) Operator

func (q *MultiMatchQuery) Operator(operator string) *MultiMatchQuery

Operator sets the operator to use when using boolean query. It can be either AND or OR (default).

func (*MultiMatchQuery) PrefixLength

func (q *MultiMatchQuery) PrefixLength(prefixLength int) *MultiMatchQuery

PrefixLength for the fuzzy process.

func (*MultiMatchQuery) QueryName

func (q *MultiMatchQuery) QueryName(queryName string) *MultiMatchQuery

QueryName sets the query name for the filter that can be used when searching for matched filters per hit.

func (*MultiMatchQuery) Rewrite

func (q *MultiMatchQuery) Rewrite(rewrite string) *MultiMatchQuery

func (*MultiMatchQuery) Slop

func (q *MultiMatchQuery) Slop(slop int) *MultiMatchQuery

Slop sets the phrase slop if evaluated to a phrase query type.

func (*MultiMatchQuery) Source

func (q *MultiMatchQuery) Source() (interface{}, error)

Source returns JSON for the query.

func (*MultiMatchQuery) TieBreaker

func (q *MultiMatchQuery) TieBreaker(tieBreaker float64) *MultiMatchQuery

TieBreaker for "best-match" disjunction queries (OR queries). The tie breaker capability allows documents that match more than one query clause (in this case on more than one field) to be scored better than documents that match only the best of the fields, without confusing this with the better case of two distinct matches in the multiple fields.

A tie-breaker value of 1.0 is interpreted as a signal to score queries as "most-match" queries where all matching query clauses are considered for scoring.

func (*MultiMatchQuery) Type

func (q *MultiMatchQuery) Type(typ string) *MultiMatchQuery

Type can be "best_fields", "boolean", "most_fields", "cross_fields", "phrase", or "phrase_prefix".

func (*MultiMatchQuery) ZeroTermsQuery

func (q *MultiMatchQuery) ZeroTermsQuery(zeroTermsQuery string) *MultiMatchQuery

ZeroTermsQuery can be "all" or "none".

type MultiSearchResult

type MultiSearchResult struct {
	Responses []*SearchResult `json:"responses,omitempty"`
}

MultiSearchResult is the outcome of running a multi-search operation.

type MultiSearchService

type MultiSearchService struct {
	// contains filtered or unexported fields
}

MultiSearchService executes one or more searches in one roundtrip.
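
A minimal sketch that executes several pre-built search requests in one round trip, assuming a *Client and the *SearchRequest values are constructed elsewhere; Do is assumed to return a *MultiSearchResult and an error:

import (
	"context"
	"fmt"
	"log"

	"github.com/olivere/elastic"
)

// runMultiSearch sends all given search requests in a single msearch call.
func runMultiSearch(ctx context.Context, client *elastic.Client, reqs ...*elastic.SearchRequest) {
	res, err := elastic.NewMultiSearchService(client).
		MaxConcurrentSearches(2).
		Add(reqs...).
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("received %d responses\n", len(res.Responses))
}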

func NewMultiSearchService

func NewMultiSearchService(client *Client) *MultiSearchService

func (*MultiSearchService) Add

func (s *MultiSearchService) Add(requests ...*SearchRequest) *MultiSearchService

func (*MultiSearchService) Do

func (*MultiSearchService) Index

func (s *MultiSearchService) Index(indices ...string) *MultiSearchService

func (*MultiSearchService) MaxConcurrentSearches

func (s *MultiSearchService) MaxConcurrentSearches(max int) *MultiSearchService

func (*MultiSearchService) PreFilterShardSize

func (s *MultiSearchService) PreFilterShardSize(size int) *MultiSearchService

func (*MultiSearchService) Pretty

func (s *MultiSearchService) Pretty(pretty bool) *MultiSearchService

type MultiTermvectorItem

type MultiTermvectorItem struct {
	// contains filtered or unexported fields
}

MultiTermvectorItem is a single document to retrieve via MultiTermvectorService.

func NewMultiTermvectorItem

func NewMultiTermvectorItem() *MultiTermvectorItem

func (*MultiTermvectorItem) Doc

func (s *MultiTermvectorItem) Doc(doc interface{}) *MultiTermvectorItem

Doc is the document to analyze.

func (*MultiTermvectorItem) FieldStatistics

func (s *MultiTermvectorItem) FieldStatistics(fieldStatistics bool) *MultiTermvectorItem

FieldStatistics specifies if document count, sum of document frequencies and sum of total term frequencies should be returned.

func (*MultiTermvectorItem) Fields

func (s *MultiTermvectorItem) Fields(fields ...string) *MultiTermvectorItem

Fields is a list of fields to return.

func (*MultiTermvectorItem) Id

func (*MultiTermvectorItem) Index

func (*MultiTermvectorItem) Offsets

func (s *MultiTermvectorItem) Offsets(offsets bool) *MultiTermvectorItem

Offsets specifies if term offsets should be returned.

func (*MultiTermvectorItem) Parent

func (s *MultiTermvectorItem) Parent(parent string) *MultiTermvectorItem

Parent id of documents.

func (*MultiTermvectorItem) Payloads

func (s *MultiTermvectorItem) Payloads(payloads bool) *MultiTermvectorItem

Payloads specifies if term payloads should be returned.

func (*MultiTermvectorItem) PerFieldAnalyzer

func (s *MultiTermvectorItem) PerFieldAnalyzer(perFieldAnalyzer map[string]string) *MultiTermvectorItem

PerFieldAnalyzer allows specifying a different analyzer than the one defined for the field.

func (*MultiTermvectorItem) Positions

func (s *MultiTermvectorItem) Positions(positions bool) *MultiTermvectorItem

Positions specifies if term positions should be returned.

func (*MultiTermvectorItem) Preference

func (s *MultiTermvectorItem) Preference(preference string) *MultiTermvectorItem

Preference specifies the node or shard the operation should be performed on (default: random).

func (*MultiTermvectorItem) Realtime

func (s *MultiTermvectorItem) Realtime(realtime bool) *MultiTermvectorItem

Realtime specifies if the request is real-time as opposed to near-real-time (default: true).

func (*MultiTermvectorItem) Routing

func (s *MultiTermvectorItem) Routing(routing string) *MultiTermvectorItem

Routing is a specific routing value.

func (*MultiTermvectorItem) Source

func (s *MultiTermvectorItem) Source() interface{}

Source returns the serialized JSON to be sent to Elasticsearch as part of a MultiTermvector.

func (*MultiTermvectorItem) TermStatistics

func (s *MultiTermvectorItem) TermStatistics(termStatistics bool) *MultiTermvectorItem

TermStatistics specifies if total term frequency and document frequency should be returned.

func (*MultiTermvectorItem) Type

type MultiTermvectorResponse

type MultiTermvectorResponse struct {
	Docs []*TermvectorsResponse `json:"docs"`
}

MultiTermvectorResponse is the response of MultiTermvectorService.Do.

type MultiTermvectorService

type MultiTermvectorService struct {
	// contains filtered or unexported fields
}

MultiTermvectorService returns information and statistics on terms in the fields of a particular document. The document could be stored in the index or artificially provided by the user.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-multi-termvectors.html for documentation.
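
A minimal sketch of fetching term vectors for two documents in one request (index, type and ids are illustrative; client and ctx are assumed to be initialized):

item1 := elastic.NewMultiTermvectorItem().Index("twitter").Type("doc").Id("1").Fields("message")
item2 := elastic.NewMultiTermvectorItem().Index("twitter").Type("doc").Id("2").Fields("message")

res, err := elastic.NewMultiTermvectorService(client).Add(item1, item2).Do(ctx)
if err != nil {
	// Handle error
	panic(err)
}
for _, doc := range res.Docs {
	_ = doc // one *TermvectorsResponse per requested document
}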

func NewMultiTermvectorService

func NewMultiTermvectorService(client *Client) *MultiTermvectorService

NewMultiTermvectorService creates a new MultiTermvectorService.

func (*MultiTermvectorService) Add

Add adds documents to the MultiTermvectors service.

func (*MultiTermvectorService) BodyJson

func (s *MultiTermvectorService) BodyJson(body interface{}) *MultiTermvectorService

BodyJson is documented as: Define ids, documents, parameters or a list of parameters per document here. You must at least provide a list of document ids. See documentation.

func (*MultiTermvectorService) BodyString

BodyString is documented as: Define ids, documents, parameters or a list of parameters per document here. You must at least provide a list of document ids. See documentation.

func (*MultiTermvectorService) Do

Do executes the operation.

func (*MultiTermvectorService) FieldStatistics

func (s *MultiTermvectorService) FieldStatistics(fieldStatistics bool) *MultiTermvectorService

FieldStatistics specifies if document count, sum of document frequencies and sum of total term frequencies should be returned. Applies to all returned documents unless otherwise specified in body "params" or "docs".

func (*MultiTermvectorService) Fields

Fields is a comma-separated list of fields to return. Applies to all returned documents unless otherwise specified in body "params" or "docs".

func (*MultiTermvectorService) Ids

Ids is a comma-separated list of document ids. You must define ids as parameter or set "ids" or "docs" in the request body.

func (*MultiTermvectorService) Index

Index in which the document resides.

func (*MultiTermvectorService) Offsets

func (s *MultiTermvectorService) Offsets(offsets bool) *MultiTermvectorService

Offsets specifies if term offsets should be returned. Applies to all returned documents unless otherwise specified in body "params" or "docs".

func (*MultiTermvectorService) Parent

Parent id of documents. Applies to all returned documents unless otherwise specified in body "params" or "docs".

func (*MultiTermvectorService) Payloads

func (s *MultiTermvectorService) Payloads(payloads bool) *MultiTermvectorService

Payloads specifies if term payloads should be returned. Applies to all returned documents unless otherwise specified in body "params" or "docs".

func (*MultiTermvectorService) Positions

func (s *MultiTermvectorService) Positions(positions bool) *MultiTermvectorService

Positions specifies if term positions should be returned. Applies to all returned documents unless otherwise specified in body "params" or "docs".

func (*MultiTermvectorService) Preference

func (s *MultiTermvectorService) Preference(preference string) *MultiTermvectorService

Preference specifies the node or shard the operation should be performed on (default: random). Applies to all returned documents unless otherwise specified in body "params" or "docs".

func (*MultiTermvectorService) Pretty

Pretty indicates that the JSON response should be indented and human readable.

func (*MultiTermvectorService) Realtime

func (s *MultiTermvectorService) Realtime(realtime bool) *MultiTermvectorService

Realtime specifies if requests are real-time as opposed to near-real-time (default: true).

func (*MultiTermvectorService) Routing

Routing specific routing value. Applies to all returned documents unless otherwise specified in body "params" or "docs".

func (*MultiTermvectorService) Source

func (s *MultiTermvectorService) Source() interface{}

func (*MultiTermvectorService) TermStatistics

func (s *MultiTermvectorService) TermStatistics(termStatistics bool) *MultiTermvectorService

TermStatistics specifies if total term frequency and document frequency should be returned. Applies to all returned documents unless otherwise specified in body "params" or "docs".

func (*MultiTermvectorService) Type

Type of the document.

func (*MultiTermvectorService) Validate

func (s *MultiTermvectorService) Validate() error

Validate checks if the operation is valid.

func (*MultiTermvectorService) Version

func (s *MultiTermvectorService) Version(version interface{}) *MultiTermvectorService

Version is explicit version number for concurrency control.

func (*MultiTermvectorService) VersionType

func (s *MultiTermvectorService) VersionType(versionType string) *MultiTermvectorService

VersionType is a specific version type.

type MutualInformationSignificanceHeuristic

type MutualInformationSignificanceHeuristic struct {
	// contains filtered or unexported fields
}

MutualInformationSignificanceHeuristic implements Mutual information as described in "Information Retrieval", Manning et al., Chapter 13.5.1.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-significantterms-aggregation.html#_mutual_information for details.
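
A hedged sketch of plugging this heuristic into a significant_terms aggregation (this assumes SignificantTermsAggregation exposes a SignificanceHeuristic setter; field and index names are illustrative):

sh := elastic.NewMutualInformationSignificanceHeuristic().
	IncludeNegatives(true).
	BackgroundIsSuperset(false)
agg := elastic.NewSignificantTermsAggregation().
	Field("crime_type").
	SignificanceHeuristic(sh)
// Attach agg to a search, e.g. client.Search().Index("crimes").Size(0).Aggregation("significant_crime_types", agg).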

func NewMutualInformationSignificanceHeuristic

func NewMutualInformationSignificanceHeuristic() *MutualInformationSignificanceHeuristic

NewMutualInformationSignificanceHeuristic initializes a new instance of MutualInformationSignificanceHeuristic.

func (*MutualInformationSignificanceHeuristic) BackgroundIsSuperset

func (sh *MutualInformationSignificanceHeuristic) BackgroundIsSuperset(backgroundIsSuperset bool) *MutualInformationSignificanceHeuristic

BackgroundIsSuperset indicates whether you defined a custom background filter that represents a different set of documents that you want to compare to.

func (*MutualInformationSignificanceHeuristic) IncludeNegatives

IncludeNegatives indicates whether to filter out the terms that appear much less in the subset than in the background without the subset.

func (*MutualInformationSignificanceHeuristic) Name

Name returns the name of the heuristic in the REST interface.

func (*MutualInformationSignificanceHeuristic) Source

func (sh *MutualInformationSignificanceHeuristic) Source() (interface{}, error)

Source returns the parameters that need to be added to the REST parameters.

type NestedAggregation

type NestedAggregation struct {
	// contains filtered or unexported fields
}

NestedAggregation is a special single bucket aggregation that enables aggregating nested documents. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-nested-aggregation.html
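
A minimal sketch of aggregating over documents nested under a "comments" path (path and field names are illustrative):

agg := elastic.NewNestedAggregation().Path("comments")
agg = agg.SubAggregation("avg_stars", elastic.NewAvgAggregation().Field("comments.stars"))
// Attach agg to a search, e.g. client.Search().Index("products").Size(0).Aggregation("comments", agg).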

func NewNestedAggregation

func NewNestedAggregation() *NestedAggregation

func (*NestedAggregation) Meta

func (a *NestedAggregation) Meta(metaData map[string]interface{}) *NestedAggregation

Meta sets the meta data to be included in the aggregation response.

func (*NestedAggregation) Path

func (*NestedAggregation) Source

func (a *NestedAggregation) Source() (interface{}, error)

func (*NestedAggregation) SubAggregation

func (a *NestedAggregation) SubAggregation(name string, subAggregation Aggregation) *NestedAggregation

type NestedQuery

type NestedQuery struct {
	// contains filtered or unexported fields
}

NestedQuery allows querying nested objects/docs. The query is executed against the nested objects/docs as if they were indexed as separate docs (they are, internally), and results in the root parent doc (or parent nested mapping).

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-nested-query.html
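
A minimal sketch of querying documents nested under a "comments" path (path, field and value are illustrative):

q := elastic.NewNestedQuery("comments", elastic.NewTermQuery("comments.user", "olivere")).
	ScoreMode("avg").
	IgnoreUnmapped(true)
// Use q like any other Query, e.g. client.Search().Index("products").Query(q).Do(ctx).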

func NewNestedQuery

func NewNestedQuery(path string, query Query) *NestedQuery

NewNestedQuery creates and initializes a new NestedQuery.

func (*NestedQuery) Boost

func (q *NestedQuery) Boost(boost float64) *NestedQuery

Boost sets the boost for this query.

func (*NestedQuery) IgnoreUnmapped

func (q *NestedQuery) IgnoreUnmapped(value bool) *NestedQuery

IgnoreUnmapped sets the ignore_unmapped option for the filter that ignores unmapped nested fields.

func (*NestedQuery) InnerHit

func (q *NestedQuery) InnerHit(innerHit *InnerHit) *NestedQuery

InnerHit sets the inner hit definition in the scope of this nested query, reusing the defined path and query.

func (*NestedQuery) QueryName

func (q *NestedQuery) QueryName(queryName string) *NestedQuery

QueryName sets the query name for the filter that can be used when searching for matched_filters per hit.

func (*NestedQuery) ScoreMode

func (q *NestedQuery) ScoreMode(scoreMode string) *NestedQuery

ScoreMode specifies the score mode.

func (*NestedQuery) Source

func (q *NestedQuery) Source() (interface{}, error)

Source returns JSON for the query.

type NestedSort

type NestedSort struct {
	Sorter
	// contains filtered or unexported fields
}

NestedSort is used for fields that are inside a nested object. It takes a "path" argument and an optional nested filter that the nested objects should match with in order to be taken into account for sorting.

NestedSort is available from 6.1 and replaces nestedFilter and nestedPath in the other sorters.
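
A minimal sketch of building a NestedSort for a field inside an "offers" nested path (names are illustrative); the resulting value is passed to a sorter in place of the deprecated nestedPath/nestedFilter options:

ns := elastic.NewNestedSort("offers").
	Filter(elastic.NewTermQuery("offers.color", "blue"))
// ns is then attached to the sorter for e.g. "offers.price".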

func NewNestedSort

func NewNestedSort(path string) *NestedSort

NewNestedSort creates a new NestedSort.

func (*NestedSort) Filter

func (s *NestedSort) Filter(filter Query) *NestedSort

Filter sets the filter.

func (*NestedSort) NestedSort

func (s *NestedSort) NestedSort(nestedSort *NestedSort) *NestedSort

NestedSort embeds another level of nested sorting.

func (*NestedSort) Source

func (s *NestedSort) Source() (interface{}, error)

Source returns the JSON-serializable data.

type NodesInfoNode

type NodesInfoNode struct {
	// Name of the node, e.g. "Mister Fear"
	Name string `json:"name"`
	// TransportAddress, e.g. "127.0.0.1:9300"
	TransportAddress string `json:"transport_address"`
	// Host is the host name, e.g. "macbookair"
	Host string `json:"host"`
	// IP is the IP address, e.g. "192.168.1.2"
	IP string `json:"ip"`
	// Version is the Elasticsearch version running on the node, e.g. "1.4.3"
	Version string `json:"version"`
	// Build is the Elasticsearch build, e.g. "36a29a7"
	Build string `json:"build"`
	// HTTPAddress, e.g. "127.0.0.1:9200"
	HTTPAddress string `json:"http_address"`
	// HTTPSAddress, e.g. "127.0.0.1:9200"
	HTTPSAddress string `json:"https_address"`

	// Attributes of the node.
	Attributes map[string]interface{} `json:"attributes"`

	// Settings of the node, e.g. paths and pidfile.
	Settings map[string]interface{} `json:"settings"`

	// OS information, e.g. CPU and memory.
	OS *NodesInfoNodeOS `json:"os"`

	// Process information, e.g. max file descriptors.
	Process *NodesInfoNodeProcess `json:"process"`

	// JVM information, e.g. VM version.
	JVM *NodesInfoNodeJVM `json:"jvm"`

	// ThreadPool information.
	ThreadPool *NodesInfoNodeThreadPool `json:"thread_pool"`

	// Network information.
	Network *NodesInfoNodeNetwork `json:"network"`

	// Transport information.
	Transport *NodesInfoNodeTransport `json:"transport"`

	// HTTP information.
	HTTP *NodesInfoNodeHTTP `json:"http"`

	// Plugins information.
	Plugins []*NodesInfoNodePlugin `json:"plugins"`
}

type NodesInfoNodeHTTP

type NodesInfoNodeHTTP struct {
	BoundAddress            []string `json:"bound_address"`      // e.g. ["127.0.0.1:9200", "[fe80::1]:9200", "[::1]:9200"]
	PublishAddress          string   `json:"publish_address"`    // e.g. "127.0.0.1:9300"
	MaxContentLength        string   `json:"max_content_length"` // e.g. "100mb"
	MaxContentLengthInBytes int64    `json:"max_content_length_in_bytes"`
}

type NodesInfoNodeJVM

type NodesInfoNodeJVM struct {
	PID               int       `json:"pid"`        // process id, e.g. 87079
	Version           string    `json:"version"`    // e.g. "1.8.0_25"
	VMName            string    `json:"vm_name"`    // e.g. "Java HotSpot(TM) 64-Bit Server VM"
	VMVersion         string    `json:"vm_version"` // e.g. "25.25-b02"
	VMVendor          string    `json:"vm_vendor"`  // e.g. "Oracle Corporation"
	StartTime         time.Time `json:"start_time"` // e.g. "2015-01-03T15:18:30.982Z"
	StartTimeInMillis int64     `json:"start_time_in_millis"`

	// Mem information
	Mem struct {
		HeapInit           string `json:"heap_init"` // e.g. 1gb
		HeapInitInBytes    int    `json:"heap_init_in_bytes"`
		HeapMax            string `json:"heap_max"` // e.g. 4gb
		HeapMaxInBytes     int    `json:"heap_max_in_bytes"`
		NonHeapInit        string `json:"non_heap_init"` // e.g. 2.4mb
		NonHeapInitInBytes int    `json:"non_heap_init_in_bytes"`
		NonHeapMax         string `json:"non_heap_max"` // e.g. 0b
		NonHeapMaxInBytes  int    `json:"non_heap_max_in_bytes"`
		DirectMax          string `json:"direct_max"` // e.g. 4gb
		DirectMaxInBytes   int    `json:"direct_max_in_bytes"`
	} `json:"mem"`

	GCCollectors []string `json:"gc_collectors"` // e.g. ["ParNew"]
	MemoryPools  []string `json:"memory_pools"`  // e.g. ["Code Cache", "Metaspace"]
}

type NodesInfoNodeNetwork

type NodesInfoNodeNetwork struct {
	RefreshInterval         string `json:"refresh_interval"`           // e.g. 1s
	RefreshIntervalInMillis int    `json:"refresh_interval_in_millis"` // e.g. 1000
	PrimaryInterface        struct {
		Address    string `json:"address"`     // e.g. 192.168.1.2
		Name       string `json:"name"`        // e.g. en0
		MACAddress string `json:"mac_address"` // e.g. 11:22:33:44:55:66
	} `json:"primary_interface"`
}

type NodesInfoNodeOS

type NodesInfoNodeOS struct {
	RefreshInterval         string `json:"refresh_interval"`           // e.g. 1s
	RefreshIntervalInMillis int    `json:"refresh_interval_in_millis"` // e.g. 1000
	AvailableProcessors     int    `json:"available_processors"`       // e.g. 4

	// CPU information
	CPU struct {
		Vendor           string `json:"vendor"`              // e.g. Intel
		Model            string `json:"model"`               // e.g. iMac15,1
		MHz              int    `json:"mhz"`                 // e.g. 3500
		TotalCores       int    `json:"total_cores"`         // e.g. 4
		TotalSockets     int    `json:"total_sockets"`       // e.g. 4
		CoresPerSocket   int    `json:"cores_per_socket"`    // e.g. 16
		CacheSizeInBytes int    `json:"cache_size_in_bytes"` // e.g. 256
	} `json:"cpu"`

	// Mem information
	Mem struct {
		Total        string `json:"total"`          // e.g. 16gb
		TotalInBytes int    `json:"total_in_bytes"` // e.g. 17179869184
	} `json:"mem"`

	// Swap information
	Swap struct {
		Total        string `json:"total"`          // e.g. 1gb
		TotalInBytes int    `json:"total_in_bytes"` // e.g. 1073741824
	} `json:"swap"`
}

type NodesInfoNodePlugin

type NodesInfoNodePlugin struct {
	Name        string `json:"name"`
	Description string `json:"description"`
	Site        bool   `json:"site"`
	JVM         bool   `json:"jvm"`
	URL         string `json:"url"` // e.g. /_plugin/dummy/
}

type NodesInfoNodeProcess

type NodesInfoNodeProcess struct {
	RefreshInterval         string `json:"refresh_interval"`           // e.g. 1s
	RefreshIntervalInMillis int    `json:"refresh_interval_in_millis"` // e.g. 1000
	ID                      int    `json:"id"`                         // process id, e.g. 87079
	MaxFileDescriptors      int    `json:"max_file_descriptors"`       // e.g. 32768
	Mlockall                bool   `json:"mlockall"`                   // e.g. false
}

type NodesInfoNodeThreadPool

type NodesInfoNodeThreadPool struct {
	Percolate  *NodesInfoNodeThreadPoolSection `json:"percolate"`
	Bench      *NodesInfoNodeThreadPoolSection `json:"bench"`
	Listener   *NodesInfoNodeThreadPoolSection `json:"listener"`
	Index      *NodesInfoNodeThreadPoolSection `json:"index"`
	Refresh    *NodesInfoNodeThreadPoolSection `json:"refresh"`
	Suggest    *NodesInfoNodeThreadPoolSection `json:"suggest"`
	Generic    *NodesInfoNodeThreadPoolSection `json:"generic"`
	Warmer     *NodesInfoNodeThreadPoolSection `json:"warmer"`
	Search     *NodesInfoNodeThreadPoolSection `json:"search"`
	Flush      *NodesInfoNodeThreadPoolSection `json:"flush"`
	Optimize   *NodesInfoNodeThreadPoolSection `json:"optimize"`
	Management *NodesInfoNodeThreadPoolSection `json:"management"`
	Get        *NodesInfoNodeThreadPoolSection `json:"get"`
	Merge      *NodesInfoNodeThreadPoolSection `json:"merge"`
	Bulk       *NodesInfoNodeThreadPoolSection `json:"bulk"`
	Snapshot   *NodesInfoNodeThreadPoolSection `json:"snapshot"`
}

type NodesInfoNodeThreadPoolSection

type NodesInfoNodeThreadPoolSection struct {
	Type      string      `json:"type"`       // e.g. fixed
	Min       int         `json:"min"`        // e.g. 4
	Max       int         `json:"max"`        // e.g. 4
	KeepAlive string      `json:"keep_alive"` // e.g. "5m"
	QueueSize interface{} `json:"queue_size"` // e.g. "1k" or -1
}

type NodesInfoNodeTransport

type NodesInfoNodeTransport struct {
	BoundAddress   []string                                  `json:"bound_address"`
	PublishAddress string                                    `json:"publish_address"`
	Profiles       map[string]*NodesInfoNodeTransportProfile `json:"profiles"`
}

type NodesInfoNodeTransportProfile

type NodesInfoNodeTransportProfile struct {
	BoundAddress   []string `json:"bound_address"`
	PublishAddress string   `json:"publish_address"`
}

type NodesInfoResponse

type NodesInfoResponse struct {
	ClusterName string                    `json:"cluster_name"`
	Nodes       map[string]*NodesInfoNode `json:"nodes"`
}

NodesInfoResponse is the response of NodesInfoService.Do.

type NodesInfoService

type NodesInfoService struct {
	// contains filtered or unexported fields
}

NodesInfoService allows retrieving information about one, several, or all of the cluster nodes. It is documented at https://www.elastic.co/guide/en/elasticsearch/reference/6.0/cluster-nodes-info.html.
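
A minimal sketch of fetching OS and JVM information for the node the client is connected to (client and ctx are assumed to be initialized):

info, err := elastic.NewNodesInfoService(client).
	NodeId("_local").
	Metric("os", "jvm").
	Do(ctx)
if err != nil {
	// Handle error
	panic(err)
}
for id, node := range info.Nodes {
	fmt.Printf("node %s runs Elasticsearch %s\n", id, node.Version)
}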

func NewNodesInfoService

func NewNodesInfoService(client *Client) *NodesInfoService

NewNodesInfoService creates a new NodesInfoService.

func (*NodesInfoService) Do

Do executes the operation.

func (*NodesInfoService) FlatSettings

func (s *NodesInfoService) FlatSettings(flatSettings bool) *NodesInfoService

FlatSettings returns settings in flat format (default: false).

func (*NodesInfoService) Human

func (s *NodesInfoService) Human(human bool) *NodesInfoService

Human indicates whether to return time and byte values in human-readable format.

func (*NodesInfoService) Metric

func (s *NodesInfoService) Metric(metric ...string) *NodesInfoService

Metric is a list of metrics you wish returned. Leave empty to return all. Valid metrics are: settings, os, process, jvm, thread_pool, network, transport, http, and plugins.

func (*NodesInfoService) NodeId

func (s *NodesInfoService) NodeId(nodeId ...string) *NodesInfoService

NodeId is a list of node IDs or names to limit the returned information. Use "_local" to return information from the node you're connecting to; leave empty to get information from all nodes.

func (*NodesInfoService) Pretty

func (s *NodesInfoService) Pretty(pretty bool) *NodesInfoService

Pretty indicates whether to indent the returned JSON.

func (*NodesInfoService) Validate

func (s *NodesInfoService) Validate() error

Validate checks if the operation is valid.

type NodesStatsBreaker

type NodesStatsBreaker struct {
	LimitSize            string  `json:"limit_size"`
	LimitSizeInBytes     int64   `json:"limit_size_in_bytes"`
	EstimatedSize        string  `json:"estimated_size"`
	EstimatedSizeInBytes int64   `json:"estimated_size_in_bytes"`
	Overhead             float64 `json:"overhead"`
	Tripped              int64   `json:"tripped"`
}

type NodesStatsCompletionStats

type NodesStatsCompletionStats struct {
	Size        string `json:"size"`
	SizeInBytes int64  `json:"size_in_bytes"`
	Fields      map[string]struct {
		Size        string `json:"size"`
		SizeInBytes int64  `json:"size_in_bytes"`
	} `json:"fields"`
}

type NodesStatsDiscovery

type NodesStatsDiscovery struct {
	ClusterStateQueue *NodesStatsDiscoveryStats `json:"cluster_state_queue"`
}

type NodesStatsDiscoveryStats

type NodesStatsDiscoveryStats struct {
	Total     int64 `json:"total"`
	Pending   int64 `json:"pending"`
	Committed int64 `json:"committed"`
}

type NodesStatsDocsStats

type NodesStatsDocsStats struct {
	Count   int64 `json:"count"`
	Deleted int64 `json:"deleted"`
}

type NodesStatsFielddataStats

type NodesStatsFielddataStats struct {
	MemorySize        string `json:"memory_size"`
	MemorySizeInBytes int64  `json:"memory_size_in_bytes"`
	Evictions         int64  `json:"evictions"`
	Fields            map[string]struct {
		MemorySize        string `json:"memory_size"`
		MemorySizeInBytes int64  `json:"memory_size_in_bytes"`
	} `json:"fields"`
}

type NodesStatsFlushStats

type NodesStatsFlushStats struct {
	Total             int64  `json:"total"`
	TotalTime         string `json:"total_time"`
	TotalTimeInMillis int64  `json:"total_time_in_millis"`
}

type NodesStatsGetStats

type NodesStatsGetStats struct {
	Total               int64  `json:"total"`
	Time                string `json:"get_time"`
	TimeInMillis        int64  `json:"time_in_millis"`
	Exists              int64  `json:"exists"`
	ExistsTime          string `json:"exists_time"`
	ExistsTimeInMillis  int64  `json:"exists_in_millis"`
	Missing             int64  `json:"missing"`
	MissingTime         string `json:"missing_time"`
	MissingTimeInMillis int64  `json:"missing_in_millis"`
	Current             int64  `json:"current"`
}

type NodesStatsIndex

type NodesStatsIndex struct {
	Docs         *NodesStatsDocsStats         `json:"docs"`
	Store        *NodesStatsStoreStats        `json:"store"`
	Indexing     *NodesStatsIndexingStats     `json:"indexing"`
	Get          *NodesStatsGetStats          `json:"get"`
	Search       *NodesStatsSearchStats       `json:"search"`
	Merges       *NodesStatsMergeStats        `json:"merges"`
	Refresh      *NodesStatsRefreshStats      `json:"refresh"`
	Flush        *NodesStatsFlushStats        `json:"flush"`
	Warmer       *NodesStatsWarmerStats       `json:"warmer"`
	QueryCache   *NodesStatsQueryCacheStats   `json:"query_cache"`
	Fielddata    *NodesStatsFielddataStats    `json:"fielddata"`
	Percolate    *NodesStatsPercolateStats    `json:"percolate"`
	Completion   *NodesStatsCompletionStats   `json:"completion"`
	Segments     *NodesStatsSegmentsStats     `json:"segments"`
	Translog     *NodesStatsTranslogStats     `json:"translog"`
	Suggest      *NodesStatsSuggestStats      `json:"suggest"`
	RequestCache *NodesStatsRequestCacheStats `json:"request_cache"`
	Recovery     NodesStatsRecoveryStats      `json:"recovery"`

	Indices map[string]*NodesStatsIndex `json:"indices"` // for level=indices
	Shards  map[string]*NodesStatsIndex `json:"shards"`  // for level=shards
}

type NodesStatsIndexingStats

type NodesStatsIndexingStats struct {
	IndexTotal         int64  `json:"index_total"`
	IndexTime          string `json:"index_time"`
	IndexTimeInMillis  int64  `json:"index_time_in_millis"`
	IndexCurrent       int64  `json:"index_current"`
	IndexFailed        int64  `json:"index_failed"`
	DeleteTotal        int64  `json:"delete_total"`
	DeleteTime         string `json:"delete_time"`
	DeleteTimeInMillis int64  `json:"delete_time_in_millis"`
	DeleteCurrent      int64  `json:"delete_current"`
	NoopUpdateTotal    int64  `json:"noop_update_total"`

	Types map[string]*NodesStatsIndexingStats `json:"types"` // stats for individual types
}

type NodesStatsIngest

type NodesStatsIngest struct {
	Total     *NodesStatsIngestStats `json:"total"`
	Pipelines interface{}            `json:"pipelines"`
}

type NodesStatsIngestStats

type NodesStatsIngestStats struct {
	Count        int64  `json:"count"`
	Time         string `json:"time"`
	TimeInMillis int64  `json:"time_in_millis"`
	Current      int64  `json:"current"`
	Failed       int64  `json:"failed"`
}

type NodesStatsMergeStats

type NodesStatsMergeStats struct {
	Current                    int64  `json:"current"`
	CurrentDocs                int64  `json:"current_docs"`
	CurrentSize                string `json:"current_size"`
	CurrentSizeInBytes         int64  `json:"current_size_in_bytes"`
	Total                      int64  `json:"total"`
	TotalTime                  string `json:"total_time"`
	TotalTimeInMillis          int64  `json:"total_time_in_millis"`
	TotalDocs                  int64  `json:"total_docs"`
	TotalSize                  string `json:"total_size"`
	TotalSizeInBytes           int64  `json:"total_size_in_bytes"`
	TotalStoppedTime           string `json:"total_stopped_time"`
	TotalStoppedTimeInMillis   int64  `json:"total_stopped_time_in_millis"`
	TotalThrottledTime         string `json:"total_throttled_time"`
	TotalThrottledTimeInMillis int64  `json:"total_throttled_time_in_millis"`
	TotalThrottleBytes         string `json:"total_auto_throttle"`
	TotalThrottleBytesInBytes  int64  `json:"total_auto_throttle_in_bytes"`
}

type NodesStatsNode

type NodesStatsNode struct {
	// Timestamp when these stats were gathered.
	Timestamp int64 `json:"timestamp"`
	// Name of the node, e.g. "Mister Fear"
	Name string `json:"name"`
	// TransportAddress, e.g. "127.0.0.1:9300"
	TransportAddress string `json:"transport_address"`
	// Host is the host name, e.g. "macbookair"
	Host string `json:"host"`
	// IP is an IP address, e.g. "192.168.1.2"
	IP string `json:"ip"`
	// Roles is a list of the roles of the node, e.g. master, data, ingest.
	Roles []string `json:"roles"`

	// Attributes of the node.
	Attributes map[string]interface{} `json:"attributes"`

	// Indices returns index information.
	Indices *NodesStatsIndex `json:"indices"`

	// OS information, e.g. CPU and memory.
	OS *NodesStatsNodeOS `json:"os"`

	// Process information, e.g. max file descriptors.
	Process *NodesStatsNodeProcess `json:"process"`

	// JVM information, e.g. VM version.
	JVM *NodesStatsNodeJVM `json:"jvm"`

	// ThreadPool information.
	ThreadPool map[string]*NodesStatsNodeThreadPool `json:"thread_pool"`

	// FS returns information about the filesystem.
	FS *NodesStatsNodeFS `json:"fs"`

	// Network information.
	Transport *NodesStatsNodeTransport `json:"transport"`

	// HTTP information.
	HTTP *NodesStatsNodeHTTP `json:"http"`

	// Breaker contains information about circuit breakers.
	Breaker map[string]*NodesStatsBreaker `json:"breakers"`

	// ScriptStats information.
	ScriptStats *NodesStatsScriptStats `json:"script"`

	// Discovery information.
	Discovery *NodesStatsDiscovery `json:"discovery"`

	// Ingest information
	Ingest *NodesStatsIngest `json:"ingest"`
}

type NodesStatsNodeFS

type NodesStatsNodeFS struct {
	Timestamp int64                    `json:"timestamp"`
	Total     *NodesStatsNodeFSEntry   `json:"total"`
	Data      []*NodesStatsNodeFSEntry `json:"data"`
	IOStats   *NodesStatsNodeFSIOStats `json:"io_stats"`
}

type NodesStatsNodeFSEntry

type NodesStatsNodeFSEntry struct {
	Path             string `json:"path"`
	Mount            string `json:"mount"`
	Type             string `json:"type"`
	Total            string `json:"total"`
	TotalInBytes     int64  `json:"total_in_bytes"`
	Free             string `json:"free"`
	FreeInBytes      int64  `json:"free_in_bytes"`
	Available        string `json:"available"`
	AvailableInBytes int64  `json:"available_in_bytes"`
	Spins            string `json:"spins"`
}

type NodesStatsNodeFSIOStats

type NodesStatsNodeFSIOStats struct {
	Devices []*NodesStatsNodeFSIOStatsEntry `json:"devices"`
	Total   *NodesStatsNodeFSIOStatsEntry   `json:"total"`
}

type NodesStatsNodeFSIOStatsEntry

type NodesStatsNodeFSIOStatsEntry struct {
	DeviceName      string `json:"device_name"`
	Operations      int64  `json:"operations"`
	ReadOperations  int64  `json:"read_operations"`
	WriteOperations int64  `json:"write_operations"`
	ReadKilobytes   int64  `json:"read_kilobytes"`
	WriteKilobytes  int64  `json:"write_kilobytes"`
}

type NodesStatsNodeHTTP

type NodesStatsNodeHTTP struct {
	CurrentOpen int `json:"current_open"`
	TotalOpened int `json:"total_opened"`
}

type NodesStatsNodeJVM

type NodesStatsNodeJVM struct {
	Timestamp      int64                                   `json:"timestamp"`
	Uptime         string                                  `json:"uptime"`
	UptimeInMillis int64                                   `json:"uptime_in_millis"`
	Mem            *NodesStatsNodeJVMMem                   `json:"mem"`
	Threads        *NodesStatsNodeJVMThreads               `json:"threads"`
	GC             *NodesStatsNodeJVMGC                    `json:"gc"`
	BufferPools    map[string]*NodesStatsNodeJVMBufferPool `json:"buffer_pools"`
	Classes        *NodesStatsNodeJVMClasses               `json:"classes"`
}

type NodesStatsNodeJVMBufferPool

type NodesStatsNodeJVMBufferPool struct {
	Count                int64  `json:"count"`
	TotalCapacity        string `json:"total_capacity"`
	TotalCapacityInBytes int64  `json:"total_capacity_in_bytes"`
}

type NodesStatsNodeJVMClasses

type NodesStatsNodeJVMClasses struct {
	CurrentLoadedCount int64 `json:"current_loaded_count"`
	TotalLoadedCount   int64 `json:"total_loaded_count"`
	TotalUnloadedCount int64 `json:"total_unloaded_count"`
}

type NodesStatsNodeJVMGC

type NodesStatsNodeJVMGC struct {
	Collectors map[string]*NodesStatsNodeJVMGCCollector `json:"collectors"`
}

type NodesStatsNodeJVMGCCollector

type NodesStatsNodeJVMGCCollector struct {
	CollectionCount        int64  `json:"collection_count"`
	CollectionTime         string `json:"collection_time"`
	CollectionTimeInMillis int64  `json:"collection_time_in_millis"`
}

type NodesStatsNodeJVMMem

type NodesStatsNodeJVMMem struct {
	HeapUsed                string `json:"heap_used"`
	HeapUsedInBytes         int64  `json:"heap_used_in_bytes"`
	HeapUsedPercent         int    `json:"heap_used_percent"`
	HeapCommitted           string `json:"heap_committed"`
	HeapCommittedInBytes    int64  `json:"heap_committed_in_bytes"`
	HeapMax                 string `json:"heap_max"`
	HeapMaxInBytes          int64  `json:"heap_max_in_bytes"`
	NonHeapUsed             string `json:"non_heap_used"`
	NonHeapUsedInBytes      int64  `json:"non_heap_used_in_bytes"`
	NonHeapCommitted        string `json:"non_heap_committed"`
	NonHeapCommittedInBytes int64  `json:"non_heap_committed_in_bytes"`
	Pools                   map[string]struct {
		Used            string `json:"used"`
		UsedInBytes     int64  `json:"used_in_bytes"`
		Max             string `json:"max"`
		MaxInBytes      int64  `json:"max_in_bytes"`
		PeakUsed        string `json:"peak_used"`
		PeakUsedInBytes int64  `json:"peak_used_in_bytes"`
		PeakMax         string `json:"peak_max"`
		PeakMaxInBytes  int64  `json:"peak_max_in_bytes"`
	} `json:"pools"`
}

type NodesStatsNodeJVMThreads

type NodesStatsNodeJVMThreads struct {
	Count     int64 `json:"count"`
	PeakCount int64 `json:"peak_count"`
}

type NodesStatsNodeOS

type NodesStatsNodeOS struct {
	Timestamp int64                 `json:"timestamp"`
	CPU       *NodesStatsNodeOSCPU  `json:"cpu"`
	Mem       *NodesStatsNodeOSMem  `json:"mem"`
	Swap      *NodesStatsNodeOSSwap `json:"swap"`
}

type NodesStatsNodeOSCPU

type NodesStatsNodeOSCPU struct {
	Percent     int                `json:"percent"`
	LoadAverage map[string]float64 `json:"load_average"` // keys are: 1m, 5m, and 15m
}

type NodesStatsNodeOSMem

type NodesStatsNodeOSMem struct {
	Total        string `json:"total"`
	TotalInBytes int64  `json:"total_in_bytes"`
	Free         string `json:"free"`
	FreeInBytes  int64  `json:"free_in_bytes"`
	Used         string `json:"used"`
	UsedInBytes  int64  `json:"used_in_bytes"`
	FreePercent  int    `json:"free_percent"`
	UsedPercent  int    `json:"used_percent"`
}

type NodesStatsNodeOSSwap

type NodesStatsNodeOSSwap struct {
	Total        string `json:"total"`
	TotalInBytes int64  `json:"total_in_bytes"`
	Free         string `json:"free"`
	FreeInBytes  int64  `json:"free_in_bytes"`
	Used         string `json:"used"`
	UsedInBytes  int64  `json:"used_in_bytes"`
}

type NodesStatsNodeProcess

type NodesStatsNodeProcess struct {
	Timestamp           int64 `json:"timestamp"`
	OpenFileDescriptors int64 `json:"open_file_descriptors"`
	MaxFileDescriptors  int64 `json:"max_file_descriptors"`
	CPU                 struct {
		Percent       int    `json:"percent"`
		Total         string `json:"total"`
		TotalInMillis int64  `json:"total_in_millis"`
	} `json:"cpu"`
	Mem struct {
		TotalVirtual        string `json:"total_virtual"`
		TotalVirtualInBytes int64  `json:"total_virtual_in_bytes"`
	} `json:"mem"`
}

type NodesStatsNodeThreadPool

type NodesStatsNodeThreadPool struct {
	Threads   int   `json:"threads"`
	Queue     int   `json:"queue"`
	Active    int   `json:"active"`
	Rejected  int64 `json:"rejected"`
	Largest   int   `json:"largest"`
	Completed int64 `json:"completed"`
}

type NodesStatsNodeTransport

type NodesStatsNodeTransport struct {
	ServerOpen    int    `json:"server_open"`
	RxCount       int64  `json:"rx_count"`
	RxSize        string `json:"rx_size"`
	RxSizeInBytes int64  `json:"rx_size_in_bytes"`
	TxCount       int64  `json:"tx_count"`
	TxSize        string `json:"tx_size"`
	TxSizeInBytes int64  `json:"tx_size_in_bytes"`
}

type NodesStatsPercolateStats

type NodesStatsPercolateStats struct {
	Total             int64  `json:"total"`
	Time              string `json:"time"`
	TimeInMillis      int64  `json:"time_in_millis"`
	Current           int64  `json:"current"`
	MemorySize        string `json:"memory_size"`
	MemorySizeInBytes int64  `json:"memory_size_in_bytes"`
	Queries           int64  `json:"queries"`
}

type NodesStatsQueryCacheStats

type NodesStatsQueryCacheStats struct {
	MemorySize        string `json:"memory_size"`
	MemorySizeInBytes int64  `json:"memory_size_in_bytes"`
	TotalCount        int64  `json:"total_count"`
	HitCount          int64  `json:"hit_count"`
	MissCount         int64  `json:"miss_count"`
	CacheSize         int64  `json:"cache_size"`
	CacheCount        int64  `json:"cache_count"`
	Evictions         int64  `json:"evictions"`
}

type NodesStatsRecoveryStats

type NodesStatsRecoveryStats struct {
	CurrentAsSource int `json:"current_as_source"`
	CurrentAsTarget int `json:"current_as_target"`
}

type NodesStatsRefreshStats

type NodesStatsRefreshStats struct {
	Total             int64  `json:"total"`
	TotalTime         string `json:"total_time"`
	TotalTimeInMillis int64  `json:"total_time_in_millis"`
}

type NodesStatsRequestCacheStats

type NodesStatsRequestCacheStats struct {
	MemorySize        string `json:"memory_size"`
	MemorySizeInBytes int64  `json:"memory_size_in_bytes"`
	Evictions         int64  `json:"evictions"`
	HitCount          int64  `json:"hit_count"`
	MissCount         int64  `json:"miss_count"`
}

type NodesStatsResponse

type NodesStatsResponse struct {
	ClusterName string                     `json:"cluster_name"`
	Nodes       map[string]*NodesStatsNode `json:"nodes"`
}

NodesStatsResponse is the response of NodesStatsService.Do.

type NodesStatsScriptStats

type NodesStatsScriptStats struct {
	Compilations   int64 `json:"compilations"`
	CacheEvictions int64 `json:"cache_evictions"`
}

type NodesStatsSearchStats

type NodesStatsSearchStats struct {
	OpenContexts       int64  `json:"open_contexts"`
	QueryTotal         int64  `json:"query_total"`
	QueryTime          string `json:"query_time"`
	QueryTimeInMillis  int64  `json:"query_time_in_millis"`
	QueryCurrent       int64  `json:"query_current"`
	FetchTotal         int64  `json:"fetch_total"`
	FetchTime          string `json:"fetch_time"`
	FetchTimeInMillis  int64  `json:"fetch_time_in_millis"`
	FetchCurrent       int64  `json:"fetch_current"`
	ScrollTotal        int64  `json:"scroll_total"`
	ScrollTime         string `json:"scroll_time"`
	ScrollTimeInMillis int64  `json:"scroll_time_in_millis"`
	ScrollCurrent      int64  `json:"scroll_current"`

	Groups map[string]*NodesStatsSearchStats `json:"groups"` // stats for individual groups
}

type NodesStatsSegmentsStats

type NodesStatsSegmentsStats struct {
	Count                       int64  `json:"count"`
	Memory                      string `json:"memory"`
	MemoryInBytes               int64  `json:"memory_in_bytes"`
	TermsMemory                 string `json:"terms_memory"`
	TermsMemoryInBytes          int64  `json:"terms_memory_in_bytes"`
	StoredFieldsMemory          string `json:"stored_fields_memory"`
	StoredFieldsMemoryInBytes   int64  `json:"stored_fields_memory_in_bytes"`
	TermVectorsMemory           string `json:"term_vectors_memory"`
	TermVectorsMemoryInBytes    int64  `json:"term_vectors_memory_in_bytes"`
	NormsMemory                 string `json:"norms_memory"`
	NormsMemoryInBytes          int64  `json:"norms_memory_in_bytes"`
	DocValuesMemory             string `json:"doc_values_memory"`
	DocValuesMemoryInBytes      int64  `json:"doc_values_memory_in_bytes"`
	IndexWriterMemory           string `json:"index_writer_memory"`
	IndexWriterMemoryInBytes    int64  `json:"index_writer_memory_in_bytes"`
	IndexWriterMaxMemory        string `json:"index_writer_max_memory"`
	IndexWriterMaxMemoryInBytes int64  `json:"index_writer_max_memory_in_bytes"`
	VersionMapMemory            string `json:"version_map_memory"`
	VersionMapMemoryInBytes     int64  `json:"version_map_memory_in_bytes"`
	FixedBitSetMemory           string `json:"fixed_bit_set"` // not a typo
	FixedBitSetMemoryInBytes    int64  `json:"fixed_bit_set_memory_in_bytes"`
}

type NodesStatsService

type NodesStatsService struct {
	// contains filtered or unexported fields
}

NodesStatsService returns node statistics. See http://www.elastic.co/guide/en/elasticsearch/reference/5.2/cluster-nodes-stats.html for details.
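
A minimal sketch of retrieving JVM and indices statistics for all nodes (client and ctx are assumed to be initialized):

stats, err := elastic.NewNodesStatsService(client).
	Metric("jvm", "indices").
	Human(true).
	Do(ctx)
if err != nil {
	// Handle error
	panic(err)
}
for id, node := range stats.Nodes {
	if node.JVM != nil && node.JVM.Mem != nil {
		fmt.Printf("node %s heap used: %s\n", id, node.JVM.Mem.HeapUsed)
	}
}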

func NewNodesStatsService

func NewNodesStatsService(client *Client) *NodesStatsService

NewNodesStatsService creates a new NodesStatsService.

func (*NodesStatsService) CompletionFields

func (s *NodesStatsService) CompletionFields(completionFields ...string) *NodesStatsService

CompletionFields is a list of fields for `fielddata` and `suggest` index metric (supports wildcards).

func (*NodesStatsService) Do

Do executes the operation.

func (*NodesStatsService) FielddataFields

func (s *NodesStatsService) FielddataFields(fielddataFields ...string) *NodesStatsService

FielddataFields is a list of fields for `fielddata` index metric (supports wildcards).

func (*NodesStatsService) Fields

func (s *NodesStatsService) Fields(fields ...string) *NodesStatsService

Fields is a list of fields for `fielddata` and `completion` index metric (supports wildcards).

func (*NodesStatsService) Groups

func (s *NodesStatsService) Groups(groups bool) *NodesStatsService

Groups is a list of search groups for `search` index metric.

func (*NodesStatsService) Human

func (s *NodesStatsService) Human(human bool) *NodesStatsService

Human indicates whether to return time and byte values in human-readable format.

func (*NodesStatsService) IndexMetric

func (s *NodesStatsService) IndexMetric(indexMetric ...string) *NodesStatsService

IndexMetric limits the information returned for the `indices` metric to the specified index metrics. It isn't used if the `indices` (or `all`) metric isn't specified.

func (*NodesStatsService) Level

func (s *NodesStatsService) Level(level string) *NodesStatsService

Level specifies whether to return indices stats aggregated at node, index or shard level.

func (*NodesStatsService) Metric

func (s *NodesStatsService) Metric(metric ...string) *NodesStatsService

Metric limits the information returned to the specified metrics.

func (*NodesStatsService) NodeId

func (s *NodesStatsService) NodeId(nodeId ...string) *NodesStatsService

NodeId is a list of node IDs or names to limit the returned information; use `_local` to return information from the node you're connecting to, leave empty to get information from all nodes.

func (*NodesStatsService) Pretty

func (s *NodesStatsService) Pretty(pretty bool) *NodesStatsService

Pretty indicates that the JSON response should be indented and human readable.

func (*NodesStatsService) Timeout

func (s *NodesStatsService) Timeout(timeout string) *NodesStatsService

Timeout specifies an explicit operation timeout.

func (*NodesStatsService) Types

func (s *NodesStatsService) Types(types ...string) *NodesStatsService

Types is a list of document types for the `indexing` index metric.

func (*NodesStatsService) Validate

func (s *NodesStatsService) Validate() error

Validate checks if the operation is valid.

type NodesStatsStoreStats

type NodesStatsStoreStats struct {
	Size        string `json:"size"`
	SizeInBytes int64  `json:"size_in_bytes"`
}

type NodesStatsSuggestStats

type NodesStatsSuggestStats struct {
	Total             int64  `json:"total"`
	TotalTime         string `json:"total_time"`
	TotalTimeInMillis int64  `json:"total_time_in_millis"`
	Current           int64  `json:"current"`
}

type NodesStatsTranslogStats

type NodesStatsTranslogStats struct {
	Operations  int64  `json:"operations"`
	Size        string `json:"size"`
	SizeInBytes int64  `json:"size_in_bytes"`
}

type NodesStatsWarmerStats

type NodesStatsWarmerStats struct {
	Current           int64  `json:"current"`
	Total             int64  `json:"total"`
	TotalTime         string `json:"total_time"`
	TotalTimeInMillis int64  `json:"total_time_in_millis"`
}

type Notify

type Notify func(error)

Notify is a notify-on-error function. It receives the error returned from an operation.

Notice that if the backoff policy indicates to stop retrying, the notify function isn't called.

type Operation

type Operation func() error

An Operation is executed by Retry() or RetryNotify(). The operation will be retried using a backoff policy if it returns an error.
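
A hedged sketch of retrying an operation with exponential backoff and a notify callback (this assumes the package-level RetryNotify and NewExponentialBackoff helpers; the operation itself is illustrative):

op := func() error {
	// e.g. perform a request that may fail transiently
	return nil
}
notify := func(err error) {
	log.Printf("retrying after error: %v", err)
}
err := elastic.RetryNotify(op, elastic.NewExponentialBackoff(100*time.Millisecond, 8*time.Second), notify)
if err != nil {
	log.Printf("giving up: %v", err) // the backoff policy stopped retrying
}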

type ParentIdQuery

type ParentIdQuery struct {
	// contains filtered or unexported fields
}

ParentIdQuery can be used to find child documents which belong to a particular parent.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-parent-id-query.html
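
A minimal sketch of finding child documents of a given parent (the child type "blog_tag" and parent id are illustrative):

q := elastic.NewParentIdQuery("blog_tag", "1").IgnoreUnmapped(true)
// Use q like any other Query, e.g. client.Search().Index("blogs").Query(q).Do(ctx).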

func NewParentIdQuery

func NewParentIdQuery(typ, id string) *ParentIdQuery

NewParentIdQuery creates and initializes a new parent_id query.

func (*ParentIdQuery) Boost

func (q *ParentIdQuery) Boost(boost float64) *ParentIdQuery

Boost sets the boost for this query.

func (*ParentIdQuery) Id

func (q *ParentIdQuery) Id(id string) *ParentIdQuery

Id sets the id.

func (*ParentIdQuery) IgnoreUnmapped

func (q *ParentIdQuery) IgnoreUnmapped(ignore bool) *ParentIdQuery

IgnoreUnmapped specifies whether unmapped types should be ignored. If set to false, the query fails when an unmapped type is found.

func (*ParentIdQuery) InnerHit

func (q *ParentIdQuery) InnerHit(innerHit *InnerHit) *ParentIdQuery

InnerHit sets the inner hit definition in the scope of this query, reusing the defined type and query.

func (*ParentIdQuery) QueryName

func (q *ParentIdQuery) QueryName(queryName string) *ParentIdQuery

QueryName specifies the query name for the filter that can be used when searching for matched filters per hit.

func (*ParentIdQuery) Source

func (q *ParentIdQuery) Source() (interface{}, error)

Source returns JSON for the parent_id query.

func (*ParentIdQuery) Type

func (q *ParentIdQuery) Type(typ string) *ParentIdQuery

Type sets the parent type.

type PercentageScoreSignificanceHeuristic

type PercentageScoreSignificanceHeuristic struct{}

PercentageScoreSignificanceHeuristic implements the algorithm described in https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-significantterms-aggregation.html#_percentage.

func NewPercentageScoreSignificanceHeuristic

func NewPercentageScoreSignificanceHeuristic() *PercentageScoreSignificanceHeuristic

NewPercentageScoreSignificanceHeuristic initializes a new instance of PercentageScoreSignificanceHeuristic.

func (*PercentageScoreSignificanceHeuristic) Name

Name returns the name of the heuristic in the REST interface.

func (*PercentageScoreSignificanceHeuristic) Source

func (sh *PercentageScoreSignificanceHeuristic) Source() (interface{}, error)

Source returns the parameters that need to be added to the REST parameters.

type PercentileRanksAggregation

type PercentileRanksAggregation struct {
	// contains filtered or unexported fields
}

PercentileRanksAggregation calculates percentile ranks over numeric values extracted from the aggregated documents. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-percentile-rank-aggregation.html

func NewPercentileRanksAggregation

func NewPercentileRanksAggregation() *PercentileRanksAggregation

func (*PercentileRanksAggregation) Compression

func (a *PercentileRanksAggregation) Compression(compression float64) *PercentileRanksAggregation

func (*PercentileRanksAggregation) Estimator

func (*PercentileRanksAggregation) Field

func (*PercentileRanksAggregation) Format

func (*PercentileRanksAggregation) Meta

func (a *PercentileRanksAggregation) Meta(metaData map[string]interface{}) *PercentileRanksAggregation

Meta sets the meta data to be included in the aggregation response.

func (*PercentileRanksAggregation) Script

func (*PercentileRanksAggregation) Source

func (a *PercentileRanksAggregation) Source() (interface{}, error)

func (*PercentileRanksAggregation) SubAggregation

func (a *PercentileRanksAggregation) SubAggregation(name string, subAggregation Aggregation) *PercentileRanksAggregation

func (*PercentileRanksAggregation) Values

type PercentilesAggregation

type PercentilesAggregation struct {
	// contains filtered or unexported fields
}

PercentilesAggregation calculates one or more percentiles over numeric values extracted from the aggregated documents. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-percentile-aggregation.html
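
A minimal sketch of computing selected percentiles of a numeric field (field and index names are illustrative):

agg := elastic.NewPercentilesAggregation().
	Field("load_time").
	Percentiles(50, 95, 99)
// Attach agg to a search, e.g. client.Search().Index("logs").Size(0).Aggregation("load_time_outlier", agg).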

func NewPercentilesAggregation

func NewPercentilesAggregation() *PercentilesAggregation

func (*PercentilesAggregation) Compression

func (a *PercentilesAggregation) Compression(compression float64) *PercentilesAggregation

func (*PercentilesAggregation) Estimator

func (a *PercentilesAggregation) Estimator(estimator string) *PercentilesAggregation

func (*PercentilesAggregation) Field

func (*PercentilesAggregation) Format

func (*PercentilesAggregation) Meta

func (a *PercentilesAggregation) Meta(metaData map[string]interface{}) *PercentilesAggregation

Meta sets the meta data to be included in the aggregation response.

func (*PercentilesAggregation) Percentiles

func (a *PercentilesAggregation) Percentiles(percentiles ...float64) *PercentilesAggregation

func (*PercentilesAggregation) Script

func (*PercentilesAggregation) Source

func (a *PercentilesAggregation) Source() (interface{}, error)

func (*PercentilesAggregation) SubAggregation

func (a *PercentilesAggregation) SubAggregation(name string, subAggregation Aggregation) *PercentilesAggregation

type PercentilesBucketAggregation

type PercentilesBucketAggregation struct {
	// contains filtered or unexported fields
}

PercentilesBucketAggregation is a sibling pipeline aggregation which calculates percentiles across all buckets of a specified metric in a sibling aggregation. The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-pipeline-percentiles-bucket-aggregation.html
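
A hedged sketch of computing percentiles across the buckets of a sibling aggregation (the buckets_path "sales_per_month>sales" is illustrative, and the variadic Percents signature is assumed):

agg := elastic.NewPercentilesBucketAggregation().
	BucketsPath("sales_per_month>sales").
	Percents(25, 50, 75)
// agg is added as a sibling of the "sales_per_month" aggregation in the same search.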

func NewPercentilesBucketAggregation

func NewPercentilesBucketAggregation() *PercentilesBucketAggregation

NewPercentilesBucketAggregation creates and initializes a new PercentilesBucketAggregation.

func (*PercentilesBucketAggregation) BucketsPath

func (p *PercentilesBucketAggregation) BucketsPath(bucketsPaths ...string) *PercentilesBucketAggregation

BucketsPath sets the paths to the buckets to use for this pipeline aggregator.

func (*PercentilesBucketAggregation) Format

Format to apply to the output value of this aggregation.

func (*PercentilesBucketAggregation) GapInsertZeros

GapInsertZeros inserts zeros for gaps in the series.

func (*PercentilesBucketAggregation) GapPolicy

GapPolicy defines what should be done when a gap in the series is discovered. Valid values include "insert_zeros" or "skip". Default is "insert_zeros".

func (*PercentilesBucketAggregation) GapSkip

GapSkip skips gaps in the series.

func (*PercentilesBucketAggregation) Meta

func (p *PercentilesBucketAggregation) Meta(metaData map[string]interface{}) *PercentilesBucketAggregation

Meta sets the meta data to be included in the aggregation response.

func (*PercentilesBucketAggregation) Percents

Percents sets the percentiles to calculate for this aggregation.

func (*PercentilesBucketAggregation) Source

func (p *PercentilesBucketAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface.

type PercolatorQuery

type PercolatorQuery struct {
	// contains filtered or unexported fields
}

PercolatorQuery can be used to match queries stored in an index.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-percolate-query.html
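
A minimal sketch of matching stored queries against an in-memory document (the "query" field name and document are illustrative):

doc := map[string]interface{}{"message": "A new bonsai tree in the office"}
q := elastic.NewPercolatorQuery().Field("query").Document(doc)
// Use q like any other Query, e.g. client.Search().Index("queries").Query(q).Do(ctx).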

func NewPercolatorQuery

func NewPercolatorQuery() *PercolatorQuery

NewPercolatorQuery creates and initializes a new Percolator query.

func (*PercolatorQuery) Document

func (q *PercolatorQuery) Document(doc interface{}) *PercolatorQuery

func (*PercolatorQuery) DocumentType deprecated

func (q *PercolatorQuery) DocumentType(typ string) *PercolatorQuery

Deprecated: DocumentType is deprecated as of 6.0.

func (*PercolatorQuery) Field

func (q *PercolatorQuery) Field(field string) *PercolatorQuery

func (*PercolatorQuery) IndexedDocumentId

func (q *PercolatorQuery) IndexedDocumentId(id string) *PercolatorQuery

func (*PercolatorQuery) IndexedDocumentIndex

func (q *PercolatorQuery) IndexedDocumentIndex(index string) *PercolatorQuery

func (*PercolatorQuery) IndexedDocumentPreference

func (q *PercolatorQuery) IndexedDocumentPreference(preference string) *PercolatorQuery

func (*PercolatorQuery) IndexedDocumentRouting

func (q *PercolatorQuery) IndexedDocumentRouting(routing string) *PercolatorQuery

func (*PercolatorQuery) IndexedDocumentType

func (q *PercolatorQuery) IndexedDocumentType(typ string) *PercolatorQuery

func (*PercolatorQuery) IndexedDocumentVersion

func (q *PercolatorQuery) IndexedDocumentVersion(version int64) *PercolatorQuery

func (*PercolatorQuery) Source

func (q *PercolatorQuery) Source() (interface{}, error)

Source returns JSON for the percolate query.

type PerformRequestOptions

type PerformRequestOptions struct {
	Method       string
	Path         string
	Params       url.Values
	Body         interface{}
	ContentType  string
	IgnoreErrors []int
	Retrier      Retrier
}

PerformRequestOptions must be passed into PerformRequest.
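
A hedged sketch of issuing a raw request, assuming Client.PerformRequest accepts PerformRequestOptions in this version (the path is illustrative):

res, err := client.PerformRequest(ctx, elastic.PerformRequestOptions{
	Method: "GET",
	Path:   "/_cluster/health",
})
if err != nil {
	// Handle error
	panic(err)
}
_ = res // raw response from Elasticsearch, including the body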

type PhraseSuggester

type PhraseSuggester struct {
	Suggester
	// contains filtered or unexported fields
}

PhraseSuggester provides an API to access word alternatives on a per token basis within a certain string distance. For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-suggesters-phrase.html.
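
A minimal sketch of requesting phrase suggestions as part of a search (index and field names are illustrative; client and ctx are assumed to be initialized):

sug := elastic.NewPhraseSuggester("my-suggestion").
	Text("noble prize").
	Field("title.trigram").
	Size(1).
	GramSize(3)
res, err := client.Search().
	Index("articles").
	Suggester(sug).
	Do(ctx)
if err != nil {
	// Handle error
	panic(err)
}
_ = res.Suggest["my-suggestion"] // suggested alternatives, if any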

func NewPhraseSuggester

func NewPhraseSuggester(name string) *PhraseSuggester

NewPhraseSuggester creates a new PhraseSuggester.

func (*PhraseSuggester) Analyzer

func (q *PhraseSuggester) Analyzer(analyzer string) *PhraseSuggester

func (*PhraseSuggester) CandidateGenerator

func (q *PhraseSuggester) CandidateGenerator(generator CandidateGenerator) *PhraseSuggester

func (*PhraseSuggester) CandidateGenerators

func (q *PhraseSuggester) CandidateGenerators(generators ...CandidateGenerator) *PhraseSuggester

func (*PhraseSuggester) ClearCandidateGenerator

func (q *PhraseSuggester) ClearCandidateGenerator() *PhraseSuggester

func (*PhraseSuggester) CollateParams

func (q *PhraseSuggester) CollateParams(collateParams map[string]interface{}) *PhraseSuggester

func (*PhraseSuggester) CollatePreference

func (q *PhraseSuggester) CollatePreference(collatePreference string) *PhraseSuggester

func (*PhraseSuggester) CollatePrune

func (q *PhraseSuggester) CollatePrune(collatePrune bool) *PhraseSuggester

func (*PhraseSuggester) CollateQuery

func (q *PhraseSuggester) CollateQuery(collateQuery string) *PhraseSuggester

func (*PhraseSuggester) Confidence

func (q *PhraseSuggester) Confidence(confidence float64) *PhraseSuggester

func (*PhraseSuggester) ContextQueries

func (q *PhraseSuggester) ContextQueries(queries ...SuggesterContextQuery) *PhraseSuggester

func (*PhraseSuggester) ContextQuery

func (q *PhraseSuggester) ContextQuery(query SuggesterContextQuery) *PhraseSuggester

func (*PhraseSuggester) Field

func (q *PhraseSuggester) Field(field string) *PhraseSuggester

func (*PhraseSuggester) ForceUnigrams

func (q *PhraseSuggester) ForceUnigrams(forceUnigrams bool) *PhraseSuggester

func (*PhraseSuggester) GramSize

func (q *PhraseSuggester) GramSize(gramSize int) *PhraseSuggester

func (*PhraseSuggester) Highlight

func (q *PhraseSuggester) Highlight(preTag, postTag string) *PhraseSuggester

func (*PhraseSuggester) MaxErrors

func (q *PhraseSuggester) MaxErrors(maxErrors float64) *PhraseSuggester

func (*PhraseSuggester) Name

func (q *PhraseSuggester) Name() string

func (*PhraseSuggester) RealWordErrorLikelihood

func (q *PhraseSuggester) RealWordErrorLikelihood(realWordErrorLikelihood float64) *PhraseSuggester

func (*PhraseSuggester) Separator

func (q *PhraseSuggester) Separator(separator string) *PhraseSuggester

func (*PhraseSuggester) ShardSize

func (q *PhraseSuggester) ShardSize(shardSize int) *PhraseSuggester

func (*PhraseSuggester) Size

func (q *PhraseSuggester) Size(size int) *PhraseSuggester

func (*PhraseSuggester) SmoothingModel

func (q *PhraseSuggester) SmoothingModel(smoothingModel SmoothingModel) *PhraseSuggester

func (*PhraseSuggester) Source

func (q *PhraseSuggester) Source(includeName bool) (interface{}, error)

Source generates the source for the phrase suggester.

func (*PhraseSuggester) Text

func (q *PhraseSuggester) Text(text string) *PhraseSuggester

func (*PhraseSuggester) TokenLimit

func (q *PhraseSuggester) TokenLimit(tokenLimit int) *PhraseSuggester

type PingResult

type PingResult struct {
	Name        string `json:"name"`
	ClusterName string `json:"cluster_name"`
	Version     struct {
		Number         string `json:"number"`
		BuildHash      string `json:"build_hash"`
		BuildTimestamp string `json:"build_timestamp"`
		BuildSnapshot  bool   `json:"build_snapshot"`
		LuceneVersion  string `json:"lucene_version"`
	} `json:"version"`
	TagLine string `json:"tagline"`
}

PingResult is the result returned from querying the Elasticsearch server.

type PingService

type PingService struct {
	// contains filtered or unexported fields
}

PingService checks if an Elasticsearch server on a given URL is alive. When asked for, it can also return various information about the Elasticsearch server, e.g. the Elasticsearch version number.

Ping simply starts an HTTP GET request to the URL of the server. If the server responds with HTTP Status code 200 OK, the server is alive.
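
A minimal sketch of pinging a node and inspecting the reported version (the URL is illustrative; client and ctx are assumed to be initialized):

info, code, err := elastic.NewPingService(client).
	URL("http://127.0.0.1:9200").
	Do(ctx)
if err != nil {
	// Handle error
	panic(err)
}
fmt.Printf("Elasticsearch %s returned status %d\n", info.Version.Number, code)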

func NewPingService

func NewPingService(client *Client) *PingService

func (*PingService) Do

func (s *PingService) Do(ctx context.Context) (*PingResult, int, error)

Do returns the PingResult, the HTTP status code of the Elasticsearch server, and an error.

func (*PingService) HttpHeadOnly

func (s *PingService) HttpHeadOnly(httpHeadOnly bool) *PingService

HttpHeadOnly makes the service only return the status code in Do; the PingResult will be nil.

func (*PingService) Pretty

func (s *PingService) Pretty(pretty bool) *PingService

func (*PingService) Timeout

func (s *PingService) Timeout(timeout string) *PingService

func (*PingService) URL

func (s *PingService) URL(url string) *PingService

type PrefixQuery

type PrefixQuery struct {
	// contains filtered or unexported fields
}

PrefixQuery matches documents that have fields containing terms with a specified prefix (not analyzed).

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-prefix-query.html

Example
package main

import (
	"context"

	"github.com/olivere/elastic"
)

func main() {
	// Get a client to the local Elasticsearch instance.
	client, err := elastic.NewClient()
	if err != nil {
		// Handle error
		panic(err)
	}

	// Define prefix query
	q := elastic.NewPrefixQuery("user", "oli")
	q = q.QueryName("my_query_name")

	searchResult, err := client.Search().
		Index("twitter").
		Query(q).
		Pretty(true).
		Do(context.Background())
	if err != nil {
		// Handle error
		panic(err)
	}
	_ = searchResult
}
Output:

func NewPrefixQuery

func NewPrefixQuery(name string, prefix string) *PrefixQuery

NewPrefixQuery creates and initializes a new PrefixQuery.

func (*PrefixQuery) Boost

func (q *PrefixQuery) Boost(boost float64) *PrefixQuery

Boost sets the boost for this query.

func (*PrefixQuery) QueryName

func (q *PrefixQuery) QueryName(queryName string) *PrefixQuery

QueryName sets the query name for the filter that can be used when searching for matched_filters per hit.

func (*PrefixQuery) Rewrite

func (q *PrefixQuery) Rewrite(rewrite string) *PrefixQuery

func (*PrefixQuery) Source

func (q *PrefixQuery) Source() (interface{}, error)

Source returns JSON for the query.

type ProfileResult

type ProfileResult struct {
	Type          string           `json:"type"`
	Description   string           `json:"description,omitempty"`
	NodeTime      string           `json:"time,omitempty"`
	NodeTimeNanos int64            `json:"time_in_nanos,omitempty"`
	Breakdown     map[string]int64 `json:"breakdown,omitempty"`
	Children      []ProfileResult  `json:"children,omitempty"`
}

ProfileResult is the internal representation of a profiled query, corresponding to a single node in the query tree.

type PutMappingResponse

type PutMappingResponse struct {
	Acknowledged       bool   `json:"acknowledged"`
	ShardsAcknowledged bool   `json:"shards_acknowledged"`
	Index              string `json:"index,omitempty"`
}

PutMappingResponse is the response of IndicesPutMappingService.Do.

type Query

type Query interface {
	// Source returns the JSON-serializable query request.
	Source() (interface{}, error)
}

Query represents the generic query interface. A query's sole purpose is to return the source of the query as a JSON-serializable object. Returning map[string]interface{} is the norm for queries.

type QueryProfileShardResult

type QueryProfileShardResult struct {
	Query       []ProfileResult `json:"query,omitempty"`
	RewriteTime int64           `json:"rewrite_time,omitempty"`
	Collector   []interface{}   `json:"collector,omitempty"`
}

QueryProfileShardResult is a container class to hold the profile results for a single shard in the request. It contains a list of query profiles, a collector tree and a total rewrite tree.

type QueryRescorer

type QueryRescorer struct {
	// contains filtered or unexported fields
}

func NewQueryRescorer

func NewQueryRescorer(query Query) *QueryRescorer

func (*QueryRescorer) Name

func (r *QueryRescorer) Name() string

func (*QueryRescorer) QueryWeight

func (r *QueryRescorer) QueryWeight(queryWeight float64) *QueryRescorer

func (*QueryRescorer) RescoreQueryWeight

func (r *QueryRescorer) RescoreQueryWeight(rescoreQueryWeight float64) *QueryRescorer

func (*QueryRescorer) ScoreMode

func (r *QueryRescorer) ScoreMode(scoreMode string) *QueryRescorer

func (*QueryRescorer) Source

func (r *QueryRescorer) Source() (interface{}, error)

type QueryStringQuery

type QueryStringQuery struct {
	// contains filtered or unexported fields
}

QueryStringQuery uses the query parser in order to parse its content.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-query-string-query.html
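
A sketch of typical usage (index and field names are illustrative; client and ctx as in the other examples on this page):

q := elastic.NewQueryStringQuery("golang OR user:oli*").
	DefaultField("message").
	AnalyzeWildcard(true)
searchResult, err := client.Search().
	Index("twitter").
	Query(q).
	Do(ctx)
if err != nil {
	// Handle error
}
_ = searchResult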

func NewQueryStringQuery

func NewQueryStringQuery(queryString string) *QueryStringQuery

NewQueryStringQuery creates and initializes a new QueryStringQuery.

func (*QueryStringQuery) AllowLeadingWildcard

func (q *QueryStringQuery) AllowLeadingWildcard(allowLeadingWildcard bool) *QueryStringQuery

AllowLeadingWildcard specifies whether leading wildcards should be allowed or not (defaults to true).

func (*QueryStringQuery) AnalyzeWildcard

func (q *QueryStringQuery) AnalyzeWildcard(analyzeWildcard bool) *QueryStringQuery

AnalyzeWildcard indicates whether to enable analysis on wildcard and prefix queries.

func (*QueryStringQuery) Analyzer

func (q *QueryStringQuery) Analyzer(analyzer string) *QueryStringQuery

Analyzer is an optional analyzer used to analyze the query string. Note that if a field has a search analyzer defined for it, that analyzer will be used automatically. Defaults to the smart search analyzer.

func (*QueryStringQuery) Boost

func (q *QueryStringQuery) Boost(boost float64) *QueryStringQuery

Boost sets the boost for this query.

func (*QueryStringQuery) DefaultField

func (q *QueryStringQuery) DefaultField(defaultField string) *QueryStringQuery

DefaultField specifies the field to run against when no prefix field is specified. Only relevant when not explicitly adding fields the query string will run against.

func (*QueryStringQuery) DefaultOperator

func (q *QueryStringQuery) DefaultOperator(operator string) *QueryStringQuery

DefaultOperator sets the boolean operator of the query parser used to parse the query string.

In default mode (OR) terms without any modifiers are considered optional, e.g. "capital of Hungary" is equal to "capital OR of OR Hungary".

In AND mode, terms are considered to be in conjunction. The above mentioned query is then parsed as "capital AND of AND Hungary".

func (*QueryStringQuery) EnablePositionIncrements

func (q *QueryStringQuery) EnablePositionIncrements(enablePositionIncrements bool) *QueryStringQuery

EnablePositionIncrements indicates whether to enable position increments in result query. Defaults to true.

When set, result phrase and multi-phrase queries will be aware of position increments. Useful when e.g. a StopFilter increases the position increment of the token that follows an omitted token.

func (*QueryStringQuery) Escape

func (q *QueryStringQuery) Escape(escape bool) *QueryStringQuery

Escape performs escaping of the query string.

func (*QueryStringQuery) Field

func (q *QueryStringQuery) Field(field string) *QueryStringQuery

Field adds a field to run the query string against.

func (*QueryStringQuery) FieldWithBoost

func (q *QueryStringQuery) FieldWithBoost(field string, boost float64) *QueryStringQuery

FieldWithBoost adds a field to run the query string against with a specific boost.

func (*QueryStringQuery) Fuzziness

func (q *QueryStringQuery) Fuzziness(fuzziness string) *QueryStringQuery

Fuzziness sets the edit distance for fuzzy queries. Default is "AUTO".

func (*QueryStringQuery) FuzzyMaxExpansions

func (q *QueryStringQuery) FuzzyMaxExpansions(fuzzyMaxExpansions int) *QueryStringQuery

func (*QueryStringQuery) FuzzyPrefixLength

func (q *QueryStringQuery) FuzzyPrefixLength(fuzzyPrefixLength int) *QueryStringQuery

FuzzyPrefixLength sets the minimum prefix length for fuzzy queries. Default is 1.

func (*QueryStringQuery) FuzzyRewrite

func (q *QueryStringQuery) FuzzyRewrite(fuzzyRewrite string) *QueryStringQuery

func (*QueryStringQuery) Lenient

func (q *QueryStringQuery) Lenient(lenient bool) *QueryStringQuery

Lenient indicates whether the query string parser should be lenient when parsing field values. It defaults to the index setting and if not set, defaults to false.

func (*QueryStringQuery) Locale deprecated

func (q *QueryStringQuery) Locale(locale string) *QueryStringQuery

Locale specifies the locale to be used for string conversions.

Deprecated: Decision is now made by the analyzer.

func (*QueryStringQuery) LowercaseExpandedTerms deprecated

func (q *QueryStringQuery) LowercaseExpandedTerms(lowercaseExpandedTerms bool) *QueryStringQuery

LowercaseExpandedTerms indicates whether terms of wildcard, prefix, fuzzy and range queries are automatically lower-cased or not. Default is true.

Deprecated: Decision is now made by the analyzer.

func (*QueryStringQuery) MaxDeterminizedState

func (q *QueryStringQuery) MaxDeterminizedState(maxDeterminizedStates int) *QueryStringQuery

MaxDeterminizedState protects against too-difficult regular expression queries.

func (*QueryStringQuery) MinimumShouldMatch

func (q *QueryStringQuery) MinimumShouldMatch(minimumShouldMatch string) *QueryStringQuery

func (*QueryStringQuery) PhraseSlop

func (q *QueryStringQuery) PhraseSlop(phraseSlop int) *QueryStringQuery

PhraseSlop sets the default slop for phrases. If zero, then exact matches are required. Default value is zero.

func (*QueryStringQuery) QueryName

func (q *QueryStringQuery) QueryName(queryName string) *QueryStringQuery

QueryName sets the query name for the filter that can be used when searching for matched_filters per hit.

func (*QueryStringQuery) QuoteAnalyzer

func (q *QueryStringQuery) QuoteAnalyzer(quoteAnalyzer string) *QueryStringQuery

QuoteAnalyzer is an optional analyzer to be used to analyze the query string for phrase searches. Note that if a field has a search analyzer defined for it, that analyzer will be used automatically. Defaults to the smart search analyzer.

func (*QueryStringQuery) QuoteFieldSuffix

func (q *QueryStringQuery) QuoteFieldSuffix(quoteFieldSuffix string) *QueryStringQuery

QuoteFieldSuffix is an optional field name suffix to automatically try and add to the field searched when using quoted text.

func (*QueryStringQuery) Rewrite

func (q *QueryStringQuery) Rewrite(rewrite string) *QueryStringQuery

func (*QueryStringQuery) Source

func (q *QueryStringQuery) Source() (interface{}, error)

Source returns JSON for the query.

func (*QueryStringQuery) TieBreaker

func (q *QueryStringQuery) TieBreaker(tieBreaker float64) *QueryStringQuery

TieBreaker is used when more than one field is used with the query string, and combined queries are using dismax.

func (*QueryStringQuery) TimeZone

func (q *QueryStringQuery) TimeZone(timeZone string) *QueryStringQuery

TimeZone can be used to automatically adjust to/from fields using a timezone. Only used with date fields, of course.

func (*QueryStringQuery) Type

func (q *QueryStringQuery) Type(typ string) *QueryStringQuery

Type sets how multiple fields should be combined to build textual part queries, e.g. "best_fields".

type RandomFunction

type RandomFunction struct {
	// contains filtered or unexported fields
}

RandomFunction builds a random score function. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html#_random for details.
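
A random score function is typically attached to a FunctionScoreQuery (documented elsewhere in this package); a sketch, with client and ctx as in the other examples on this page:

fn := elastic.NewRandomFunction().Seed(42).Weight(1.5)
fsq := elastic.NewFunctionScoreQuery().
	Query(elastic.NewMatchAllQuery()).
	AddScoreFunc(fn)
searchResult, err := client.Search().Index("twitter").Query(fsq).Do(ctx)
if err != nil {
	// Handle error
}
_ = searchResult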

func NewRandomFunction

func NewRandomFunction() *RandomFunction

NewRandomFunction initializes and returns a new RandomFunction.

func (*RandomFunction) GetWeight

func (fn *RandomFunction) GetWeight() *float64

GetWeight returns the adjusted score. It is part of the ScoreFunction interface. Returns nil if weight is not specified.

func (*RandomFunction) Name

func (fn *RandomFunction) Name() string

Name represents the JSON field name under which the output of Source needs to be serialized by FunctionScoreQuery (see FunctionScoreQuery.Source).

func (*RandomFunction) Seed

func (fn *RandomFunction) Seed(seed interface{}) *RandomFunction

Seed is documented in 1.6 as a numeric value. However, in the source code of the Java client, it also accepts strings. So we accept both here, too.

func (*RandomFunction) Source

func (fn *RandomFunction) Source() (interface{}, error)

Source returns the serializable JSON data of this score function.

func (*RandomFunction) Weight

func (fn *RandomFunction) Weight(weight float64) *RandomFunction

Weight adjusts the score of the score function. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html#_using_function_score for details.

type RangeAggregation

type RangeAggregation struct {
	// contains filtered or unexported fields
}

RangeAggregation is a multi-bucket value source based aggregation that enables the user to define a set of ranges - each representing a bucket. During the aggregation process, the values extracted from each document will be checked against each bucket range and "bucket" the relevant/matching document. Note that this aggregation includes the from value and excludes the to value for each range. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-range-aggregation.html
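
A sketch of a range aggregation over a numeric field (field and index names are illustrative; client and ctx as in the other examples on this page):

agg := elastic.NewRangeAggregation().
	Field("retweets").
	AddUnboundedFrom(10). // retweets < 10
	AddRange(10, 100).    // 10 <= retweets < 100
	AddUnboundedTo(100)   // retweets >= 100
searchResult, err := client.Search().
	Index("twitter").
	Aggregation("retweet_ranges", agg).
	Size(0). // aggregations only, no hits
	Do(ctx)
if err != nil {
	// Handle error
}
_ = searchResult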

func NewRangeAggregation

func NewRangeAggregation() *RangeAggregation

func (*RangeAggregation) AddRange

func (a *RangeAggregation) AddRange(from, to interface{}) *RangeAggregation

func (*RangeAggregation) AddRangeWithKey

func (a *RangeAggregation) AddRangeWithKey(key string, from, to interface{}) *RangeAggregation

func (*RangeAggregation) AddUnboundedFrom

func (a *RangeAggregation) AddUnboundedFrom(to interface{}) *RangeAggregation

func (*RangeAggregation) AddUnboundedFromWithKey

func (a *RangeAggregation) AddUnboundedFromWithKey(key string, to interface{}) *RangeAggregation

func (*RangeAggregation) AddUnboundedTo

func (a *RangeAggregation) AddUnboundedTo(from interface{}) *RangeAggregation

func (*RangeAggregation) AddUnboundedToWithKey

func (a *RangeAggregation) AddUnboundedToWithKey(key string, from interface{}) *RangeAggregation

func (*RangeAggregation) Between

func (a *RangeAggregation) Between(from, to interface{}) *RangeAggregation

func (*RangeAggregation) BetweenWithKey

func (a *RangeAggregation) BetweenWithKey(key string, from, to interface{}) *RangeAggregation

func (*RangeAggregation) Field

func (a *RangeAggregation) Field(field string) *RangeAggregation

func (*RangeAggregation) Gt

func (a *RangeAggregation) Gt(from interface{}) *RangeAggregation

func (*RangeAggregation) GtWithKey

func (a *RangeAggregation) GtWithKey(key string, from interface{}) *RangeAggregation

func (*RangeAggregation) Keyed

func (a *RangeAggregation) Keyed(keyed bool) *RangeAggregation

func (*RangeAggregation) Lt

func (a *RangeAggregation) Lt(to interface{}) *RangeAggregation

func (*RangeAggregation) LtWithKey

func (a *RangeAggregation) LtWithKey(key string, to interface{}) *RangeAggregation

func (*RangeAggregation) Meta

func (a *RangeAggregation) Meta(metaData map[string]interface{}) *RangeAggregation

Meta sets the meta data to be included in the aggregation response.

func (*RangeAggregation) Missing

func (a *RangeAggregation) Missing(missing interface{}) *RangeAggregation

Missing configures the value to use when documents miss a value.

func (*RangeAggregation) Script

func (a *RangeAggregation) Script(script *Script) *RangeAggregation

func (*RangeAggregation) Source

func (a *RangeAggregation) Source() (interface{}, error)

func (*RangeAggregation) SubAggregation

func (a *RangeAggregation) SubAggregation(name string, subAggregation Aggregation) *RangeAggregation

func (*RangeAggregation) Unmapped

func (a *RangeAggregation) Unmapped(unmapped bool) *RangeAggregation

type RangeQuery

type RangeQuery struct {
	// contains filtered or unexported fields
}

RangeQuery matches documents with fields that have terms within a certain range.

For details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-range-query.html
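
A sketch of a date range query (field name, index and dates are illustrative; client and ctx as in the other examples on this page):

q := elastic.NewRangeQuery("created").
	Gte("2012-01-01").
	Lt("2013-01-01").
	Format("yyyy-MM-dd").
	TimeZone("+01:00")
searchResult, err := client.Search().Index("twitter").Query(q).Do(ctx)
if err != nil {
	// Handle error
}
_ = searchResult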

func NewRangeQuery

func NewRangeQuery(name string) *RangeQuery

NewRangeQuery creates and initializes a new RangeQuery.

func (*RangeQuery) Boost

func (q *RangeQuery) Boost(boost float64) *RangeQuery

Boost sets the boost for this query.

func (*RangeQuery) Format

func (q *RangeQuery) Format(format string) *RangeQuery

Format is used for date fields. In that case, we can set the format to be used instead of the mapper format.

func (*RangeQuery) From

func (q *RangeQuery) From(from interface{}) *RangeQuery

From indicates the from part of the RangeQuery. Use nil to indicate an unbounded from part.

func (*RangeQuery) Gt

func (q *RangeQuery) Gt(from interface{}) *RangeQuery

Gt indicates a greater-than value for the from part. Use nil to indicate an unbounded from part.

func (*RangeQuery) Gte

func (q *RangeQuery) Gte(from interface{}) *RangeQuery

Gte indicates a greater-than-or-equal value for the from part. Use nil to indicate an unbounded from part.

func (*RangeQuery) IncludeLower

func (q *RangeQuery) IncludeLower(includeLower bool) *RangeQuery

IncludeLower indicates whether the lower bound should be included or not. Defaults to true.

func (*RangeQuery) IncludeUpper

func (q *RangeQuery) IncludeUpper(includeUpper bool) *RangeQuery

IncludeUpper indicates whether the upper bound should be included or not. Defaults to true.

func (*RangeQuery) Lt

func (q *RangeQuery) Lt(to interface{}) *RangeQuery

Lt indicates a less-than value for the to part. Use nil to indicate an unbounded to part.

func (*RangeQuery) Lte

func (q *RangeQuery) Lte(to interface{}) *RangeQuery

Lte indicates a less-than-or-equal value for the to part. Use nil to indicate an unbounded to part.

func (*RangeQuery) QueryName

func (q *RangeQuery) QueryName(queryName string) *RangeQuery

QueryName sets the query name for the filter that can be used when searching for matched_filters per hit.

func (*RangeQuery) Relation

func (q *RangeQuery) Relation(relation string) *RangeQuery

Relation is used for range fields and can be one of "within", "contains", "intersects" (default) and "disjoint".

func (*RangeQuery) Source

func (q *RangeQuery) Source() (interface{}, error)

Source returns JSON for the query.

func (*RangeQuery) TimeZone

func (q *RangeQuery) TimeZone(timeZone string) *RangeQuery

TimeZone is used for date fields. In that case, we can adjust the from/to fields using a timezone.

func (*RangeQuery) To

func (q *RangeQuery) To(to interface{}) *RangeQuery

To indicates the to part of the RangeQuery. Use nil to indicate an unbounded to part.

type RawStringQuery

type RawStringQuery string

RawStringQuery can be used to treat a string representation of an ES query as a Query. Example usage:

q := RawStringQuery("{\"match_all\":{}}")
db.Search().Query(q).From(1).Size(100).Do()

func NewRawStringQuery

func NewRawStringQuery(q string) RawStringQuery

NewRawStringQuery initializes a new RawStringQuery. It is the same as RawStringQuery(q).

func (RawStringQuery) Source

func (q RawStringQuery) Source() (interface{}, error)

Source returns the JSON-encoded body.

type RefreshResult

type RefreshResult struct {
	Shards shardsInfo `json:"_shards,omitempty"`
}

RefreshResult is the outcome of RefreshService.Do.

type RefreshService

type RefreshService struct {
	// contains filtered or unexported fields
}

RefreshService explicitly refreshes one or more indices. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/indices-refresh.html.
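
A sketch of refreshing one or more indices (index names are illustrative; client and ctx as in the other examples on this page):

res, err := elastic.NewRefreshService(client).
	Index("twitter", "comments").
	Do(ctx)
if err != nil {
	// Handle error
}
_ = res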

func NewRefreshService

func NewRefreshService(client *Client) *RefreshService

NewRefreshService creates a new instance of RefreshService.

func (*RefreshService) Do

func (s *RefreshService) Do(ctx context.Context) (*RefreshResult, error)

Do executes the request.

func (*RefreshService) Index

func (s *RefreshService) Index(index ...string) *RefreshService

Index specifies the indices to refresh.

func (*RefreshService) Pretty

func (s *RefreshService) Pretty(pretty bool) *RefreshService

Pretty asks Elasticsearch to return indented JSON.

type RegexCompletionSuggesterOptions

type RegexCompletionSuggesterOptions struct {
	// contains filtered or unexported fields
}

RegexCompletionSuggesterOptions represents the options for regex completion suggester.

func NewRegexCompletionSuggesterOptions

func NewRegexCompletionSuggesterOptions() *RegexCompletionSuggesterOptions

NewRegexCompletionSuggesterOptions initializes a new RegexCompletionSuggesterOptions instance.

func (*RegexCompletionSuggesterOptions) Flags

Flags represents internal regex flags. See https://www.elastic.co/guide/en/elasticsearch/reference/5.6/search-suggesters-completion.html#regex for details.

func (*RegexCompletionSuggesterOptions) MaxDeterminizedStates

MaxDeterminizedStates represents the maximum automaton states allowed for regex expansion. See https://www.elastic.co/guide/en/elasticsearch/reference/5.6/search-suggesters-completion.html#regex for details.

func (*RegexCompletionSuggesterOptions) Source

func (o *RegexCompletionSuggesterOptions) Source() (interface{}, error)

Source creates the JSON data.

type RegexpQuery

type RegexpQuery struct {
	// contains filtered or unexported fields
}

RegexpQuery allows you to use regular expression term queries.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-regexp-query.html
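
A sketch (the pattern, field and index are illustrative; client and ctx as in the other examples on this page):

q := elastic.NewRegexpQuery("user", "oli[a-z]*").
	MaxDeterminizedStates(20000)
searchResult, err := client.Search().Index("twitter").Query(q).Do(ctx)
if err != nil {
	// Handle error
}
_ = searchResult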

func NewRegexpQuery

func NewRegexpQuery(name string, regexp string) *RegexpQuery

NewRegexpQuery creates and initializes a new RegexpQuery.

func (*RegexpQuery) Boost

func (q *RegexpQuery) Boost(boost float64) *RegexpQuery

Boost sets the boost for this query.

func (*RegexpQuery) Flags

func (q *RegexpQuery) Flags(flags string) *RegexpQuery

Flags sets the regexp flags.

func (*RegexpQuery) MaxDeterminizedStates

func (q *RegexpQuery) MaxDeterminizedStates(maxDeterminizedStates int) *RegexpQuery

MaxDeterminizedStates protects against complex regular expressions.

func (*RegexpQuery) QueryName

func (q *RegexpQuery) QueryName(queryName string) *RegexpQuery

QueryName sets the query name for the filter that can be used when searching for matched_filters per hit

func (*RegexpQuery) Rewrite

func (q *RegexpQuery) Rewrite(rewrite string) *RegexpQuery

func (*RegexpQuery) Source

func (q *RegexpQuery) Source() (interface{}, error)

Source returns the JSON-serializable query data.

type ReindexDestination

type ReindexDestination struct {
	// contains filtered or unexported fields
}

ReindexDestination is the destination of a Reindex API call. It is basically the meta data of a BulkIndexRequest.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-reindex.html for details.

func NewReindexDestination

func NewReindexDestination() *ReindexDestination

NewReindexDestination returns a new ReindexDestination.

func (*ReindexDestination) Discard

func (r *ReindexDestination) Discard() *ReindexDestination

Discard sets the routing on the bulk request sent for each match to null.

func (*ReindexDestination) Index

func (r *ReindexDestination) Index(index string) *ReindexDestination

Index specifies the name of the Elasticsearch index to use as the destination of a reindexing process.

func (*ReindexDestination) Keep

func (r *ReindexDestination) Keep() *ReindexDestination

Keep sets the routing on the bulk request sent for each match to the routing of the match (the default).

func (*ReindexDestination) OpType

func (r *ReindexDestination) OpType(opType string) *ReindexDestination

OpType specifies if this request should follow create-only or upsert behavior. This follows the OpType of the standard document index API. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-index_.html#operation-type for details.

func (*ReindexDestination) Parent

func (r *ReindexDestination) Parent(parent string) *ReindexDestination

Parent specifies the identifier of the parent document (if available).

func (*ReindexDestination) Routing

func (r *ReindexDestination) Routing(routing string) *ReindexDestination

Routing specifies a routing value for the reindexing request. It can be "keep", "discard", or start with "=". The latter specifies the routing on the bulk request.

func (*ReindexDestination) Source

func (r *ReindexDestination) Source() (interface{}, error)

Source returns a serializable JSON representation of the reindex destination.

func (*ReindexDestination) Type

func (r *ReindexDestination) Type(typ string) *ReindexDestination

Type specifies the Elasticsearch type to use for reindexing.

func (*ReindexDestination) Version

func (r *ReindexDestination) Version(version int64) *ReindexDestination

Version indicates the version of the document as part of an optimistic concurrency model.

func (*ReindexDestination) VersionType

func (r *ReindexDestination) VersionType(versionType string) *ReindexDestination

VersionType specifies how versions are created.

type ReindexRemoteInfo

type ReindexRemoteInfo struct {
	// contains filtered or unexported fields
}

ReindexRemoteInfo contains information for reindexing from a remote cluster.
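
A sketch of reindexing from a remote cluster (host, credentials and index names are placeholders; client and ctx as in the other examples on this page):

remote := elastic.NewReindexRemoteInfo().
	Host("http://remote-cluster:9200").
	Username("user").
	Password("secret").
	ConnectTimeout("10s")
src := elastic.NewReindexSource().Index("twitter").RemoteInfo(remote)
dst := elastic.NewReindexDestination().Index("twitter_copy")
res, err := elastic.NewReindexService(client).Source(src).Destination(dst).Do(ctx)
if err != nil {
	// Handle error
}
_ = res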

func NewReindexRemoteInfo

func NewReindexRemoteInfo() *ReindexRemoteInfo

NewReindexRemoteInfo creates a new ReindexRemoteInfo.

func (*ReindexRemoteInfo) ConnectTimeout

func (ri *ReindexRemoteInfo) ConnectTimeout(timeout string) *ReindexRemoteInfo

ConnectTimeout sets the connection timeout to connect with the remote cluster. Use ES compatible values like e.g. "30s" or "1m".

func (*ReindexRemoteInfo) Host

func (ri *ReindexRemoteInfo) Host(host string) *ReindexRemoteInfo

Host sets the host information of the remote cluster. It must be of the form "http(s)://<hostname>:<port>"

func (*ReindexRemoteInfo) Password

func (ri *ReindexRemoteInfo) Password(password string) *ReindexRemoteInfo

Password sets the password to authenticate with the remote cluster.

func (*ReindexRemoteInfo) SocketTimeout

func (ri *ReindexRemoteInfo) SocketTimeout(timeout string) *ReindexRemoteInfo

SocketTimeout sets the socket timeout to connect with the remote cluster. Use ES compatible values like e.g. "30s" or "1m".

func (*ReindexRemoteInfo) Source

func (ri *ReindexRemoteInfo) Source() (interface{}, error)

Source returns the serializable JSON data for the request.

func (*ReindexRemoteInfo) Username

func (ri *ReindexRemoteInfo) Username(username string) *ReindexRemoteInfo

Username sets the username to authenticate with the remote cluster.

type ReindexService

type ReindexService struct {
	// contains filtered or unexported fields
}

ReindexService copies documents from one index to another. It is documented at https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-reindex.html.
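
A minimal sketch of a local reindex (index names are illustrative; client and ctx as in the other examples on this page):

res, err := elastic.NewReindexService(client).
	SourceIndex("twitter").
	DestinationIndex("new_twitter").
	WaitForCompletion(true).
	Do(ctx)
if err != nil {
	// Handle error
}
_ = res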

func NewReindexService

func NewReindexService(client *Client) *ReindexService

NewReindexService creates a new ReindexService.

func (*ReindexService) AbortOnVersionConflict

func (s *ReindexService) AbortOnVersionConflict() *ReindexService

AbortOnVersionConflict aborts the request on version conflicts. It is an alias to setting Conflicts("abort").

func (*ReindexService) Body

func (s *ReindexService) Body(body interface{}) *ReindexService

Body specifies the body of the request to send to Elasticsearch. It overrides settings specified with other setters, e.g. Query.

func (*ReindexService) Conflicts

func (s *ReindexService) Conflicts(conflicts string) *ReindexService

Conflicts indicates what to do when the process detects version conflicts. Possible values are "proceed" and "abort".

func (*ReindexService) Destination

func (s *ReindexService) Destination(destination *ReindexDestination) *ReindexService

Destination specifies the destination of the reindexing process.

func (*ReindexService) DestinationIndex

func (s *ReindexService) DestinationIndex(index string) *ReindexService

DestinationIndex specifies the destination index of the reindexing process.

func (*ReindexService) DestinationIndexAndType

func (s *ReindexService) DestinationIndexAndType(index, typ string) *ReindexService

DestinationIndexAndType specifies both the destination index and type of the reindexing process.

func (*ReindexService) Do

func (s *ReindexService) Do(ctx context.Context) (*BulkIndexByScrollResponse, error)

Do executes the operation.

func (*ReindexService) DoAsync

func (s *ReindexService) DoAsync(ctx context.Context) (*StartTaskResult, error)

DoAsync executes the reindexing operation asynchronously by starting a new task. Callers need to use the Task Management API to watch the outcome of the reindexing operation.

func (*ReindexService) Pretty

func (s *ReindexService) Pretty(pretty bool) *ReindexService

Pretty indicates that the JSON response be indented and human readable.

func (*ReindexService) ProceedOnVersionConflict

func (s *ReindexService) ProceedOnVersionConflict() *ReindexService

ProceedOnVersionConflict proceeds with the request in case of version conflicts. It is an alias to setting Conflicts("proceed").

func (*ReindexService) Refresh

func (s *ReindexService) Refresh(refresh string) *ReindexService

Refresh indicates whether Elasticsearch should refresh the affected indexes immediately.

func (*ReindexService) RequestsPerSecond

func (s *ReindexService) RequestsPerSecond(requestsPerSecond int) *ReindexService

RequestsPerSecond specifies the throttle for this request, in sub-requests per second. -1 means no throttle, as does "unlimited", which is the only non-numeric value accepted.

func (*ReindexService) Script

func (s *ReindexService) Script(script *Script) *ReindexService

Script allows for modification of the documents as they are reindexed from source to destination.

func (*ReindexService) Size

func (s *ReindexService) Size(size int) *ReindexService

Size sets an upper limit for the number of processed documents.

func (*ReindexService) Slices

func (s *ReindexService) Slices(slices int) *ReindexService

Slices specifies the number of slices this task should be divided into. Defaults to 1.

func (*ReindexService) Source

func (s *ReindexService) Source(source *ReindexSource) *ReindexService

Source specifies the source of the reindexing process.

func (*ReindexService) SourceIndex

func (s *ReindexService) SourceIndex(index string) *ReindexService

SourceIndex specifies the source index of the reindexing process.

func (*ReindexService) Timeout

func (s *ReindexService) Timeout(timeout string) *ReindexService

Timeout is the time each individual bulk request should wait for shards that are unavailable.

func (*ReindexService) Validate

func (s *ReindexService) Validate() error

Validate checks if the operation is valid.

func (*ReindexService) WaitForActiveShards

func (s *ReindexService) WaitForActiveShards(waitForActiveShards string) *ReindexService

WaitForActiveShards sets the number of shard copies that must be active before proceeding with the reindex operation. Defaults to 1, meaning the primary shard only. Set to `all` for all shard copies, otherwise set to any non-negative value less than or equal to the total number of copies for the shard (number of replicas + 1).

func (*ReindexService) WaitForCompletion

func (s *ReindexService) WaitForCompletion(waitForCompletion bool) *ReindexService

WaitForCompletion indicates whether Elasticsearch should block until the reindex is complete.

type ReindexSource

type ReindexSource struct {
	// contains filtered or unexported fields
}

ReindexSource specifies the source of a Reindex process.

func NewReindexSource

func NewReindexSource() *ReindexSource

NewReindexSource creates a new ReindexSource.

func (*ReindexSource) Index

func (r *ReindexSource) Index(indices ...string) *ReindexSource

func (*ReindexSource) Preference

func (r *ReindexSource) Preference(preference string) *ReindexSource

func (*ReindexSource) Query

func (r *ReindexSource) Query(query Query) *ReindexSource

func (*ReindexSource) RemoteInfo

func (s *ReindexSource) RemoteInfo(ri *ReindexRemoteInfo) *ReindexSource

RemoteInfo sets up reindexing from a remote cluster.

func (*ReindexSource) RequestCache

func (r *ReindexSource) RequestCache(requestCache bool) *ReindexSource

func (*ReindexSource) Scroll

func (r *ReindexSource) Scroll(scroll string) *ReindexSource

func (*ReindexSource) SearchType

func (r *ReindexSource) SearchType(searchType string) *ReindexSource

SearchType is the search operation type. Possible values are "query_then_fetch" and "dfs_query_then_fetch".

func (*ReindexSource) SearchTypeDfsQueryThenFetch

func (r *ReindexSource) SearchTypeDfsQueryThenFetch() *ReindexSource

func (*ReindexSource) SearchTypeQueryThenFetch

func (r *ReindexSource) SearchTypeQueryThenFetch() *ReindexSource

func (*ReindexSource) Sort

func (s *ReindexSource) Sort(field string, ascending bool) *ReindexSource

Sort adds a sort order.

func (*ReindexSource) SortBy

func (s *ReindexSource) SortBy(sorter ...Sorter) *ReindexSource

SortBy adds a sort order.

func (*ReindexSource) SortWithInfo

func (s *ReindexSource) SortWithInfo(info SortInfo) *ReindexSource

SortWithInfo adds a sort order.

func (*ReindexSource) Source

func (r *ReindexSource) Source() (interface{}, error)

Source returns a serializable JSON representation of the reindex source.

func (*ReindexSource) Type

func (r *ReindexSource) Type(types ...string) *ReindexSource

type Request

type Request http.Request

Request is an Elasticsearch-specific HTTP request.

func NewRequest

func NewRequest(method, url string) (*Request, error)

NewRequest creates a new Request. It wraps http.Request and adds features such as encoding the body.

func (*Request) SetBasicAuth

func (r *Request) SetBasicAuth(username, password string)

SetBasicAuth wraps http.Request's SetBasicAuth.

func (*Request) SetBody

func (r *Request) SetBody(body interface{}) error

SetBody encodes the body in the request.

type Rescore

type Rescore struct {
	// contains filtered or unexported fields
}

func NewRescore

func NewRescore() *Rescore

func (*Rescore) IsEmpty

func (r *Rescore) IsEmpty() bool

func (*Rescore) Rescorer

func (r *Rescore) Rescorer(rescorer Rescorer) *Rescore

func (*Rescore) Source

func (r *Rescore) Source() (interface{}, error)

func (*Rescore) WindowSize

func (r *Rescore) WindowSize(windowSize int) *Rescore

type Rescorer

type Rescorer interface {
	Name() string
	Source() (interface{}, error)
}

type Response

type Response struct {
	// StatusCode is the HTTP status code, e.g. 200.
	StatusCode int
	// Header is the HTTP header from the HTTP response.
	// Keys in the map are canonicalized (see http.CanonicalHeaderKey).
	Header http.Header
	// Body is the deserialized response body.
	Body json.RawMessage
}

Response represents a response from Elasticsearch.

type RestoreSource

type RestoreSource struct {
	Repository string `json:"repository"`
	Snapshot   string `json:"snapshot"`
	Version    string `json:"version"`
	Index      string `json:"index"`
}

type Retrier

type Retrier interface {
	// Retry is called when a request has failed. It decides whether to retry
	// the call, how long to wait for the next call, or whether to return an
	// error (which will be returned to the service that started the HTTP
	// request in the first place).
	//
	// Callers may also use this to inspect the HTTP request/response and
	// the error that happened. Additional data can be passed through via
	// the context.
	Retry(ctx context.Context, retry int, req *http.Request, resp *http.Response, err error) (time.Duration, bool, error)
}

Retrier decides whether to retry a failed HTTP request with Elasticsearch.

type RetrierFunc

type RetrierFunc func(context.Context, int, *http.Request, *http.Response, error) (time.Duration, bool, error)

RetrierFunc specifies the signature of a Retry function.
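
A sketch of a custom retrier that retries a fixed number of times with a constant delay; the type and its fields are purely illustrative, it only needs to satisfy the Retrier interface above (imports of context, net/http and time assumed):

// FixedRetrier retries up to maxRetries times, waiting a constant
// interval between attempts.
type FixedRetrier struct {
	wait       time.Duration
	maxRetries int
}

func (r *FixedRetrier) Retry(ctx context.Context, retry int, req *http.Request, resp *http.Response, err error) (time.Duration, bool, error) {
	if retry >= r.maxRetries {
		return 0, false, nil // stop retrying
	}
	return r.wait, true, nil // wait r.wait, then retry
}

Such a retrier can then be attached to a ScrollService via its Retrier method (see below) or configured as the client's default retrier.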

type ReverseNestedAggregation

type ReverseNestedAggregation struct {
	// contains filtered or unexported fields
}

ReverseNestedAggregation defines a special single bucket aggregation that enables aggregating on parent docs from nested documents. Effectively this aggregation can break out of the nested block structure and link to other nested structures or the root document, which allows nesting other aggregations that aren’t part of the nested object in a nested aggregation.

See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-reverse-nested-aggregation.html
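
A sketch of jumping back to parent documents from within a nested aggregation (the "comments" nested field and other names are illustrative; NestedAggregation and TermsAggregation are documented elsewhere in this package; client and ctx as in the other examples on this page):

byUser := elastic.NewTermsAggregation().
	Field("comments.username").
	SubAggregation("back_to_posts", elastic.NewReverseNestedAggregation())
nested := elastic.NewNestedAggregation().
	Path("comments").
	SubAggregation("by_user", byUser)
searchResult, err := client.Search().
	Index("posts").
	Aggregation("comments", nested).
	Size(0).
	Do(ctx)
if err != nil {
	// Handle error
}
_ = searchResult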

func NewReverseNestedAggregation

func NewReverseNestedAggregation() *ReverseNestedAggregation

NewReverseNestedAggregation initializes a new ReverseNestedAggregation bucket aggregation.

func (*ReverseNestedAggregation) Meta

func (a *ReverseNestedAggregation) Meta(metaData map[string]interface{}) *ReverseNestedAggregation

Meta sets the meta data to be included in the aggregation response.

func (*ReverseNestedAggregation) Path

func (a *ReverseNestedAggregation) Path(path string) *ReverseNestedAggregation

Path sets the path to use for this nested aggregation. The path must match the path to a nested object in the mappings. If it is not specified then this aggregation will go back to the root document.

func (*ReverseNestedAggregation) Source

func (a *ReverseNestedAggregation) Source() (interface{}, error)

func (*ReverseNestedAggregation) SubAggregation

func (a *ReverseNestedAggregation) SubAggregation(name string, subAggregation Aggregation) *ReverseNestedAggregation

type SamplerAggregation

type SamplerAggregation struct {
	// contains filtered or unexported fields
}

SamplerAggregation is a filtering aggregation used to limit any sub aggregations' processing to a sample of the top-scoring documents. Optionally, diversity settings can be used to limit the number of matches that share a common value such as an "author".

See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-sampler-aggregation.html

func NewSamplerAggregation

func NewSamplerAggregation() *SamplerAggregation

func (*SamplerAggregation) ExecutionHint

func (a *SamplerAggregation) ExecutionHint(hint string) *SamplerAggregation

func (*SamplerAggregation) MaxDocsPerValue

func (a *SamplerAggregation) MaxDocsPerValue(maxDocsPerValue int) *SamplerAggregation

func (*SamplerAggregation) Meta

func (a *SamplerAggregation) Meta(metaData map[string]interface{}) *SamplerAggregation

Meta sets the meta data to be included in the aggregation response.

func (*SamplerAggregation) ShardSize

func (a *SamplerAggregation) ShardSize(shardSize int) *SamplerAggregation

ShardSize sets the maximum number of docs returned from each shard.

func (*SamplerAggregation) Source

func (a *SamplerAggregation) Source() (interface{}, error)

func (*SamplerAggregation) SubAggregation

func (a *SamplerAggregation) SubAggregation(name string, subAggregation Aggregation) *SamplerAggregation

type ScoreFunction

type ScoreFunction interface {
	Name() string
	GetWeight() *float64 // returns the weight which must be serialized at the level of FunctionScoreQuery
	Source() (interface{}, error)
}

ScoreFunction is used in combination with the Function Score Query.

type ScoreSort

type ScoreSort struct {
	Sorter
	// contains filtered or unexported fields
}

ScoreSort sorts by relevancy score.

func NewScoreSort

func NewScoreSort() *ScoreSort

NewScoreSort creates a new ScoreSort.

func (*ScoreSort) Asc

func (s *ScoreSort) Asc() *ScoreSort

Asc sets ascending sort order.

func (*ScoreSort) Desc

func (s *ScoreSort) Desc() *ScoreSort

Desc sets descending sort order.

func (*ScoreSort) Order

func (s *ScoreSort) Order(ascending bool) *ScoreSort

Order defines whether sorting ascending (default) or descending.

func (*ScoreSort) Source

func (s *ScoreSort) Source() (interface{}, error)

Source returns the JSON-serializable data.

type Script

type Script struct {
	// contains filtered or unexported fields
}

Script holds all the parameters necessary to compile or find in cache and then execute a script.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/modules-scripting.html for details of scripting.
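
A sketch of an inline Painless script used as a filter via ScriptQuery (field and parameter names are illustrative; client and ctx as in the other examples on this page):

script := elastic.NewScriptInline("doc['retweets'].value > params.min").
	Lang("painless").
	Param("min", 10)
q := elastic.NewScriptQuery(script)
searchResult, err := client.Search().Index("twitter").Query(q).Do(ctx)
if err != nil {
	// Handle error
}
_ = searchResult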

func NewScript

func NewScript(script string) *Script

NewScript creates and initializes a new Script.

func NewScriptInline

func NewScriptInline(script string) *Script

NewScriptInline creates and initializes a new inline script, i.e. code.

func NewScriptStored

func NewScriptStored(script string) *Script

NewScriptStored creates and initializes a new stored script.

func (*Script) Lang

func (s *Script) Lang(lang string) *Script

Lang sets the language of the script. In Elasticsearch 6.x the default scripting language is "painless"; other languages such as "expression" and "mustache" are also available. To use certain languages, you need to configure your server and/or add plugins. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/modules-scripting.html for details.

func (*Script) Param

func (s *Script) Param(name string, value interface{}) *Script

Param adds a key/value pair to the parameters that this script will be executed with.

func (*Script) Params

func (s *Script) Params(params map[string]interface{}) *Script

Params sets the map of parameters this script will be executed with.

func (*Script) Script

func (s *Script) Script(script string) *Script

Script is either the cache key of the script to be compiled/executed or the actual script source code for inline scripts. For indexed scripts this is the id used in the request. For file scripts this is the file name.

func (*Script) Source

func (s *Script) Source() (interface{}, error)

Source returns the JSON serializable data for this Script.

func (*Script) Type

func (s *Script) Type(typ string) *Script

Type sets the type of script: "inline" or "id".

type ScriptField

type ScriptField struct {
	FieldName string // name of the field
	// contains filtered or unexported fields
}

ScriptField is a single script field.

func NewScriptField

func NewScriptField(fieldName string, script *Script) *ScriptField

NewScriptField creates and initializes a new ScriptField.

func (*ScriptField) Source

func (f *ScriptField) Source() (interface{}, error)

Source returns the serializable JSON for the ScriptField.

type ScriptFunction

type ScriptFunction struct {
	// contains filtered or unexported fields
}

ScriptFunction builds a script score function. It uses a script to compute or influence the score of documents that match with the inner query or filter.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html#_script_score for details.

func NewScriptFunction

func NewScriptFunction(script *Script) *ScriptFunction

NewScriptFunction initializes and returns a new ScriptFunction.

func (*ScriptFunction) GetWeight

func (fn *ScriptFunction) GetWeight() *float64

GetWeight returns the adjusted score. It is part of the ScoreFunction interface. Returns nil if weight is not specified.

func (*ScriptFunction) Name

func (fn *ScriptFunction) Name() string

Name represents the JSON field name under which the output of Source needs to be serialized by FunctionScoreQuery (see FunctionScoreQuery.Source).

func (*ScriptFunction) Script

func (fn *ScriptFunction) Script(script *Script) *ScriptFunction

Script specifies the script to be executed.

func (*ScriptFunction) Source

func (fn *ScriptFunction) Source() (interface{}, error)

Source returns the serializable JSON data of this score function.

func (*ScriptFunction) Weight

func (fn *ScriptFunction) Weight(weight float64) *ScriptFunction

Weight adjusts the score of the score function. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html#_using_function_score for details.

type ScriptQuery

type ScriptQuery struct {
	// contains filtered or unexported fields
}

ScriptQuery allows defining scripts as filters.

For details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-script-query.html

func NewScriptQuery

func NewScriptQuery(script *Script) *ScriptQuery

NewScriptQuery creates and initializes a new ScriptQuery.

func (*ScriptQuery) QueryName

func (q *ScriptQuery) QueryName(queryName string) *ScriptQuery

QueryName sets the query name for the filter that can be used when searching for matched_filters per hit

func (*ScriptQuery) Source

func (q *ScriptQuery) Source() (interface{}, error)

Source returns JSON for the query.

type ScriptSignificanceHeuristic

type ScriptSignificanceHeuristic struct {
	// contains filtered or unexported fields
}

ScriptSignificanceHeuristic implements a scripted significance heuristic. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-significantterms-aggregation.html#_scripted for details.

func NewScriptSignificanceHeuristic

func NewScriptSignificanceHeuristic() *ScriptSignificanceHeuristic

NewScriptSignificanceHeuristic initializes a new instance of ScriptSignificanceHeuristic.

func (*ScriptSignificanceHeuristic) Name

func (sh *ScriptSignificanceHeuristic) Name() string

Name returns the name of the heuristic in the REST interface.

func (*ScriptSignificanceHeuristic) Script

func (sh *ScriptSignificanceHeuristic) Script(script *Script) *ScriptSignificanceHeuristic

Script specifies the script to use to get custom scores. The following parameters are available in the script: `_subset_freq`, `_superset_freq`, `_subset_size`, and `_superset_size`.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-significantterms-aggregation.html#_scripted for details.

func (*ScriptSignificanceHeuristic) Source

func (sh *ScriptSignificanceHeuristic) Source() (interface{}, error)

Source returns the parameters that need to be added to the REST parameters.

type ScriptSort

type ScriptSort struct {
	Sorter
	// contains filtered or unexported fields
}

ScriptSort sorts by a custom script. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/modules-scripting.html#modules-scripting for details about scripting.
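
A sketch of sorting by a computed numeric value (script, field and parameter names are illustrative; client and ctx as in the other examples on this page):

sorter := elastic.NewScriptSort(
	elastic.NewScriptInline("doc['retweets'].value * params.factor").
		Lang("painless").
		Param("factor", 1.5),
	"number",
).Desc()
searchResult, err := client.Search().
	Index("twitter").
	Query(elastic.NewMatchAllQuery()).
	SortBy(sorter).
	Do(ctx)
if err != nil {
	// Handle error
}
_ = searchResult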

func NewScriptSort

func NewScriptSort(script *Script, typ string) *ScriptSort

NewScriptSort creates and initializes a new ScriptSort. You must provide a script and a type, e.g. "string" or "number".

func (*ScriptSort) Asc

func (s *ScriptSort) Asc() *ScriptSort

Asc sets ascending sort order.

func (*ScriptSort) Desc

func (s *ScriptSort) Desc() *ScriptSort

Desc sets descending sort order.

func (*ScriptSort) NestedFilter

func (s *ScriptSort) NestedFilter(nestedFilter Query) *ScriptSort

NestedFilter sets a filter that nested objects should match with in order to be taken into account for sorting.

func (*ScriptSort) NestedPath

func (s *ScriptSort) NestedPath(nestedPath string) *ScriptSort

NestedPath is used if sorting occurs on a field that is inside a nested object.

func (*ScriptSort) NestedSort

func (s *ScriptSort) NestedSort(nestedSort *NestedSort) *ScriptSort

NestedSort is available starting with 6.1 and will replace NestedFilter and NestedPath.

func (*ScriptSort) Order

func (s *ScriptSort) Order(ascending bool) *ScriptSort

Order defines whether sorting ascending (default) or descending.

func (*ScriptSort) SortMode

func (s *ScriptSort) SortMode(sortMode string) *ScriptSort

SortMode specifies what values to pick in case a document contains multiple values for the targeted sort field. Possible values are: min or max.

func (*ScriptSort) Source

func (s *ScriptSort) Source() (interface{}, error)

Source returns the JSON-serializable data.

func (*ScriptSort) Type

func (s *ScriptSort) Type(typ string) *ScriptSort

Type sets the script type, which can be either "string" or "number".

type ScrollService

type ScrollService struct {
	// contains filtered or unexported fields
}

ScrollService iterates over pages of search results from Elasticsearch.
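
A sketch of the typical scroll loop (requires the io package for io.EOF; index name is illustrative; client and ctx as in the other examples on this page):

svc := elastic.NewScrollService(client).
	Index("twitter").
	Query(elastic.NewMatchAllQuery()).
	Size(100)
for {
	res, err := svc.Do(ctx)
	if err == io.EOF {
		break // no more pages
	}
	if err != nil {
		// Handle error
		break
	}
	for _, hit := range res.Hits.Hits {
		_ = hit // process each hit
	}
}
_ = svc.Clear(ctx) // optionally release the scroll cursor early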

func NewScrollService

func NewScrollService(client *Client) *ScrollService

NewScrollService initializes and returns a new ScrollService.

func (*ScrollService) AllowNoIndices

func (s *ScrollService) AllowNoIndices(allowNoIndices bool) *ScrollService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified).

func (*ScrollService) Body

func (s *ScrollService) Body(body interface{}) *ScrollService

Body sets the raw body to send to Elasticsearch. This can be e.g. a string, a map[string]interface{} or anything that can be serialized into JSON. Notice that setting the body disables the use of SearchSource and many other properties of the ScrollService.

func (*ScrollService) Clear

func (s *ScrollService) Clear(ctx context.Context) error

Clear cancels the current scroll operation. If you don't do this manually, the scroll will be expired automatically by Elasticsearch. You can control how long a scroll cursor is kept alive with the KeepAlive func.

func (*ScrollService) Do

func (s *ScrollService) Do(ctx context.Context) (*SearchResult, error)

Do returns the next search result. It will return io.EOF as error if there are no more search results.

func (*ScrollService) ExpandWildcards

func (s *ScrollService) ExpandWildcards(expandWildcards string) *ScrollService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*ScrollService) FetchSource

func (s *ScrollService) FetchSource(fetchSource bool) *ScrollService

FetchSource indicates whether the response should contain the stored _source for every hit.

func (*ScrollService) FetchSourceContext

func (s *ScrollService) FetchSourceContext(fetchSourceContext *FetchSourceContext) *ScrollService

FetchSourceContext indicates how the _source should be fetched.

func (*ScrollService) IgnoreUnavailable

func (s *ScrollService) IgnoreUnavailable(ignoreUnavailable bool) *ScrollService

IgnoreUnavailable indicates whether the specified concrete indices should be ignored when unavailable (missing or closed).

func (*ScrollService) Index

func (s *ScrollService) Index(indices ...string) *ScrollService

Index sets the name of one or more indices to iterate over.

func (*ScrollService) KeepAlive

func (s *ScrollService) KeepAlive(keepAlive string) *ScrollService

KeepAlive sets the maximum time after which the cursor will expire. It is "2m" by default.

func (*ScrollService) PostFilter

func (s *ScrollService) PostFilter(postFilter Query) *ScrollService

PostFilter is executed as the last filter. It only affects the search hits but not facets. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-post-filter.html for details.

func (*ScrollService) Preference

func (s *ScrollService) Preference(preference string) *ScrollService

Preference sets the preference to execute the search. Defaults to randomize across shards ("random"). Can be set to "_local" to prefer local shards, "_primary" to execute on primary shards only, or a custom value which guarantees that the same order will be used across different requests.

func (*ScrollService) Pretty

func (s *ScrollService) Pretty(pretty bool) *ScrollService

Pretty asks Elasticsearch to pretty-print the returned JSON.

func (*ScrollService) Query

func (s *ScrollService) Query(query Query) *ScrollService

Query sets the query to perform, e.g. a MatchAllQuery.

func (*ScrollService) Retrier

func (s *ScrollService) Retrier(retrier Retrier) *ScrollService

Retrier allows setting specific retry logic for this ScrollService. If not specified, it will use the client's default retrier.

func (*ScrollService) Routing

func (s *ScrollService) Routing(routings ...string) *ScrollService

Routing is a list of specific routing values to control the shards the search will be executed on.

func (*ScrollService) Scroll

func (s *ScrollService) Scroll(keepAlive string) *ScrollService

Scroll is an alias for KeepAlive, the time to keep the cursor alive (e.g. "5m" for 5 minutes).

func (*ScrollService) ScrollId

func (s *ScrollService) ScrollId(scrollId string) *ScrollService

ScrollId specifies the identifier of a scroll in action.

func (*ScrollService) SearchSource

func (s *ScrollService) SearchSource(searchSource *SearchSource) *ScrollService

SearchSource sets the search source builder to use with this iterator. Notice that only a certain number of properties can be used when scrolling, e.g. query and sorting.

func (*ScrollService) Size

func (s *ScrollService) Size(size int) *ScrollService

Size specifies the number of documents Elasticsearch should return from each shard, per page.

func (*ScrollService) Slice

func (s *ScrollService) Slice(sliceQuery Query) *ScrollService

Slice allows slicing the scroll request into several batches. This is supported in Elasticsearch 5.0 or later. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-scroll.html#sliced-scroll for details.

func (*ScrollService) Sort

func (s *ScrollService) Sort(field string, ascending bool) *ScrollService

Sort adds a sort order. This can have negative effects on the performance of the scroll operation as Elasticsearch needs to sort first.

func (*ScrollService) SortBy

func (s *ScrollService) SortBy(sorter ...Sorter) *ScrollService

SortBy specifies a sort order. Notice that sorting can have a negative impact on scroll performance.

func (*ScrollService) SortWithInfo

func (s *ScrollService) SortWithInfo(info SortInfo) *ScrollService

SortWithInfo specifies a sort order. Notice that sorting can have a negative impact on scroll performance.

func (*ScrollService) Type

func (s *ScrollService) Type(types ...string) *ScrollService

Type sets the name of one or more types to iterate over.

func (*ScrollService) Version

func (s *ScrollService) Version(version bool) *ScrollService

Version can be set to true to return a version for each search hit. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-version.html.

type SearchExplanation

type SearchExplanation struct {
	Value       float64             `json:"value"`             // e.g. 1.0
	Description string              `json:"description"`       // e.g. "boost" or "ConstantScore(*:*), product of:"
	Details     []SearchExplanation `json:"details,omitempty"` // recursive details
}

SearchExplanation explains how the score for a hit was computed. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-explain.html.

type SearchHit

type SearchHit struct {
	Score          *float64                       `json:"_score"`          // computed score
	Index          string                         `json:"_index"`          // index name
	Type           string                         `json:"_type"`           // type meta field
	Id             string                         `json:"_id"`             // external or internal
	Uid            string                         `json:"_uid"`            // uid meta field (see MapperService.java for all meta fields)
	Routing        string                         `json:"_routing"`        // routing meta field
	Parent         string                         `json:"_parent"`         // parent meta field
	Version        *int64                         `json:"_version"`        // version number, when Version is set to true in SearchService
	Sort           []interface{}                  `json:"sort"`            // sort information
	Highlight      SearchHitHighlight             `json:"highlight"`       // highlighter information
	Source         *json.RawMessage               `json:"_source"`         // stored document source
	Fields         map[string]interface{}         `json:"fields"`          // returned (stored) fields
	Explanation    *SearchExplanation             `json:"_explanation"`    // explains how the score was computed
	MatchedQueries []string                       `json:"matched_queries"` // matched queries
	InnerHits      map[string]*SearchHitInnerHits `json:"inner_hits"`      // inner hits with ES >= 1.5.0

}

SearchHit is a single hit.

type SearchHitHighlight

type SearchHitHighlight map[string][]string

SearchHitHighlight is the highlight information of a search hit. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-highlighting.html for a general discussion of highlighting.

type SearchHitInnerHits

type SearchHitInnerHits struct {
	Hits *SearchHits `json:"hits"`
}

type SearchHits

type SearchHits struct {
	TotalHits int64        `json:"total"`     // total number of hits found
	MaxScore  *float64     `json:"max_score"` // maximum score of all hits
	Hits      []*SearchHit `json:"hits"`      // the actual hits returned
}

SearchHits specifies the list of search hits.

type SearchProfile

type SearchProfile struct {
	Shards []SearchProfileShardResult `json:"shards"`
}

SearchProfile is a list of shard profiling data collected during query execution in the "profile" section of a SearchResult.

type SearchProfileShardResult

type SearchProfileShardResult struct {
	ID           string                    `json:"id"`
	Searches     []QueryProfileShardResult `json:"searches"`
	Aggregations []ProfileResult           `json:"aggregations"`
}

SearchProfileShardResult holds the profiling data for a single shard accessed during the search query or aggregation.

type SearchRequest

type SearchRequest struct {
	// contains filtered or unexported fields
}

SearchRequest combines a search request and its query details (see SearchSource). It is used in combination with MultiSearch.
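
A sketch of combining two SearchRequests in a MultiSearch call (the MultiSearch service and SearchSource are documented elsewhere in this package; index and field names are illustrative; client and ctx as in the other examples on this page):

req1 := elastic.NewSearchRequest().
	Index("twitter").
	SearchSource(elastic.NewSearchSource().Query(elastic.NewTermQuery("user", "olivere")))
req2 := elastic.NewSearchRequest().
	Index("twitter").
	SearchSource(elastic.NewSearchSource().Query(elastic.NewMatchAllQuery()))
res, err := client.MultiSearch().Add(req1, req2).Do(ctx)
if err != nil {
	// Handle error
}
_ = res // one response per request, in order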

func NewSearchRequest

func NewSearchRequest() *SearchRequest

NewSearchRequest creates a new search request.

func (*SearchRequest) AllowNoIndices

func (s *SearchRequest) AllowNoIndices(allowNoIndices bool) *SearchRequest

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified).

func (*SearchRequest) Body

func (r *SearchRequest) Body() (string, error)

Body provides access to the search body of the request, as generated by the DSL. Notice that Body is read-only; you must not change the request body.

Body is used e.g. by MultiSearch to get information about the search body of one SearchRequest. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-multi-search.html

func (*SearchRequest) ExpandWildcards

func (s *SearchRequest) ExpandWildcards(expandWildcards string) *SearchRequest

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*SearchRequest) HasIndices

func (r *SearchRequest) HasIndices() bool

func (*SearchRequest) IgnoreUnavailable

func (s *SearchRequest) IgnoreUnavailable(ignoreUnavailable bool) *SearchRequest

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*SearchRequest) Index

func (r *SearchRequest) Index(indices ...string) *SearchRequest

func (*SearchRequest) Preference

func (r *SearchRequest) Preference(preference string) *SearchRequest

func (*SearchRequest) RequestCache

func (r *SearchRequest) RequestCache(requestCache bool) *SearchRequest

func (*SearchRequest) Routing

func (r *SearchRequest) Routing(routing string) *SearchRequest

func (*SearchRequest) Routings

func (r *SearchRequest) Routings(routings ...string) *SearchRequest

func (*SearchRequest) Scroll

func (r *SearchRequest) Scroll(scroll string) *SearchRequest

func (*SearchRequest) SearchSource

func (r *SearchRequest) SearchSource(searchSource *SearchSource) *SearchRequest

func (*SearchRequest) SearchType

func (r *SearchRequest) SearchType(searchType string) *SearchRequest

SearchType must be one of "dfs_query_then_fetch" or "query_then_fetch".

func (*SearchRequest) SearchTypeDfsQueryThenFetch

func (r *SearchRequest) SearchTypeDfsQueryThenFetch() *SearchRequest

SearchTypeDfsQueryThenFetch sets search type to dfs_query_then_fetch.

func (*SearchRequest) SearchTypeQueryThenFetch

func (r *SearchRequest) SearchTypeQueryThenFetch() *SearchRequest

SearchTypeQueryThenFetch sets search type to query_then_fetch.

func (*SearchRequest) Source

func (r *SearchRequest) Source(source interface{}) *SearchRequest

func (*SearchRequest) Type

func (r *SearchRequest) Type(types ...string) *SearchRequest

type SearchResult

type SearchResult struct {
	TookInMillis int64          `json:"took"`              // search time in milliseconds
	ScrollId     string         `json:"_scroll_id"`        // only used with Scroll and Scan operations
	Hits         *SearchHits    `json:"hits"`              // the actual search hits
	Suggest      SearchSuggest  `json:"suggest"`           // results from suggesters
	Aggregations Aggregations   `json:"aggregations"`      // results from aggregations
	TimedOut     bool           `json:"timed_out"`         // true if the search timed out
	Error        *ErrorDetails  `json:"error,omitempty"`   // only used in MultiGet
	Profile      *SearchProfile `json:"profile,omitempty"` // profiling results, if optional Profile API was active for this search
	Shards       *shardsInfo    `json:"_shards,omitempty"` // shard information
}

SearchResult is the result of a search in Elasticsearch.

Example
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"reflect"
	"time"

	elastic "github.com/olivere/elastic"
)

type Tweet struct {
	User     string                `json:"user"`
	Message  string                `json:"message"`
	Retweets int                   `json:"retweets"`
	Image    string                `json:"image,omitempty"`
	Created  time.Time             `json:"created,omitempty"`
	Tags     []string              `json:"tags,omitempty"`
	Location string                `json:"location,omitempty"`
	Suggest  *elastic.SuggestField `json:"suggest_field,omitempty"`
}

func main() {
	client, err := elastic.NewClient()
	if err != nil {
		panic(err)
	}

	// Do a search
	searchResult, err := client.Search().Index("twitter").Query(elastic.NewMatchAllQuery()).Do(context.Background())
	if err != nil {
		panic(err)
	}

	// searchResult is of type SearchResult and returns hits, suggestions,
	// and all kinds of other information from Elasticsearch.
	fmt.Printf("Query took %d milliseconds\n", searchResult.TookInMillis)

	// Each is a utility function that iterates over hits in a search result.
	// It makes sure you don't need to check for nil values in the response.
	// However, it ignores errors in serialization. If you want full control
	// over iterating the hits, see below.
	var ttyp Tweet
	for _, item := range searchResult.Each(reflect.TypeOf(ttyp)) {
		t := item.(Tweet)
		fmt.Printf("Tweet by %s: %s\n", t.User, t.Message)
	}
	fmt.Printf("Found a total of %d tweets\n", searchResult.TotalHits())

	// Here's how you iterate hits with full control.
	if searchResult.Hits.TotalHits > 0 {
		fmt.Printf("Found a total of %d tweets\n", searchResult.Hits.TotalHits)

		// Iterate through results
		for _, hit := range searchResult.Hits.Hits {
			// hit.Index contains the name of the index

			// Deserialize hit.Source into a Tweet (could also be just a map[string]interface{}).
			var t Tweet
			err := json.Unmarshal(*hit.Source, &t)
			if err != nil {
				// Deserialization failed
			}

			// Work with tweet
			fmt.Printf("Tweet by %s: %s\n", t.User, t.Message)
		}
	} else {
		// No hits
		fmt.Print("Found no tweets\n")
	}
}
Output:

func (*SearchResult) Each

func (r *SearchResult) Each(typ reflect.Type) []interface{}

Each is a utility function to iterate over all hits. It saves you from checking for nil values. Notice that Each will ignore errors when serializing JSON, and hits with an empty or nil _source will yield an empty value.

func (*SearchResult) TotalHits

func (r *SearchResult) TotalHits() int64

TotalHits is a convenience function to return the number of hits for a search result.

type SearchService

type SearchService struct {
	// contains filtered or unexported fields
}

Search for documents in Elasticsearch.

Example
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"time"

	elastic "github.com/olivere/elastic"
)

type Tweet struct {
	User     string                `json:"user"`
	Message  string                `json:"message"`
	Retweets int                   `json:"retweets"`
	Image    string                `json:"image,omitempty"`
	Created  time.Time             `json:"created,omitempty"`
	Tags     []string              `json:"tags,omitempty"`
	Location string                `json:"location,omitempty"`
	Suggest  *elastic.SuggestField `json:"suggest_field,omitempty"`
}

func main() {
	// Get a client to the local Elasticsearch instance.
	client, err := elastic.NewClient()
	if err != nil {
		// Handle error
		panic(err)
	}

	// Search with a term query
	termQuery := elastic.NewTermQuery("user", "olivere")
	searchResult, err := client.Search().
		Index("twitter").        // search in index "twitter"
		Query(termQuery).        // specify the query
		Sort("user", true).      // sort by "user" field, ascending
		From(0).Size(10).        // take documents 0-9
		Pretty(true).            // pretty print request and response JSON
		Do(context.Background()) // execute
	if err != nil {
		// Handle error
		panic(err)
	}

	// searchResult is of type SearchResult and returns hits, suggestions,
	// and all kinds of other information from Elasticsearch.
	fmt.Printf("Query took %d milliseconds\n", searchResult.TookInMillis)

	// Number of hits
	if searchResult.Hits.TotalHits > 0 {
		fmt.Printf("Found a total of %d tweets\n", searchResult.Hits.TotalHits)

		// Iterate through results
		for _, hit := range searchResult.Hits.Hits {
			// hit.Index contains the name of the index

			// Deserialize hit.Source into a Tweet (could also be just a map[string]interface{}).
			var t Tweet
			err := json.Unmarshal(*hit.Source, &t)
			if err != nil {
				// Deserialization failed
			}

			// Work with tweet
			fmt.Printf("Tweet by %s: %s\n", t.User, t.Message)
		}
	} else {
		// No hits
		fmt.Print("Found no tweets\n")
	}
}
Output:

func NewSearchService

func NewSearchService(client *Client) *SearchService

NewSearchService creates a new service for searching in Elasticsearch.

func (*SearchService) Aggregation

func (s *SearchService) Aggregation(name string, aggregation Aggregation) *SearchService

Aggregation adds an aggregation to perform as part of the search.

func (*SearchService) AllowNoIndices

func (s *SearchService) AllowNoIndices(allowNoIndices bool) *SearchService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified).

func (*SearchService) Collapse

func (s *SearchService) Collapse(collapse *CollapseBuilder) *SearchService

Collapse adds field collapsing.

func (*SearchService) Do

func (s *SearchService) Do(ctx context.Context) (*SearchResult, error)

Do executes the search and returns a SearchResult.

func (*SearchService) ExpandWildcards

func (s *SearchService) ExpandWildcards(expandWildcards string) *SearchService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*SearchService) Explain

func (s *SearchService) Explain(explain bool) *SearchService

Explain indicates whether each search hit should be returned with an explanation of the hit (ranking).

func (*SearchService) FetchSource

func (s *SearchService) FetchSource(fetchSource bool) *SearchService

FetchSource indicates whether the response should contain the stored _source for every hit.

func (*SearchService) FetchSourceContext

func (s *SearchService) FetchSourceContext(fetchSourceContext *FetchSourceContext) *SearchService

FetchSourceContext indicates how the _source should be fetched.

func (*SearchService) FilterPath

func (s *SearchService) FilterPath(filterPath ...string) *SearchService

FilterPath allows reducing the response, a mechanism known as response filtering and described here: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/common-options.html#common-options-response-filtering.

func (*SearchService) From

func (s *SearchService) From(from int) *SearchService

From index to start the search from. Defaults to 0.

func (*SearchService) GlobalSuggestText

func (s *SearchService) GlobalSuggestText(globalText string) *SearchService

GlobalSuggestText defines the global text to use with all suggesters. This avoids repetition.

func (*SearchService) Highlight

func (s *SearchService) Highlight(highlight *Highlight) *SearchService

Highlight adds highlighting to the search.

func (*SearchService) IgnoreUnavailable

func (s *SearchService) IgnoreUnavailable(ignoreUnavailable bool) *SearchService

IgnoreUnavailable indicates whether the specified concrete indices should be ignored when unavailable (missing or closed).

func (*SearchService) Index

func (s *SearchService) Index(index ...string) *SearchService

Index sets the names of the indices to use for search.

func (*SearchService) MinScore

func (s *SearchService) MinScore(minScore float64) *SearchService

MinScore sets the minimum score below which docs will be filtered out.

func (*SearchService) NoStoredFields

func (s *SearchService) NoStoredFields() *SearchService

NoStoredFields indicates that no stored fields should be loaded, resulting in only the id and type being returned per hit.

func (*SearchService) PostFilter

func (s *SearchService) PostFilter(postFilter Query) *SearchService

PostFilter will be executed after the query has been executed and only affects the search hits, not the aggregations. This filter is always executed as the last filtering mechanism.
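
A minimal sketch of a post filter; the terms aggregation (NewTermsAggregation) and the "shirts" index with its fields are assumptions used only for illustration:

// The aggregation sees all hits matching the query; the post filter
// only narrows the hits that are returned, not the aggregation counts.
colors := elastic.NewTermsAggregation().Field("color")
res, err := client.Search().
	Index("shirts").
	Query(elastic.NewTermQuery("brand", "gucci")).
	Aggregation("colors", colors).
	PostFilter(elastic.NewTermQuery("color", "red")).
	Do(context.Background())
if err != nil {
	// Handle error
}
_ = res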

func (*SearchService) Preference

func (s *SearchService) Preference(preference string) *SearchService

Preference sets the preference to execute the search. Defaults to randomize across shards ("random"). Can be set to "_local" to prefer local shards, "_primary" to execute on primary shards only, or a custom value which guarantees that the same order will be used across different requests.

func (*SearchService) Pretty

func (s *SearchService) Pretty(pretty bool) *SearchService

Pretty enables the caller to indent the JSON output.

func (*SearchService) Profile

func (s *SearchService) Profile(profile bool) *SearchService

Profile sets the Profile API flag on the search source. When enabled, a search executed by this service will return query profiling data.

func (*SearchService) Query

func (s *SearchService) Query(query Query) *SearchService

Query sets the query to perform, e.g. MatchAllQuery.

func (*SearchService) RequestCache

func (s *SearchService) RequestCache(requestCache bool) *SearchService

RequestCache indicates whether the cache should be used for this request or not, defaults to index level setting.

func (*SearchService) Routing

func (s *SearchService) Routing(routings ...string) *SearchService

Routing is a list of specific routing values to control the shards the search will be executed on.

func (*SearchService) SearchAfter

func (s *SearchService) SearchAfter(sortValues ...interface{}) *SearchService

SearchAfter allows a different form of pagination by using a live cursor, using the results of the previous page to help the retrieval of the next.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-search-after.html
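
A rough sketch of search_after pagination; it assumes a sortable "created" field and that each hit carries its sort values in a Sort field, which is not shown in the excerpt of SearchHit above:

// Page 1: sort deterministically, e.g. by a field plus a tie-breaker.
page, err := client.Search().
	Index("twitter").
	Query(elastic.NewMatchAllQuery()).
	Sort("created", false).
	Sort("_id", true).
	Size(10).
	Do(context.Background())
if err != nil {
	// Handle error
}
if n := len(page.Hits.Hits); n > 0 {
	last := page.Hits.Hits[n-1]
	// Page 2: feed the sort values of the last hit into SearchAfter.
	page, err = client.Search().
		Index("twitter").
		Query(elastic.NewMatchAllQuery()).
		Sort("created", false).
		Sort("_id", true).
		Size(10).
		SearchAfter(last.Sort...).
		Do(context.Background())
	if err != nil {
		// Handle error
	}
	_ = page
}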

func (*SearchService) SearchSource

func (s *SearchService) SearchSource(searchSource *SearchSource) *SearchService

SearchSource sets the search source builder to use with this service.

func (*SearchService) SearchType

func (s *SearchService) SearchType(searchType string) *SearchService

SearchType sets the search operation type. Valid values are: "dfs_query_then_fetch" and "query_then_fetch". See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-search-type.html for details.

func (*SearchService) Size

func (s *SearchService) Size(size int) *SearchService

Size is the number of search hits to return. Defaults to 10.

func (*SearchService) Sort

func (s *SearchService) Sort(field string, ascending bool) *SearchService

Sort adds a sort order.

func (*SearchService) SortBy

func (s *SearchService) SortBy(sorter ...Sorter) *SearchService

SortBy adds a sort order.

func (*SearchService) SortWithInfo

func (s *SearchService) SortWithInfo(info SortInfo) *SearchService

SortWithInfo adds a sort order.

func (*SearchService) Source

func (s *SearchService) Source(source interface{}) *SearchService

Source allows the user to set the request body manually without using any of the structs and interfaces in Elastic.

func (*SearchService) StoredField

func (s *SearchService) StoredField(fieldName string) *SearchService

StoredField adds a single field to load and return as part of the search request. The field must be a stored field. If none are specified, the source of the document will be returned.

func (*SearchService) StoredFields

func (s *SearchService) StoredFields(fields ...string) *SearchService

StoredFields sets the fields to load and return as part of the search request. If none are specified, the source of the document will be returned.

func (*SearchService) Suggester

func (s *SearchService) Suggester(suggester Suggester) *SearchService

Suggester adds a suggester to the search.

func (*SearchService) TerminateAfter

func (s *SearchService) TerminateAfter(terminateAfter int) *SearchService

TerminateAfter specifies the maximum number of documents to collect for each shard, upon reaching which the query execution will terminate early.

func (*SearchService) Timeout

func (s *SearchService) Timeout(timeout string) *SearchService

Timeout sets the timeout to use, e.g. "1s" or "1000ms".

func (*SearchService) TimeoutInMillis

func (s *SearchService) TimeoutInMillis(timeoutInMillis int) *SearchService

TimeoutInMillis sets the timeout in milliseconds.

func (*SearchService) TrackScores

func (s *SearchService) TrackScores(trackScores bool) *SearchService

TrackScores is applied when sorting and controls if scores will be tracked as well. Defaults to false.

func (*SearchService) Type

func (s *SearchService) Type(typ ...string) *SearchService

Type adds search restrictions for a list of types.

func (*SearchService) Validate

func (s *SearchService) Validate() error

Validate checks if the operation is valid.

func (*SearchService) Version

func (s *SearchService) Version(version bool) *SearchService

Version indicates whether each search hit should be returned with a version associated to it.

type SearchShardsResponse

type SearchShardsResponse struct {
	Nodes   map[string]interface{} `json:"nodes"`
	Indices map[string]interface{} `json:"indices"`
	Shards  [][]ShardsInfo         `json:"shards"`
}

SearchShardsResponse is the response of SearchShardsService.Do.

type SearchShardsService

type SearchShardsService struct {
	// contains filtered or unexported fields
}

SearchShardsService returns the indices and shards that a search request would be executed against. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-shards.html

func NewSearchShardsService

func NewSearchShardsService(client *Client) *SearchShardsService

NewSearchShardsService creates a new SearchShardsService.

func (*SearchShardsService) AllowNoIndices

func (s *SearchShardsService) AllowNoIndices(allowNoIndices bool) *SearchShardsService

AllowNoIndices indicates whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified).

func (*SearchShardsService) Do

func (s *SearchShardsService) Do(ctx context.Context) (*SearchShardsResponse, error)

Do executes the operation.

func (*SearchShardsService) ExpandWildcards

func (s *SearchShardsService) ExpandWildcards(expandWildcards string) *SearchShardsService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*SearchShardsService) IgnoreUnavailable

func (s *SearchShardsService) IgnoreUnavailable(ignoreUnavailable bool) *SearchShardsService

IgnoreUnavailable indicates whether the specified concrete indices should be ignored when unavailable (missing or closed).

func (*SearchShardsService) Index

func (s *SearchShardsService) Index(index ...string) *SearchShardsService

Index sets the names of the indices to restrict the results.

func (*SearchShardsService) Local

func (s *SearchShardsService) Local(local bool) *SearchShardsService

Local indicates whether to read the cluster state locally in order to determine where shards are allocated, instead of using the master node's cluster state.

func (*SearchShardsService) Preference

func (s *SearchShardsService) Preference(preference string) *SearchShardsService

Preference specifies the node or shard the operation should be performed on (default: random).

func (*SearchShardsService) Pretty

func (s *SearchShardsService) Pretty(pretty bool) *SearchShardsService

Pretty indicates that the JSON response be indented and human readable.

func (*SearchShardsService) Routing

func (s *SearchShardsService) Routing(routing string) *SearchShardsService

Routing sets a specific routing value.

func (*SearchShardsService) Validate

func (s *SearchShardsService) Validate() error

Validate checks if the operation is valid.

type SearchSource

type SearchSource struct {
	// contains filtered or unexported fields
}

SearchSource enables users to build the search source. It resembles the SearchSourceBuilder in Elasticsearch.
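
A small sketch of building a SearchSource up front and handing it to the search service; the "twitter" index and field names are illustrative only:

ss := elastic.NewSearchSource().
	Query(elastic.NewTermQuery("user", "olivere")).
	Sort("created", false).
	From(0).Size(10).
	FetchSource(true)

res, err := client.Search().
	Index("twitter").
	SearchSource(ss). // reuse the prepared source instead of calling individual setters
	Do(context.Background())
if err != nil {
	// Handle error
}
_ = res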

func NewSearchSource

func NewSearchSource() *SearchSource

NewSearchSource initializes a new SearchSource.

func (*SearchSource) Aggregation

func (s *SearchSource) Aggregation(name string, aggregation Aggregation) *SearchSource

Aggregation adds an aggregation to perform as part of the search.

func (*SearchSource) ClearRescorers

func (s *SearchSource) ClearRescorers() *SearchSource

ClearRescorers removes all rescorers from the search.

func (*SearchSource) Collapse

func (s *SearchSource) Collapse(collapse *CollapseBuilder) *SearchSource

Collapse adds field collapsing.

func (*SearchSource) DefaultRescoreWindowSize

func (s *SearchSource) DefaultRescoreWindowSize(defaultRescoreWindowSize int) *SearchSource

DefaultRescoreWindowSize sets the rescore window size for rescores that don't specify their window.

func (*SearchSource) DocvalueField

func (s *SearchSource) DocvalueField(fieldDataField string) *SearchSource

DocvalueField adds a single field to load from the field data cache and return as part of the search request.

func (*SearchSource) DocvalueFields

func (s *SearchSource) DocvalueFields(docvalueFields ...string) *SearchSource

DocvalueFields adds one or more fields to load from the field data cache and return as part of the search request.

func (*SearchSource) Explain

func (s *SearchSource) Explain(explain bool) *SearchSource

Explain indicates whether each search hit should be returned with an explanation of the hit (ranking).

func (*SearchSource) FetchSource

func (s *SearchSource) FetchSource(fetchSource bool) *SearchSource

FetchSource indicates whether the response should contain the stored _source for every hit.

func (*SearchSource) FetchSourceContext

func (s *SearchSource) FetchSourceContext(fetchSourceContext *FetchSourceContext) *SearchSource

FetchSourceContext indicates how the _source should be fetched.

func (*SearchSource) From

func (s *SearchSource) From(from int) *SearchSource

From index to start the search from. Defaults to 0.

func (*SearchSource) GlobalSuggestText

func (s *SearchSource) GlobalSuggestText(text string) *SearchSource

GlobalSuggestText defines the global text to use with all suggesters. This avoids repetition.

func (*SearchSource) Highlight

func (s *SearchSource) Highlight(highlight *Highlight) *SearchSource

Highlight adds highlighting to the search.

func (*SearchSource) Highlighter

func (s *SearchSource) Highlighter() *Highlight

Highlighter returns the highlighter.

func (*SearchSource) IndexBoost

func (s *SearchSource) IndexBoost(index string, boost float64) *SearchSource

IndexBoost sets the boost that a specific index will receive when the query is executed against it.

func (*SearchSource) InnerHit

func (s *SearchSource) InnerHit(name string, innerHit *InnerHit) *SearchSource

InnerHit adds an inner hit to return with the result.

func (*SearchSource) MinScore

func (s *SearchSource) MinScore(minScore float64) *SearchSource

MinScore sets the minimum score below which docs will be filtered out.

func (*SearchSource) NoStoredFields

func (s *SearchSource) NoStoredFields() *SearchSource

NoStoredFields indicates that no fields should be loaded, resulting in only the id and type being returned per hit.

func (*SearchSource) PostFilter

func (s *SearchSource) PostFilter(postFilter Query) *SearchSource

PostFilter will be executed after the query has been executed and only affects the search hits, not the aggregations. This filter is always executed as the last filtering mechanism.

func (*SearchSource) Profile

func (s *SearchSource) Profile(profile bool) *SearchSource

Profile specifies that this search source should activate the Profile API for queries made on it.

func (*SearchSource) Query

func (s *SearchSource) Query(query Query) *SearchSource

Query sets the query to use with this search source.

func (*SearchSource) Rescorer

func (s *SearchSource) Rescorer(rescore *Rescore) *SearchSource

Rescorer adds a rescorer to the search.

func (*SearchSource) ScriptField

func (s *SearchSource) ScriptField(scriptField *ScriptField) *SearchSource

ScriptField adds a single script field with the provided script.

func (*SearchSource) ScriptFields

func (s *SearchSource) ScriptFields(scriptFields ...*ScriptField) *SearchSource

ScriptFields adds one or more script fields with the provided scripts.

func (*SearchSource) SearchAfter

func (s *SearchSource) SearchAfter(sortValues ...interface{}) *SearchSource

SearchAfter allows a different form of pagination by using a live cursor, using the results of the previous page to help the retrieval of the next.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-search-after.html

func (*SearchSource) Size

func (s *SearchSource) Size(size int) *SearchSource

Size is the number of search hits to return. Defaults to 10.

func (*SearchSource) Slice

func (s *SearchSource) Slice(sliceQuery Query) *SearchSource

Slice allows partitioning the documents into multiple slices. It is used, for example, to slice a scroll operation, and is supported in Elasticsearch 5.0 or later. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-scroll.html#sliced-scroll for details.

func (*SearchSource) Sort

func (s *SearchSource) Sort(field string, ascending bool) *SearchSource

Sort adds a sort order.

func (*SearchSource) SortBy

func (s *SearchSource) SortBy(sorter ...Sorter) *SearchSource

SortBy adds a sort order.

func (*SearchSource) SortWithInfo

func (s *SearchSource) SortWithInfo(info SortInfo) *SearchSource

SortWithInfo adds a sort order.

func (*SearchSource) Source

func (s *SearchSource) Source() (interface{}, error)

Source returns the serializable JSON for the source builder.

func (*SearchSource) Stats

func (s *SearchSource) Stats(statsGroup ...string) *SearchSource

Stats sets the statistics groups this request will be aggregated under.

func (*SearchSource) StoredField

func (s *SearchSource) StoredField(storedFieldName string) *SearchSource

StoredField adds a single field to load and return as part of the search request. The field must be a stored field. If none are specified, the source of the document will be returned.

func (*SearchSource) StoredFields

func (s *SearchSource) StoredFields(storedFieldNames ...string) *SearchSource

StoredFields sets the fields to load and return as part of the search request. If none are specified, the source of the document will be returned.

func (*SearchSource) Suggester

func (s *SearchSource) Suggester(suggester Suggester) *SearchSource

Suggester adds a suggester to the search.

func (*SearchSource) TerminateAfter

func (s *SearchSource) TerminateAfter(terminateAfter int) *SearchSource

TerminateAfter specifies the maximum number of documents to collect for each shard, upon reaching which the query execution will terminate early.

func (*SearchSource) Timeout

func (s *SearchSource) Timeout(timeout string) *SearchSource

Timeout controls how long a search is allowed to take, e.g. "1s" or "500ms".

func (*SearchSource) TimeoutInMillis

func (s *SearchSource) TimeoutInMillis(timeoutInMillis int) *SearchSource

TimeoutInMillis controls how many milliseconds a search is allowed to take before it is canceled.

func (*SearchSource) TrackScores

func (s *SearchSource) TrackScores(trackScores bool) *SearchSource

TrackScores is applied when sorting and controls if scores will be tracked as well. Defaults to false.

func (*SearchSource) Version

func (s *SearchSource) Version(version bool) *SearchSource

Version indicates whether each search hit should be returned with a version associated to it.

type SearchSuggest

type SearchSuggest map[string][]SearchSuggestion

SearchSuggest is a map of suggestions. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-suggesters.html.
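
For illustration, suggestions can be read from a SearchResult roughly like this, assuming a suggester named "my-suggestion" was attached to the search (e.g. via SearchService.Suggester):

if suggestions, found := searchResult.Suggest["my-suggestion"]; found {
	for _, s := range suggestions {
		fmt.Printf("suggestions for %q:\n", s.Text)
		for _, opt := range s.Options {
			fmt.Printf("  %s (score %v)\n", opt.Text, opt.Score)
		}
	}
}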

type SearchSuggestion

type SearchSuggestion struct {
	Text    string                   `json:"text"`
	Offset  int                      `json:"offset"`
	Length  int                      `json:"length"`
	Options []SearchSuggestionOption `json:"options"`
}

SearchSuggestion is a single search suggestion. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-suggesters.html.

type SearchSuggestionOption

type SearchSuggestionOption struct {
	Text         string           `json:"text"`
	Index        string           `json:"_index"`
	Type         string           `json:"_type"`
	Id           string           `json:"_id"`
	Score        float64          `json:"score"`
	Highlighted  string           `json:"highlighted"`
	CollateMatch bool             `json:"collate_match"`
	Freq         int              `json:"freq"` // from TermSuggestion.Option in Java API
	Source       *json.RawMessage `json:"_source"`
}

SearchSuggestionOption is an option of a SearchSuggestion. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-suggesters.html.

type SerialDiffAggregation

type SerialDiffAggregation struct {
	// contains filtered or unexported fields
}

SerialDiffAggregation implements serial differencing. Serial differencing is a technique where values in a time series are subtracted from themselves at different time lags or periods.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-pipeline-serialdiff-aggregation.html
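
A hedged sketch of a serial differencing pipeline; the date histogram and sum aggregations (NewDateHistogramAggregation, NewSumAggregation) are documented elsewhere in this package, and the index/field names are made up:

// Sum per month, then subtract the value from 12 buckets (months) earlier.
histo := elastic.NewDateHistogramAggregation().
	Field("timestamp").
	Interval("month").
	SubAggregation("the_sum", elastic.NewSumAggregation().Field("lemmings")).
	SubAggregation("the_diff",
		elastic.NewSerialDiffAggregation().BucketsPath("the_sum").Lag(12))

res, err := client.Search().
	Index("metrics").
	Size(0). // aggregation results only, no hits
	Aggregation("my_date_histo", histo).
	Do(context.Background())
if err != nil {
	// Handle error
}
_ = res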

func NewSerialDiffAggregation

func NewSerialDiffAggregation() *SerialDiffAggregation

NewSerialDiffAggregation creates and initializes a new SerialDiffAggregation.

func (*SerialDiffAggregation) BucketsPath

func (a *SerialDiffAggregation) BucketsPath(bucketsPaths ...string) *SerialDiffAggregation

BucketsPath sets the paths to the buckets to use for this pipeline aggregator.

func (*SerialDiffAggregation) Format

Format to use on the output of this aggregation.

func (*SerialDiffAggregation) GapInsertZeros

func (a *SerialDiffAggregation) GapInsertZeros() *SerialDiffAggregation

GapInsertZeros inserts zeros for gaps in the series.

func (*SerialDiffAggregation) GapPolicy

func (a *SerialDiffAggregation) GapPolicy(gapPolicy string) *SerialDiffAggregation

GapPolicy defines what should be done when a gap in the series is discovered. Valid values include "insert_zeros" or "skip". Default is "insert_zeros".

func (*SerialDiffAggregation) GapSkip

GapSkip skips gaps in the series.

func (*SerialDiffAggregation) Lag

Lag specifies the historical bucket to subtract from the current value. E.g. a lag of 7 will subtract the value of 7 buckets ago from the current value. Lag must be a positive, non-zero integer.

func (*SerialDiffAggregation) Meta

func (a *SerialDiffAggregation) Meta(metaData map[string]interface{}) *SerialDiffAggregation

Meta sets the meta data to be included in the aggregation response.

func (*SerialDiffAggregation) Source

func (a *SerialDiffAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface.

type ShardsInfo

type ShardsInfo struct {
	Index          string      `json:"index"`
	Node           string      `json:"node"`
	Primary        bool        `json:"primary"`
	Shard          uint        `json:"shard"`
	State          string      `json:"state"`
	AllocationId   interface{} `json:"allocation_id"`
	RelocatingNode bool        `json:"relocating_node"`
}

type SignificanceHeuristic

type SignificanceHeuristic interface {
	Name() string
	Source() (interface{}, error)
}

type SignificantTermsAggregation

type SignificantTermsAggregation struct {
	// contains filtered or unexported fields
}

SignificantTermsAggregation is an aggregation that returns interesting or unusual occurrences of terms in a set. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-significantterms-aggregation.html

func NewSignificantTermsAggregation

func NewSignificantTermsAggregation() *SignificantTermsAggregation

func (*SignificantTermsAggregation) BackgroundFilter

func (a *SignificantTermsAggregation) BackgroundFilter(filter Query) *SignificantTermsAggregation

func (*SignificantTermsAggregation) ExecutionHint

func (*SignificantTermsAggregation) Field

func (*SignificantTermsAggregation) Meta

func (a *SignificantTermsAggregation) Meta(metaData map[string]interface{}) *SignificantTermsAggregation

Meta sets the meta data to be included in the aggregation response.

func (*SignificantTermsAggregation) MinDocCount

func (a *SignificantTermsAggregation) MinDocCount(minDocCount int) *SignificantTermsAggregation

func (*SignificantTermsAggregation) RequiredSize

func (a *SignificantTermsAggregation) RequiredSize(requiredSize int) *SignificantTermsAggregation

func (*SignificantTermsAggregation) ShardMinDocCount

func (a *SignificantTermsAggregation) ShardMinDocCount(shardMinDocCount int) *SignificantTermsAggregation

func (*SignificantTermsAggregation) ShardSize

func (*SignificantTermsAggregation) SignificanceHeuristic

func (*SignificantTermsAggregation) Source

func (a *SignificantTermsAggregation) Source() (interface{}, error)

func (*SignificantTermsAggregation) SubAggregation

func (a *SignificantTermsAggregation) SubAggregation(name string, subAggregation Aggregation) *SignificantTermsAggregation

type SignificantTextAggregation

type SignificantTextAggregation struct {
	// contains filtered or unexported fields
}

SignificantTextAggregation returns interesting or unusual occurrences of free-text terms in a set. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-bucket-significanttext-aggregation.html

func NewSignificantTextAggregation

func NewSignificantTextAggregation() *SignificantTextAggregation

func (*SignificantTextAggregation) BackgroundFilter

func (a *SignificantTextAggregation) BackgroundFilter(filter Query) *SignificantTextAggregation

func (*SignificantTextAggregation) Exclude

func (*SignificantTextAggregation) ExcludeValues

func (a *SignificantTextAggregation) ExcludeValues(values ...interface{}) *SignificantTextAggregation

func (*SignificantTextAggregation) Field

func (*SignificantTextAggregation) FilterDuplicateText

func (a *SignificantTextAggregation) FilterDuplicateText(filter bool) *SignificantTextAggregation

func (*SignificantTextAggregation) Include

func (*SignificantTextAggregation) IncludeValues

func (a *SignificantTextAggregation) IncludeValues(values ...interface{}) *SignificantTextAggregation

func (*SignificantTextAggregation) Meta

func (a *SignificantTextAggregation) Meta(metaData map[string]interface{}) *SignificantTextAggregation

Meta sets the meta data to be included in the aggregation response.

func (*SignificantTextAggregation) MinDocCount

func (a *SignificantTextAggregation) MinDocCount(minDocCount int64) *SignificantTextAggregation

func (*SignificantTextAggregation) NumPartitions

func (*SignificantTextAggregation) Partition

func (*SignificantTextAggregation) ShardMinDocCount

func (a *SignificantTextAggregation) ShardMinDocCount(shardMinDocCount int64) *SignificantTextAggregation

func (*SignificantTextAggregation) ShardSize

func (*SignificantTextAggregation) SignificanceHeuristic

func (*SignificantTextAggregation) Size

func (*SignificantTextAggregation) Source

func (a *SignificantTextAggregation) Source() (interface{}, error)

func (*SignificantTextAggregation) SourceFieldNames

func (a *SignificantTextAggregation) SourceFieldNames(names ...string) *SignificantTextAggregation

func (*SignificantTextAggregation) SubAggregation

func (a *SignificantTextAggregation) SubAggregation(name string, subAggregation Aggregation) *SignificantTextAggregation

type SimpleBackoff

type SimpleBackoff struct {
	sync.Mutex
	// contains filtered or unexported fields
}

SimpleBackoff takes a list of fixed values for backoff intervals. Each call to Next returns the next value from that fixed list. After each value is returned, subsequent calls to Next will only return the last element. The values are optionally "jittered" (off by default).

func NewSimpleBackoff

func NewSimpleBackoff(ticks ...int) *SimpleBackoff

NewSimpleBackoff creates a SimpleBackoff algorithm with the specified list of fixed intervals in milliseconds.
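
A rough sketch of plugging SimpleBackoff into a client; the retrier option (SetRetrier) and the backoff-based retrier (NewBackoffRetrier) are documented elsewhere in this package and assumed here:

// Retry failed requests after roughly 100ms, 200ms and 300ms; further
// retries keep using the last interval. Jitter spreads the intervals out.
backoff := elastic.NewSimpleBackoff(100, 200, 300).Jitter(true)

client, err := elastic.NewClient(
	elastic.SetRetrier(elastic.NewBackoffRetrier(backoff)),
)
if err != nil {
	// Handle error
}
_ = client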

func (*SimpleBackoff) Jitter

func (b *SimpleBackoff) Jitter(flag bool) *SimpleBackoff

Jitter enables or disables jittering values.

func (*SimpleBackoff) Next

func (b *SimpleBackoff) Next(retry int) (time.Duration, bool)

Next implements BackoffFunc for SimpleBackoff.

type SimpleMovAvgModel

type SimpleMovAvgModel struct {
}

SimpleMovAvgModel calculates a simple unweighted (arithmetic) moving average.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-pipeline-movavg-aggregation.html#_simple

func NewSimpleMovAvgModel

func NewSimpleMovAvgModel() *SimpleMovAvgModel

NewSimpleMovAvgModel creates and initializes a new SimpleMovAvgModel.

func (*SimpleMovAvgModel) Name

func (m *SimpleMovAvgModel) Name() string

Name of the model.

func (*SimpleMovAvgModel) Settings

func (m *SimpleMovAvgModel) Settings() map[string]interface{}

Settings of the model.

type SimpleQueryStringQuery

type SimpleQueryStringQuery struct {
	// contains filtered or unexported fields
}

SimpleQueryStringQuery is a query that uses the SimpleQueryParser to parse its context. Unlike the regular query_string query, the simple_query_string query will never throw an exception, and discards invalid parts of the query.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-simple-query-string-query.html
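
A minimal sketch; the "recipes" index and field names are made up for illustration:

q := elastic.NewSimpleQueryStringQuery(`"fried eggs" +(eggplant | potato) -frittata`).
	Field("title").
	FieldWithBoost("body", 2.0).
	DefaultOperator("and").
	AnalyzeWildcard(true)

res, err := client.Search().
	Index("recipes").
	Query(q).
	Do(context.Background())
if err != nil {
	// Handle error
}
_ = res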

func NewSimpleQueryStringQuery

func NewSimpleQueryStringQuery(text string) *SimpleQueryStringQuery

NewSimpleQueryStringQuery creates and initializes a new SimpleQueryStringQuery.

func (*SimpleQueryStringQuery) AnalyzeWildcard

func (q *SimpleQueryStringQuery) AnalyzeWildcard(analyzeWildcard bool) *SimpleQueryStringQuery

AnalyzeWildcard indicates whether to enable analysis on wildcard and prefix queries.

func (*SimpleQueryStringQuery) Analyzer

func (q *SimpleQueryStringQuery) Analyzer(analyzer string) *SimpleQueryStringQuery

Analyzer specifies the analyzer to use for the query.

func (*SimpleQueryStringQuery) Boost

Boost sets the boost for this query.

func (*SimpleQueryStringQuery) DefaultOperator

func (q *SimpleQueryStringQuery) DefaultOperator(defaultOperator string) *SimpleQueryStringQuery

DefaultOperator specifies the default operator for the query.

func (*SimpleQueryStringQuery) Field

Field adds a field to run the query against.

func (*SimpleQueryStringQuery) FieldWithBoost

func (q *SimpleQueryStringQuery) FieldWithBoost(field string, boost float64) *SimpleQueryStringQuery

FieldWithBoost adds a field to run the query against with a specific boost.

func (*SimpleQueryStringQuery) Flags

Flags sets the flags for the query.

func (*SimpleQueryStringQuery) Lenient

func (q *SimpleQueryStringQuery) Lenient(lenient bool) *SimpleQueryStringQuery

Lenient indicates whether the query string parser should be lenient when parsing field values. It defaults to the index setting and if not set, defaults to false.

func (*SimpleQueryStringQuery) Locale

func (*SimpleQueryStringQuery) LowercaseExpandedTerms

func (q *SimpleQueryStringQuery) LowercaseExpandedTerms(lowercaseExpandedTerms bool) *SimpleQueryStringQuery

LowercaseExpandedTerms indicates whether terms of wildcard, prefix, fuzzy and range queries are automatically lower-cased or not. Default is true.

func (*SimpleQueryStringQuery) MinimumShouldMatch

func (q *SimpleQueryStringQuery) MinimumShouldMatch(minimumShouldMatch string) *SimpleQueryStringQuery

func (*SimpleQueryStringQuery) QueryName

func (q *SimpleQueryStringQuery) QueryName(queryName string) *SimpleQueryStringQuery

QueryName sets the query name for the filter that can be used when searching for matched_filters per hit.

func (*SimpleQueryStringQuery) Source

func (q *SimpleQueryStringQuery) Source() (interface{}, error)

Source returns JSON for the query.

type SliceQuery

type SliceQuery struct {
	// contains filtered or unexported fields
}

SliceQuery allows partitioning the documents into several slices. It is used, for example, to slice scroll operations in Elasticsearch 5.0 or later. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-scroll.html#sliced-scroll for details.
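
A rough sketch of sliced scrolling; the Scroll service (with Slice, Size, Do and io.EOF at the end of a scroll) is documented elsewhere in this package and assumed here:

// Process the "twitter" index in two independent slices,
// e.g. one per goroutine.
for i := 0; i < 2; i++ {
	slice := elastic.NewSliceQuery().Id(i).Max(2)
	svc := client.Scroll("twitter").Slice(slice).Size(100)
	for {
		res, err := svc.Do(context.Background())
		if err == io.EOF {
			break // this slice is exhausted
		}
		if err != nil {
			// Handle error
			break
		}
		_ = res.Hits.Hits // work with this batch
	}
}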

func NewSliceQuery

func NewSliceQuery() *SliceQuery

NewSliceQuery creates a new SliceQuery.

func (*SliceQuery) Field

func (s *SliceQuery) Field(field string) *SliceQuery

Field is the name of the field to slice against (_uid by default).

func (*SliceQuery) Id

func (s *SliceQuery) Id(id int) *SliceQuery

Id is the id of the slice.

func (*SliceQuery) Max

func (s *SliceQuery) Max(max int) *SliceQuery

Max is the maximum number of slices.

func (*SliceQuery) Source

func (s *SliceQuery) Source() (interface{}, error)

Source returns the JSON body.

type SmoothingModel

type SmoothingModel interface {
	Type() string
	Source() (interface{}, error)
}

type SnapshotCreateRepositoryResponse

type SnapshotCreateRepositoryResponse struct {
	Acknowledged       bool   `json:"acknowledged"`
	ShardsAcknowledged bool   `json:"shards_acknowledged"`
	Index              string `json:"index,omitempty"`
}

SnapshotCreateRepositoryResponse is the response of SnapshotCreateRepositoryService.Do.

type SnapshotCreateRepositoryService

type SnapshotCreateRepositoryService struct {
	// contains filtered or unexported fields
}

SnapshotCreateRepositoryService creates a snapshot repository. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/modules-snapshots.html for details.
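
A hedged sketch of registering a shared file system repository; the repository name, path and settings are illustrative only:

res, err := elastic.NewSnapshotCreateRepositoryService(client).
	Repository("my_backup").
	Type("fs").
	Settings(map[string]interface{}{
		"location": "/mnt/backups/my_backup",
		"compress": true,
	}).
	Do(context.Background())
if err != nil {
	// Handle error
}
fmt.Println("acknowledged:", res.Acknowledged)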

func NewSnapshotCreateRepositoryService

func NewSnapshotCreateRepositoryService(client *Client) *SnapshotCreateRepositoryService

NewSnapshotCreateRepositoryService creates a new SnapshotCreateRepositoryService.

func (*SnapshotCreateRepositoryService) BodyJson

BodyJson is documented as: The repository definition.

func (*SnapshotCreateRepositoryService) BodyString

BodyString is documented as: The repository definition.

func (*SnapshotCreateRepositoryService) Do

func (s *SnapshotCreateRepositoryService) Do(ctx context.Context) (*SnapshotCreateRepositoryResponse, error)

Do executes the operation.

func (*SnapshotCreateRepositoryService) MasterTimeout

MasterTimeout specifies an explicit operation timeout for connection to master node.

func (*SnapshotCreateRepositoryService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*SnapshotCreateRepositoryService) Repository

Repository is the repository name.

func (*SnapshotCreateRepositoryService) Setting

func (s *SnapshotCreateRepositoryService) Setting(name string, value interface{}) *SnapshotCreateRepositoryService

Setting sets a single settings of the snapshot repository.

func (*SnapshotCreateRepositoryService) Settings

func (s *SnapshotCreateRepositoryService) Settings(settings map[string]interface{}) *SnapshotCreateRepositoryService

Settings sets all settings of the snapshot repository.

func (*SnapshotCreateRepositoryService) Timeout

Timeout is an explicit operation timeout.

func (*SnapshotCreateRepositoryService) Type

Type sets the snapshot repository type, e.g. "fs".

func (*SnapshotCreateRepositoryService) Validate

func (s *SnapshotCreateRepositoryService) Validate() error

Validate checks if the operation is valid.

func (*SnapshotCreateRepositoryService) Verify

Verify indicates whether to verify the repository after creation.

type SnapshotCreateResponse

type SnapshotCreateResponse struct {
	// Accepted indicates whether the request was accepted by elasticsearch.
	// It's available when waitForCompletion is false.
	Accepted *bool `json:"accepted"`

	// Snapshot is available when waitForCompletion is true.
	Snapshot *struct {
		Snapshot          string                 `json:"snapshot"`
		UUID              string                 `json:"uuid"`
		VersionID         int                    `json:"version_id"`
		Version           string                 `json:"version"`
		Indices           []string               `json:"indices"`
		State             string                 `json:"state"`
		Reason            string                 `json:"reason"`
		StartTime         time.Time              `json:"start_time"`
		StartTimeInMillis int64                  `json:"start_time_in_millis"`
		EndTime           time.Time              `json:"end_time"`
		EndTimeInMillis   int64                  `json:"end_time_in_millis"`
		DurationInMillis  int64                  `json:"duration_in_millis"`
		Failures          []SnapshotShardFailure `json:"failures"`
		Shards            shardsInfo             `json:"shards"`
	} `json:"snapshot"`
}

SnapshotCreateResponse is the response of SnapshotCreateService.Do.

type SnapshotCreateService

type SnapshotCreateService struct {
	// contains filtered or unexported fields
}

SnapshotCreateService is documented at https://www.elastic.co/guide/en/elasticsearch/reference/6.0/modules-snapshots.html.
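
A rough sketch of taking a snapshot and waiting for it to complete; the repository and snapshot names, and the request body, are illustrative only:

res, err := elastic.NewSnapshotCreateService(client).
	Repository("my_backup").
	Snapshot("snapshot_1").
	WaitForCompletion(true).
	BodyString(`{"indices": "twitter", "include_global_state": false}`).
	Do(context.Background())
if err != nil {
	// Handle error
}
if res.Snapshot != nil {
	fmt.Println("snapshot state:", res.Snapshot.State)
}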

func NewSnapshotCreateService

func NewSnapshotCreateService(client *Client) *SnapshotCreateService

NewSnapshotCreateService creates a new SnapshotCreateService.

func (*SnapshotCreateService) BodyJson

func (s *SnapshotCreateService) BodyJson(body interface{}) *SnapshotCreateService

BodyJson is documented as: The snapshot definition.

func (*SnapshotCreateService) BodyString

func (s *SnapshotCreateService) BodyString(body string) *SnapshotCreateService

BodyString is documented as: The snapshot definition.

func (*SnapshotCreateService) Do

func (s *SnapshotCreateService) Do(ctx context.Context) (*SnapshotCreateResponse, error)

Do executes the operation.

func (*SnapshotCreateService) MasterTimeout

func (s *SnapshotCreateService) MasterTimeout(masterTimeout string) *SnapshotCreateService

MasterTimeout is documented as: Explicit operation timeout for connection to master node.

func (*SnapshotCreateService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*SnapshotCreateService) Repository

func (s *SnapshotCreateService) Repository(repository string) *SnapshotCreateService

Repository is the repository name.

func (*SnapshotCreateService) Snapshot

func (s *SnapshotCreateService) Snapshot(snapshot string) *SnapshotCreateService

Snapshot is the snapshot name.

func (*SnapshotCreateService) Validate

func (s *SnapshotCreateService) Validate() error

Validate checks if the operation is valid.

func (*SnapshotCreateService) WaitForCompletion

func (s *SnapshotCreateService) WaitForCompletion(waitForCompletion bool) *SnapshotCreateService

WaitForCompletion is documented as: Should this request wait until the operation has completed before returning.

type SnapshotDeleteRepositoryResponse

type SnapshotDeleteRepositoryResponse struct {
	Acknowledged       bool   `json:"acknowledged"`
	ShardsAcknowledged bool   `json:"shards_acknowledged"`
	Index              string `json:"index,omitempty"`
}

SnapshotDeleteRepositoryResponse is the response of SnapshotDeleteRepositoryService.Do.

type SnapshotDeleteRepositoryService

type SnapshotDeleteRepositoryService struct {
	// contains filtered or unexported fields
}

SnapshotDeleteRepositoryService deletes a snapshot repository. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/modules-snapshots.html for details.

func NewSnapshotDeleteRepositoryService

func NewSnapshotDeleteRepositoryService(client *Client) *SnapshotDeleteRepositoryService

NewSnapshotDeleteRepositoryService creates a new SnapshotDeleteRepositoryService.

func (*SnapshotDeleteRepositoryService) Do

func (s *SnapshotDeleteRepositoryService) Do(ctx context.Context) (*SnapshotDeleteRepositoryResponse, error)

Do executes the operation.

func (*SnapshotDeleteRepositoryService) MasterTimeout

MasterTimeout specifies an explicit operation timeout for connection to master node.

func (*SnapshotDeleteRepositoryService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*SnapshotDeleteRepositoryService) Repository

Repository is the list of repository names.

func (*SnapshotDeleteRepositoryService) Timeout

Timeout is an explicit operation timeout.

func (*SnapshotDeleteRepositoryService) Validate

func (s *SnapshotDeleteRepositoryService) Validate() error

Validate checks if the operation is valid.

type SnapshotGetRepositoryResponse

type SnapshotGetRepositoryResponse map[string]*SnapshotRepositoryMetaData

SnapshotGetRepositoryResponse is the response of SnapshotGetRepositoryService.Do.

type SnapshotGetRepositoryService

type SnapshotGetRepositoryService struct {
	// contains filtered or unexported fields
}

SnapshotGetRepositoryService reads a snapshot repository. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/modules-snapshots.html for details.

func NewSnapshotGetRepositoryService

func NewSnapshotGetRepositoryService(client *Client) *SnapshotGetRepositoryService

NewSnapshotGetRepositoryService creates a new SnapshotGetRepositoryService.

func (*SnapshotGetRepositoryService) Do

Do executes the operation.

func (*SnapshotGetRepositoryService) Local

Local indicates whether to return local information, i.e. do not retrieve the state from master node (default: false).

func (*SnapshotGetRepositoryService) MasterTimeout

func (s *SnapshotGetRepositoryService) MasterTimeout(masterTimeout string) *SnapshotGetRepositoryService

MasterTimeout specifies an explicit operation timeout for connection to master node.

func (*SnapshotGetRepositoryService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*SnapshotGetRepositoryService) Repository

func (s *SnapshotGetRepositoryService) Repository(repositories ...string) *SnapshotGetRepositoryService

Repository is the list of repository names.

func (*SnapshotGetRepositoryService) Validate

func (s *SnapshotGetRepositoryService) Validate() error

Validate checks if the operation is valid.

type SnapshotRepositoryMetaData

type SnapshotRepositoryMetaData struct {
	Type     string                 `json:"type"`
	Settings map[string]interface{} `json:"settings,omitempty"`
}

SnapshotRepositoryMetaData contains all information about a single snapshot repository.

type SnapshotShardFailure

type SnapshotShardFailure struct {
	Index     string `json:"index"`
	IndexUUID string `json:"index_uuid"`
	ShardID   int    `json:"shard_id"`
	Reason    string `json:"reason"`
	NodeID    string `json:"node_id"`
	Status    string `json:"status"`
}

SnapshotShardFailure stores information about failures that occurred during shard snapshotting process.

type SnapshotVerifyRepositoryNode

type SnapshotVerifyRepositoryNode struct {
	Name string `json:"name"`
}

type SnapshotVerifyRepositoryResponse

type SnapshotVerifyRepositoryResponse struct {
	Nodes map[string]*SnapshotVerifyRepositoryNode `json:"nodes"`
}

SnapshotVerifyRepositoryResponse is the response of SnapshotVerifyRepositoryService.Do.

type SnapshotVerifyRepositoryService

type SnapshotVerifyRepositoryService struct {
	// contains filtered or unexported fields
}

SnapshotVerifyRepositoryService verifies a snapshot repository. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/modules-snapshots.html for details.

func NewSnapshotVerifyRepositoryService

func NewSnapshotVerifyRepositoryService(client *Client) *SnapshotVerifyRepositoryService

NewSnapshotVerifyRepositoryService creates a new SnapshotVerifyRepositoryService.

func (*SnapshotVerifyRepositoryService) Do

func (s *SnapshotVerifyRepositoryService) Do(ctx context.Context) (*SnapshotVerifyRepositoryResponse, error)

Do executes the operation.

func (*SnapshotVerifyRepositoryService) MasterTimeout

MasterTimeout is the explicit operation timeout for connection to master node.

func (*SnapshotVerifyRepositoryService) Pretty

Pretty indicates that the JSON response be indented and human readable.

func (*SnapshotVerifyRepositoryService) Repository

Repository specifies the repository name.

func (*SnapshotVerifyRepositoryService) Timeout

Timeout is an explicit operation timeout.

func (*SnapshotVerifyRepositoryService) Validate

func (s *SnapshotVerifyRepositoryService) Validate() error

Validate checks if the operation is valid.

type SnifferCallback

type SnifferCallback func(*NodesInfoNode) bool

SnifferCallback defines the protocol for sniffing decisions.

type SortByDoc

type SortByDoc struct {
	Sorter
}

SortByDoc sorts by the "_doc" field, as described in https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-scroll.html.

Example:

ss := elastic.NewSearchSource()
ss = ss.SortBy(elastic.SortByDoc{})

func (SortByDoc) Source

func (s SortByDoc) Source() (interface{}, error)

Source returns the JSON-serializable data.

type SortInfo

type SortInfo struct {
	Sorter
	Field          string
	Ascending      bool
	Missing        interface{}
	IgnoreUnmapped *bool
	UnmappedType   string
	SortMode       string
	NestedFilter   Query
	NestedPath     string
	NestedSort     *NestedSort // available in 6.1 or later
}

SortInfo contains information about sorting a field.

func (SortInfo) Source

func (info SortInfo) Source() (interface{}, error)

type Sorter

type Sorter interface {
	Source() (interface{}, error)
}

Sorter is an interface for sorting strategies, e.g. ScoreSort or FieldSort. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-request-sort.html.

type StartTaskResult

type StartTaskResult struct {
	TaskId string `json:"task"`
}

StartTaskResult is used in cases where a task gets started asynchronously and the operation simply returns a TaskId to watch for via the Task Management API.

type StatsAggregation

type StatsAggregation struct {
	// contains filtered or unexported fields
}

StatsAggregation is a multi-value metrics aggregation that computes stats over numeric values extracted from the aggregated documents. These values can be extracted either from specific numeric fields in the documents, or be generated by a provided script. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-stats-aggregation.html
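
A small sketch; the typed Stats accessor on the Aggregations result is assumed here, and the index/field names are illustrative:

agg := elastic.NewStatsAggregation().Field("retweets")

res, err := client.Search().
	Index("twitter").
	Query(elastic.NewMatchAllQuery()).
	Aggregation("retweet_stats", agg).
	Size(0).
	Do(context.Background())
if err != nil {
	// Handle error
}
if stats, found := res.Aggregations.Stats("retweet_stats"); found {
	fmt.Println("min:", *stats.Min, "max:", *stats.Max, "avg:", *stats.Avg)
}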

func NewStatsAggregation

func NewStatsAggregation() *StatsAggregation

func (*StatsAggregation) Field

func (a *StatsAggregation) Field(field string) *StatsAggregation

func (*StatsAggregation) Format

func (a *StatsAggregation) Format(format string) *StatsAggregation

func (*StatsAggregation) Meta

func (a *StatsAggregation) Meta(metaData map[string]interface{}) *StatsAggregation

Meta sets the meta data to be included in the aggregation response.

func (*StatsAggregation) Script

func (a *StatsAggregation) Script(script *Script) *StatsAggregation

func (*StatsAggregation) Source

func (a *StatsAggregation) Source() (interface{}, error)

func (*StatsAggregation) SubAggregation

func (a *StatsAggregation) SubAggregation(name string, subAggregation Aggregation) *StatsAggregation

type StatsBucketAggregation

type StatsBucketAggregation struct {
	// contains filtered or unexported fields
}

StatsBucketAggregation is a sibling pipeline aggregation which calculates a variety of stats across all buckets of a specified metric in a sibling aggregation. The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-pipeline-stats-bucket-aggregation.html

func NewStatsBucketAggregation

func NewStatsBucketAggregation() *StatsBucketAggregation

NewStatsBucketAggregation creates and initializes a new StatsBucketAggregation.

func (*StatsBucketAggregation) BucketsPath

func (s *StatsBucketAggregation) BucketsPath(bucketsPaths ...string) *StatsBucketAggregation

BucketsPath sets the paths to the buckets to use for this pipeline aggregator.

func (*StatsBucketAggregation) Format

Format to use on the output of this aggregation.

func (*StatsBucketAggregation) GapInsertZeros

func (s *StatsBucketAggregation) GapInsertZeros() *StatsBucketAggregation

GapInsertZeros inserts zeros for gaps in the series.

func (*StatsBucketAggregation) GapPolicy

func (s *StatsBucketAggregation) GapPolicy(gapPolicy string) *StatsBucketAggregation

GapPolicy defines what should be done when a gap in the series is discovered. Valid values include "insert_zeros" or "skip". Default is "insert_zeros".

func (*StatsBucketAggregation) GapSkip

GapSkip skips gaps in the series.

func (*StatsBucketAggregation) Meta

func (s *StatsBucketAggregation) Meta(metaData map[string]interface{}) *StatsBucketAggregation

Meta sets the meta data to be included in the aggregation response.

func (*StatsBucketAggregation) Source

func (s *StatsBucketAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface.

type StopBackoff

type StopBackoff struct{}

StopBackoff is a fixed backoff policy that always returns false for Next(), meaning that the operation should never be retried.

func (StopBackoff) Next

func (b StopBackoff) Next(retry int) (time.Duration, bool)

Next implements BackoffFunc for StopBackoff.

type StopRetrier

type StopRetrier struct {
}

StopRetrier is an implementation that does no retries.

func NewStopRetrier

func NewStopRetrier() *StopRetrier

NewStopRetrier returns a retrier that does no retries.

func (*StopRetrier) Retry

func (r *StopRetrier) Retry(ctx context.Context, retry int, req *http.Request, resp *http.Response, err error) (time.Duration, bool, error)

Retry does not retry.

type StupidBackoffSmoothingModel

type StupidBackoffSmoothingModel struct {
	// contains filtered or unexported fields
}

StupidBackoffSmoothingModel implements a stupid backoff smoothing model. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-suggesters-phrase.html#_smoothing_models for details about smoothing models.

func NewStupidBackoffSmoothingModel

func NewStupidBackoffSmoothingModel(discount float64) *StupidBackoffSmoothingModel

func (*StupidBackoffSmoothingModel) Source

func (sm *StupidBackoffSmoothingModel) Source() (interface{}, error)

func (*StupidBackoffSmoothingModel) Type

type SuggestField

type SuggestField struct {
	// contains filtered or unexported fields
}

SuggestField can be used by the caller to specify a suggest field at index time. For a detailed example, see e.g. https://www.elastic.co/blog/you-complete-me.
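
A rough sketch of indexing a document with a suggest field, reusing the Tweet struct from the examples above; the Index service (client.Index with Index, Type, Id, BodyJson, Do) is documented elsewhere in this package and assumed here:

tweet := Tweet{
	User:    "olivere",
	Message: "Golang is awesome",
	Suggest: elastic.NewSuggestField("Golang", "Go").Weight(10),
}
_, err := client.Index().
	Index("twitter").
	Type("doc").
	Id("1").
	BodyJson(tweet).
	Do(context.Background())
if err != nil {
	// Handle error
}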

func NewSuggestField

func NewSuggestField(input ...string) *SuggestField

func (*SuggestField) ContextQuery

func (f *SuggestField) ContextQuery(queries ...SuggesterContextQuery) *SuggestField

func (*SuggestField) Input

func (f *SuggestField) Input(input ...string) *SuggestField

func (*SuggestField) MarshalJSON

func (f *SuggestField) MarshalJSON() ([]byte, error)

MarshalJSON encodes SuggestField into JSON.

func (*SuggestField) Weight

func (f *SuggestField) Weight(weight int) *SuggestField

type Suggester

type Suggester interface {
	Name() string
	Source(includeName bool) (interface{}, error)
}

Suggester represents the generic suggester interface. A suggester's only purpose is to return the source of the query as a JSON-serializable object. Returning a map[string]interface{} will do.

type SuggesterCategoryMapping

type SuggesterCategoryMapping struct {
	// contains filtered or unexported fields
}

SuggesterCategoryMapping provides a mapping for a category context in a suggester. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/suggester-context.html#_category_mapping.

func NewSuggesterCategoryMapping

func NewSuggesterCategoryMapping(name string) *SuggesterCategoryMapping

NewSuggesterCategoryMapping creates a new SuggesterCategoryMapping.

func (*SuggesterCategoryMapping) DefaultValues

func (q *SuggesterCategoryMapping) DefaultValues(values ...string) *SuggesterCategoryMapping

func (*SuggesterCategoryMapping) FieldName

func (q *SuggesterCategoryMapping) FieldName(fieldName string) *SuggesterCategoryMapping

func (*SuggesterCategoryMapping) Source

func (q *SuggesterCategoryMapping) Source() (interface{}, error)

Source returns a map that will be used to serialize the context query as JSON.

type SuggesterCategoryQuery

type SuggesterCategoryQuery struct {
	// contains filtered or unexported fields
}

SuggesterCategoryQuery provides querying a category context in a suggester. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/suggester-context.html#_category_query.
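
A hedged sketch of restricting a completion suggester to a category context; the field, context, and index names are assumptions, as are the client and ctx variables:

	// "place" must be declared as a category context on the completion field mapping.
	sugg := elastic.NewCompletionSuggester("city-suggest").
		Field("suggest").
		Text("amst").
		ContextQuery(elastic.NewSuggesterCategoryQuery("place", "city"))
	res, err := client.Search().Index("cities").Suggester(sugg).Do(ctx)
	if err != nil {
		// handle error
	}
	_ = res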

func NewSuggesterCategoryQuery

func NewSuggesterCategoryQuery(name string, values ...string) *SuggesterCategoryQuery

NewSuggesterCategoryQuery creates a new SuggesterCategoryQuery.

func (*SuggesterCategoryQuery) Source

func (q *SuggesterCategoryQuery) Source() (interface{}, error)

Source returns a map that will be used to serialize the context query as JSON.

func (*SuggesterCategoryQuery) Value

func (*SuggesterCategoryQuery) ValueWithBoost

func (q *SuggesterCategoryQuery) ValueWithBoost(val string, boost int) *SuggesterCategoryQuery

func (*SuggesterCategoryQuery) Values

type SuggesterContextQuery

type SuggesterContextQuery interface {
	Source() (interface{}, error)
}

SuggesterContextQuery is used to define context information within a suggestion request.

type SuggesterGeoMapping

type SuggesterGeoMapping struct {
	// contains filtered or unexported fields
}

SuggesterGeoMapping provides a mapping for a geolocation context in a suggester. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/suggester-context.html#_geo_location_mapping.

func NewSuggesterGeoMapping

func NewSuggesterGeoMapping(name string) *SuggesterGeoMapping

NewSuggesterGeoMapping creates a new SuggesterGeoMapping.

func (*SuggesterGeoMapping) DefaultLocations

func (q *SuggesterGeoMapping) DefaultLocations(locations ...*GeoPoint) *SuggesterGeoMapping

func (*SuggesterGeoMapping) FieldName

func (q *SuggesterGeoMapping) FieldName(fieldName string) *SuggesterGeoMapping

func (*SuggesterGeoMapping) Neighbors

func (q *SuggesterGeoMapping) Neighbors(neighbors bool) *SuggesterGeoMapping

func (*SuggesterGeoMapping) Precision

func (q *SuggesterGeoMapping) Precision(precision ...string) *SuggesterGeoMapping

func (*SuggesterGeoMapping) Source

func (q *SuggesterGeoMapping) Source() (interface{}, error)

Source returns a map that will be used to serialize the context query as JSON.

type SuggesterGeoQuery

type SuggesterGeoQuery struct {
	// contains filtered or unexported fields
}

SuggesterGeoQuery provides querying a geolocation context in a suggester. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/suggester-context.html#_geo_location_query

func NewSuggesterGeoQuery

func NewSuggesterGeoQuery(name string, location *GeoPoint) *SuggesterGeoQuery

NewSuggesterGeoQuery creates a new SuggesterGeoQuery.

func (*SuggesterGeoQuery) Precision

func (q *SuggesterGeoQuery) Precision(precision ...string) *SuggesterGeoQuery

func (*SuggesterGeoQuery) Source

func (q *SuggesterGeoQuery) Source() (interface{}, error)

Source returns a map that will be used to serialize the context query as JSON.

type SumAggregation

type SumAggregation struct {
	// contains filtered or unexported fields
}

SumAggregation is a single-value metrics aggregation that sums up numeric values that are extracted from the aggregated documents. These values can be extracted either from specific numeric fields in the documents, or be generated by a provided script. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-sum-aggregation.html
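
A minimal sketch, assuming a configured client and ctx, and an illustrative "sales" index with a numeric "price" field:

	agg := elastic.NewSumAggregation().Field("price")
	res, err := client.Search().Index("sales").Size(0).Aggregation("total_price", agg).Do(ctx)
	if err != nil {
		// handle error
	}
	_ = res // the sum is available under the "total_price" aggregation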

func NewSumAggregation

func NewSumAggregation() *SumAggregation

func (*SumAggregation) Field

func (a *SumAggregation) Field(field string) *SumAggregation

func (*SumAggregation) Format

func (a *SumAggregation) Format(format string) *SumAggregation

func (*SumAggregation) Meta

func (a *SumAggregation) Meta(metaData map[string]interface{}) *SumAggregation

Meta sets the meta data to be included in the aggregation response.

func (*SumAggregation) Script

func (a *SumAggregation) Script(script *Script) *SumAggregation

func (*SumAggregation) Source

func (a *SumAggregation) Source() (interface{}, error)

func (*SumAggregation) SubAggregation

func (a *SumAggregation) SubAggregation(name string, subAggregation Aggregation) *SumAggregation

type SumBucketAggregation

type SumBucketAggregation struct {
	// contains filtered or unexported fields
}

SumBucketAggregation is a sibling pipeline aggregation which calculates the sum across all buckets of a specified metric in a sibling aggregation. The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/search-aggregations-pipeline-sum-bucket-aggregation.html
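
A hedged sketch of the sibling pipeline pattern: a date histogram with a per-bucket sum, and a sum_bucket that adds those sums up. Index and field names are illustrative; client and ctx are assumed.

	monthly := elastic.NewDateHistogramAggregation().Field("date").Interval("month").
		SubAggregation("sales", elastic.NewSumAggregation().Field("price"))
	total := elastic.NewSumBucketAggregation().BucketsPath("sales_per_month>sales")
	res, err := client.Search().Index("sales").Size(0).
		Aggregation("sales_per_month", monthly).
		Aggregation("sum_monthly_sales", total).
		Do(ctx)
	if err != nil {
		// handle error
	}
	_ = res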

func NewSumBucketAggregation

func NewSumBucketAggregation() *SumBucketAggregation

NewSumBucketAggregation creates and initializes a new SumBucketAggregation.

func (*SumBucketAggregation) BucketsPath

func (a *SumBucketAggregation) BucketsPath(bucketsPaths ...string) *SumBucketAggregation

BucketsPath sets the paths to the buckets to use for this pipeline aggregator.

func (*SumBucketAggregation) Format

Format to use on the output of this aggregation.

func (*SumBucketAggregation) GapInsertZeros

func (a *SumBucketAggregation) GapInsertZeros() *SumBucketAggregation

GapInsertZeros inserts zeros for gaps in the series.

func (*SumBucketAggregation) GapPolicy

func (a *SumBucketAggregation) GapPolicy(gapPolicy string) *SumBucketAggregation

GapPolicy defines what should be done when a gap in the series is discovered. Valid values include "insert_zeros" or "skip". Default is "insert_zeros".

func (*SumBucketAggregation) GapSkip

GapSkip skips gaps in the series.

func (*SumBucketAggregation) Meta

func (a *SumBucketAggregation) Meta(metaData map[string]interface{}) *SumBucketAggregation

Meta sets the meta data to be included in the aggregation response.

func (*SumBucketAggregation) Source

func (a *SumBucketAggregation) Source() (interface{}, error)

Source returns a JSON-serializable interface.

type TaskInfo

type TaskInfo struct {
	Node               string      `json:"node"`
	Id                 int64       `json:"id"` // the task id (yes, this is a long in the Java source)
	Type               string      `json:"type"`
	Action             string      `json:"action"`
	Status             interface{} `json:"status"`      // has separate implementations of Task.Status in Java for reindexing, replication, and "RawTaskStatus"
	Description        interface{} `json:"description"` // same as Status
	StartTime          string      `json:"start_time"`
	StartTimeInMillis  int64       `json:"start_time_in_millis"`
	RunningTime        string      `json:"running_time"`
	RunningTimeInNanos int64       `json:"running_time_in_nanos"`
	Cancellable        bool        `json:"cancellable"`
	ParentTaskId       string      `json:"parent_task_id"` // like "YxJnVYjwSBm_AUbzddTajQ:12356"
}

TaskInfo represents information about a currently running task.

type TaskOperationFailure

type TaskOperationFailure struct {
	TaskId int64         `json:"task_id"` // this is a long in the Java source
	NodeId string        `json:"node_id"`
	Status string        `json:"status"`
	Reason *ErrorDetails `json:"reason"`
}

type TasksCancelService

type TasksCancelService struct {
	// contains filtered or unexported fields
}

TasksCancelService can cancel long-running tasks. It is supported as of Elasticsearch 2.3.0.

See http://www.elastic.co/guide/en/elasticsearch/reference/5.2/tasks-cancel.html for details.

func NewTasksCancelService

func NewTasksCancelService(client *Client) *TasksCancelService

NewTasksCancelService creates a new TasksCancelService.

func (*TasksCancelService) Actions

func (s *TasksCancelService) Actions(actions []string) *TasksCancelService

Actions is a list of actions that should be cancelled. Leave empty to cancel all.

func (*TasksCancelService) Do

Do executes the operation.

func (*TasksCancelService) NodeId

func (s *TasksCancelService) NodeId(nodeId []string) *TasksCancelService

NodeId is a list of node IDs or names to limit the returned information; use `_local` to return information from the node you're connecting to, leave empty to get information from all nodes.

func (*TasksCancelService) ParentNode

func (s *TasksCancelService) ParentNode(parentNode string) *TasksCancelService

ParentNode limits cancellation to tasks with the specified parent node.

func (*TasksCancelService) ParentTask

func (s *TasksCancelService) ParentTask(parentTask int64) *TasksCancelService

ParentTask limits cancellation to tasks with the specified parent task id. Set to -1 to cancel all.

func (*TasksCancelService) Pretty

func (s *TasksCancelService) Pretty(pretty bool) *TasksCancelService

Pretty indicates that the JSON response be indented and human readable.

func (*TasksCancelService) TaskId

func (s *TasksCancelService) TaskId(taskId int64) *TasksCancelService

TaskId specifies the task to cancel. Set to -1 to cancel all tasks.

func (*TasksCancelService) Validate

func (s *TasksCancelService) Validate() error

Validate checks if the operation is valid.

type TasksGetTaskResponse

type TasksGetTaskResponse struct {
	Completed bool      `json:"completed"`
	Task      *TaskInfo `json:"task,omitempty"`
}

type TasksGetTaskService

type TasksGetTaskService struct {
	// contains filtered or unexported fields
}

TasksGetTaskService retrieves the state of a task in the cluster. It is part of the Task Management API documented at http://www.elastic.co/guide/en/elasticsearch/reference/5.2/tasks-list.html.

It is supported as of Elasticsearch 2.3.0.

func NewTasksGetTaskService

func NewTasksGetTaskService(client *Client) *TasksGetTaskService

NewTasksGetTaskService creates a new TasksGetTaskService.

func (*TasksGetTaskService) Do

Do executes the operation.

func (*TasksGetTaskService) Pretty

func (s *TasksGetTaskService) Pretty(pretty bool) *TasksGetTaskService

Pretty indicates that the JSON response be indented and human readable.

func (*TasksGetTaskService) TaskId

func (s *TasksGetTaskService) TaskId(taskId string) *TasksGetTaskService

TaskId specifies the id of the task to return.

func (*TasksGetTaskService) Validate

func (s *TasksGetTaskService) Validate() error

Validate checks if the operation is valid.

func (*TasksGetTaskService) WaitForCompletion

func (s *TasksGetTaskService) WaitForCompletion(waitForCompletion bool) *TasksGetTaskService

WaitForCompletion indicates whether to wait for the matching tasks to complete (default: false).

type TasksListResponse

type TasksListResponse struct {
	TaskFailures []*TaskOperationFailure `json:"task_failures"`
	NodeFailures []*FailedNodeException  `json:"node_failures"`
	// Nodes returns the tasks per node. The key is the node id.
	Nodes map[string]*DiscoveryNode `json:"nodes"`
}

TasksListResponse is the response of TasksListService.Do.

type TasksListService

type TasksListService struct {
	// contains filtered or unexported fields
}

TasksListService retrieves the list of currently executing tasks on one or more nodes in the cluster. It is part of the Task Management API documented at https://www.elastic.co/guide/en/elasticsearch/reference/6.0/tasks.html.

It is supported as of Elasticsearch 2.3.0.
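
A minimal sketch, assuming a configured client and ctx:

	res, err := client.TasksList().Detailed(true).Do(ctx)
	if err != nil {
		// handle error
	}
	for nodeID := range res.Nodes {
		_ = nodeID // inspect the tasks reported for each node
	}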

func NewTasksListService

func NewTasksListService(client *Client) *TasksListService

NewTasksListService creates a new TasksListService.

func (*TasksListService) Actions

func (s *TasksListService) Actions(actions ...string) *TasksListService

Actions is a list of actions that should be returned. Leave empty to return all.

func (*TasksListService) Detailed

func (s *TasksListService) Detailed(detailed bool) *TasksListService

Detailed indicates whether to return detailed task information (default: false).

func (*TasksListService) Do

Do executes the operation.

func (*TasksListService) GroupBy

func (s *TasksListService) GroupBy(groupBy string) *TasksListService

GroupBy groups tasks by nodes or parent/child relationships. As of now, it can either be "nodes" (default) or "parents".

func (*TasksListService) NodeId

func (s *TasksListService) NodeId(nodeId ...string) *TasksListService

NodeId is a list of node IDs or names to limit the returned information; use `_local` to return information from the node you're connecting to, leave empty to get information from all nodes.

func (*TasksListService) ParentNode

func (s *TasksListService) ParentNode(parentNode string) *TasksListService

ParentNode returns tasks with specified parent node.

func (*TasksListService) ParentTaskId

func (s *TasksListService) ParentTaskId(parentTaskId string) *TasksListService

ParentTaskId returns tasks with specified parent task id (node_id:task_number). Set to -1 to return all.

func (*TasksListService) Pretty

func (s *TasksListService) Pretty(pretty bool) *TasksListService

Pretty indicates that the JSON response be indented and human readable.

func (*TasksListService) TaskId

func (s *TasksListService) TaskId(taskId ...string) *TasksListService

TaskId limits the returned information to the task(s) with the specified id(s).

func (*TasksListService) Validate

func (s *TasksListService) Validate() error

Validate checks if the operation is valid.

func (*TasksListService) WaitForCompletion

func (s *TasksListService) WaitForCompletion(waitForCompletion bool) *TasksListService

WaitForCompletion indicates whether to wait for the matching tasks to complete (default: false).

type TermQuery

type TermQuery struct {
	// contains filtered or unexported fields
}

TermQuery finds documents that contain the exact term specified in the inverted index.

For details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-term-query.html
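
A minimal sketch, assuming a configured client and ctx, and an illustrative "tweets" index:

	q := elastic.NewTermQuery("user", "olivere")
	res, err := client.Search().Index("tweets").Query(q).Do(ctx)
	if err != nil {
		// handle error
	}
	_ = res // matching documents are in res.Hits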

func NewTermQuery

func NewTermQuery(name string, value interface{}) *TermQuery

NewTermQuery creates and initializes a new TermQuery.

func (*TermQuery) Boost

func (q *TermQuery) Boost(boost float64) *TermQuery

Boost sets the boost for this query.

func (*TermQuery) QueryName

func (q *TermQuery) QueryName(queryName string) *TermQuery

QueryName sets the query name for the filter that can be used when searching for matched_filters per hit.

func (*TermQuery) Source

func (q *TermQuery) Source() (interface{}, error)

Source returns JSON for the query.

type TermSuggester

type TermSuggester struct {
	Suggester
	// contains filtered or unexported fields
}

TermSuggester suggests terms based on edit distance. For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-suggesters-term.html.
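
A hedged sketch, assuming a configured client and ctx, and an illustrative "tweets" index with a "message" field:

	ts := elastic.NewTermSuggester("spelling").
		Field("message").
		Text("missspelled").
		Size(3)
	res, err := client.Search().Index("tweets").Suggester(ts).Do(ctx)
	if err != nil {
		// handle error
	}
	_ = res // corrections are available via res.Suggest["spelling"]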

func NewTermSuggester

func NewTermSuggester(name string) *TermSuggester

NewTermSuggester creates a new TermSuggester.

func (*TermSuggester) Accuracy

func (q *TermSuggester) Accuracy(accuracy float64) *TermSuggester

func (*TermSuggester) Analyzer

func (q *TermSuggester) Analyzer(analyzer string) *TermSuggester

func (*TermSuggester) ContextQueries

func (q *TermSuggester) ContextQueries(queries ...SuggesterContextQuery) *TermSuggester

func (*TermSuggester) ContextQuery

func (q *TermSuggester) ContextQuery(query SuggesterContextQuery) *TermSuggester

func (*TermSuggester) Field

func (q *TermSuggester) Field(field string) *TermSuggester

func (*TermSuggester) MaxEdits

func (q *TermSuggester) MaxEdits(maxEdits int) *TermSuggester

func (*TermSuggester) MaxInspections

func (q *TermSuggester) MaxInspections(maxInspections int) *TermSuggester

func (*TermSuggester) MaxTermFreq

func (q *TermSuggester) MaxTermFreq(maxTermFreq float64) *TermSuggester

func (*TermSuggester) MinDocFreq

func (q *TermSuggester) MinDocFreq(minDocFreq float64) *TermSuggester

func (*TermSuggester) MinWordLength

func (q *TermSuggester) MinWordLength(minWordLength int) *TermSuggester

func (*TermSuggester) Name

func (q *TermSuggester) Name() string

func (*TermSuggester) PrefixLength

func (q *TermSuggester) PrefixLength(prefixLength int) *TermSuggester

func (*TermSuggester) ShardSize

func (q *TermSuggester) ShardSize(shardSize int) *TermSuggester

func (*TermSuggester) Size

func (q *TermSuggester) Size(size int) *TermSuggester

func (*TermSuggester) Sort

func (q *TermSuggester) Sort(sort string) *TermSuggester

func (*TermSuggester) Source

func (q *TermSuggester) Source(includeName bool) (interface{}, error)

Source generates the source for the term suggester.

func (*TermSuggester) StringDistance

func (q *TermSuggester) StringDistance(stringDistance string) *TermSuggester

func (*TermSuggester) SuggestMode

func (q *TermSuggester) SuggestMode(suggestMode string) *TermSuggester

func (*TermSuggester) Text

func (q *TermSuggester) Text(text string) *TermSuggester

type TermVectorsFieldInfo

type TermVectorsFieldInfo struct {
	FieldStatistics FieldStatistics      `json:"field_statistics"`
	Terms           map[string]TermsInfo `json:"terms"`
}

type TermsAggregation

type TermsAggregation struct {
	// contains filtered or unexported fields
}

TermsAggregation is a multi-bucket value source based aggregation where buckets are dynamically built - one per unique value. See: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html
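
A minimal sketch, assuming a configured client and ctx, and an illustrative "tweets" index: bucket documents by the exact values of the "user" field and keep the ten largest buckets.

	agg := elastic.NewTermsAggregation().Field("user").Size(10).OrderByCountDesc()
	res, err := client.Search().Index("tweets").Size(0).Aggregation("by_user", agg).Do(ctx)
	if err != nil {
		// handle error
	}
	_ = res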

func NewTermsAggregation

func NewTermsAggregation() *TermsAggregation

func (*TermsAggregation) CollectionMode

func (a *TermsAggregation) CollectionMode(collectionMode string) *TermsAggregation

Collection mode can be depth_first or breadth_first as of 1.4.0.

func (*TermsAggregation) Exclude

func (a *TermsAggregation) Exclude(regexp string) *TermsAggregation

func (*TermsAggregation) ExcludeValues

func (a *TermsAggregation) ExcludeValues(values ...interface{}) *TermsAggregation

func (*TermsAggregation) ExecutionHint

func (a *TermsAggregation) ExecutionHint(hint string) *TermsAggregation

func (*TermsAggregation) Field

func (a *TermsAggregation) Field(field string) *TermsAggregation

func (*TermsAggregation) Include

func (a *TermsAggregation) Include(regexp string) *TermsAggregation

func (*TermsAggregation) IncludeValues

func (a *TermsAggregation) IncludeValues(values ...interface{}) *TermsAggregation

func (*TermsAggregation) Meta

func (a *TermsAggregation) Meta(metaData map[string]interface{}) *TermsAggregation

Meta sets the meta data to be included in the aggregation response.

func (*TermsAggregation) MinDocCount

func (a *TermsAggregation) MinDocCount(minDocCount int) *TermsAggregation

func (*TermsAggregation) Missing

func (a *TermsAggregation) Missing(missing interface{}) *TermsAggregation

Missing configures the value to use when documents miss a value.

func (*TermsAggregation) NumPartitions

func (a *TermsAggregation) NumPartitions(n int) *TermsAggregation

func (*TermsAggregation) Order

func (a *TermsAggregation) Order(order string, asc bool) *TermsAggregation

func (*TermsAggregation) OrderByAggregation

func (a *TermsAggregation) OrderByAggregation(aggName string, asc bool) *TermsAggregation

OrderByAggregation creates a bucket ordering strategy which sorts buckets based on a single-valued sub-aggregation.

func (*TermsAggregation) OrderByAggregationAndMetric

func (a *TermsAggregation) OrderByAggregationAndMetric(aggName, metric string, asc bool) *TermsAggregation

OrderByAggregationAndMetric creates a bucket ordering strategy which sorts buckets based on a metric of a multi-valued sub-aggregation.

func (*TermsAggregation) OrderByCount

func (a *TermsAggregation) OrderByCount(asc bool) *TermsAggregation

func (*TermsAggregation) OrderByCountAsc

func (a *TermsAggregation) OrderByCountAsc() *TermsAggregation

func (*TermsAggregation) OrderByCountDesc

func (a *TermsAggregation) OrderByCountDesc() *TermsAggregation

func (*TermsAggregation) OrderByTerm

func (a *TermsAggregation) OrderByTerm(asc bool) *TermsAggregation

func (*TermsAggregation) OrderByTermAsc

func (a *TermsAggregation) OrderByTermAsc() *TermsAggregation

func (*TermsAggregation) OrderByTermDesc

func (a *TermsAggregation) OrderByTermDesc() *TermsAggregation

func (*TermsAggregation) Partition

func (a *TermsAggregation) Partition(p int) *TermsAggregation

func (*TermsAggregation) RequiredSize

func (a *TermsAggregation) RequiredSize(requiredSize int) *TermsAggregation

func (*TermsAggregation) Script

func (a *TermsAggregation) Script(script *Script) *TermsAggregation

func (*TermsAggregation) ShardMinDocCount

func (a *TermsAggregation) ShardMinDocCount(shardMinDocCount int) *TermsAggregation

func (*TermsAggregation) ShardSize

func (a *TermsAggregation) ShardSize(shardSize int) *TermsAggregation

func (*TermsAggregation) ShowTermDocCountError

func (a *TermsAggregation) ShowTermDocCountError(showTermDocCountError bool) *TermsAggregation

func (*TermsAggregation) Size

func (a *TermsAggregation) Size(size int) *TermsAggregation

func (*TermsAggregation) Source

func (a *TermsAggregation) Source() (interface{}, error)

func (*TermsAggregation) SubAggregation

func (a *TermsAggregation) SubAggregation(name string, subAggregation Aggregation) *TermsAggregation

func (*TermsAggregation) ValueType

func (a *TermsAggregation) ValueType(valueType string) *TermsAggregation

ValueType can be string, long, or double.

type TermsAggregationIncludeExclude

type TermsAggregationIncludeExclude struct {
	Include       string
	Exclude       string
	IncludeValues []interface{}
	ExcludeValues []interface{}
	Partition     int
	NumPartitions int
}

TermsAggregationIncludeExclude allows for include/exclude in a TermsAggregation.

type TermsInfo

type TermsInfo struct {
	DocFreq  int64       `json:"doc_freq"`
	Score    float64     `json:"score"`
	TermFreq int64       `json:"term_freq"`
	Ttf      int64       `json:"ttf"`
	Tokens   []TokenInfo `json:"tokens"`
}

type TermsLookup

type TermsLookup struct {
	// contains filtered or unexported fields
}

TermsLookup encapsulates the parameters needed to fetch terms.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-terms-query.html#query-dsl-terms-lookup.
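
A hedged sketch of a terms lookup: the terms are fetched from the "followers" field of document "2" in the "users" index and then used to filter tweets. All names are illustrative; client and ctx are assumed.

	lookup := elastic.NewTermsLookup().Index("users").Type("doc").Id("2").Path("followers")
	q := elastic.NewTermsQuery("user").TermsLookup(lookup)
	res, err := client.Search().Index("tweets").Query(q).Do(ctx)
	if err != nil {
		// handle error
	}
	_ = res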

func NewTermsLookup

func NewTermsLookup() *TermsLookup

NewTermsLookup creates and initializes a new TermsLookup.

func (*TermsLookup) Id

func (t *TermsLookup) Id(id string) *TermsLookup

Id to look up.

func (*TermsLookup) Index

func (t *TermsLookup) Index(index string) *TermsLookup

Index name.

func (*TermsLookup) Path

func (t *TermsLookup) Path(path string) *TermsLookup

Path to use for lookup.

func (*TermsLookup) Routing

func (t *TermsLookup) Routing(routing string) *TermsLookup

Routing value.

func (*TermsLookup) Source

func (t *TermsLookup) Source() (interface{}, error)

Source creates the JSON source of the builder.

func (*TermsLookup) Type

func (t *TermsLookup) Type(typ string) *TermsLookup

Type name.

type TermsOrder

type TermsOrder struct {
	Field     string
	Ascending bool
}

TermsOrder specifies a single order field for a terms aggregation.

func (*TermsOrder) Source

func (order *TermsOrder) Source() (interface{}, error)

Source returns serializable JSON of the TermsOrder.

type TermsQuery

type TermsQuery struct {
	// contains filtered or unexported fields
}

TermsQuery filters documents that have fields that match any of the provided terms (not analyzed).

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-terms-query.html
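
A minimal sketch, assuming a configured client and ctx, and an illustrative "tweets" index:

	q := elastic.NewTermsQuery("user", "olivere", "sandrae")
	res, err := client.Search().Index("tweets").Query(q).Do(ctx)
	if err != nil {
		// handle error
	}
	_ = res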

func NewTermsQuery

func NewTermsQuery(name string, values ...interface{}) *TermsQuery

NewTermsQuery creates and initializes a new TermsQuery.

func (*TermsQuery) Boost

func (q *TermsQuery) Boost(boost float64) *TermsQuery

Boost sets the boost for this query.

func (*TermsQuery) QueryName

func (q *TermsQuery) QueryName(queryName string) *TermsQuery

QueryName sets the query name for the filter that can be used when searching for matched_filters per hit.

func (*TermsQuery) Source

func (q *TermsQuery) Source() (interface{}, error)

Source creates the query source for the terms query.

func (*TermsQuery) TermsLookup

func (q *TermsQuery) TermsLookup(lookup *TermsLookup) *TermsQuery

TermsLookup adds terms lookup details to the query.

type TermsSetQuery

type TermsSetQuery struct {
	// contains filtered or unexported fields
}

TermsSetQuery returns any documents that match one or more of the provided terms. The terms are not analyzed and thus must match exactly. The number of terms that must match varies per document and is either controlled by a minimum-should-match field or computed per document in a minimum-should-match script.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.1/query-dsl-terms-set-query.html
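
A hedged sketch, assuming a configured client and ctx; the index, terms field, and numeric control field are illustrative:

	// "required_matches" holds, per document, how many of the supplied terms must match.
	q := elastic.NewTermsSetQuery("codes", "abc", "def", "ghi").
		MinimumShouldMatchField("required_matches")
	res, err := client.Search().Index("job-candidates").Query(q).Do(ctx)
	if err != nil {
		// handle error
	}
	_ = res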

func NewTermsSetQuery

func NewTermsSetQuery(name string, values ...interface{}) *TermsSetQuery

NewTermsSetQuery creates and initializes a new TermsSetQuery.

func (*TermsSetQuery) Boost

func (q *TermsSetQuery) Boost(boost float64) *TermsSetQuery

Boost sets the boost for this query.

func (*TermsSetQuery) MinimumShouldMatchField

func (q *TermsSetQuery) MinimumShouldMatchField(minimumShouldMatchField string) *TermsSetQuery

MinimumShouldMatchField specifies the name of the numeric field that controls how many terms must match per document.

func (*TermsSetQuery) MinimumShouldMatchScript

func (q *TermsSetQuery) MinimumShouldMatchScript(minimumShouldMatchScript *Script) *TermsSetQuery

MinimumShouldMatchScript specifies a script that controls how many terms must match per document.

func (*TermsSetQuery) QueryName

func (q *TermsSetQuery) QueryName(queryName string) *TermsSetQuery

QueryName sets the query name for the filter that can be used when searching for matched_filters per hit.

func (*TermsSetQuery) Source

func (q *TermsSetQuery) Source() (interface{}, error)

Source creates the query source for the terms set query.

type TermvectorsFilterSettings

type TermvectorsFilterSettings struct {
	// contains filtered or unexported fields
}

TermvectorsFilterSettings adds additional filters to a Termvectors request. It allows filtering terms based on their tf-idf scores. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-termvectors.html#_terms_filtering for more information.

func NewTermvectorsFilterSettings

func NewTermvectorsFilterSettings() *TermvectorsFilterSettings

NewTermvectorsFilterSettings creates and initializes a new TermvectorsFilterSettings struct.

func (*TermvectorsFilterSettings) MaxDocFreq

MaxDocFreq ignores terms which occur in more than this many docs.

func (*TermvectorsFilterSettings) MaxNumTerms

MaxNumTerms specifies the maximum number of terms that must be returned per field.

func (*TermvectorsFilterSettings) MaxTermFreq

MaxTermFreq ignores words with more than this frequency in the source doc.

func (*TermvectorsFilterSettings) MaxWordLength

func (fs *TermvectorsFilterSettings) MaxWordLength(value int64) *TermvectorsFilterSettings

MaxWordLength specifies the maximum word length above which words will be ignored.

func (*TermvectorsFilterSettings) MinDocFreq

MinDocFreq ignores terms which do not occur in at least this many docs.

func (*TermvectorsFilterSettings) MinTermFreq

MinTermFreq ignores words with less than this frequency in the source doc.

func (*TermvectorsFilterSettings) MinWordLength

func (fs *TermvectorsFilterSettings) MinWordLength(value int64) *TermvectorsFilterSettings

MinWordLength specifies the minimum word length below which words will be ignored.

func (*TermvectorsFilterSettings) Source

func (fs *TermvectorsFilterSettings) Source() (interface{}, error)

Source returns JSON for the query.

type TermvectorsResponse

type TermvectorsResponse struct {
	Index       string                          `json:"_index"`
	Type        string                          `json:"_type"`
	Id          string                          `json:"_id,omitempty"`
	Version     int                             `json:"_version"`
	Found       bool                            `json:"found"`
	Took        int64                           `json:"took"`
	TermVectors map[string]TermVectorsFieldInfo `json:"term_vectors"`
}

TermvectorsResponse is the response of TermvectorsService.Do.

type TermvectorsService

type TermvectorsService struct {
	// contains filtered or unexported fields
}

TermvectorsService returns information and statistics on terms in the fields of a particular document. The document could be stored in the index or artificially provided by the user.

See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-termvectors.html for documentation.
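
A hedged sketch, assuming a configured client and ctx, that the service is obtained via client.TermVectors, and that the index, type, id, and field names are illustrative:

	res, err := client.TermVectors("tweets", "doc").
		Id("1").
		Fields("message").
		FieldStatistics(true).
		TermStatistics(true).
		Do(ctx)
	if err != nil {
		// handle error
	}
	_ = res // per-field statistics are in res.TermVectors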

func NewTermvectorsService

func NewTermvectorsService(client *Client) *TermvectorsService

NewTermvectorsService creates a new TermvectorsService.

func (*TermvectorsService) BodyJson

func (s *TermvectorsService) BodyJson(body interface{}) *TermvectorsService

BodyJson defines the body parameters. See documentation.

func (*TermvectorsService) BodyString

func (s *TermvectorsService) BodyString(body string) *TermvectorsService

BodyString defines the body parameters as a string. See documentation.

func (*TermvectorsService) Dfs

Dfs specifies if distributed frequencies should be returned instead of shard frequencies.

func (*TermvectorsService) Do

Do executes the operation.

func (*TermvectorsService) Doc

func (s *TermvectorsService) Doc(doc interface{}) *TermvectorsService

Doc is the document to analyze.

func (*TermvectorsService) FieldStatistics

func (s *TermvectorsService) FieldStatistics(fieldStatistics bool) *TermvectorsService

FieldStatistics specifies if document count, sum of document frequencies and sum of total term frequencies should be returned.

func (*TermvectorsService) Fields

func (s *TermvectorsService) Fields(fields ...string) *TermvectorsService

Fields is a list of fields to return.

func (*TermvectorsService) Filter

Filter adds terms filter settings.

func (*TermvectorsService) Id

Id of the document.

func (*TermvectorsService) Index

func (s *TermvectorsService) Index(index string) *TermvectorsService

Index in which the document resides.

func (*TermvectorsService) Offsets

func (s *TermvectorsService) Offsets(offsets bool) *TermvectorsService

Offsets specifies if term offsets should be returned.

func (*TermvectorsService) Parent

func (s *TermvectorsService) Parent(parent string) *TermvectorsService

Parent id of documents.

func (*TermvectorsService) Payloads

func (s *TermvectorsService) Payloads(payloads bool) *TermvectorsService

Payloads specifies if term payloads should be returned.

func (*TermvectorsService) PerFieldAnalyzer

func (s *TermvectorsService) PerFieldAnalyzer(perFieldAnalyzer map[string]string) *TermvectorsService

PerFieldAnalyzer allows specifying a different analyzer than the one defined for the field.

func (*TermvectorsService) Positions

func (s *TermvectorsService) Positions(positions bool) *TermvectorsService

Positions specifies if term positions should be returned.

func (*TermvectorsService) Preference

func (s *TermvectorsService) Preference(preference string) *TermvectorsService

Preference specifies the node or shard the operation should be performed on (default: random).

func (*TermvectorsService) Pretty

func (s *TermvectorsService) Pretty(pretty bool) *TermvectorsService

Pretty indicates that the JSON response be indented and human readable.

func (*TermvectorsService) Realtime

func (s *TermvectorsService) Realtime(realtime bool) *TermvectorsService

Realtime specifies if request is real-time as opposed to near-real-time (default: true).

func (*TermvectorsService) Routing

func (s *TermvectorsService) Routing(routing string) *TermvectorsService

Routing is a specific routing value.

func (*TermvectorsService) TermStatistics

func (s *TermvectorsService) TermStatistics(termStatistics bool) *TermvectorsService

TermStatistics specifies if total term frequency and document frequency should be returned.

func (*TermvectorsService) Type

Type of the document.

func (*TermvectorsService) Validate

func (s *TermvectorsService) Validate() error

Validate checks if the operation is valid.

func (*TermvectorsService) Version

func (s *TermvectorsService) Version(version interface{}) *TermvectorsService

Version an explicit version number for concurrency control.

func (*TermvectorsService) VersionType

func (s *TermvectorsService) VersionType(versionType string) *TermvectorsService

VersionType specifies a version type ("internal", "external", or "external_gte").

type TokenInfo

type TokenInfo struct {
	StartOffset int64  `json:"start_offset"`
	EndOffset   int64  `json:"end_offset"`
	Position    int64  `json:"position"`
	Payload     string `json:"payload"`
}

type TopHitsAggregation

type TopHitsAggregation struct {
	// contains filtered or unexported fields
}

TopHitsAggregation keeps track of the most relevant document being aggregated. This aggregator is intended to be used as a sub aggregator, so that the top matching documents can be aggregated per bucket.

It can effectively be used to group result sets by certain fields via a bucket aggregator. One or more bucket aggregators determine the properties by which the result set is sliced.

See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-top-hits-aggregation.html
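
A hedged sketch, assuming a configured client and ctx, and an illustrative "tweets" index with "user" and "created" fields: group by user and keep the three most recent tweets per bucket.

	topHits := elastic.NewTopHitsAggregation().Size(3).Sort("created", false)
	byUser := elastic.NewTermsAggregation().Field("user").
		SubAggregation("recent_tweets", topHits)
	res, err := client.Search().Index("tweets").Size(0).Aggregation("by_user", byUser).Do(ctx)
	if err != nil {
		// handle error
	}
	_ = res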

func NewTopHitsAggregation

func NewTopHitsAggregation() *TopHitsAggregation

func (*TopHitsAggregation) DocvalueField

func (a *TopHitsAggregation) DocvalueField(docvalueField string) *TopHitsAggregation

func (*TopHitsAggregation) DocvalueFields

func (a *TopHitsAggregation) DocvalueFields(docvalueFields ...string) *TopHitsAggregation

func (*TopHitsAggregation) Explain

func (a *TopHitsAggregation) Explain(explain bool) *TopHitsAggregation

func (*TopHitsAggregation) FetchSource

func (a *TopHitsAggregation) FetchSource(fetchSource bool) *TopHitsAggregation

func (*TopHitsAggregation) FetchSourceContext

func (a *TopHitsAggregation) FetchSourceContext(fetchSourceContext *FetchSourceContext) *TopHitsAggregation

func (*TopHitsAggregation) From

func (a *TopHitsAggregation) From(from int) *TopHitsAggregation

func (*TopHitsAggregation) Highlight

func (a *TopHitsAggregation) Highlight(highlight *Highlight) *TopHitsAggregation

func (*TopHitsAggregation) Highlighter

func (a *TopHitsAggregation) Highlighter() *Highlight

func (*TopHitsAggregation) NoStoredFields

func (a *TopHitsAggregation) NoStoredFields() *TopHitsAggregation

func (*TopHitsAggregation) ScriptField

func (a *TopHitsAggregation) ScriptField(scriptField *ScriptField) *TopHitsAggregation

func (*TopHitsAggregation) ScriptFields

func (a *TopHitsAggregation) ScriptFields(scriptFields ...*ScriptField) *TopHitsAggregation

func (*TopHitsAggregation) Size

func (a *TopHitsAggregation) Size(size int) *TopHitsAggregation

func (*TopHitsAggregation) Sort

func (a *TopHitsAggregation) Sort(field string, ascending bool) *TopHitsAggregation

func (*TopHitsAggregation) SortBy

func (a *TopHitsAggregation) SortBy(sorter ...Sorter) *TopHitsAggregation

func (*TopHitsAggregation) SortWithInfo

func (a *TopHitsAggregation) SortWithInfo(info SortInfo) *TopHitsAggregation

func (*TopHitsAggregation) Source

func (a *TopHitsAggregation) Source() (interface{}, error)

func (*TopHitsAggregation) TrackScores

func (a *TopHitsAggregation) TrackScores(trackScores bool) *TopHitsAggregation

func (*TopHitsAggregation) Version

func (a *TopHitsAggregation) Version(version bool) *TopHitsAggregation

type TypeQuery

type TypeQuery struct {
	// contains filtered or unexported fields
}

TypeQuery filters documents matching the provided document / mapping type.

For details, see: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-type-query.html

func NewTypeQuery

func NewTypeQuery(typ string) *TypeQuery

func (*TypeQuery) Source

func (q *TypeQuery) Source() (interface{}, error)

Source returns JSON for the query.

type UpdateByQueryService

type UpdateByQueryService struct {
	// contains filtered or unexported fields
}

UpdateByQueryService is documented at https://www.elastic.co/guide/en/elasticsearch/plugins/master/plugins-reindex.html.
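
A hedged sketch, assuming a configured client and ctx; the index, field, and query values are illustrative:

	// Flag all tweets by "olivere", proceeding when version conflicts occur.
	script := elastic.NewScript("ctx._source.flagged = true")
	res, err := client.UpdateByQuery("tweets").
		Query(elastic.NewTermQuery("user", "olivere")).
		Script(script).
		ProceedOnVersionConflict().
		Do(ctx)
	if err != nil {
		// handle error
	}
	_ = res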

func NewUpdateByQueryService

func NewUpdateByQueryService(client *Client) *UpdateByQueryService

NewUpdateByQueryService creates a new UpdateByQueryService.

func (*UpdateByQueryService) AbortOnVersionConflict

func (s *UpdateByQueryService) AbortOnVersionConflict() *UpdateByQueryService

AbortOnVersionConflict aborts the request on version conflicts. It is an alias to setting Conflicts("abort").

func (*UpdateByQueryService) AllowNoIndices

func (s *UpdateByQueryService) AllowNoIndices(allowNoIndices bool) *UpdateByQueryService

AllowNoIndices indicates whether to ignore the case where a wildcard indices expression resolves into no concrete indices. (This includes the `_all` string or when no indices have been specified.)

func (*UpdateByQueryService) AnalyzeWildcard

func (s *UpdateByQueryService) AnalyzeWildcard(analyzeWildcard bool) *UpdateByQueryService

AnalyzeWildcard specifies whether wildcard and prefix queries should be analyzed (default: false).

func (*UpdateByQueryService) Analyzer

func (s *UpdateByQueryService) Analyzer(analyzer string) *UpdateByQueryService

Analyzer specifies the analyzer to use for the query string.

func (*UpdateByQueryService) Body

Body specifies the body of the request. It overrides data being specified via SearchService or Script.

func (*UpdateByQueryService) Conflicts

func (s *UpdateByQueryService) Conflicts(conflicts string) *UpdateByQueryService

Conflicts indicates what to do when the process detects version conflicts. Possible values are "proceed" and "abort".

func (*UpdateByQueryService) DF

DF specifies the field to use as default where no field prefix is given in the query string.

func (*UpdateByQueryService) DefaultOperator

func (s *UpdateByQueryService) DefaultOperator(defaultOperator string) *UpdateByQueryService

DefaultOperator is the default operator for query string query (AND or OR).

func (*UpdateByQueryService) Do

Do executes the operation.

func (*UpdateByQueryService) DocvalueFields

func (s *UpdateByQueryService) DocvalueFields(docvalueFields ...string) *UpdateByQueryService

DocvalueFields specifies the list of fields to return as the docvalue representation of a field for each hit.

func (*UpdateByQueryService) ExpandWildcards

func (s *UpdateByQueryService) ExpandWildcards(expandWildcards string) *UpdateByQueryService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*UpdateByQueryService) Explain

func (s *UpdateByQueryService) Explain(explain bool) *UpdateByQueryService

Explain specifies whether to return detailed information about score computation as part of a hit.

func (*UpdateByQueryService) FielddataFields

func (s *UpdateByQueryService) FielddataFields(fielddataFields ...string) *UpdateByQueryService

FielddataFields is a list of fields to return as the field data representation of a field for each hit.

func (*UpdateByQueryService) From

From is the starting offset (default: 0).

func (*UpdateByQueryService) IgnoreUnavailable

func (s *UpdateByQueryService) IgnoreUnavailable(ignoreUnavailable bool) *UpdateByQueryService

IgnoreUnavailable indicates whether specified concrete indices should be ignored when unavailable (missing or closed).

func (*UpdateByQueryService) Index

func (s *UpdateByQueryService) Index(index ...string) *UpdateByQueryService

Index is a list of index names to search; use `_all` or empty string to perform the operation on all indices.

func (*UpdateByQueryService) Lenient

func (s *UpdateByQueryService) Lenient(lenient bool) *UpdateByQueryService

Lenient specifies whether format-based query failures (such as providing text to a numeric field) should be ignored.

func (*UpdateByQueryService) LowercaseExpandedTerms

func (s *UpdateByQueryService) LowercaseExpandedTerms(lowercaseExpandedTerms bool) *UpdateByQueryService

LowercaseExpandedTerms specifies whether query terms should be lowercased.

func (*UpdateByQueryService) Pipeline

func (s *UpdateByQueryService) Pipeline(pipeline string) *UpdateByQueryService

Pipeline specifies the ingest pipeline to set on index requests made by this action (default: none).

func (*UpdateByQueryService) Preference

func (s *UpdateByQueryService) Preference(preference string) *UpdateByQueryService

Preference specifies the node or shard the operation should be performed on (default: random).

func (*UpdateByQueryService) Pretty

func (s *UpdateByQueryService) Pretty(pretty bool) *UpdateByQueryService

Pretty indicates that the JSON response be indented and human readable.

func (*UpdateByQueryService) ProceedOnVersionConflict

func (s *UpdateByQueryService) ProceedOnVersionConflict() *UpdateByQueryService

ProceedOnVersionConflict proceeds with the request in case of version conflicts. It is an alias to setting Conflicts("proceed").

func (*UpdateByQueryService) Q

Q specifies the query in the Lucene query string syntax.

func (*UpdateByQueryService) Query

Query sets a query definition using the Query DSL.

func (*UpdateByQueryService) Refresh

func (s *UpdateByQueryService) Refresh(refresh string) *UpdateByQueryService

Refresh indicates whether the affected indices should be refreshed.

func (*UpdateByQueryService) RequestCache

func (s *UpdateByQueryService) RequestCache(requestCache bool) *UpdateByQueryService

RequestCache specifies if request cache should be used for this request or not, defaults to index level setting.

func (*UpdateByQueryService) RequestsPerSecond

func (s *UpdateByQueryService) RequestsPerSecond(requestsPerSecond int) *UpdateByQueryService

RequestsPerSecond sets the throttle on this request in sub-requests per second. -1 means no throttle; "unlimited" is the only non-numeric value accepted and has the same effect.

func (*UpdateByQueryService) Routing

func (s *UpdateByQueryService) Routing(routing ...string) *UpdateByQueryService

Routing is a list of specific routing values.

func (*UpdateByQueryService) Script

func (s *UpdateByQueryService) Script(script *Script) *UpdateByQueryService

Script sets an update script.

func (*UpdateByQueryService) Scroll

Scroll specifies how long a consistent view of the index should be maintained for scrolled search.

func (*UpdateByQueryService) ScrollSize

func (s *UpdateByQueryService) ScrollSize(scrollSize int) *UpdateByQueryService

ScrollSize is the size on the scroll request powering the update_by_query.

func (*UpdateByQueryService) SearchTimeout

func (s *UpdateByQueryService) SearchTimeout(searchTimeout string) *UpdateByQueryService

SearchTimeout defines an explicit timeout for each search request. Defaults to no timeout.

func (*UpdateByQueryService) SearchType

func (s *UpdateByQueryService) SearchType(searchType string) *UpdateByQueryService

SearchType is the search operation type. Possible values are "query_then_fetch" and "dfs_query_then_fetch".

func (*UpdateByQueryService) Size

Size represents the number of hits to return (default: 10).

func (*UpdateByQueryService) Sort

Sort is a list of <field>:<direction> pairs.

func (*UpdateByQueryService) SortByField

func (s *UpdateByQueryService) SortByField(field string, ascending bool) *UpdateByQueryService

SortByField adds a sort order.

func (*UpdateByQueryService) Stats

func (s *UpdateByQueryService) Stats(stats ...string) *UpdateByQueryService

Stats specifies specific tag(s) of the request for logging and statistical purposes.

func (*UpdateByQueryService) StoredFields

func (s *UpdateByQueryService) StoredFields(storedFields ...string) *UpdateByQueryService

StoredFields specifies the list of stored fields to return as part of a hit.

func (*UpdateByQueryService) SuggestField

func (s *UpdateByQueryService) SuggestField(suggestField string) *UpdateByQueryService

SuggestField specifies which field to use for suggestions.

func (*UpdateByQueryService) SuggestMode

func (s *UpdateByQueryService) SuggestMode(suggestMode string) *UpdateByQueryService

SuggestMode specifies the suggest mode. Possible values are "missing", "popular", and "always".

func (*UpdateByQueryService) SuggestSize

func (s *UpdateByQueryService) SuggestSize(suggestSize int) *UpdateByQueryService

SuggestSize specifies how many suggestions to return in response.

func (*UpdateByQueryService) SuggestText

func (s *UpdateByQueryService) SuggestText(suggestText string) *UpdateByQueryService

SuggestText specifies the source text for which the suggestions should be returned.

func (*UpdateByQueryService) TerminateAfter

func (s *UpdateByQueryService) TerminateAfter(terminateAfter int) *UpdateByQueryService

TerminateAfter indicates the maximum number of documents to collect for each shard, upon reaching which the query execution will terminate early.

func (*UpdateByQueryService) Timeout

func (s *UpdateByQueryService) Timeout(timeout string) *UpdateByQueryService

Timeout is the time each individual bulk request should wait for shards that are unavailable.

func (*UpdateByQueryService) TimeoutInMillis

func (s *UpdateByQueryService) TimeoutInMillis(timeoutInMillis int) *UpdateByQueryService

TimeoutInMillis sets the timeout in milliseconds.

func (*UpdateByQueryService) TrackScores

func (s *UpdateByQueryService) TrackScores(trackScores bool) *UpdateByQueryService

TrackScores indicates whether to calculate and return scores even if they are not used for sorting.

func (*UpdateByQueryService) Type

Type is a list of document types to search; leave empty to perform the operation on all types.

func (*UpdateByQueryService) Validate

func (s *UpdateByQueryService) Validate() error

Validate checks if the operation is valid.

func (*UpdateByQueryService) Version

func (s *UpdateByQueryService) Version(version bool) *UpdateByQueryService

Version specifies whether to return document version as part of a hit.

func (*UpdateByQueryService) VersionType

func (s *UpdateByQueryService) VersionType(versionType bool) *UpdateByQueryService

VersionType indicates whether the document should increment its version number (internal) on hit or not (reindex).

func (*UpdateByQueryService) WaitForActiveShards

func (s *UpdateByQueryService) WaitForActiveShards(waitForActiveShards string) *UpdateByQueryService

WaitForActiveShards sets the number of shard copies that must be active before proceeding with the update by query operation. Defaults to 1, meaning the primary shard only. Set to `all` for all shard copies, otherwise set to any non-negative value less than or equal to the total number of copies for the shard (number of replicas + 1).

func (*UpdateByQueryService) WaitForCompletion

func (s *UpdateByQueryService) WaitForCompletion(waitForCompletion bool) *UpdateByQueryService

WaitForCompletion indicates if the request should block until the reindex is complete.

func (*UpdateByQueryService) XSource

func (s *UpdateByQueryService) XSource(xSource ...string) *UpdateByQueryService

XSource is true or false to return the _source field or not, or a list of fields to return.

func (*UpdateByQueryService) XSourceExclude

func (s *UpdateByQueryService) XSourceExclude(xSourceExclude ...string) *UpdateByQueryService

XSourceExclude represents a list of fields to exclude from the returned _source field.

func (*UpdateByQueryService) XSourceInclude

func (s *UpdateByQueryService) XSourceInclude(xSourceInclude ...string) *UpdateByQueryService

XSourceInclude represents a list of fields to extract and return from the _source field.

type UpdateResponse

type UpdateResponse struct {
	Index         string      `json:"_index,omitempty"`
	Type          string      `json:"_type,omitempty"`
	Id            string      `json:"_id,omitempty"`
	Version       int64       `json:"_version,omitempty"`
	Result        string      `json:"result,omitempty"`
	Shards        *shardsInfo `json:"_shards,omitempty"`
	SeqNo         int64       `json:"_seq_no,omitempty"`
	PrimaryTerm   int64       `json:"_primary_term,omitempty"`
	Status        int         `json:"status,omitempty"`
	ForcedRefresh bool        `json:"forced_refresh,omitempty"`
	GetResult     *GetResult  `json:"get,omitempty"`
}

UpdateResponse is the result of updating a document in Elasticsearch.

type UpdateService

type UpdateService struct {
	// contains filtered or unexported fields
}

UpdateService updates a document in Elasticsearch. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/docs-update.html for details.
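
A minimal sketch, assuming a configured client and ctx; the index, type, id, and document content are illustrative:

	res, err := client.Update().
		Index("tweets").Type("doc").Id("1").
		Doc(map[string]interface{}{"retweets": 42}).
		DocAsUpsert(true).
		Do(ctx)
	if err != nil {
		// handle error
	}
	_ = res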

func NewUpdateService

func NewUpdateService(client *Client) *UpdateService

NewUpdateService creates the service to update documents in Elasticsearch.

func (*UpdateService) DetectNoop

func (b *UpdateService) DetectNoop(detectNoop bool) *UpdateService

DetectNoop will instruct Elasticsearch to check if changes will occur when updating via Doc. If there aren't any changes, the request will turn into a no-op.

func (*UpdateService) Do

Do executes the update operation.

func (*UpdateService) Doc

func (b *UpdateService) Doc(doc interface{}) *UpdateService

Doc allows for updating a partial document.

func (*UpdateService) DocAsUpsert

func (b *UpdateService) DocAsUpsert(docAsUpsert bool) *UpdateService

DocAsUpsert can be used to insert the document if it doesn't already exist.

func (*UpdateService) FetchSource

func (s *UpdateService) FetchSource(fetchSource bool) *UpdateService

FetchSource asks Elasticsearch to return the updated _source in the response.

func (*UpdateService) FetchSourceContext

func (s *UpdateService) FetchSourceContext(fetchSourceContext *FetchSourceContext) *UpdateService

FetchSourceContext indicates that _source should be returned in the response, allowing wildcard patterns to be defined via FetchSourceContext.

func (*UpdateService) Fields

func (b *UpdateService) Fields(fields ...string) *UpdateService

Fields is a list of fields to return in the response.

func (*UpdateService) Id

func (b *UpdateService) Id(id string) *UpdateService

Id is the identifier of the document to update (required).

func (*UpdateService) Index

func (b *UpdateService) Index(name string) *UpdateService

Index is the name of the Elasticsearch index (required).

func (*UpdateService) Parent

func (b *UpdateService) Parent(parent string) *UpdateService

Parent sets the id of the parent document.

func (*UpdateService) Pretty

func (b *UpdateService) Pretty(pretty bool) *UpdateService

Pretty instructs to return human readable, prettified JSON.

func (*UpdateService) Refresh

func (b *UpdateService) Refresh(refresh string) *UpdateService

Refresh the index after performing the update.

func (*UpdateService) RetryOnConflict

func (b *UpdateService) RetryOnConflict(retryOnConflict int) *UpdateService

RetryOnConflict specifies how many times the operation should be retried when a conflict occurs (default: 0).

func (*UpdateService) Routing

func (b *UpdateService) Routing(routing string) *UpdateService

Routing specifies a specific routing value.

func (*UpdateService) Script

func (b *UpdateService) Script(script *Script) *UpdateService

Script is the script definition.

func (*UpdateService) ScriptedUpsert

func (b *UpdateService) ScriptedUpsert(scriptedUpsert bool) *UpdateService

ScriptedUpsert should be set to true if the referenced script (defined in Script or ScriptId) should be called to perform an insert. The default is false.

func (*UpdateService) Timeout

func (b *UpdateService) Timeout(timeout string) *UpdateService

Timeout is an explicit timeout for the operation, e.g. "1000", "1s" or "500ms".

func (*UpdateService) Type

func (b *UpdateService) Type(typ string) *UpdateService

Type is the type of the document (required).

func (*UpdateService) Upsert

func (b *UpdateService) Upsert(doc interface{}) *UpdateService

Upsert can be used to index the document when it doesn't exist yet. Use this e.g. to initialize a document with a default value.

func (*UpdateService) Version

func (b *UpdateService) Version(version int64) *UpdateService

Version defines the explicit version number for concurrency control.

func (*UpdateService) VersionType

func (b *UpdateService) VersionType(versionType string) *UpdateService

VersionType is e.g. "internal".

func (*UpdateService) WaitForActiveShards

func (b *UpdateService) WaitForActiveShards(waitForActiveShards string) *UpdateService

WaitForActiveShards sets the number of shard copies that must be active before proceeding with the update operation. Defaults to 1, meaning the primary shard only. Set to `all` for all shard copies, otherwise set to any non-negative value less than or equal to the total number of copies for the shard (number of replicas + 1).

type ValidateResponse

type ValidateResponse struct {
	Valid        bool                   `json:"valid"`
	Shards       map[string]interface{} `json:"_shards"`
	Explanations []interface{}          `json:"explanations"`
}

ValidateResponse is the response of ValidateService.Do.

type ValidateService

type ValidateService struct {
	// contains filtered or unexported fields
}

ValidateService allows a user to validate a potentially expensive query without executing it. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-validate.html.
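
A hedged sketch, assuming a configured client and ctx and that the service is obtained via client.Validate; the index and query are illustrative:

	explain := true
	res, err := client.Validate("tweets").
		Query(elastic.NewTermQuery("user", "olivere")).
		Explain(&explain).
		Do(ctx)
	if err != nil {
		// handle error
	}
	_ = res.Valid // true if the query would be accepted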

func NewValidateService

func NewValidateService(client *Client) *ValidateService

NewValidateService creates a new ValidateService.

func (*ValidateService) AllShards

func (s *ValidateService) AllShards(allShards *bool) *ValidateService

AllShards executes validation on all shards instead of one random shard per index.

func (*ValidateService) AllowNoIndices

func (s *ValidateService) AllowNoIndices(allowNoIndices bool) *ValidateService

AllowNoIndices indicates whether to ignore the case where a wildcard indices expression resolves into no concrete indices. (This includes the `_all` string or when no indices have been specified.)

func (*ValidateService) AnalyzeWildcard

func (s *ValidateService) AnalyzeWildcard(analyzeWildcard bool) *ValidateService

AnalyzeWildcard specifies whether wildcards and prefix queries in the query string query should be analyzed (default: false).

func (*ValidateService) Analyzer

func (s *ValidateService) Analyzer(analyzer string) *ValidateService

Analyzer is the analyzer for the query string query.

func (*ValidateService) BodyJson

func (s *ValidateService) BodyJson(body interface{}) *ValidateService

BodyJson sets the query definition using the Query DSL.

func (*ValidateService) BodyString

func (s *ValidateService) BodyString(body string) *ValidateService

BodyString sets the query definition using the Query DSL as a string.

func (*ValidateService) DefaultOperator

func (s *ValidateService) DefaultOperator(defaultOperator string) *ValidateService

DefaultOperator is the default operator for query string query (AND or OR).

func (*ValidateService) Df

Df is the default field for query string query (default: _all).

func (*ValidateService) Do

Do executes the operation.

func (*ValidateService) ExpandWildcards

func (s *ValidateService) ExpandWildcards(expandWildcards string) *ValidateService

ExpandWildcards indicates whether to expand wildcard expression to concrete indices that are open, closed or both.

func (*ValidateService) Explain

func (s *ValidateService) Explain(explain *bool) *ValidateService

Explain indicates whether to return detailed information about why a query failed.

func (*ValidateService) IgnoreUnavailable

func (s *ValidateService) IgnoreUnavailable(ignoreUnavailable bool) *ValidateService

IgnoreUnavailable indicates whether the specified concrete indices should be ignored when unavailable (missing or closed).

func (*ValidateService) Index

func (s *ValidateService) Index(index ...string) *ValidateService

Index sets the names of the indices to use for search.

func (*ValidateService) Lenient

func (s *ValidateService) Lenient(lenient bool) *ValidateService

Lenient specifies whether format-based query failures (such as providing text to a numeric field) should be ignored.

func (*ValidateService) Pretty

func (s *ValidateService) Pretty(pretty bool) *ValidateService

Pretty indicates that the JSON response be indented and human readable.

func (*ValidateService) Q

func (s *ValidateService) Q(query string) *ValidateService

Q sets the query in the Lucene query string syntax.

func (*ValidateService) Query

func (s *ValidateService) Query(query Query) *ValidateService

Query sets a query definition using the Query DSL.

func (*ValidateService) Rewrite

func (s *ValidateService) Rewrite(rewrite *bool) *ValidateService

Rewrite, when enabled, provides a more detailed explanation showing the actual Lucene query that will be executed.

func (*ValidateService) Type

func (s *ValidateService) Type(typ ...string) *ValidateService

Type adds search restrictions for a list of types.

func (*ValidateService) Validate

func (s *ValidateService) Validate() error

Validate checks if the operation is valid.

type ValueCountAggregation

type ValueCountAggregation struct {
	// contains filtered or unexported fields
}

ValueCountAggregation is a single-value metrics aggregation that counts the number of values that are extracted from the aggregated documents. These values can be extracted either from specific fields in the documents, or be generated by a provided script. Typically, this aggregator will be used in conjunction with other single-value aggregations. For example, when computing the avg one might be interested in the number of values the average is computed over. See: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-aggregations-metrics-valuecount-aggregation.html
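
As a rough sketch, the aggregation is typically attached to a search request and read back from the response. The index, field and aggregation names below are placeholders, and the ValueCount helper on the aggregation results is assumed from the rest of the package.

package main

import (
	"context"
	"fmt"

	"github.com/olivere/elastic"
)

func main() {
	// Get a client to the local Elasticsearch instance.
	client, err := elastic.NewClient()
	if err != nil {
		// Handle error
		panic(err)
	}

	// Count how many "user" values occur across the matching documents.
	agg := elastic.NewValueCountAggregation().Field("user")
	searchResult, err := client.Search().
		Index("twitter").
		Query(elastic.NewMatchAllQuery()).
		Aggregation("user_count", agg).
		Size(0). // we only care about the aggregation, not the hits
		Do(context.Background())
	if err != nil {
		// Handle error
		panic(err)
	}
	if vc, found := searchResult.Aggregations.ValueCount("user_count"); found && vc.Value != nil {
		fmt.Printf("user count: %.0f\n", *vc.Value)
	}
}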

func NewValueCountAggregation

func NewValueCountAggregation() *ValueCountAggregation

func (*ValueCountAggregation) Field

func (a *ValueCountAggregation) Field(field string) *ValueCountAggregation

func (*ValueCountAggregation) Format

func (a *ValueCountAggregation) Format(format string) *ValueCountAggregation

func (*ValueCountAggregation) Meta

func (a *ValueCountAggregation) Meta(metaData map[string]interface{}) *ValueCountAggregation

Meta sets the meta data to be included in the aggregation response.

func (*ValueCountAggregation) Script

func (a *ValueCountAggregation) Script(script *Script) *ValueCountAggregation

func (*ValueCountAggregation) Source

func (a *ValueCountAggregation) Source() (interface{}, error)

func (*ValueCountAggregation) SubAggregation

func (a *ValueCountAggregation) SubAggregation(name string, subAggregation Aggregation) *ValueCountAggregation

type WeightFactorFunction

type WeightFactorFunction struct {
	// contains filtered or unexported fields
}

WeightFactorFunction builds a weight factor function that multiplies the weight to the score. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html#_weight for details.
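
As a rough sketch, a weight function is usually attached to a function score query. The FunctionScoreQuery methods used below (Query, Add) are assumed from the rest of the package, and the index and field names are placeholders.

package main

import (
	"context"

	"github.com/olivere/elastic"
)

func main() {
	// Get a client to the local Elasticsearch instance.
	client, err := elastic.NewClient()
	if err != nil {
		// Handle error
		panic(err)
	}

	// Boost documents by user "olivere" with a fixed weight of 2.5,
	// on top of a simple match query.
	fsq := elastic.NewFunctionScoreQuery().
		Query(elastic.NewMatchQuery("message", "elasticsearch")).
		Add(elastic.NewTermQuery("user", "olivere"), elastic.NewWeightFactorFunction(2.5))
	searchResult, err := client.Search().
		Index("twitter").
		Query(fsq).
		Do(context.Background())
	if err != nil {
		// Handle error
		panic(err)
	}
	_ = searchResult
}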

func NewWeightFactorFunction

func NewWeightFactorFunction(weight float64) *WeightFactorFunction

NewWeightFactorFunction initializes and returns a new WeightFactorFunction.

func (*WeightFactorFunction) GetWeight

func (fn *WeightFactorFunction) GetWeight() *float64

GetWeight returns the weight that will be applied to the score. It is part of the ScoreFunction interface. Returns nil if no weight is specified.

func (*WeightFactorFunction) Name

func (fn *WeightFactorFunction) Name() string

Name represents the JSON field name under which the output of Source needs to be serialized by FunctionScoreQuery (see FunctionScoreQuery.Source).

func (*WeightFactorFunction) Source

func (fn *WeightFactorFunction) Source() (interface{}, error)

Source returns the serializable JSON data of this score function.

func (*WeightFactorFunction) Weight

func (fn *WeightFactorFunction) Weight(weight float64) *WeightFactorFunction

Weight adjusts the score of the score function. See https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-function-score-query.html#_using_function_score for details.

type WildcardQuery

type WildcardQuery struct {
	// contains filtered or unexported fields
}

WildcardQuery matches documents that have fields matching a wildcard expression (not analyzed). Supported wildcards are *, which matches any character sequence (including the empty one), and ?, which matches any single character. Note this query can be slow, as it needs to iterate over many terms. In order to prevent extremely slow wildcard queries, a wildcard term should not start with one of the wildcards * or ?. The wildcard query maps to Lucene WildcardQuery.

For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/6.0/query-dsl-wildcard-query.html

Example
package main

import (
	"context"

	"github.com/olivere/elastic"
)

func main() {
	// Get a client to the local Elasticsearch instance.
	client, err := elastic.NewClient()
	if err != nil {
		// Handle error
		panic(err)
	}

	// Define wildcard query
	q := elastic.NewWildcardQuery("user", "oli*er?").Boost(1.2)
	searchResult, err := client.Search().
		Index("twitter").  // search in index "twitter"
		Query(q).          // use wildcard query defined above
		Do(context.TODO()) // execute
	if err != nil {
		// Handle error
		panic(err)
	}
	_ = searchResult
}
Output:

func NewWildcardQuery

func NewWildcardQuery(name, wildcard string) *WildcardQuery

NewWildcardQuery creates and initializes a new WildcardQuery.

func (*WildcardQuery) Boost

func (q *WildcardQuery) Boost(boost float64) *WildcardQuery

Boost sets the boost for this query.

func (*WildcardQuery) QueryName

func (q *WildcardQuery) QueryName(queryName string) *WildcardQuery

QueryName sets the name of this query.

func (*WildcardQuery) Rewrite

func (q *WildcardQuery) Rewrite(rewrite string) *WildcardQuery

Rewrite sets the rewrite method, i.e. how the wildcard query is rewritten into simpler Lucene queries (e.g. "constant_score").

func (*WildcardQuery) Source

func (q *WildcardQuery) Source() (interface{}, error)

Source returns the JSON serializable body of this query.

type ZeroBackoff

type ZeroBackoff struct{}

ZeroBackoff is a fixed backoff policy whose backoff time is always zero, meaning that the operation is retried immediately without waiting, indefinitely.

func (ZeroBackoff) Next

func (b ZeroBackoff) Next(retry int) (time.Duration, bool)

Next implements BackoffFunc for ZeroBackoff.
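
For illustration, a minimal retry loop driven by ZeroBackoff might look like the sketch below; doWork is a hypothetical fallible operation, and in practice a Backoff is more commonly plugged into the client's retry mechanism than used directly.

package main

import (
	"errors"
	"fmt"
	"time"

	"github.com/olivere/elastic"
)

// doWork is a hypothetical operation that fails on the first two attempts.
func doWork(attempt int) error {
	if attempt < 3 {
		return errors.New("not ready yet")
	}
	return nil
}

func main() {
	var b elastic.ZeroBackoff
	for retry := 1; ; retry++ {
		if err := doWork(retry); err == nil {
			fmt.Printf("succeeded on attempt %d\n", retry)
			return
		}
		// ZeroBackoff always returns (0, true): retry immediately, forever.
		wait, ok := b.Next(retry)
		if !ok {
			fmt.Println("giving up")
			return
		}
		time.Sleep(wait)
	}
}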

Source Files

Directories

Path              Synopsis
config            Package config allows parsing a configuration for Elasticsearch from a URL.
recipes
  bulk_insert     BulkInsert illustrates how to bulk insert documents into Elasticsearch.
  bulk_processor  BulkProcessor runs a bulk processing job that fills an index given certain criteria like flush interval etc.
  connect         Connect simply connects to Elasticsearch.
  sliced_scroll   SlicedScroll illustrates scrolling through a set of documents in parallel.
uritemplates      Package uritemplates is a level 4 implementation of RFC 6570 (URI Template, http://tools.ietf.org/html/rfc6570).
