mongoreplay

package
v0.0.0-...-ae3bf8c
Published: Nov 16, 2017 License: Apache-2.0 Imports: 31 Imported by: 0

README

mongoreplay

Purpose

mongoreplay is a traffic capture and replay tool for MongoDB. It can be used to inspect commands being sent to a MongoDB instance, record them, and replay them back onto another host at a later time.

Use cases
  • Preview how well your database cluster would handle a production workload in a different environment (storage engine, indexes, hardware, OS, etc.)
  • Reproduce and investigate bugs by recording and replaying the operations that trigger them
  • Inspect the details of what an application is doing to a mongo cluster (i.e. a more flexible version of mongosniff)

Quickstart

Make a recording:

mongoreplay record -i lo0 -e "port 27017" -p playback.bson

Analyze it:

mongoreplay stat -p playback.bson --report playback_stats.json

Replay it against another server, at 2x speed:

mongoreplay play -p playback.bson --speed=2.0 --report replay_stats.json --host 192.168.0.4:27018

Detailed Usage

Basic usage of mongoreplay works in two phases: record and play. Recordings can also be analyzed with the stat command.

  • The record phase captures traffic (either live from a network interface or from a pcap file generated by tcpdump) and analyzes it to produce a playback file (in BSON format). The playback file contains a list of all the requests and replies to/from the MongoDB instance that were recorded, along with their connection identifier, timestamp, and other metadata.
  • The play phase reads in the playback file that was generated by record and re-executes the workload against a target host.
  • The stat command reads a playback file and analyzes it, detecting the latency between each request and response.

Capturing TCP (pcap) data

To create a recording of traffic, use the record command as follows:

mongoreplay record -i lo0 -e "port 27017" -p recording.bson

This will record traffic on the network interface lo0 targeting port 27017. The options to record are:

  • -i: The network interface to listen on, e.g. eth0 or lo0. You may be required to run mongoreplay with root privileges for this to work.
  • -e: An expression in Berkeley Packet Filter (BPF) syntax to apply to incoming traffic to record. See http://biot.com/capstats/bpf.html for details on how to construct BPF expressions.
  • -p: The output file to write the recording to.

Recording a playback file from pcap data

Alternatively, you can capture traffic using tcpdump and create a recording from a static pcap file. First, capture TCP traffic on the system that the workload you wish to record is targeting. Then, run mongoreplay record with the -f argument (instead of -i) to create the playback file.

sudo tcpdump -i lo0 -n "port 27017" -w traffic.pcap

mongoreplay record -f traffic.pcap -p playback.bson

The record command processes the .pcap file to create a playback file. The playback file contains everything needed to re-execute the workload.

Using playback files

There are several useful operations that can be performed with the playback file.

Re-executing the playback file

The play command takes a playback file and executes the operations in it against a target host.

mongoreplay play -p playback.bson --host mongodb://target-host.com:27017

mongoreplay play -p workload.playback --host staging-mongo-cluster-hostname

Playback speed

To modify playback speed, add the --speed command-line flag to the play command. For example, --speed=2.0 runs playback at twice the rate of the recording, while --speed=0.5 runs it at half the rate.

Logging metrics about execution performance during playback

Use the --report=<path-to-file> flag to save detailed metrics about the performance of each operation performed during playback to the specified json file. This can be used in later analysis to compare performance and behavior across different executions of the same workload.

Inspecting the operations in a playback file

The stat command takes a static playback file (BSON) and generates a json report showing each operation and some metadata about its execution. The output is in the same format as the json report generated by the play command with --report.

Report format

The data in the json reports consists of one record for each request/response. Each record has the following format:

{
    "connection_num": 1,
    "latency_us": 89,
    "ns": "test.test",
    "op": "getmore",
    "order": 16,
    "play_at": "2016-02-02T16:24:16.309322601-05:00",
    "played_at": "2016-02-02T16:24:16.310908311-05:00",
    "playbacklag_us": 1585
}

The fields are as follows:

  • connection_num: a key that identifies the connection on which the request was executed. All requests/replies that executed on the same connection will have the same value for this field. The value for this field does not match the connection ID logged on the server-side.
  • latency_us: the time difference (in microseconds) between when the request was sent by the client, and a response from the server was received.
  • ns: the namespace that the request was executed on.
  • op: the type of operation represented by the request - e.g. "query", "insert", "command", "getmore"
  • order: a monotonically increasing key indicating the order in which the operations were recorded and played back. This can be used to reconstruct the ordering of the series of ops executed on a connection, since the order in which they appear in the report file might not match the order of playback.
  • data: the payload of the actual operation. For queries, this will contain the actual query that was issued. For inserts, this will contain the documents being inserted. For updates, it will contain the query selector and the update modifier, etc.
  • play_at: The time at which the operation was supposed to be executed.
  • played_at: The time at which the play command actually executed the operation.
  • playbacklag_us: The difference (in microseconds) in time between played_at and play_at. Higher values generally indicate that the target server is not able to keep up with the rate at which requests need to be executed according to the playback file.
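
A hypothetical sketch of consuming such a report in Go follows. The reportRecord type below is defined only for illustration and mirrors the fields documented above; it is not part of mongoreplay. The sketch also assumes the report is written as a stream of JSON documents, one per request/response, as the sample record above suggests.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
	"time"
)

// reportRecord mirrors the documented per-operation fields of a report file.
type reportRecord struct {
	ConnectionNum int64      `json:"connection_num"`
	LatencyUs     int64      `json:"latency_us"`
	Ns            string     `json:"ns"`
	Op            string     `json:"op"`
	Order         int64      `json:"order"`
	PlayAt        *time.Time `json:"play_at"`
	PlayedAt      *time.Time `json:"played_at"`
	PlaybackLagUs int64      `json:"playbacklag_us"`
}

func main() {
	f, err := os.Open("replay_stats.json")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Decode record after record, tracking the slowest operation seen.
	dec := json.NewDecoder(f)
	var worst reportRecord
	for dec.More() {
		var rec reportRecord
		if err := dec.Decode(&rec); err != nil {
			log.Fatal(err)
		}
		if rec.LatencyUs > worst.LatencyUs {
			worst = rec
		}
	}
	fmt.Printf("slowest op: %s on %s (%d us)\n", worst.Op, worst.Ns, worst.LatencyUs)
}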

Documentation

Index

Constants

View Source
const (
	// ReplyFromWire is the ReplyPair index for live replies.
	ReplyFromWire = 0
	// ReplyFromFile is the ReplyPair index for recorded replies.
	ReplyFromFile = 1
)
View Source
const (
	// Always denotes that a log be performed without needing any verbosity
	Always = iota
	// Info denotes that a log be performed with verbosity level 1 (-v)
	Info
	// DebugLow denotes that a log be performed with verbosity level 2 (-vv)
	DebugLow
	// DebugHigh denotes that a log be performed with verbosity level 3 (-vvv)
	DebugHigh
)
View Source
const (
	OpCodeReply        = OpCode(1)
	OpCodeUpdate       = OpCode(2001)
	OpCodeInsert       = OpCode(2002)
	OpCodeReserved     = OpCode(2003)
	OpCodeQuery        = OpCode(2004)
	OpCodeGetMore      = OpCode(2005)
	OpCodeDelete       = OpCode(2006)
	OpCodeKillCursors  = OpCode(2007)
	OpCodeCommand      = OpCode(2010)
	OpCodeCommandReply = OpCode(2011)
	OpCodeCompressed   = OpCode(2012)
	OpCodeMessage      = OpCode(2013)
)

The full set of known request op codes: http://docs.mongodb.org/meta-driver/latest/legacy/mongodb-wire-protocol/#request-opcodes

View Source
const MaxMessageSize = 48 * 1000 * 1000

MaxMessageSize is the maximum message size as defined in the server

View Source
const MsgHeaderLen = 16

MsgHeaderLen is the message header length in bytes

View Source
const PlaybackFileVersion = 1
View Source
const TruncateLength = 350

TruncateLength is the maximum number of characters allowed for long substrings when constructing log output lines.

Variables

View Source
var DefaultAssemblerOptions = AssemblerOptions{
	MaxBufferedPagesPerConnection: 0,
	MaxBufferedPagesTotal:         0,
}

DefaultAssemblerOptions provides default options for an assembler. These options are used by default when calling NewAssembler, so if modified before a NewAssembler call they'll affect the resulting Assembler.

Note that the default options can result in ever-increasing memory usage unless one of the Flush* methods is called on a regular basis.

View Source
var (
	// ErrInvalidSize means the size of the BSON document is invalid
	ErrInvalidSize = errors.New("got invalid document size")
)
View Source
var ErrNotMsg = fmt.Errorf("buffer is too small to be a Mongo message")

ErrNotMsg is returned if a provided buffer is too small to contain a Mongo message

Functions

func Abbreviate

func Abbreviate(data string, maxLen int) string

Abbreviate returns a reduced copy of the given string if it's longer than maxLen by showing only a prefix and suffix of size windowLen with an ellipsis in the middle.

func AbbreviateBytes

func AbbreviateBytes(data []byte, maxLen int) []byte

AbbreviateBytes returns a reduced byte array of the given one if it's longer than maxLen by showing only a prefix and suffix of size windowLen with an ellipsis in the middle.

func ConvertBSONValueToJSON

func ConvertBSONValueToJSON(x interface{}) (interface{}, error)

ConvertBSONValueToJSON walks through a document or an array and converts any BSON value to its corresponding extended JSON type. It returns the converted JSON document and any error encountered.
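
As a hedged illustration (the mongoreplay import path is an assumption), a caller might convert a decoded BSON document to extended JSON before serializing it:

import (
	"encoding/json"
	"fmt"

	"github.com/mongodb/mongo-tools/mongoreplay" // import path assumed
	"gopkg.in/mgo.v2/bson"
)

// printAsExtendedJSON converts a BSON document to extended JSON and prints it.
func printAsExtendedJSON(doc bson.M) error {
	converted, err := mongoreplay.ConvertBSONValueToJSON(doc)
	if err != nil {
		return err
	}
	out, err := json.Marshal(converted)
	if err != nil {
		return err
	}
	fmt.Println(string(out)) // ObjectIds, timestamps, etc. appear in extended-JSON form
	return nil
}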

func CopyMessage

func CopyMessage(w io.Writer, r io.Reader) error

CopyMessage reads an entire message from r and writes it to w.

func Filter

func Filter(opChan <-chan *RecordedOp,
	outfiles []*PlaybackFileWriter,
	removeDriverOps bool,
	truncateTime time.Time) error

func FindValueByKey

func FindValueByKey(keyName string, document *bson.D) (interface{}, bool)

FindValueByKey returns the value of keyName in document. The second return arg is a bool which is true if and only if the key was present in the doc.
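
A minimal sketch of a lookup (bson.D is the ordered-document type from gopkg.in/mgo.v2/bson used throughout this package; the mongoreplay import path is an assumption):

import (
	"fmt"

	"github.com/mongodb/mongo-tools/mongoreplay" // import path assumed
	"gopkg.in/mgo.v2/bson"
)

func lookupNs() {
	doc := bson.D{{Name: "ns", Value: "test.test"}, {Name: "ok", Value: 1}}
	if val, found := mongoreplay.FindValueByKey("ns", &doc); found {
		fmt.Println(val) // "test.test"
	}
}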

func IsDriverOp

func IsDriverOp(op Op) bool

IsDriverOp checks if an operation is one of the types generated by the driver such as 'ismaster', or 'getnonce'. It takes an Op that has already been unmarshalled using its 'FromReader' method and checks if it is a command matching the ones the driver generates.

func Play

func Play(context *ExecutionContext,
	opChan <-chan *RecordedOp,
	speed float64,
	repeat int,
	queueTime int) error

Play is responsible for playing ops from a RecordedOp channel to the session.

func ReadDocument

func ReadDocument(r io.Reader) (doc []byte, err error)

ReadDocument reads an entire BSON document. The returned document can be used with bson.Unmarshal.
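
A short sketch of reading one document off a stream and decoding it (import paths assumed):

import (
	"io"

	"github.com/mongodb/mongo-tools/mongoreplay" // import path assumed
	"gopkg.in/mgo.v2/bson"
)

// readFirstDoc pulls one length-prefixed BSON document off r and decodes it
// into a generic map.
func readFirstDoc(r io.Reader) (bson.M, error) {
	raw, err := mongoreplay.ReadDocument(r)
	if err != nil {
		return nil, err
	}
	doc := bson.M{}
	if err := bson.Unmarshal(raw, &doc); err != nil {
		return nil, err
	}
	return doc, nil
}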

func Record

func Record(ctx *packetHandlerContext,
	playbackWriter *PlaybackFileWriter,
	noShortenReply bool) error

Record writes pcap data into a playback file

func SetInt32

func SetInt32(b []byte, pos int, i int32)

SetInt32 sets the 32-bit int into the given byte slice at position pos. Taken from gopkg.in/mgo.v2/socket.go.

func SetInt64

func SetInt64(b []byte, pos int, i int64)

SetInt64 sets the 64-bit int into the given byte slice at position pos. Taken from gopkg.in/mgo.v2/socket.go.

Types

type Assembler

type Assembler struct {
	AssemblerOptions
	// contains filtered or unexported fields
}

Assembler handles reassembling TCP streams. It is not safe for concurrency... after passing a packet in via the Assemble call, the caller must wait for that call to return before calling Assemble again. Callers can get around this by creating multiple assemblers that share a StreamPool. In that case, each individual stream will still be handled serially (each stream has an individual mutex associated with it), however multiple assemblers can assemble different connections concurrently.

The Assembler provides (hopefully) fast TCP stream re-assembly for sniffing applications written in Go. The Assembler uses the following methods to be as fast as possible, to keep packet processing speedy:

Avoids Lock Contention

Assemblers locks connections, but each connection has an individual lock, and rarely will two Assemblers be looking at the same connection. Assemblers lock the StreamPool when looking up connections, but they use Reader locks initially, and only force a write lock if they need to create a new connection or close one down. These happen much less frequently than individual packet handling.

Each assembler runs in its own goroutine, and the only state shared between goroutines is through the StreamPool. Thus all internal Assembler state can be handled without any locking.

NOTE: If you can guarantee that packets going to a set of Assemblers will contain information on different connections per Assembler (for example, they're already hashed by PF_RING hashing or some other hashing mechanism), then we recommend you use a separate StreamPool per Assembler, thus avoiding all lock contention. Only when different Assemblers could receive packets for the same Stream should a StreamPool be shared between them.

Avoids Memory Copying

In the common case, handling of a single TCP packet should result in zero memory allocations. The Assembler will look up the connection, figure out that the packet has arrived in order, and immediately pass that packet on to the appropriate connection's handling code. Only if a packet arrives out of order is its contents copied and stored in memory for later.

Avoids Memory Allocation

Assemblers try very hard to not use memory allocation unless absolutely necessary. Packet data for sequential packets is passed directly to streams with no copying or allocation. Packet data for out-of-order packets is copied into reusable pages, and new pages are only allocated rarely when the page cache runs out. Page caches are Assembler-specific, thus not used concurrently and requiring no locking.

Internal representations for connection objects are also reused over time. Because of this, the most common memory allocation done by the Assembler is generally what's done by the caller in StreamFactory.New. If no allocation is done there, then very little allocation is done ever, mostly to handle large increases in bandwidth or numbers of connections.

TODO: The page caches used by an Assembler will grow to the size necessary to handle a workload, and currently will never shrink. This means that traffic spikes can result in large memory usage which isn't garbage collected when typical traffic levels return.

func NewAssembler

func NewAssembler(pool *StreamPool) *Assembler

NewAssembler creates a new assembler. Pass in the StreamPool to use, may be shared across assemblers.

This sets some sane defaults for the assembler options; see DefaultAssemblerOptions for details.

func (*Assembler) Assemble

func (a *Assembler) Assemble(netFlow gopacket.Flow, t *layers.TCP)

Assemble calls AssembleWithTimestamp with the current timestamp, useful for packets being read directly off the wire.

func (*Assembler) AssembleWithTimestamp

func (a *Assembler) AssembleWithTimestamp(netFlow gopacket.Flow, t *layers.TCP, timestamp time.Time)

AssembleWithTimestamp reassembles the given TCP packet into its appropriate stream.

The timestamp passed in must be the timestamp the packet was seen. For packets read off the wire, time.Now() should be fine. For packets read from PCAP files, CaptureInfo.Timestamp should be passed in. This timestamp will affect which streams are flushed by a call to FlushOlderThan.

Each Assemble call results in, in order:

zero or one calls to StreamFactory.New, creating a stream
zero or one calls to Reassembled on a single stream
zero or one calls to ReassemblyComplete on the same stream

func (*Assembler) FlushAll

func (a *Assembler) FlushAll() (closed int)

FlushAll flushes all remaining data into all remaining connections, closing those connections. It returns the total number of connections flushed/closed by the call.

func (*Assembler) FlushOlderThan

func (a *Assembler) FlushOlderThan(t time.Time) (flushed, closed int)

FlushOlderThan finds any streams waiting for packets older than the given time, and pushes through the data they have (IE: tells them to stop waiting and skip the data they're waiting for).

Each Stream maintains a list of zero or more sets of bytes it has received out-of-order. For example, if it has processed up through sequence number 10, it might have bytes [15-20), [20-25), [30,50) in its list. Each set of bytes also has the timestamp it was originally viewed. A flush call will look at the smallest subsequent set of bytes, in this case [15-20), and if its timestamp is older than the passed-in time, it will push it and all contiguous byte-sets out to the Stream's Reassembled function. In this case, it will push [15-20), but also [20-25), since that's contiguous. It will only push [30-50) if its timestamp is also older than the passed-in time, otherwise it will wait until the next FlushOlderThan to see if bytes [25-30) come in.

If it pushes all bytes (or there were no sets of bytes to begin with) AND the connection has not received any bytes since the passed-in time, the connection will be closed.

Returns the number of connections flushed, and of those, the number closed because of the flush.
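
The sketch below shows one way the flushing methods might be combined with AssembleWithTimestamp when feeding packets from gopacket. It assumes the Assembler has already been constructed from a StreamPool (not shown), and the import paths are assumptions:

import (
	"time"

	"github.com/google/gopacket"                 // import path assumed
	"github.com/google/gopacket/layers"          // import path assumed
	"github.com/mongodb/mongo-tools/mongoreplay" // import path assumed
)

// assembleLoop feeds decoded packets into the assembler and periodically
// flushes stale out-of-order data so buffered pages do not grow forever.
func assembleLoop(a *mongoreplay.Assembler, packets <-chan gopacket.Packet) {
	ticker := time.Tick(time.Minute)
	for {
		select {
		case pkt, ok := <-packets:
			if !ok {
				a.FlushAll() // end of capture: push out and close every stream
				return
			}
			if pkt.NetworkLayer() == nil || pkt.TransportLayer() == nil {
				continue // not a TCP/IP packet
			}
			tcp, isTCP := pkt.TransportLayer().(*layers.TCP)
			if !isTCP {
				continue
			}
			a.AssembleWithTimestamp(pkt.NetworkLayer().NetworkFlow(), tcp, pkt.Metadata().Timestamp)
		case <-ticker:
			// Give up on segments that have been missing for over two minutes.
			a.FlushOlderThan(time.Now().Add(-2 * time.Minute))
		}
	}
}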

type AssemblerOptions

type AssemblerOptions struct {
	// MaxBufferedPagesTotal is an upper limit on the total number of pages to
	// buffer while waiting for out-of-order packets.  Once this limit is
	// reached, the assembler will degrade to flushing every connection it gets
	// a packet for.  If <= 0, this is ignored.
	MaxBufferedPagesTotal int
	// MaxBufferedPagesPerConnection is an upper limit on the number of pages
	// buffered for a single connection.  Should this limit be reached for a
	// particular connection, the smallest sequence number will be flushed,
	// along with any contiguous data.  If <= 0, this is ignored.
	MaxBufferedPagesPerConnection int
}

AssemblerOptions controls the behavior of each assembler. Modify the options of each assembler you create to change their behavior.

type BufferedStatRecorder

type BufferedStatRecorder struct {
	// Buffer is a slice of OpStats that is appended to every time the Collect
	// function makes a record It stores an in-order series of OpStats that
	// store information about the commands mongoreplay ran as a result of reading
	// a playback file
	Buffer []OpStat
}

BufferedStatRecorder implements the StatRecorder interface using an in-memory slice of OpStats. This allows for the statistics on operations executed by mongoreplay to be reviewed by a program directly following execution.

BufferedStatRecorder's main purpose is to assert correct execution of ops in testing.

func (*BufferedStatRecorder) Close

func (bsr *BufferedStatRecorder) Close() error

Close closes the BufferedStatRecorder

func (*BufferedStatRecorder) RecordStat

func (bsr *BufferedStatRecorder) RecordStat(stat *OpStat)

RecordStat records the stat into a buffer

type CommandGetMore

type CommandGetMore struct {
	CommandOp
	// contains filtered or unexported fields
}

CommandGetMore is a struct representing a special case of an OP_COMMAND which has commandName 'getmore'. It implements the cursorsRewriteable interface and has fields for caching the found cursors so that multiple calls to these methods do not incur the overhead of searching the underlying bson for the cursorID.

type CommandOp

type CommandOp struct {
	Header MsgHeader
	mgo.CommandOp
}

CommandOp is a struct for parsing OP_COMMAND as defined here: https://github.com/mongodb/mongo/blob/master/src/mongo/rpc/command_request.h.

func (*CommandOp) Abbreviated

func (op *CommandOp) Abbreviated(chars int) string

Abbreviated returns a serialization of the OpCommand, abbreviated so it doesn't exceed the given number of characters.

func (*CommandOp) Execute

func (op *CommandOp) Execute(socket *mgo.MongoSocket) (Replyable, error)

Execute performs the CommandOp on a given session, yielding the reply when successful (and an error otherwise).

func (*CommandOp) FromReader

func (op *CommandOp) FromReader(r io.Reader) error

FromReader extracts data from a serialized OpCommand into its concrete structure.

func (*CommandOp) Meta

func (op *CommandOp) Meta() OpMetadata

Meta returns metadata about the operation, useful for analysis of traffic.

func (*CommandOp) OpCode

func (op *CommandOp) OpCode() OpCode

OpCode returns the OpCode for a CommandOp.

func (*CommandOp) Preprocess

func (op *CommandOp) Preprocess()

func (*CommandOp) String

func (op *CommandOp) String() string

type CommandReplyOp

type CommandReplyOp struct {
	Header MsgHeader
	mgo.CommandReplyOp
	Docs    []bson.Raw
	Latency time.Duration
	// contains filtered or unexported fields
}

CommandReplyOp is a struct for parsing OP_COMMANDREPLY as defined here: https://github.com/mongodb/mongo/blob/master/src/mongo/rpc/command_reply.h. Although this file parses the wire protocol message into a more usable struct, it does not currently provide functionality to execute the operation, as it is not implemented fully in llmgo.

func (*CommandReplyOp) Abbreviated

func (op *CommandReplyOp) Abbreviated(chars int) string

Abbreviated returns a serialization of the OpCommand, abbreviated so it doesn't exceed the given number of characters.

func (*CommandReplyOp) Execute

func (op *CommandReplyOp) Execute(socket *mgo.MongoSocket) (Replyable, error)

Execute logs a warning and returns nil because OP_COMMANDREPLY cannot yet be handled fully by mongoreplay.

func (*CommandReplyOp) FromReader

func (op *CommandReplyOp) FromReader(r io.Reader) error

FromReader extracts data from a serialized CommandReplyOp into its concrete structure.

func (*CommandReplyOp) Meta

func (op *CommandReplyOp) Meta() OpMetadata

Meta returns metadata about the operation, useful for analysis of traffic. Currently only returns 'unknown' as it is not fully parsed and analyzed.

func (*CommandReplyOp) OpCode

func (op *CommandReplyOp) OpCode() OpCode

OpCode returns the OpCode for a CommandReplyOp.

func (*CommandReplyOp) String

func (op *CommandReplyOp) String() string

type CommandReplyStruct

type CommandReplyStruct struct {
	Cursor struct {
		Id         int64    `bson:"id"`
		Ns         string   `bson:"ns"`
		FirstBatch bson.Raw `bson:"firstBatch,omitempty"`
		NextBatch  bson.Raw `bson:"nextBatch,omitempty"`
	} `bson:"cursor"`
	Ok int `bson:"ok"`
}

type ComparativeStatGenerator

type ComparativeStatGenerator struct {
}

ComparativeStatGenerator implements a basic StatGenerator

func (*ComparativeStatGenerator) Finalize

func (gen *ComparativeStatGenerator) Finalize(statStream chan *OpStat)

Finalize concludes any final stats that still need to be yielded by the ComparativeStatGenerator

func (*ComparativeStatGenerator) GenerateOpStat

func (gen *ComparativeStatGenerator) GenerateOpStat(op *RecordedOp, replayedOp Op, reply Replyable, msg string) *OpStat

GenerateOpStat creates an OpStat using the ComparativeStatGenerator

type ConnStub

type ConnStub struct {
	// contains filtered or unexported fields
}

ConnStub mocks the connection used by an mgo session. It implements the net.Conn interface so that it may be used as a connection for testing in llmgo. It contains a write buffer and a read buffer. It writes into the write buffer and reads from the read buffer, so that its ends may be given in reverse to another function (i.e., another function can write to its read buffer and it will receive this as incoming data).

func (*ConnStub) Close

func (conn *ConnStub) Close() error

Close doesn't actually do anything, and is here to implement net.Conn.

func (*ConnStub) LocalAddr

func (conn *ConnStub) LocalAddr() net.Addr

LocalAddr doesn't actually do anything, and is here to implement net.Conn.

func (*ConnStub) Read

func (conn *ConnStub) Read(b []byte) (n int, err error)

func (*ConnStub) RemoteAddr

func (conn *ConnStub) RemoteAddr() net.Addr

RemoteAddr doesn't actually do anything, and is here to implement net.Conn.

func (*ConnStub) SetDeadline

func (conn *ConnStub) SetDeadline(t time.Time) error

SetDeadline doesn't actually do anything, and is here to implement net.Conn.

func (*ConnStub) SetReadDeadline

func (conn *ConnStub) SetReadDeadline(t time.Time) error

SetReadDeadline doesn't actually do anything, and is here to implement net.Conn.

func (*ConnStub) SetWriteDeadline

func (conn *ConnStub) SetWriteDeadline(t time.Time) error

SetWriteDeadline doesn't actually do anything, and is here to implement net.Conn.

func (*ConnStub) Write

func (conn *ConnStub) Write(b []byte) (n int, err error)

type DeleteOp

type DeleteOp struct {
	Header MsgHeader
	mgo.DeleteOp
}

DeleteOp is used to remove one or more documents from a collection. http://docs.mongodb.org/meta-driver/latest/legacy/mongodb-wire-protocol/#op-delete

func (*DeleteOp) Abbreviated

func (op *DeleteOp) Abbreviated(chars int) string

Abbreviated returns a serialization of the DeleteOp, abbreviated so it doesn't exceed the given number of characters.

func (*DeleteOp) Execute

func (op *DeleteOp) Execute(socket *mgo.MongoSocket) (Replyable, error)

Execute performs the DeleteOp on a given session, yielding the reply when successful (and an error otherwise).

func (*DeleteOp) FromReader

func (op *DeleteOp) FromReader(r io.Reader) error

FromReader extracts data from a serialized DeleteOp into its concrete structure.

func (*DeleteOp) Meta

func (op *DeleteOp) Meta() OpMetadata

Meta returns metadata about the operation, useful for analysis of traffic.

func (*DeleteOp) OpCode

func (op *DeleteOp) OpCode() OpCode

OpCode returns the OpCode for DeleteOp.

func (*DeleteOp) String

func (op *DeleteOp) String() string

type ErrPacketsDropped

type ErrPacketsDropped struct {
	Count int
}

ErrPacketsDropped means that some packets were dropped

func (ErrPacketsDropped) Error

func (e ErrPacketsDropped) Error() string

type ErrUnknownOpcode

type ErrUnknownOpcode int

ErrUnknownOpcode is an error that represents an unrecognized opcode.

func (ErrUnknownOpcode) Error

func (e ErrUnknownOpcode) Error() string

type ExecutionContext

type ExecutionContext struct {
	// IncompleteReplies holds half-complete ReplyPairs, which contain either a
	// live reply or a recorded reply when one arrives before the other.
	IncompleteReplies *cache.Cache

	// CompleteReplies contains ReplyPairs that have been completed by the
	// arrival of their missing half.
	CompleteReplies map[string]*ReplyPair

	// CursorIDMap contains the mapping between recorded cursorIDs and live
	// cursorIDs
	CursorIDMap cursorManager

	// lock synchronizes access to all of the caches and maps in the
	// ExecutionContext
	sync.Mutex

	ConnectionChansWaitGroup sync.WaitGroup

	*StatCollector
	// contains filtered or unexported fields
}

ExecutionContext maintains information for a mongoreplay execution.

func NewExecutionContext

func NewExecutionContext(statColl *StatCollector, session *mgo.Session, options *ExecutionOptions) *ExecutionContext

NewExecutionContext initializes a new ExecutionContext.

func (*ExecutionContext) AddFromFile

func (context *ExecutionContext) AddFromFile(reply Replyable, recordedOp *RecordedOp)

AddFromFile adds a from-file reply to its IncompleteReplies ReplyPair and moves that ReplyPair to CompleteReplies if it's complete. The index is based on the reversed src/dest of the recordedOp, which should be the RecordedOp that this ReplyOp was unmarshaled out of.

func (*ExecutionContext) AddFromWire

func (context *ExecutionContext) AddFromWire(reply Replyable, recordedOp *RecordedOp)

AddFromWire adds a from-wire reply to its IncompleteReplies ReplyPair and moves that ReplyPair to CompleteReplies if it's complete. The index is based on the src/dest of the recordedOp which should be the op that this ReplyOp is a reply to.

func (*ExecutionContext) Execute

func (context *ExecutionContext) Execute(op *RecordedOp, socket *mgo.MongoSocket) (Op, Replyable, error)

Execute plays a particular command on an mgo socket.

type ExecutionOptions

type ExecutionOptions struct {
	// contains filtered or unexported fields
}

ExecutionOptions holds the additional configuration options needed to completely create an execution session.

type FilterCommand

type FilterCommand struct {
	GlobalOpts      *Options `no-flag:"true"`
	PlaybackFile    string   `description:"path to the playback file to read from" short:"p" long:"playback-file" required:"yes"`
	OutFile         string   `description:"path to the output file to write to" short:"o" long:"outputFile"`
	SplitFilePrefix string   `description:"prefix file name to use for the output files being written when splitting traffic" long:"outfilePrefix"`
	StartTime       string   `description:"ISO 8601 timestamp to remove all operations before" long:"startAt"`
	Split           int      `description:"split the traffic into n files with roughly equal numbers of connections in each" default:"1" long:"split"`
	RemoveDriverOps bool     `description:"remove driver issued operations from the playback" long:"removeDriverOps"`
	Gzip            bool     `long:"gzip" description:"decompress gzipped input"`
	// contains filtered or unexported fields
}

FilterCommand stores settings for the mongoreplay 'filter' subcommand

func (*FilterCommand) Execute

func (filter *FilterCommand) Execute(args []string) error

Execute runs the program for the 'filter' subcommand

func (*FilterCommand) ValidateParams

func (filter *FilterCommand) ValidateParams(args []string) error

type GetMoreOp

type GetMoreOp struct {
	Header MsgHeader
	mgo.GetMoreOp
}

GetMoreOp is used to query the database for documents in a collection. http://docs.mongodb.org/meta-driver/latest/legacy/mongodb-wire-protocol/#op-get-more

func (*GetMoreOp) Abbreviated

func (op *GetMoreOp) Abbreviated(chars int) string

Abbreviated returns a serialization of the GetMoreOp, abbreviated so it doesn't exceed the given number of characters.

func (*GetMoreOp) Execute

func (op *GetMoreOp) Execute(socket *mgo.MongoSocket) (Replyable, error)

Execute performs the GetMoreOp on a given session, yielding the reply when successful (and an error otherwise).

func (*GetMoreOp) FromReader

func (op *GetMoreOp) FromReader(r io.Reader) error

FromReader extracts data from a serialized GetMoreOp into its concrete structure.

func (*GetMoreOp) Meta

func (op *GetMoreOp) Meta() OpMetadata

Meta returns metadata about the GetMoreOp, useful for analysis of traffic.

func (*GetMoreOp) OpCode

func (op *GetMoreOp) OpCode() OpCode

OpCode returns the OpCode for a GetMoreOp.

func (*GetMoreOp) String

func (op *GetMoreOp) String() string

type GzipReadSeeker

type GzipReadSeeker struct {
	*gzip.Reader
	// contains filtered or unexported fields
}

GzipReadSeeker wraps an io.ReadSeeker for gzip reading

func NewGzipReadSeeker

func NewGzipReadSeeker(rs io.ReadSeeker) (*GzipReadSeeker, error)

NewGzipReadSeeker initializes a new GzipReadSeeker

func (*GzipReadSeeker) Seek

func (g *GzipReadSeeker) Seek(offset int64, whence int) (int64, error)

Seek sets the offset for the next Read, and can only seek to the beginning of the file.

type InsertOp

type InsertOp struct {
	Header MsgHeader
	mgo.InsertOp
}

InsertOp is used to insert one or more documents into a collection. http://docs.mongodb.org/meta-driver/latest/legacy/mongodb-wire-protocol/#op-insert

func (*InsertOp) Abbreviated

func (op *InsertOp) Abbreviated(chars int) string

Abbreviated returns a serialization of the InsertOp, abbreviated so it doesn't exceed the given number of characters.

func (*InsertOp) Execute

func (op *InsertOp) Execute(socket *mgo.MongoSocket) (Replyable, error)

Execute performs the InsertOp on a given socket, yielding the reply when successful (and an error otherwise).

func (*InsertOp) FromReader

func (op *InsertOp) FromReader(r io.Reader) error

FromReader extracts data from a serialized InsertOp into its concrete structure.

func (*InsertOp) Meta

func (op *InsertOp) Meta() OpMetadata

Meta returns metadata about the InsertOp, useful for analysis of traffic.

func (*InsertOp) OpCode

func (op *InsertOp) OpCode() OpCode

OpCode returns the OpCode for the InsertOp.

func (*InsertOp) String

func (op *InsertOp) String() string

type JSONStatRecorder

type JSONStatRecorder struct {
	// contains filtered or unexported fields
}

JSONStatRecorder records stats in JSON output

func (*JSONStatRecorder) Close

func (jsr *JSONStatRecorder) Close() error

Close closes the JSONStatRecorder

func (*JSONStatRecorder) RecordStat

func (jsr *JSONStatRecorder) RecordStat(stat *OpStat)

RecordStat records the stat using the JSONStatRecorder

type KillCursorsOp

type KillCursorsOp struct {
	Header MsgHeader
	mgo.KillCursorsOp
}

KillCursorsOp is used to close an active cursor in the database. This is necessary to ensure that database resources are reclaimed at the end of the query. http://docs.mongodb.org/meta-driver/latest/legacy/mongodb-wire-protocol/#op-kill-cursors

func (*KillCursorsOp) Abbreviated

func (op *KillCursorsOp) Abbreviated(chars int) string

Abbreviated returns a serialization of the KillCursorsOp, abbreviated so it doesn't exceed the given number of characters.

func (*KillCursorsOp) Execute

func (op *KillCursorsOp) Execute(socket *mgo.MongoSocket) (Replyable, error)

Execute performs the KillCursorsOp on a given session, yielding the reply when successful (and an error otherwise).

func (*KillCursorsOp) FromReader

func (op *KillCursorsOp) FromReader(r io.Reader) error

FromReader extracts data from a serialized KillCursorsOp into its concrete structure.

func (*KillCursorsOp) Meta

func (op *KillCursorsOp) Meta() OpMetadata

Meta returns metadata about the KillCursorsOp, useful for analysis of traffic.

func (*KillCursorsOp) OpCode

func (op *KillCursorsOp) OpCode() OpCode

OpCode returns the OpCode for the KillCursorsOp.

func (*KillCursorsOp) String

func (op *KillCursorsOp) String() string

type MongoOpStream

type MongoOpStream struct {
	Ops chan *RecordedOp

	FirstSeen time.Time
	// contains filtered or unexported fields
}

MongoOpStream is the opstream which yields RecordedOps

func NewMongoOpStream

func NewMongoOpStream(heapBufSize int) *MongoOpStream

NewMongoOpStream initializes a new MongoOpStream

func (*MongoOpStream) Close

func (os *MongoOpStream) Close() error

Close is called by the tcpassembly to indicate that all of the packets have been processed.

func (*MongoOpStream) New

func (os *MongoOpStream) New(netFlow, tcpFlow gopacket.Flow) tcpassembly.Stream

New is the factory method called by the tcpassembly to generate new tcpassembly.Stream.

func (*MongoOpStream) SetFirstSeen

func (os *MongoOpStream) SetFirstSeen(t time.Time)

SetFirstSeen sets the time for the first message on the MongoOpStream. All of this SetFirstSeen/FirstSeen/SetFirstSeener machinery can go away (from here and from packet_handler.go); it is cruft that was added to work around the fact that the tcpassembly.tcpreader library throws away all of the metadata about the stream.

type MonitorCommand

type MonitorCommand struct {
	GlobalOpts *Options `no-flag:"true"`
	StatOptions
	OpStreamSettings
	Collect      string `` /* 154-byte string literal not displayed */
	PairedMode   bool   `long:"paired" description:"Output only one line for a request/reply pair"`
	Gzip         bool   `long:"gzip" description:"decompress gzipped input"`
	PlaybackFile string `short:"p" description:"path to playback file to read from" long:"playback-file"`
}

MonitorCommand stores settings for the mongoreplay 'monitor' subcommand

func (*MonitorCommand) Execute

func (monitor *MonitorCommand) Execute(args []string) error

Execute runs the program for the 'monitor' subcommand

func (*MonitorCommand) ValidateParams

func (monitor *MonitorCommand) ValidateParams(args []string) error

ValidateParams validates the settings described in the MonitorCommand struct.

type MsgHeader

type MsgHeader struct {
	// MessageLength is the total message size, including this header
	MessageLength int32
	// RequestID is the identifier for this message
	RequestID int32
	// ResponseTo is the RequestID of the message being responded to;
	// used in DB responses
	ResponseTo int32
	// OpCode is the request type, see consts above.
	OpCode OpCode
}

MsgHeader is the mongo MessageHeader

func ReadHeader

func ReadHeader(r io.Reader) (*MsgHeader, error)

ReadHeader creates a new MsgHeader given a reader at the beginning of a message.

func (*MsgHeader) FromWire

func (m *MsgHeader) FromWire(b []byte)

FromWire reads the wirebytes into this object

func (*MsgHeader) LooksReal

func (m *MsgHeader) LooksReal() bool

LooksReal does a best-effort check to detect whether a MsgHeader is valid.

func (*MsgHeader) String

func (m *MsgHeader) String() string

String returns a string representation of the message header. Useful for debugging.

func (MsgHeader) ToWire

func (m MsgHeader) ToWire() []byte

ToWire converts the MsgHeader to the wire protocol

func (*MsgHeader) WriteTo

func (m *MsgHeader) WriteTo(w io.Writer) (int64, error)

WriteTo writes the MsgHeader into a writer.
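
A minimal round-trip sketch (the mongoreplay import path is an assumption):

import (
	"fmt"

	"github.com/mongodb/mongo-tools/mongoreplay" // import path assumed
)

func headerRoundTrip() {
	h := mongoreplay.MsgHeader{
		MessageLength: 4096,
		RequestID:     42,
		OpCode:        mongoreplay.OpCodeQuery,
	}
	wire := h.ToWire() // MsgHeaderLen (16) bytes in wire format

	var decoded mongoreplay.MsgHeader
	decoded.FromWire(wire)
	fmt.Println(decoded.String(), decoded.LooksReal())
}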

type MsgOp

type MsgOp struct {
	Header MsgHeader
	mgo.MsgOp

	CommandName string
	Database    string
}

MsgOp is a struct for parsing OP_MSG as defined here: https://github.com/mongodb/mongo/blob/master/src/mongo/rpc/command_request.h.

func (*MsgOp) Abbreviated

func (msgOp *MsgOp) Abbreviated(chars int) string

Abbreviated returns a serialization of the MsgOp, abbreviated.

func (*MsgOp) Execute

func (op *MsgOp) Execute(socket *mgo.MongoSocket) (Replyable, error)

Execute performs the MsgOp on a given session, yielding the reply when successful (and an error otherwise).

func (*MsgOp) FromReader

func (op *MsgOp) FromReader(r io.Reader) error

FromReader extracts data from a serialized MsgOp into its concrete structure.

func (*MsgOp) Meta

func (msgOp *MsgOp) Meta() OpMetadata

Meta returns metadata about the operation, useful for analysis of traffic.

func (*MsgOp) OpCode

func (msgOp *MsgOp) OpCode() OpCode

OpCode returns the OpCode for the MsgOp.

type MsgOpGetMore

type MsgOpGetMore struct {
	MsgOp
	// contains filtered or unexported fields
}

MsgOpGetMore is a struct representing the case of an OP_MSG which was a getmore command. It implements the cursorsRewriteable interface and has a field for caching the cursor found so that multiple calls to these methods do not incur the overhead of searching the underlying bson.

type MsgOpReply

type MsgOpReply struct {
	MsgOp
	Latency time.Duration

	Docs []bson.Raw
	// contains filtered or unexported fields
}

MsgOpReply is a struct representing the case of an OP_MSG which was a response from the server. It implements the Replyable interface and has fields for caching the docs found and cursor so that multiple calls to these methods do not incur the overhead of searching the underlying bson.

type NopRecorder

type NopRecorder struct{}

NopRecorder implements the StatRecorder interface but doesn't do anything

func (*NopRecorder) Close

func (nr *NopRecorder) Close() error

Close closes the NopRecorder (i.e. does nothing)

func (*NopRecorder) RecordStat

func (nr *NopRecorder) RecordStat(stat *OpStat)

RecordStat doesn't do anything for the NopRecorder

type Op

type Op interface {
	// OpCode returns the OpCode for a particular kind of op.
	OpCode() OpCode

	// FromReader extracts data from a serialized op into its concrete
	// structure.
	FromReader(io.Reader) error

	// Execute performs the op on a given socket, yielding the reply when
	// successful (and an error otherwise).
	Execute(*mgo.MongoSocket) (Replyable, error)

	// Meta returns metadata about the operation, useful for analysis of traffic.
	Meta() OpMetadata

	// Abbreviated returns a serialization of the op, abbreviated so it doesn't
	// exceed the given number of characters.
	Abbreviated(int) string
}

Op is a Mongo operation
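
A hedged sketch of parsing a message into an Op via RawOp (see RawOp below); it assumes RawOp.FromReader consumes the entire message, header included, and that the mongoreplay import path is as shown:

import (
	"fmt"
	"io"

	"github.com/mongodb/mongo-tools/mongoreplay" // import path assumed
)

// inspectOne reads a single wire-protocol message from r, parses it into a
// concrete Op, and prints its metadata.
func inspectOne(r io.Reader) error {
	var raw mongoreplay.RawOp
	if err := raw.FromReader(r); err != nil {
		return err
	}
	op, err := raw.Parse()
	if err != nil {
		return err
	}
	meta := op.Meta()
	fmt.Printf("op=%s ns=%s command=%s\n", meta.Op, meta.Ns, meta.Command)
	fmt.Println(op.Abbreviated(80))
	return nil
}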

type OpCode

type OpCode int32

OpCode allow identifying the type of operation: http://docs.mongodb.org/meta-driver/latest/legacy/mongodb-wire-protocol/#request-opcodes

func (OpCode) String

func (c OpCode) String() string

String returns a human readable representation of the OpCode.

type OpMetadata

type OpMetadata struct {
	// Op represents the actual operation being performed accounting for write
	// commands, so this may be "insert" or "update" even when the wire protocol
	// message was OP_QUERY.
	Op string

	// Namespace against which the operation executes.
	// If not applicable, will be blank.
	Ns string

	// Command name is the name of the command, when Op is "command" (otherwise
	// will be blank.) For example, this might be "getLastError" or
	// "serverStatus".
	Command string

	// Data contains the payload of the operation.
	// For queries: the query selector, limit and sort, etc.
	// For inserts: the document(s) to be inserted.
	// For updates: the query selector, modifiers, and upsert/multi flags.
	// For removes: the query selector for removes.
	// For commands: the full set of parameters for the command.
	// For killcursors: the list of cursorID's to be killed.
	// For getmores: the cursorID for the getmore batch.
	Data interface{}
}

OpMetadata stores metadata for an Op

type OpStat

type OpStat struct {
	// Order is a number denoting the position in the traffic in which this operation appeared
	Order int64 `json:"order"`

	// OpType is a string representation of the function of this operation. For example an 'insert'
	// or a 'query'
	OpType string `json:"op,omitempty"`

	// If the operation was a command, this field represents the name of the database command
	// performed, like "isMaster" or "getLastError". Left blank for ops that are not commands
	// like a query, insert, or getmore.
	Command string `json:"command,omitempty"`

	// Namespace that the operation was performed against, if relevant.
	Ns string `json:"ns,omitempty"`

	// Data represents the payload of the request operation.
	RequestData interface{} `json:"request_data, omitempty"`

	// Data represents the payload of the reply operation.
	ReplyData interface{} `json:"reply_data, omitempty"`

	// NumReturned is the number of documents that were fetched as a result of this operation.
	NumReturned int `json:"nreturned,omitempty"`

	// PlayedAt is the time that this operation was replayed
	PlayedAt *time.Time `json:"played_at,omitempty"`

	// PlayAt is the time that this operation is scheduled to be played. It represents the time
	// that it is supposed to be played by mongoreplay, but can be different from
	// PlayedAt if the playback is lagging for any reason
	PlayAt *time.Time `json:"play_at,omitempty"`

	// PlaybackLagMicros is the time difference in microseconds between the time
	// that the operation was supposed to be played, and the time it was actually played.
	// High values indicate that playback is falling behind the intended rate.
	PlaybackLagMicros int64 `json:"playbacklag_us,omitempty"`

	// ConnectionNum represents the number of the connection that the op originated from.
	// This number does not correspond to any server-side connection IDs - it's simply an
	// auto-incrementing number representing the order in which the connection was created
	// during the playback phase.
	ConnectionNum int64 `json:"connection_num"`

	// LatencyMicros represents the time difference in microseconds between when the operation
	// was executed and when the reply from the server was received.
	LatencyMicros int64 `json:"latency_us,omitempty"`

	// Errors contains the error messages returned from the server populated in the $err field.
	// If unset, the operation did not receive any errors from the server.
	Errors []error `json:"errors,omitempty"`

	Message string `json:"msg,omitempty"`

	// Seen is the time that this operation was originally seen.
	Seen *time.Time `json:"seen,omitempty"`

	// RequestID is the ID of the mongodb operation as taken from the header.
	// The RequestID for a request operation is the same as the ResponseID for
	// the corresponding reply, so this field will be the same for request/reply pairs.
	RequestID int32 `json:"request_id, omitempty"`
}

OpStat is a set of metadata about an executed operation and its result which can be used for generating reports about the results of a playback command.

type OpStreamSettings

type OpStreamSettings struct {
	PcapFile         string `short:"f" description:"path to the pcap file to be read"`
	PacketBufSize    int    `short:"b" description:"Size of heap used to merge separate streams together"`
	CaptureBufSize   int    `long:"capSize" description:"Size in KiB of the PCAP capture buffer"`
	Expression       string `short:"e" long:"expr" description:"BPF filter expression to apply to packets"`
	NetworkInterface string `short:"i" description:"network interface to listen on"`
}

OpStreamSettings stores settings for any command which may listen to an opstream.

type Options

type Options struct {
	Verbosity []bool `` /* 201-byte string literal not displayed */
	Debug     []bool `` /* 202-byte string literal not displayed */
	Silent    bool   `short:"s" long:"silent" description:"silence all log output"`
}

Options stores settings for any mongoreplay command

func (*Options) SetLogging

func (opts *Options) SetLogging()

SetLogging sets the verbosity/debug level for log output.

type PacketHandler

type PacketHandler struct {
	Verbose bool
	// contains filtered or unexported fields
}

PacketHandler wraps pcap.Handle to maintain other useful information.

func NewPacketHandler

func NewPacketHandler(pcapHandle *pcap.Handle) *PacketHandler

NewPacketHandler initializes a new PacketHandler

func (*PacketHandler) Close

func (p *PacketHandler) Close()

Close stops the packetHandler

func (*PacketHandler) Handle

func (p *PacketHandler) Handle(streamHandler StreamHandler, numToHandle int) error

Handle reads the pcap file into assembled packets for the streamHandler

type PlayCommand

type PlayCommand struct {
	GlobalOpts *Options `no-flag:"true"`
	StatOptions
	PlaybackFile string  `description:"path to the playback file to play from" short:"p" long:"playback-file" required:"yes"`
	Speed        float64 `` /* 131-byte string literal not displayed */
	URL          string  `short:"h" long:"host" description:"Location of the host to play back against" default:"mongodb://localhost:27017"`
	Repeat       int     `long:"repeat" description:"Number of times to play the playback file" default:"1"`
	QueueTime    int     `long:"queueTime" description:"don't queue ops much further in the future than this number of seconds" default:"15"`
	NoPreprocess bool    `long:"no-preprocess" description:"don't preprocess the input file to premap data such as mongo cursorIDs"`
	Gzip         bool    `long:"gzip" description:"decompress gzipped input"`
	Collect      string  `` /* 152-byte string literal not displayed */
	FullSpeed    bool    `long:"fullSpeed" description:"run the playback as fast as possible"`
}

PlayCommand stores settings for the mongoreplay 'play' subcommand

func (*PlayCommand) Execute

func (play *PlayCommand) Execute(args []string) error

Execute runs the program for the 'play' subcommand

func (*PlayCommand) ValidateParams

func (play *PlayCommand) ValidateParams(args []string) error

ValidateParams validates the settings described in the PlayCommand struct.

type PlaybackFileMetadata

type PlaybackFileMetadata struct {
	PlaybackFileVersion int
	DriverOpsFiltered   bool
}

type PlaybackFileReader

type PlaybackFileReader struct {
	io.ReadSeeker
	// contains filtered or unexported fields
}

PlaybackFileReader stores the necessary information for a playback source, which is just an io.ReadSeeker.

func NewPlaybackFileReader

func NewPlaybackFileReader(filename string, gzip bool) (*PlaybackFileReader, error)

NewPlaybackFileReader initializes a new PlaybackFileReader

func (*PlaybackFileReader) NextRecordedOp

func (file *PlaybackFileReader) NextRecordedOp() (*RecordedOp, error)

NextRecordedOp iterates through the PlaybackFileReader to yield the next RecordedOp. It returns io.EOF when successfully complete.

func (*PlaybackFileReader) OpChan

func (pfReader *PlaybackFileReader) OpChan(repeat int) (<-chan *RecordedOp, <-chan error)

OpChan runs a goroutine that will read and unmarshal recorded ops from a file and push them in to a recorded op chan. Any errors encountered are pushed to an error chan. Both the recorded op chan and the error chan are returned by the function. The error chan won't be readable until the recorded op chan gets closed.
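
A short sketch of draining a playback file through OpChan (the mongoreplay import path is an assumption):

import (
	"fmt"
	"io"
	"log"

	"github.com/mongodb/mongo-tools/mongoreplay" // import path assumed
)

func dumpPlayback(path string) {
	pfReader, err := mongoreplay.NewPlaybackFileReader(path, false) // false: not gzip-compressed
	if err != nil {
		log.Fatal(err)
	}
	opChan, errChan := pfReader.OpChan(1) // play the file through once

	for recorded := range opChan {
		fmt.Println(recorded.Seen.Time, recorded.ConnectionString(), recorded.Header.OpCode)
	}
	// The error channel only becomes readable after opChan has been closed.
	if err := <-errChan; err != nil && err != io.EOF {
		log.Fatal(err)
	}
}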

type PlaybackFileWriter

type PlaybackFileWriter struct {
	io.WriteCloser
	// contains filtered or unexported fields
}

PlaybackFileWriter stores the necessary information for a playback destination, which is an io.WriteCloser and its location.

func NewPlaybackFileWriter

func NewPlaybackFileWriter(playbackFileName string, driverOpsFiltered, isGzipWriter bool) (*PlaybackFileWriter, error)

NewPlaybackFileWriter initializes a new PlaybackFileWriter

type PreciseTime

type PreciseTime struct {
	time.Time
}

PreciseTime wraps a time.Time with additional useful methods

func (*PreciseTime) GetBSON

func (b *PreciseTime) GetBSON() (interface{}, error)

GetBSON encodes the time into BSON

func (*PreciseTime) SetBSON

func (b *PreciseTime) SetBSON(raw bson.Raw) error

SetBSON decodes the BSON into a time

type Preprocessable

type Preprocessable interface {
	Preprocess()
}

Preprocessable provides a way for additional data processing to be done on an operation before execution. Future iterations may include options that allow differential preprocessing of operations depending on desired results.

type QueryOp

type QueryOp struct {
	Header MsgHeader
	mgo.QueryOp
}

QueryOp is used to query the database for documents in a collection. http://docs.mongodb.org/meta-driver/latest/legacy/mongodb-wire-protocol/#op-query

func (*QueryOp) Abbreviated

func (op *QueryOp) Abbreviated(chars int) string

Abbreviated returns a serialization of the QueryOp, abbreviated so it doesn't exceed the given number of characters.

func (*QueryOp) Execute

func (op *QueryOp) Execute(socket *mgo.MongoSocket) (Replyable, error)

Execute performs the QueryOp on a given socket, yielding the reply when successful (and an error otherwise).

func (*QueryOp) FromReader

func (op *QueryOp) FromReader(r io.Reader) error

FromReader extracts data from a serialized QueryOp into its concrete structure.

func (*QueryOp) Meta

func (op *QueryOp) Meta() OpMetadata

Meta returns metadata about the QueryOp, useful for analysis of traffic.

func (*QueryOp) OpCode

func (op *QueryOp) OpCode() OpCode

OpCode returns the OpCode for a QueryOp.

func (*QueryOp) String

func (op *QueryOp) String() string

type RawOp

type RawOp struct {
	Header MsgHeader
	Body   []byte
}

RawOp may be exactly the same as OpUnknown.

func (*RawOp) Abbreviated

func (op *RawOp) Abbreviated(chars int) string

Abbreviated returns a serialization of the RawOp, abbreviated so it doesn't exceed the given number of characters.

func (*RawOp) FromReader

func (op *RawOp) FromReader(r io.Reader) error

FromReader extracts data from a serialized op into its concrete structure.

func (*RawOp) OpCode

func (op *RawOp) OpCode() OpCode

OpCode returns the OpCode for the op.

func (*RawOp) Parse

func (op *RawOp) Parse() (Op, error)

Parse returns the underlying op from its given RawOp form.

func (*RawOp) ShortenReply

func (op *RawOp) ShortenReply() error

ShortenReply shortens the op's reply data, holding on to only header-related information and the first document.

func (*RawOp) String

func (op *RawOp) String() string

type Reassembly

type Reassembly struct {
	// Bytes is the next set of bytes in the stream.  May be empty.
	Bytes []byte
	// Skip is set to non-zero if bytes were skipped between this and the last
	// Reassembly.  If this is the first packet in a connection and we didn't
	// see the start, we have no idea how many bytes we skipped, so we set it to
	// -1.  Otherwise, it's set to the number of bytes skipped.
	Skip int
	// Start is set if this set of bytes has a TCP SYN accompanying it.
	Start bool
	// End is set if this set of bytes has a TCP FIN or RST accompanying it.
	End bool
	// Seen is the timestamp this set of bytes was pulled off the wire.
	Seen time.Time
}

Reassembly objects are passed by an Assembler into Streams using the Reassembled call. Callers should not need to create these structs themselves except for testing.

type RecordCommand

type RecordCommand struct {
	GlobalOpts *Options `no-flag:"true"`
	OpStreamSettings
	Gzip         bool   `long:"gzip" description:"compress output file with Gzip"`
	FullReplies  bool   `long:"full-replies" description:"save full reply payload in playback file"`
	PlaybackFile string `short:"p" description:"path to playback file to record to" long:"playback-file" required:"yes"`
}

RecordCommand stores settings for the mongoreplay 'record' subcommand

func (*RecordCommand) Execute

func (record *RecordCommand) Execute(args []string) error

Execute runs the program for the 'record' subcommand

func (*RecordCommand) ValidateParams

func (record *RecordCommand) ValidateParams(args []string) error

ValidateParams validates the settings described in the RecordCommand struct.

type RecordedOp

type RecordedOp struct {
	RawOp
	Seen                *PreciseTime
	PlayAt              *PreciseTime `bson:",omitempty"`
	EOF                 bool         `bson:",omitempty"`
	SrcEndpoint         string
	DstEndpoint         string
	SeenConnectionNum   int64
	PlayedConnectionNum int64
	PlayedAt            *PreciseTime `bson:",omitempty"`
	Generation          int
	Order               int64
}

RecordedOp stores an op in addition to record/playback-related metadata

func (*RecordedOp) ConnectionString

func (op *RecordedOp) ConnectionString() string

ConnectionString gives a serialized representation of the endpoints

func (*RecordedOp) ReversedConnectionString

func (op *RecordedOp) ReversedConnectionString() string

ReversedConnectionString gives a serialized representation of the endpoints, in reversed order

type RegularStatGenerator

type RegularStatGenerator struct {
	PairedMode    bool
	UnresolvedOps map[opKey]UnresolvedOpInfo
}

RegularStatGenerator implements a StatGenerator which maintains which ops are unresolved, and is capable of handling PairedMode

func (*RegularStatGenerator) AddUnresolvedOp

func (gen *RegularStatGenerator) AddUnresolvedOp(op *RecordedOp, parsedOp Op, requestStat *OpStat)

AddUnresolvedOp takes an operation that is supposed to receive a reply and keeps it around so that its latency can be calculated using the incoming reply.

func (*RegularStatGenerator) Finalize

func (gen *RegularStatGenerator) Finalize(statStream chan *OpStat)

Finalize concludes any final stats that still need to be yielded by the RegularStatGenerator

func (*RegularStatGenerator) GenerateOpStat

func (gen *RegularStatGenerator) GenerateOpStat(recordedOp *RecordedOp, parsedOp Op, reply Replyable, msg string) *OpStat

GenerateOpStat creates an OpStat using the RegularStatGenerator

func (*RegularStatGenerator) ResolveOp

func (gen *RegularStatGenerator) ResolveOp(recordedReply *RecordedOp, reply Replyable, replyStat *OpStat) *OpStat

ResolveOp generates an OpStat from the pairing of a request with its reply. When running in paired mode ResolveOp returns an OpStat which contains the payload from both the request and the reply. Otherwise, it returns an OpStat with just the data from the reply along with the latency between the request and the reply.

recordedReply is the just-received reply in the form of a RecordedOp, which contains additional metadata. parsedReply is the same reply, parsed so that the payload of the op can be accessed. replyStat is the OpStat created by the GenerateOpStat function, containing computed metadata about the reply.

type ReplyOp

type ReplyOp struct {
	Header MsgHeader
	mgo.ReplyOp
	Docs    []bson.Raw
	Latency time.Duration
	// contains filtered or unexported fields
}

ReplyOp is sent by the database in response to a QueryOp or GetMoreOp message. http://docs.mongodb.org/meta-driver/latest/legacy/mongodb-wire-protocol/#op-reply

func (*ReplyOp) Abbreviated

func (op *ReplyOp) Abbreviated(chars int) string

Abbreviated returns a serialization of the ReplyOp, abbreviated so it doesn't exceed the given number of characters.

func (*ReplyOp) Execute

func (op *ReplyOp) Execute(socket *mgo.MongoSocket) (Replyable, error)

Execute performs the ReplyOp on a given socket, yielding the reply when successful (and an error otherwise).

func (*ReplyOp) FromReader

func (op *ReplyOp) FromReader(r io.Reader) error

FromReader extracts data from a serialized ReplyOp into its concrete structure.

func (*ReplyOp) Meta

func (op *ReplyOp) Meta() OpMetadata

Meta returns metadata about the ReplyOp, useful for analysis of traffic.

func (*ReplyOp) OpCode

func (op *ReplyOp) OpCode() OpCode

OpCode returns the OpCode for a ReplyOp.

func (*ReplyOp) String

func (op *ReplyOp) String() string

type ReplyPair

type ReplyPair struct {
	// contains filtered or unexported fields
}

ReplyPair contains both a live reply and a recorded reply when fully occupied.

type Replyable

type Replyable interface {
	Meta() OpMetadata
	// contains filtered or unexported methods
}

Replyable is an interface representing any operation that has the functionality of a reply from a mongodb server. This includes both ReplyOps and CommandOpReplies.

type Sequence

type Sequence int64

Sequence is a TCP sequence number. It provides a few convenience functions for handling TCP wrap-around. The sequence should always be in the range [0,0xFFFFFFFF]... its other bits are simply used in wrap-around calculations and should never be set.

func (Sequence) Add

func (s Sequence) Add(t int) Sequence

Add adds an integer to a sequence and returns the resulting sequence.

func (Sequence) Difference

func (s Sequence) Difference(t Sequence) int

Difference defines an ordering for comparing TCP sequences that's safe for roll-overs. It returns:

> 0 : if t comes after s
< 0 : if t comes before s
  0 : if t == s

The number returned is the sequence difference, so 4.Difference(8) will return 4.

It handles rollovers by considering any sequence in the first quarter of the uint32 space to be after any sequence in the last quarter of that space, thus wrapping the uint32 space.
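
A short sketch of this documented behaviour (import path assumed): the first call matches the example above, and a sequence that wraps past the top of the uint32 space still compares as coming after one near the top:

package main

import (
	"fmt"

	"github.com/mongodb/mongo-tools/mongoreplay" // assumed import path
)

func main() {
	// Simple case from the docs above: 4.Difference(8) is 4.
	fmt.Println(mongoreplay.Sequence(4).Difference(8))

	// Wrap-around: a sequence near the top of the uint32 space is treated
	// as coming before one near the bottom.
	high := mongoreplay.Sequence(0xFFFFFFF0)
	low := high.Add(0x20) // wraps back to the bottom of the range
	fmt.Println(high.Difference(low) > 0) // true: low comes after high
}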

type SessionStub

type SessionStub struct {
	mgo.MongoSession
	// contains filtered or unexported fields
}

SessionStub mocks a MongoSession by implementing the AcquireSocketPrivate method. It allows tests to pass around a struct with stubbed fields that can then be read later for testing.

func (*SessionStub) AcquireSocketPrivate

func (session *SessionStub) AcquireSocketPrivate(slaveOk bool) (*mgo.MongoSocket, error)

AcquireSocketPrivate is an implementation of MongoSession's method that allows a stubbed connection to be passed to the other operations of llmgo for testing.

type SetFirstSeener

type SetFirstSeener interface {
	SetFirstSeen(t time.Time)
}

SetFirstSeener is an interface for anything which maintains the appropriate 'first seen' information.

type StatCollector

type StatCollector struct {
	sync.Once

	StatGenerator
	StatRecorder
	// contains filtered or unexported fields
}

StatCollector is a struct that handles generation and recording of statistics about operations mongoreplay performs. It contains a StatGenerator and a StatRecorder that allow for differing implementations of the generating and recording functions

func (*StatCollector) Close

func (statColl *StatCollector) Close() error

Close implements the basic close method, stopping stat collection.

func (*StatCollector) Collect

func (statColl *StatCollector) Collect(op *RecordedOp, replayedOp Op, reply Replyable, msg string)

Collect formats an operation's statistics as specified by the contained StatGenerator and writes them to storage as specified by the contained StatRecorder.

type StatGenerator

type StatGenerator interface {
	GenerateOpStat(op *RecordedOp, replayedOp Op, reply Replyable, msg string) *OpStat
	Finalize(chan *OpStat)
}

StatGenerator is an interface that specifies how operation information is turned into OpStats to be recorded.
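
For illustration only, a hypothetical StatGenerator that merely counts operations might look like the following (it returns empty OpStats, whereas a real generator would populate them from the op and reply; the import path is assumed):

package sketch

import "github.com/mongodb/mongo-tools/mongoreplay" // assumed import path

// opCounter is a hypothetical StatGenerator that only counts the operations
// it is handed.
type opCounter struct {
	count int
}

// GenerateOpStat increments the counter and returns a placeholder OpStat;
// a real implementation would fill in details from the op and reply.
func (c *opCounter) GenerateOpStat(op *mongoreplay.RecordedOp, replayedOp mongoreplay.Op, reply mongoreplay.Replyable, msg string) *mongoreplay.OpStat {
	c.count++
	return &mongoreplay.OpStat{}
}

// Finalize has nothing buffered to flush in this sketch.
func (c *opCounter) Finalize(out chan *mongoreplay.OpStat) {}

var _ mongoreplay.StatGenerator = (*opCounter)(nil)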

type StatOptions

type StatOptions struct {
	Buffered   bool   `hidden:"yes"`
	BufferSize int    `long:"stats-buffer-size" description:"the size (in events) of the stat collector buffer" default:"1024"`
	Report     string `long:"report" description:"Write report on execution to given output path"`
	NoTruncate bool   `long:"no-truncate" description:"Disable truncation of large payload data in log output"`
	Format     string `` /* 958-byte string literal not displayed */
	NoColors   bool   `long:"no-colors" description:"Remove colors from the default format"`
}

StatOptions stores settings for the mongoreplay subcommands which have stat output

type StatRecorder

type StatRecorder interface {
	RecordStat(stat *OpStat)
	Close() error
}

StatRecorder is an interface that specifies how generated OpStats are recorded.
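
Similarly, a hypothetical StatRecorder could write each OpStat to the standard logger as JSON (import path assumed):

package sketch

import (
	"encoding/json"
	"log"

	"github.com/mongodb/mongo-tools/mongoreplay" // assumed import path
)

// logRecorder is a hypothetical StatRecorder that writes each OpStat to the
// standard logger as JSON.
type logRecorder struct{}

// RecordStat marshals the stat and logs it; nil stats are ignored.
func (logRecorder) RecordStat(stat *mongoreplay.OpStat) {
	if stat == nil {
		return
	}
	b, err := json.Marshal(stat)
	if err != nil {
		log.Printf("failed to marshal stat: %v", err)
		return
	}
	log.Println(string(b))
}

// Close has no resources to release in this sketch.
func (logRecorder) Close() error { return nil }

var _ mongoreplay.StatRecorder = logRecorder{}

If StatCollector's zero value is usable (an assumption these docs do not guarantee), such a generator and recorder could be plugged into its embedded StatGenerator and StatRecorder fields and driven through Collect.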

type Stream

type Stream interface {
	// Reassembled is called zero or more times. Assembly guarantees that the
	// set of all Reassembly objects passed in during all calls are presented in
	// the order they appear in the TCP stream. Reassembly objects are reused
	// after each Reassembled call, so it's important to copy anything out of
	// them (specifically out of Reassembly.Bytes) that you need to stay around
	// after you return from the Reassembled call.
	Reassembled([]Reassembly)
	// ReassemblyComplete is called when assembly decides there is no more data
	// for this Stream, either because a FIN or RST packet was seen, or because
	// the stream has timed out without any new packet data (due to a call to
	// FlushOlderThan).
	ReassemblyComplete()
}

Stream is implemented by the caller to handle incoming reassembled TCP data. Callers create a StreamFactory, then StreamPool uses it to create a new Stream for every TCP stream.

assembly will, in order:

  1. Create the stream via StreamFactory.New
  2. Call Reassembled 0 or more times, passing in reassembled TCP data in order
  3. Call ReassemblyComplete one time, after which the stream is dereferenced by assembly.

type StreamFactory

type StreamFactory interface {
	// New should return a new stream for the given TCP key.
	New(netFlow, tcpFlow gopacket.Flow) tcpassembly.Stream
}

StreamFactory is used by assembly to create a new stream for each new TCP session.
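
As a hedged sketch, a factory whose streams simply count reassembled bytes could look like this (the gopacket and mongoreplay import paths are assumed; mongoreplay may vendor these packages, so the exact paths can differ):

package sketch

import (
	"log"

	"github.com/google/gopacket"             // assumed import path
	"github.com/google/gopacket/tcpassembly" // assumed import path

	"github.com/mongodb/mongo-tools/mongoreplay" // assumed import path
)

// countingStream is a hypothetical tcpassembly.Stream that counts reassembled
// bytes. Reassembly objects are reused, but only the byte length is read here,
// so nothing needs to be copied.
type countingStream struct {
	net, transport gopacket.Flow
	bytes          int
}

// Reassembled tallies the length of each reassembled chunk.
func (s *countingStream) Reassembled(rs []tcpassembly.Reassembly) {
	for _, r := range rs {
		s.bytes += len(r.Bytes)
	}
}

// ReassemblyComplete logs the total once the stream ends.
func (s *countingStream) ReassemblyComplete() {
	log.Printf("%v:%v closed after %d bytes", s.net, s.transport, s.bytes)
}

// countingFactory returns a new countingStream for every TCP session.
type countingFactory struct{}

func (countingFactory) New(netFlow, tcpFlow gopacket.Flow) tcpassembly.Stream {
	return &countingStream{net: netFlow, transport: tcpFlow}
}

var _ mongoreplay.StreamFactory = countingFactory{}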

type StreamHandler

type StreamHandler interface {
	tcpassembly.StreamFactory
	io.Closer
}

StreamHandler is a tcpassembly.StreamFactory that is also an io.Closer.

type StreamPool

type StreamPool struct {
	// contains filtered or unexported fields
}

StreamPool stores all streams created by Assemblers, allowing multiple assemblers to work together on stream processing while enforcing the fact that a single stream receives its data serially. It is safe for concurrency, usable by multiple Assemblers at once.

StreamPool handles the creation and storage of Stream objects used by one or more Assembler objects. When a new TCP stream is found by an Assembler, it creates an associated Stream by calling its StreamFactory's New method. Thereafter (until the stream is closed), that Stream object will receive assembled TCP data via Assembler's calls to the stream's Reassembled function.

Like the Assembler, StreamPool attempts to minimize allocation. Unlike the Assembler, though, it does have to do some locking to make sure that the connection objects it stores are accessible to multiple Assemblers.

func NewStreamPool

func NewStreamPool(factory StreamFactory) *StreamPool

NewStreamPool creates a new connection pool. Streams will be created as necessary using the passed-in StreamFactory.
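
Continuing the hypothetical countingFactory from the StreamFactory sketch above, building a pool is a one-liner; the resulting pool would then normally be shared by one or more Assemblers (documented earlier in this package):

package sketch

import "github.com/mongodb/mongo-tools/mongoreplay" // assumed import path

// newCountingPool builds a StreamPool whose streams come from the
// hypothetical countingFactory defined in the earlier sketch.
func newCountingPool() *mongoreplay.StreamPool {
	return mongoreplay.NewStreamPool(countingFactory{})
}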

type TerminalStatRecorder

type TerminalStatRecorder struct {
	// contains filtered or unexported fields
}

TerminalStatRecorder records stats for terminal output

func (*TerminalStatRecorder) Close

func (dsr *TerminalStatRecorder) Close() error

Close closes the TerminalStatRecorder

func (*TerminalStatRecorder) RecordStat

func (dsr *TerminalStatRecorder) RecordStat(stat *OpStat)

RecordStat writes the stat to the terminal.

type UnknownOp

type UnknownOp struct {
	Header MsgHeader
	Body   []byte
}

UnknownOp is not a real mongo Op but represents an unrecognized or corrupted op

func (*UnknownOp) Abbreviated

func (op *UnknownOp) Abbreviated(chars int) string

Abbreviated returns a serialization of the UnknownOp, abbreviated so it doesn't exceed the given number of characters.

func (*UnknownOp) Execute

func (op *UnknownOp) Execute(session *mgo.Session) (*ReplyOp, error)

Execute doesn't do anything for an UnknownOp

func (*UnknownOp) FromReader

func (op *UnknownOp) FromReader(r io.Reader) error

FromReader extracts data from a serialized UnknownOp into its concrete structure.

func (*UnknownOp) Meta

func (op *UnknownOp) Meta() OpMetadata

Meta returns metadata about the UnknownOp, for which there is none

func (*UnknownOp) OpCode

func (op *UnknownOp) OpCode() OpCode

OpCode returns the OpCode for an UnknownOp.

func (*UnknownOp) String

func (op *UnknownOp) String() string

type UnresolvedOpInfo

type UnresolvedOpInfo struct {
	Stat     *OpStat
	ParsedOp Op
	Op       *RecordedOp
}

UnresolvedOpInfo holds information about a request op that has not yet been paired with its reply.

type UpdateOp

type UpdateOp struct {
	Header MsgHeader
	mgo.UpdateOp
}

UpdateOp is used to update a document in a collection. http://docs.mongodb.org/meta-driver/latest/legacy/mongodb-wire-protocol/#op-update

func (*UpdateOp) Abbreviated

func (op *UpdateOp) Abbreviated(chars int) string

Abbreviated returns a serialization of the UpdateOp, abbreviated so it doesn't exceed the given number of characters.

func (*UpdateOp) Execute

func (op *UpdateOp) Execute(socket *mgo.MongoSocket) (Replyable, error)

Execute performs the UpdateOp on a given socket, yielding the reply when successful (and an error otherwise).

func (*UpdateOp) FromReader

func (op *UpdateOp) FromReader(r io.Reader) error

FromReader extracts data from a serialized UpdateOp into its concrete structure.

func (*UpdateOp) Meta

func (op *UpdateOp) Meta() OpMetadata

Meta returns metadata about the UpdateOp, useful for analysis of traffic.

func (*UpdateOp) OpCode

func (op *UpdateOp) OpCode() OpCode

OpCode returns the OpCode for an UpdateOp.

func (*UpdateOp) String

func (op *UpdateOp) String() string

type VersionOptions

type VersionOptions struct {
	Version bool `long:"version" description:"display the version and exit"`
}

func (*VersionOptions) PrintVersion

func (o *VersionOptions) PrintVersion() bool

PrintVersion prints the tool version to stdout and returns whether or not the version flag was specified.
