Documentation
Overview
Package snappy implements decompression for the Hadoop format of snappy, a compression scheme used within the Hadoop ecosystem and HDFS.
Example
package main

import (
	"bytes"
	"fmt"
	"io"

	snappy "github.com/qualtrics/hadoop-snappy"
)

func main() {
	// encodedData is the string "Hello, world!" encoded using the hadoop-snappy compression format.
	encodedData := []byte{0x00, 0x00, 0x00, 0x0D, 0x00, 0x00, 0x00, 0x0F, 0x0D, 0x30, 0x48, 0x65, 0x6C, 0x6C, 0x6F, 0x2C, 0x20, 0x77, 0x6F, 0x72, 0x6C, 0x64, 0x21}

	r := snappy.NewReader(bytes.NewReader(encodedData))

	output, err := io.ReadAll(r)
	if err != nil {
		panic(err)
	}

	fmt.Printf("%s\n", output)
}
Output:

Hello, world!
Index
Examples
Constants
This section is empty.
Variables
This section is empty.
Functions
This section is empty.
Types
type Reader
type Reader struct {
// contains filtered or unexported fields
}
Reader wraps a hadoop-snappy compressed data stream and decompresses the stream as it is read by the caller.
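Reader is consumed through the standard io.Reader interface (the package example above passes it to io.ReadAll), so it composes with the standard library's streaming helpers. The sketch below assumes a hypothetical file named data.snappy that was written with the Hadoop snappy codec, and streams its decompressed contents to standard output with io.Copy instead of buffering the whole payload in memory.

package main

import (
	"io"
	"os"

	snappy "github.com/qualtrics/hadoop-snappy"
)

func main() {
	// data.snappy is a hypothetical file compressed with the Hadoop snappy codec.
	f, err := os.Open("data.snappy")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Reader decompresses transparently as io.Copy pulls data through it,
	// so the full payload is never held in memory at once.
	if _, err := io.Copy(os.Stdout, snappy.NewReader(f)); err != nil {
		panic(err)
	}
}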
func NewReader
NewReader returns a Reader that reads the hadoop-snappy compressed data stream provided by the input reader.
Reading from an input stream that is not hadoop-snappy compressed results in undefined behavior. Because the format has no data signature by which it can be detected, the reader can only attempt to decode the stream; it will most likely return an error, but it may return garbage data instead.
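As a sketch of that failure mode, the program below feeds plain text to the reader and checks for a non-nil error. The exact error value is not specified by this package, so the code does not inspect it beyond err != nil.

package main

import (
	"bytes"
	"fmt"
	"io"

	snappy "github.com/qualtrics/hadoop-snappy"
)

func main() {
	// Plain text, not hadoop-snappy compressed. Its leading bytes will be
	// misread as block and chunk lengths, so decoding is expected to fail,
	// though the package makes no guarantee about the exact error returned.
	notCompressed := []byte("this is not a hadoop-snappy stream")

	r := snappy.NewReader(bytes.NewReader(notCompressed))
	if _, err := io.ReadAll(r); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println("decode unexpectedly succeeded; the output may be garbage")
}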