csvutil

README

Package csvutil provides fast and idiomatic mapping between CSV and Go values.

This package does not provide a CSV parser itself; it is built on the Reader and Writer interfaces, which are implemented by e.g. the standard library's encoding/csv package. This makes it possible to plug in any other CSV reader or writer that may be more performant.

Installation

go get github.com/jszwec/csvutil

Requirements

  • Go 1.7+

Index

  1. Examples
    1. Unmarshal
    2. Marshal
    3. Unmarshal and metadata
    4. But my CSV file has no header...
    5. Decoder.Map - data normalization
    6. Different separator/delimiter
    7. Decoder and interface values
    8. Custom time.Time format
  2. Performance
    1. Unmarshal
    2. Marshal

Example

Unmarshal

Nice and easy: Unmarshal uses the standard csv.Reader with its default options. Use Decoder for streaming and more advanced use cases.

	var csvInput = []byte(`
name,age,CreatedAt
jacek,26,2012-04-01T15:00:00Z
john,,0001-01-01T00:00:00Z`,
	)

	type User struct {
		Name      string `csv:"name"`
		Age       int    `csv:"age,omitempty"`
		CreatedAt time.Time
	}

	var users []User
	if err := csvutil.Unmarshal(csvInput, &users); err != nil {
		fmt.Println("error:", err)
	}

	for _, u := range users {
		fmt.Printf("%+v\n", u)
	}

	// Output:
	// {Name:jacek Age:26 CreatedAt:2012-04-01 15:00:00 +0000 UTC}
	// {Name:john Age:0 CreatedAt:0001-01-01 00:00:00 +0000 UTC}
Marshal

Marshal uses the standard csv.Writer with its default options. Use Encoder for streaming or to use a different Writer.

	type Address struct {
		City    string
		Country string
	}

	type User struct {
		Name string
		Address
		Age       int `csv:"age,omitempty"`
		CreatedAt time.Time
	}

	users := []User{
		{
			Name:      "John",
			Address:   Address{"Boston", "USA"},
			Age:       26,
			CreatedAt: time.Date(2010, 6, 2, 12, 0, 0, 0, time.UTC),
		},
		{
			Name:    "Alice",
			Address: Address{"SF", "USA"},
		},
	}

	b, err := csvutil.Marshal(users)
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println(string(b))

	// Output:
	// Name,City,Country,age,CreatedAt
	// John,Boston,USA,26,2010-06-02T12:00:00Z
	// Alice,SF,USA,,0001-01-01T00:00:00Z
Unmarshal and metadata

Your CSV input may not always have the same header. In addition to your base fields you may get extra metadata that you would still like to store. Decoder provides the Unused method, which after each call to Decode reports which header indexes were not used during decoding. Based on that, it is possible to handle and store all these extra values.

	type User struct {
		Name      string            `csv:"name"`
		City      string            `csv:"city"`
		Age       int               `csv:"age"`
		OtherData map[string]string `csv:"-"`
	}

	csvReader := csv.NewReader(strings.NewReader(`
name,age,city,zip
alice,25,la,90005
bob,30,ny,10005`))

	dec, err := csvutil.NewDecoder(csvReader)
	if err != nil {
		log.Fatal(err)
	}

	header := dec.Header()
	var users []User
	for {
		u := User{OtherData: make(map[string]string)}

		if err := dec.Decode(&u); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}

		for _, i := range dec.Unused() {
			u.OtherData[header[i]] = dec.Record()[i]
		}
		users = append(users, u)
	}

	fmt.Println(users)

	// Output:
	// [{alice la 25 map[zip:90005]} {bob ny 30 map[zip:10005]}]
But my CSV file has no header...

Some CSV files have no header, but if you know what it should look like, it is possible to define a struct and generate the header from it. All that is left to do is pass it to the decoder.

	type User struct {
		ID   int
		Name string
		Age  int `csv:",omitempty"`
		City string
	}

	csvReader := csv.NewReader(strings.NewReader(`
1,John,27,la
2,Bob,,ny`))

	// In a real application this should be done once, e.g. in an init function.
	userHeader, err := csvutil.Header(User{}, "csv")
	if err != nil {
		log.Fatal(err)
	}

	dec, err := csvutil.NewDecoder(csvReader, userHeader...)
	if err != nil {
		log.Fatal(err)
	}

	var users []User
	for {
		var u User
		if err := dec.Decode(&u); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		users = append(users, u)
	}

	fmt.Printf("%+v", users)

	// Output:
	// [{ID:1 Name:John Age:27 City:la} {ID:2 Name:Bob Age:0 City:ny}]
Decoder.Map - data normalization

The Decoder's Map function is a powerful tool that can help clean up or normalize the incoming data before the actual decoding takes place.

Let's say we want to decode some floats, and the CSV input contains NaN values represented by the string 'n/a'. An attempt to decode 'n/a' into a float will fail, because strconv.ParseFloat expects 'NaN'. Knowing that, we can implement a Map function that normalizes 'n/a' into 'NaN', but only for float types.

	dec, err := csvutil.NewDecoder(r)
	if err != nil {
		log.Fatal(err)
	}

	dec.Map = func(field, column string, v interface{}) string {
		if _, ok := v.(float64); ok && field == "n/a" {
			return "NaN"
		}
		return field
	}

Now our float64 fields will be decoded properly into NaN. What about float32, float type aliases and other NaN formats? A possible extension is sketched below; look at the full example for a complete treatment.
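
As a rough sketch, the same idea can be broadened to float32 fields and a few other common NaN spellings (the exact set of strings below is an assumption, not part of the package):

	dec.Map = func(field, column string, v interface{}) string {
		switch v.(type) {
		case float32, float64:
			switch field {
			case "n/a", "N/A", "-":
				return "NaN"
			}
		}
		return field
	}

Note that named types defined on top of float32 or float64 will not match these type-switch cases and would have to be listed explicitly.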

Different separator/delimiter

Some files may use a different value separator; TSV files, for example, use \t. The following examples show how to set up a Decoder and an Encoder for such a use case.

Decoder:
	csvReader := csv.NewReader(r)
	csvReader.Comma = '\t'

	dec, err := csvutil.NewDecoder(csvReader)
	if err != nil {
		log.Fatal(err)
	}

	var users []User
	for {
		var u User
		if err := dec.Decode(&u); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		users = append(users, u)
	}

Encoder:
	var buf bytes.Buffer

	w := csv.NewWriter(&buf)
	w.Comma = '\t'
	enc := csvutil.NewEncoder(w)

	for _, u := range users {
		if err := enc.Encode(u); err != nil {
			log.Fatal(err)
		}
	}

	w.Flush()
	if err := w.Error(); err != nil {
		log.Fatal(err)
	}
Decoder and interface values

In the case of interface struct fields, data is decoded into strings. However, if the Decoder finds that such a field was initialized with a pointer value of a specific type prior to decoding, it will try to decode the data into that type.

Why only pointer values? Because these values must be both addressable and settable; otherwise the Decoder would have to initialize these types on its own, which could result in losing some unexported information.

If the interface stores a non-pointer value, it will be replaced with a string.

The following example shows how this feature can be useful:

package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
	"io"
	"log"

	"github.com/jszwec/csvutil"
)

// Value defines one record in the csv input. In this example it is important
// that Type field is defined before Value. Decoder reads headers and values
// in the same order as struct fields are defined.
type Value struct {
	Type  string      `csv:"type"`
	Value interface{} `csv:"value"`
}

func main() {
	// Let's say our csv input defines variables with their types and values.
	data := []byte(`
type,value
string,string_value
int,10
`)

	dec, err := csvutil.NewDecoder(csv.NewReader(bytes.NewReader(data)))
	if err != nil {
		log.Fatal(err)
	}

	// we would like to read every variable and store their already parsed values
	// in the interface field. We can use Decoder.Map function to initialize
	// interface with proper values depending on the input.
	var value Value
	dec.Map = func(field, column string, v interface{}) string {
		if column == "type" {
			switch field {
			case "int": // csv input tells us that this variable contains an int.
				var n int
				value.Value = &n // let's initialize the interface with an int pointer.
			default:
				return field
			}
		}
		return field
	}

	for {
		value = Value{}
		if err := dec.Decode(&value); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}

		if value.Type == "int" {
			// our variable type is int, Map func already initialized our interface
			// as int pointer, so we can safely cast it and use it.
			n, ok := value.Value.(*int)
			if !ok {
				log.Fatal("expected value to be *int")
			}
			fmt.Printf("value_type: %s; value: (%T) %d\n", value.Type, value.Value, *n)
		} else {
			fmt.Printf("value_type: %s; value: (%T) %v\n", value.Type, value.Value, value.Value)
		}
	}

	// Output:
	// value_type: string; value: (string) string_value
	// value_type: int; value: (*int) 10
}
Custom time.Time format

The time.Time type can be used as-is in struct fields by both Decoder and Encoder, because both have built-in support for encoding.TextUnmarshaler and encoding.TextMarshaler. This means that by default time.Time is rendered in a specific format; look at MarshalText and UnmarshalText. This example shows how to override it.

type Time struct {
	time.Time
}

const format = "2006/01/02 15:04:05"

func (t Time) MarshalCSV() ([]byte, error) {
	var b [len(format)]byte
	return t.AppendFormat(b[:0], format), nil
}

func (t *Time) UnmarshalCSV(data []byte) error {
	tt, err := time.Parse(format, string(data))
	if err != nil {
		return err
	}
	*t = Time{Time: tt}
	return nil
}
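
A minimal usage sketch, assuming the Time wrapper above is in scope (the Event type and values are illustrative):

	type Event struct {
		Name      string `csv:"name"`
		CreatedAt Time   `csv:"created_at"`
	}

	events := []Event{{
		Name:      "release",
		CreatedAt: Time{Time: time.Date(2018, 9, 9, 12, 0, 0, 0, time.UTC)},
	}}

	b, err := csvutil.Marshal(events)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(b))

	// Output:
	// name,created_at
	// release,2018/09/09 12:00:00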

Performance

In the benchmarks below, csvutil provides the fastest encoding and decoding among the compared packages, with the lowest memory usage.

Unmarshal

benchmark code

csvutil:
BenchmarkUnmarshal/csvutil.Unmarshal/1_record-8         	  300000	      5852 ns/op	    6900 B/op	      32 allocs/op
BenchmarkUnmarshal/csvutil.Unmarshal/10_records-8       	  100000	     13946 ns/op	    7924 B/op	      41 allocs/op
BenchmarkUnmarshal/csvutil.Unmarshal/100_records-8      	   20000	     95234 ns/op	   18100 B/op	     131 allocs/op
BenchmarkUnmarshal/csvutil.Unmarshal/1000_records-8     	    2000	    903502 ns/op	  120652 B/op	    1031 allocs/op
BenchmarkUnmarshal/csvutil.Unmarshal/10000_records-8    	     200	   9273741 ns/op	 1134694 B/op	   10031 allocs/op
BenchmarkUnmarshal/csvutil.Unmarshal/100000_records-8   	      20	  94125839 ns/op	11628908 B/op	  100031 allocs/op
gocsv:
BenchmarkUnmarshal/gocsv.Unmarshal/1_record-8           	  200000	     10363 ns/op	    7651 B/op	      96 allocs/op
BenchmarkUnmarshal/gocsv.Unmarshal/10_records-8         	   50000	     31308 ns/op	   13747 B/op	     306 allocs/op
BenchmarkUnmarshal/gocsv.Unmarshal/100_records-8        	   10000	    237417 ns/op	   72499 B/op	    2379 allocs/op
BenchmarkUnmarshal/gocsv.Unmarshal/1000_records-8       	     500	   2264064 ns/op	  650135 B/op	   23082 allocs/op
BenchmarkUnmarshal/gocsv.Unmarshal/10000_records-8      	      50	  24189980 ns/op	 7023592 B/op	  230091 allocs/op
BenchmarkUnmarshal/gocsv.Unmarshal/100000_records-8     	       5	 264797120 ns/op	75483184 B/op	 2300104 allocs/op
easycsv:
BenchmarkUnmarshal/easycsv.ReadAll/1_record-8           	  100000	     13287 ns/op	    8855 B/op	      81 allocs/op
BenchmarkUnmarshal/easycsv.ReadAll/10_records-8         	   20000	     66767 ns/op	   24072 B/op	     391 allocs/op
BenchmarkUnmarshal/easycsv.ReadAll/100_records-8        	    3000	    586222 ns/op	  170537 B/op	    3454 allocs/op
BenchmarkUnmarshal/easycsv.ReadAll/1000_records-8       	     300	   5630293 ns/op	 1595662 B/op	   34057 allocs/op
BenchmarkUnmarshal/easycsv.ReadAll/10000_records-8      	      20	  60513920 ns/op	18870410 B/op	  340068 allocs/op
BenchmarkUnmarshal/easycsv.ReadAll/100000_records-8     	       2	 623618489 ns/op	190822456 B/op	 3400084 allocs/op
Marshal

benchmark code

csvutil:
BenchmarkMarshal/csvutil.Marshal/1_record-8         	  300000	      5501 ns/op	    6336 B/op	      26 allocs/op
BenchmarkMarshal/csvutil.Marshal/10_records-8       	  100000	     20647 ns/op	    7248 B/op	      36 allocs/op
BenchmarkMarshal/csvutil.Marshal/100_records-8      	   10000	    174656 ns/op	   24656 B/op	     127 allocs/op
BenchmarkMarshal/csvutil.Marshal/1000_records-8     	    1000	   1697202 ns/op	  164961 B/op	    1029 allocs/op
BenchmarkMarshal/csvutil.Marshal/10000_records-8    	     100	  16995940 ns/op	 1522412 B/op	   10032 allocs/op
BenchmarkMarshal/csvutil.Marshal/100000_records-8   	      10	 172411108 ns/op	22363382 B/op	  100036 allocs/op
gocsv:
BenchmarkMarshal/gocsv.Marshal/1_record-8           	  200000	      7202 ns/op	    5922 B/op	      83 allocs/op
BenchmarkMarshal/gocsv.Marshal/10_records-8         	   50000	     31821 ns/op	    9427 B/op	     390 allocs/op
BenchmarkMarshal/gocsv.Marshal/100_records-8        	    5000	    285885 ns/op	   52773 B/op	    3451 allocs/op
BenchmarkMarshal/gocsv.Marshal/1000_records-8       	     500	   2806405 ns/op	  452517 B/op	   34053 allocs/op
BenchmarkMarshal/gocsv.Marshal/10000_records-8      	      50	  28682052 ns/op	 4412157 B/op	  340065 allocs/op
BenchmarkMarshal/gocsv.Marshal/100000_records-8     	       5	 286836492 ns/op	51969227 B/op	 3400083 allocs/op

Documentation

Overview

Package csvutil provides fast and idiomatic mapping between CSV and Go values.

This package does not provide a CSV parser itself, it is based on the Reader and Writer interfaces which are implemented by eg. std csv package. This gives a possibility of choosing any other CSV writer or reader which may be more performant.

Constants

This section is empty.

Variables

var ErrFieldCount = errors.New("wrong number of fields in record")

ErrFieldCount is returned when the header's length doesn't match the length of the read record.

Functions

func Header(v interface{}, tag string) ([]string, error)

Header scans the provided struct type and generates a CSV header for it.

Field names are written in the same order as struct fields are defined. Fields of embedded structs are treated as if they were part of the outer struct. Embedded-type fields that are tagged are treated like any other field.

Unexported fields and fields with tag "-" are ignored.

Tagged fields have priority over non-tagged fields with the same name.

Following the Go visibility rules, if there are multiple fields with the same name (tagged or not) at the same level and the choice between them is ambiguous, then all these fields will be ignored.

It is a good practice to call Header once for each type. A suitable place for calling it is an init function. Look at the Decoder DecodingDataWithNoHeader example.

If tag is left empty the default "csv" will be used.

Header will return UnsupportedTypeError if the provided value is nil or is not a struct.

Example
package main

import (
	"fmt"
	"log"

	"github.com/jszwec/csvutil"
)

func main() {
	type User struct {
		ID    int
		Name  string
		Age   int `csv:",omitempty"`
		State int `csv:"-"`
		City  string
		ZIP   string `csv:"zip_code"`
	}

	header, err := csvutil.Header(User{}, "csv")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(header)
}
Output:

[ID Name Age City zip_code]

func Marshal

func Marshal(v interface{}) ([]byte, error)

Marshal returns the CSV encoding of slice v. If v is not a slice or elements are not structs then Marshal returns InvalidMarshalError.

Marshal uses the std encoding/csv.Writer with its default settings for csv encoding.

Marshal will always encode the CSV header, even for an empty slice.

For the exact encoding rules look at Encoder.Encode method.
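
For instance, marshaling an empty slice still produces just the header line (a small sketch; the Item type here is illustrative):

type Item struct {
	ID   int    `csv:"id"`
	Name string `csv:"name"`
}

b, err := csvutil.Marshal([]Item{})
// err == nil; string(b) == "id,name\n"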

Example
package main

import (
	"fmt"
	"time"

	"github.com/jszwec/csvutil"
)

func main() {
	type Address struct {
		City    string
		Country string
	}

	type User struct {
		Name string
		Address
		Age       int `csv:"age,omitempty"`
		CreatedAt time.Time
	}

	users := []User{
		{
			Name:      "John",
			Address:   Address{"Boston", "USA"},
			Age:       26,
			CreatedAt: time.Date(2010, 6, 2, 12, 0, 0, 0, time.UTC),
		},
		{
			Name:    "Alice",
			Address: Address{"SF", "USA"},
		},
	}

	b, err := csvutil.Marshal(users)
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println(string(b))

}
Output:

Name,City,Country,age,CreatedAt
John,Boston,USA,26,2010-06-02T12:00:00Z
Alice,SF,USA,,0001-01-01T00:00:00Z
Example (CustomMarshalCSV)
package main

import (
	"fmt"

	"github.com/jszwec/csvutil"
)

type Status uint8

const (
	Unknown = iota
	Success
	Failure
)

func (s Status) MarshalCSV() ([]byte, error) {
	switch s {
	case Success:
		return []byte("success"), nil
	case Failure:
		return []byte("failure"), nil
	default:
		return []byte("unknown"), nil
	}
}

type Job struct {
	ID     int
	Status Status
}

func main() {
	jobs := []Job{
		{1, Success},
		{2, Failure},
	}

	b, err := csvutil.Marshal(jobs)
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println(string(b))

}
Output:

ID,Status
1,success
2,failure

func Unmarshal

func Unmarshal(data []byte, v interface{}) error

Unmarshal parses the CSV-encoded data and stores the result in the slice pointed to by v. If v is nil or not a pointer to a slice, Unmarshal returns an InvalidUnmarshalError.

Unmarshal uses the std encoding/csv.Reader for parsing and csvutil.Decoder for populating the struct elements in the provided slice. For exact decoding rules look at the Decoder's documentation.

The first line in data is treated as a header. Decoder will use it to map CSV columns to struct fields.

On success, the provided slice is reinitialized and its content fully replaced with the decoded data.

Example
package main

import (
	"fmt"
	"time"

	"github.com/jszwec/csvutil"
)

func main() {
	var csvInput = []byte(`
name,age,CreatedAt
jacek,26,2012-04-01T15:00:00Z
john,,0001-01-01T00:00:00Z`,
	)

	type User struct {
		Name      string `csv:"name"`
		Age       int    `csv:"age,omitempty"`
		CreatedAt time.Time
	}

	var users []User
	if err := csvutil.Unmarshal(csvInput, &users); err != nil {
		fmt.Println("error:", err)
	}

	for _, u := range users {
		fmt.Printf("%+v\n", u)
	}

}
Output:

{Name:jacek Age:26 CreatedAt:2012-04-01 15:00:00 +0000 UTC}
{Name:john Age:0 CreatedAt:0001-01-01 00:00:00 +0000 UTC}

Types

type Decoder

type Decoder struct {
	// Tag defines which key in the struct field's tag to scan for names and
	// options (Default: 'csv').
	Tag string

	// If not nil, Map is a function that is called for each field in the csv
	// record before decoding the data. It allows mapping certain string values
	// for specific columns or types to a known format. Decoder calls Map with
	// the current column name (taken from the header) and a zero non-pointer
	// value of the type into which it is going to decode the data.
	// Implementations should use type assertions to recognize the type.
	//
	// A good example use case for Map: if NaN values are represented by e.g.
	// the 'n/a' string, a Map function targeting all float fields could map
	// 'n/a' to 'NaN' to allow successful decoding.
	//
	// Use Map with caution. If the requirements on column or type are not met,
	// Map should return 'field' unchanged, since it is the original value that
	// was read from the csv input; returning it indicates no change.
	//
	// If the struct field is an interface, v will be of type string, unless
	// the struct field contains a settable pointer value - then v will be
	// a zero value of that type.
	//
	// Map must be set before the first call to Decode and not changed after it.
	Map func(field, col string, v interface{}) string
	// contains filtered or unexported fields
}

A Decoder reads and decodes string records into structs.

Example (CustomUnmarshalCSV)
package main

import (
	"fmt"
	"strconv"

	"github.com/jszwec/csvutil"
)

type Bar int

func (b *Bar) UnmarshalCSV(data []byte) error {
	n, err := strconv.Atoi(string(data))
	*b = Bar(n)
	return err
}

type Foo struct {
	Int int `csv:"int"`
	Bar Bar `csv:"bar"`
}

func main() {
	var csvInput = []byte(`
int,bar
5,10
6,11`)

	var foos []Foo
	if err := csvutil.Unmarshal(csvInput, &foos); err != nil {
		fmt.Println("error:", err)
	}

	fmt.Printf("%+v", foos)

}
Output:

[{Int:5 Bar:10} {Int:6 Bar:11}]
Example (DecodeEmbedded)
package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"strings"

	"github.com/jszwec/csvutil"
)

func main() {
	type Address struct {
		ID    int    `csv:"id"` // same field as in User - this one will be empty
		City  string `csv:"city"`
		State string `csv:"state"`
	}

	type User struct {
		Address
		ID   int    `csv:"id"` // same field as in Address - this one wins
		Name string `csv:"name"`
		Age  int    `csv:"age"`
	}

	csvReader := csv.NewReader(strings.NewReader(
		"id,name,age,city,state\n" +
			"1,alice,25,la,ca\n" +
			"2,bob,30,ny,ny"))

	dec, err := csvutil.NewDecoder(csvReader)
	if err != nil {
		log.Fatal(err)
	}

	var users []User
	for {
		var u User

		if err := dec.Decode(&u); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}

		users = append(users, u)
	}

	fmt.Println(users)

}
Output:

[{{0 la ca} 1 alice 25} {{0 ny ny} 2 bob 30}]
Example (DecodingDataWithNoHeader)
package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
	"io"
	"log"

	"github.com/jszwec/csvutil"
)

type User struct {
	ID    int
	Name  string
	Age   int `csv:",omitempty"`
	State int `csv:"-"`
	City  string
	ZIP   string `csv:"zip_code"`
}

var userHeader []string

func init() {
	h, err := csvutil.Header(User{}, "csv")
	if err != nil {
		log.Fatal(err)
	}
	userHeader = h
}

func main() {
	data := []byte(`
1,John,27,la,90005
2,Bob,,ny,10005`)

	r := csv.NewReader(bytes.NewReader(data))

	dec, err := csvutil.NewDecoder(r, userHeader...)
	if err != nil {
		log.Fatal(err)
	}

	var users []User
	for {
		var u User

		if err := dec.Decode(&u); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		users = append(users, u)
	}

	fmt.Printf("%+v", users)

}
Output:

[{ID:1 Name:John Age:27 State:0 City:la ZIP:90005} {ID:2 Name:Bob Age:0 State:0 City:ny ZIP:10005}]
Example (InterfaceValues)
package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
	"io"
	"log"

	"github.com/jszwec/csvutil"
)

// Value defines one record in the csv input. In this example it is important
// that Type field is defined before Value. Decoder reads headers and values
// in the same order as struct fields are defined.
type Value struct {
	Type  string      `csv:"type"`
	Value interface{} `csv:"value"`
}

func main() {
	// Let's say our csv input defines variables with their types and values.
	data := []byte(`
type,value
string,string_value
int,10
`)

	dec, err := csvutil.NewDecoder(csv.NewReader(bytes.NewReader(data)))
	if err != nil {
		log.Fatal(err)
	}

	// we would like to read every variable and store their already parsed values
	// in the interface field. We can use Decoder.Map function to initialize
	// interface with proper values depending on the input.
	var value Value
	dec.Map = func(field, column string, v interface{}) string {
		if column == "type" {
			switch field {
			case "int": // csv input tells us that this variable contains an int.
				var n int
				value.Value = &n // let's initialize the interface with an int pointer.
			default:
				return field
			}
		}
		return field
	}

	for {
		value = Value{}
		if err := dec.Decode(&value); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}

		if value.Type == "int" {
			// our variable type is int, Map func already initialized our interface
			// as int pointer, so we can safely cast it and use it.
			n, ok := value.Value.(*int)
			if !ok {
				log.Fatal("expected value to be *int")
			}
			fmt.Printf("value_type: %s; value: (%T) %d\n", value.Type, value.Value, *n)
		} else {
			fmt.Printf("value_type: %s; value: (%T) %v\n", value.Type, value.Value, value.Value)
		}
	}

}
Output:

value_type: string; value: (string) string_value
value_type: int; value: (*int) 10

func NewDecoder

func NewDecoder(r Reader, header ...string) (dec *Decoder, err error)

NewDecoder returns a new decoder that reads from r.

Decoder will match struct fields according to the given header.

If header is empty NewDecoder will read one line and treat it as a header.

Records coming from r must be of the same length as the header.

NewDecoder may return io.EOF if there is no data in r and no header was provided by the caller.

func (*Decoder) Decode

func (d *Decoder) Decode(v interface{}) (err error)

Decode reads the next string record from its input and stores it in the value pointed to by v which must be a non-nil struct pointer.

Decode matches all exported struct fields based on the header. Struct fields can be adjusted by using tags.

The "omitempty" option specifies that the field should be omitted from the decoding if the record's field is an empty string.

Examples of struct field tags and their meanings:

// Decode matches this field with "myName" header column.
Field int `csv:"myName"`

// Decode matches this field with "Field" header column.
Field int

// Decode matches this field with "myName" header column and decoding is not
// called if record's field is an empty string.
Field int `csv:"myName,omitempty"`

// Decode matches this field with "Field" header column and decoding is not
// called if record's field is an empty string.
Field int `csv:",omitempty"`

// Decode ignores this field.
Field int `csv:"-"`

By default Decode looks for the "csv" tag, but this can be changed by setting the Decoder.Tag field.

To Decode into a custom type v must implement csvutil.Unmarshaler or encoding.TextUnmarshaler.

Anonymous struct fields with tags are treated like normal fields and they must implement csvutil.Unmarshaler or encoding.TextUnmarshaler.

Anonymous struct fields without tags are populated just as if they were part of the main struct. However, fields in the main struct take priority and are populated first. If the main struct and an anonymous struct field have fields with the same name, the main struct's fields are populated.

Fields of type []byte expect the data to be base64 encoded strings.

Float fields are decoded to NaN if the string value is 'NaN'. This check is case-insensitive.

Interface fields are decoded to strings unless they contain a settable pointer value.

Example
package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"strings"

	"github.com/jszwec/csvutil"
)

func main() {
	type User struct {
		ID   *int   `csv:"id,omitempty"`
		Name string `csv:"name"`
		City string `csv:"city"`
		Age  int    `csv:"age"`
	}

	csvReader := csv.NewReader(strings.NewReader(`
id,name,age,city
,alice,25,la
,bob,30,ny`))

	dec, err := csvutil.NewDecoder(csvReader)
	if err != nil {
		log.Fatal(err)
	}

	var users []User
	for {
		var u User
		if err := dec.Decode(&u); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		users = append(users, u)
	}

	fmt.Println(users)

}
Output:

[{<nil> alice la 25} {<nil> bob ny 30}]

func (*Decoder) Header

func (d *Decoder) Header() []string

Header returns the first line that came from the reader, or the header defined by the caller.

func (*Decoder) Record

func (d *Decoder) Record() []string

Record returns the most recently read record. The slice is valid until the next call to Decode.

func (*Decoder) Unused

func (d *Decoder) Unused() []int

Unused returns a list of column indexes that were not used during decoding due to the lack of a matching struct field.

Example
package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"strings"

	"github.com/jszwec/csvutil"
)

func main() {
	type User struct {
		Name      string            `csv:"name"`
		City      string            `csv:"city"`
		Age       int               `csv:"age"`
		OtherData map[string]string `csv:"-"`
	}

	csvReader := csv.NewReader(strings.NewReader(`
name,age,city,zip
alice,25,la,90005
bob,30,ny,10005`))

	dec, err := csvutil.NewDecoder(csvReader)
	if err != nil {
		log.Fatal(err)
	}

	header := dec.Header()
	var users []User
	for {
		u := User{OtherData: make(map[string]string)}

		if err := dec.Decode(&u); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}

		for _, i := range dec.Unused() {
			u.OtherData[header[i]] = dec.Record()[i]
		}
		users = append(users, u)
	}

	fmt.Println(users)

}
Output:

[{alice la 25 map[zip:90005]} {bob ny 30 map[zip:10005]}]

type Encoder

type Encoder struct {
	// Tag defines which key in the struct field's tag to scan for names and
	// options (Default: 'csv').
	Tag string

	// If AutoHeader is true, a struct header is encoded during the first call
	// to Encode automatically (Default: true).
	AutoHeader bool
	// contains filtered or unexported fields
}

Encoder writes CSV representations of structs to the output stream.
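
Setting AutoHeader to false is useful when the header has already been written, for example when appending rows to existing output. A minimal sketch (the User type and data are illustrative):

	type User struct {
		Name string `csv:"name"`
		Age  int    `csv:"age"`
	}

	var buf bytes.Buffer
	w := csv.NewWriter(&buf)

	enc := csvutil.NewEncoder(w)
	enc.AutoHeader = false // rows only; the header is assumed to exist already

	if err := enc.Encode(User{Name: "alice", Age: 25}); err != nil {
		log.Fatal(err)
	}

	w.Flush()
	if err := w.Error(); err != nil {
		log.Fatal(err)
	}

	fmt.Print(buf.String()) // prints: alice,25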

func NewEncoder

func NewEncoder(w Writer) *Encoder

NewEncoder returns a new encoder that writes to w.

func (*Encoder) Encode

func (e *Encoder) Encode(v interface{}) error

Encode writes the CSV encoding of v to the output stream. The provided argument v must be a non-nil struct.

Only the exported fields will be encoded.

The first call to Encode will write a header unless EncodeHeader was called first or AutoHeader is false. Header names can be customized by using tags ('csv' by default); otherwise the original field names are used.

The header and fields are written in the same order as struct fields are defined. Fields of embedded structs are treated as if they were part of the outer struct. Embedded-type fields that are tagged are treated like any other field, but they have to implement the Marshaler or encoding.TextMarshaler interfaces.

The Marshaler interface has priority over encoding.TextMarshaler.

Tagged fields have priority over non-tagged fields with the same name.

Following the Go visibility rules, if there are multiple fields with the same name (tagged or not) at the same level and the choice between them is ambiguous, then all these fields will be ignored.

Nil values are encoded as empty strings. The same happens if the 'omitempty' tag is set and the value is a zero value like 0, false or a nil interface.

Bool types are encoded as 'true' or 'false'.

Float types are encoded using strconv.FormatFloat with precision -1 and the 'G' format. NaN values are encoded as the string 'NaN'.

Fields of type []byte are encoded as base64-encoded strings.

Fields can be excluded from encoding by using '-' tag option.

Examples of struct tags:

// Field appears as 'myName' header in CSV encoding.
Field int `csv:"myName"`

// Field appears as 'Field' header in CSV encoding.
Field int

// Field appears as 'myName' header in CSV encoding and is an empty string
// if Field is 0.
Field int `csv:"myName,omitempty"`

// Field appears as 'Field' header in CSV encoding and is an empty string
// if Field is 0.
Field int `csv:",omitempty"`

// Encode ignores this field.
Field int `csv:"-"`

Encode doesn't flush data. The caller is responsible for calling Flush() if the used Writer supports it.

Example
package main

import (
	"bytes"
	"encoding/csv"
	"fmt"

	"github.com/jszwec/csvutil"
)

func main() {
	type Address struct {
		City    string
		Country string
	}

	type User struct {
		Name string
		Address
		Age int `csv:"age,omitempty"`
	}

	users := []User{
		{Name: "John", Address: Address{"Boston", "USA"}, Age: 26},
		{Name: "Bob", Address: Address{"LA", "USA"}, Age: 27},
		{Name: "Alice", Address: Address{"SF", "USA"}},
	}

	var buf bytes.Buffer
	w := csv.NewWriter(&buf)
	enc := csvutil.NewEncoder(w)

	for _, u := range users {
		if err := enc.Encode(u); err != nil {
			fmt.Println("error:", err)
		}
	}

	w.Flush()
	if err := w.Error(); err != nil {
		fmt.Println("error:", err)
	}

	fmt.Println(buf.String())

}
Output:

Name,City,Country,age
John,Boston,USA,26
Bob,LA,USA,27
Alice,SF,USA,

func (*Encoder) EncodeHeader added in v1.1.0

func (e *Encoder) EncodeHeader(v interface{}) error

EncodeHeader writes the CSV header of the provided struct value to the output stream. The provided argument v must be a struct value.

The first call to Encode will not write a header if EncodeHeader was called before it. This method can be called when a data set could be empty but a header is still desired.

EncodeHeader is like Header function, but it works with the Encoder and writes directly to the output stream. Look at Header documentation for the exact header encoding rules.

Example
package main

import (
	"bytes"
	"encoding/csv"
	"fmt"

	"github.com/jszwec/csvutil"
)

func main() {
	type User struct {
		Name string
		Age  int `csv:"age,omitempty"`
	}

	var buf bytes.Buffer
	w := csv.NewWriter(&buf)
	enc := csvutil.NewEncoder(w)

	if err := enc.EncodeHeader(User{}); err != nil {
		fmt.Println("error:", err)
	}

	w.Flush()
	if err := w.Error(); err != nil {
		fmt.Println("error:", err)
	}

	fmt.Println(buf.String())

}
Output:

Name,age

type InvalidDecodeError

type InvalidDecodeError struct {
	Type reflect.Type
}

An InvalidDecodeError describes an invalid argument passed to Decode. (The argument to Decode must be a non-nil struct pointer)

func (*InvalidDecodeError) Error

func (e *InvalidDecodeError) Error() string

type InvalidEncodeError

type InvalidEncodeError struct {
	Type reflect.Type
}

InvalidEncodeError is returned by Encode when the provided value was invalid.

func (*InvalidEncodeError) Error

func (e *InvalidEncodeError) Error() string

type InvalidMarshalError

type InvalidMarshalError struct {
	Type reflect.Type
}

InvalidMarshalError is returned by Marshal when the provided value was invalid.

func (*InvalidMarshalError) Error

func (e *InvalidMarshalError) Error() string

type InvalidUnmarshalError

type InvalidUnmarshalError struct {
	Type reflect.Type
}

An InvalidUnmarshalError describes an invalid argument passed to Unmarshal. (The argument to Unmarshal must be a non-nil pointer to a slice of structs.)

func (*InvalidUnmarshalError) Error

func (e *InvalidUnmarshalError) Error() string

type Marshaler

type Marshaler interface {
	MarshalCSV() ([]byte, error)
}

Marshaler is the interface implemented by types that can marshal themselves into a valid string.

type MarshalerError

type MarshalerError struct {
	Type          reflect.Type
	MarshalerType string
	Err           error
}

MarshalerError is returned by the Encoder when MarshalCSV or MarshalText returns an error.

func (*MarshalerError) Error

func (e *MarshalerError) Error() string

type Reader

type Reader interface {
	Read() ([]string, error)
}

Reader provides the interface for reading a single CSV record.

If there is no data left to be read, Read returns (nil, io.EOF).

It is implemented by csv.Reader.
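
Because Decoder only needs this single method, any record source can be adapted. A small sketch of a Reader backed by an in-memory slice of records (this type is not part of the package):

type sliceReader struct {
	records [][]string
}

// Read returns one record per call and io.EOF once all records are consumed.
func (r *sliceReader) Read() ([]string, error) {
	if len(r.records) == 0 {
		return nil, io.EOF
	}
	rec := r.records[0]
	r.records = r.records[1:]
	return rec, nil
}

Such a value can be passed directly to NewDecoder, e.g. csvutil.NewDecoder(&sliceReader{records: [][]string{{"name", "age"}, {"alice", "25"}}}).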

type UnmarshalTypeError

type UnmarshalTypeError struct {
	Value string       // string value
	Type  reflect.Type // type of Go value it could not be assigned to
}

An UnmarshalTypeError describes a string value that was not appropriate for a value of a specific Go type.

func (*UnmarshalTypeError) Error

func (e *UnmarshalTypeError) Error() string

type Unmarshaler

type Unmarshaler interface {
	UnmarshalCSV([]byte) error
}

Unmarshaler is the interface implemented by types that can unmarshal a single record's field description of themselves.

type UnsupportedTypeError

type UnsupportedTypeError struct {
	Type reflect.Type
}

An UnsupportedTypeError is returned when attempting to encode or decode a value of an unsupported type.

func (*UnsupportedTypeError) Error

func (e *UnsupportedTypeError) Error() string

type Writer

type Writer interface {
	Write([]string) error
}

Writer provides the interface for writing a single CSV record.

It is implemented by csv.Writer.
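
Similarly, any destination that accepts string records can implement Writer. A small sketch that collects encoded records in memory (this type is not part of the package):

type recordCollector struct {
	records [][]string
}

// Write appends a copy of the record; the encoder may reuse the passed slice.
func (w *recordCollector) Write(record []string) error {
	w.records = append(w.records, append([]string(nil), record...))
	return nil
}

A value of this type can be passed to NewEncoder, e.g. csvutil.NewEncoder(&recordCollector{}).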
