Documentation
Overview ¶
Package csvmap wraps the standard library's encoding/csv package to support reading and writing maps to and from csv files.
Because this package only wraps encoding/csv, it only supports the csv file format specified in RFC 4180. Please see the documentation for encoding/csv for more details.
This package assumes that the first row of the csv file contains header data which provides the keys for the following map access. Reading:
    Header1,Header2,Header3
    Field1,Field2,Field3
results in:
{"Header1":"Field1", "Header2":"Field2", "Header3":"Field3"}
For files with lines of additional header information, the Discard function is provided to remove these before reading the header row.
A Writer type is also provided for writing mapped data to file:
    w := NewWriter(..., []string{"Header1", "Header2", "Header3"})
    w.Write(map[string]string{"Header1": "Field1", "Header2": "Field2", "Header3": "Field3"})
    w.Flush()
results in:
    Header1,Header2,Header3
    Field1,Field2,Field3
Index ¶
Examples ¶
Constants ¶
This section is empty.
Variables ¶
var (
	// ErrDuplicateHeaders is returned when there are duplicated items in the
	// header row.
	ErrDuplicateHeaders = errors.New("duplicate headers found")

	// ErrHeaderSet is returned when trying to discard lines after the headers
	// have already been read - either through Headers() or Read/ReadAll.
	ErrHeaderSet = errors.New("headers set, can't discard lines")
)
Functions ¶
This section is empty.
Types ¶
type Reader ¶
type Reader struct {
	// Comma is the field delimiter.
	// It is set to comma (',') by NewReader.
	Comma rune

	// Comment, if not 0, is the comment character. Lines beginning with the
	// Comment character without preceding whitespace are ignored.
	// With leading whitespace the Comment character becomes part of the
	// field, even if TrimLeadingSpace is true.
	Comment rune

	// If LazyQuotes is true, a quote may appear in an unquoted field and a
	// non-doubled quote may appear in a quoted field.
	LazyQuotes bool

	// If TrimLeadingSpace is true, leading white space in a field is ignored.
	// This is done even if the field delimiter, Comma, is white space.
	TrimLeadingSpace bool
	// contains filtered or unexported fields
}
A Reader returns records (a map of values) from a csv-encoded file.
As returned by NewReader, a Reader expects input conforming to RFC 4180. The exported fields can be changed to customize the details before the first call to Headers/Read/ReadAll.
The header row will be read on the first call to Headers/Read/ReadAll. If there are duplicated keys in the header, an ErrDuplicateHeaders error will be returned at this point.
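The duplicate check itself is simple to sketch. The helper below is hypothetical (not part of csvmap's API) and uses only the standard library, but it shows the kind of check that would produce ErrDuplicateHeaders:

```go
package main

import (
	"encoding/csv"
	"errors"
	"fmt"
	"strings"
)

var errDuplicateHeaders = errors.New("duplicate headers found")

// checkHeaders is a hypothetical sketch of a duplicate-header check:
// it records each header it has seen and fails on the first repeat.
func checkHeaders(headers []string) error {
	seen := make(map[string]bool, len(headers))
	for _, h := range headers {
		if seen[h] {
			return errDuplicateHeaders
		}
		seen[h] = true
	}
	return nil
}

func main() {
	r := csv.NewReader(strings.NewReader("name,alias,name\n"))
	headers, _ := r.Read()
	fmt.Println(checkHeaders(headers)) // prints "duplicate headers found"
}
```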
Example ¶
package main

import (
	"fmt"
	"io"
	"log"
	"strings"

	"github.com/andrewcharlton/csvmap"
)

func main() {
	in := `name,alias,superpower
Logan,Wolverine,"Super healing and adamantium claws"
Charles Xavier,Professor X,Telepathy
`
	r := csvmap.NewReader(strings.NewReader(in))
	for {
		record, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("Name:", record["name"])
		fmt.Println("Alias:", record["alias"])
		fmt.Println("Superpower:", record["superpower"])
		fmt.Println("")
	}
}
Output:

    Name: Logan
    Alias: Wolverine
    Superpower: Super healing and adamantium claws

    Name: Charles Xavier
    Alias: Professor X
    Superpower: Telepathy
func (*Reader) Discard ¶
Discard ignores the first n lines of the input reader before reading the headers. It should be called before the first call to Headers/Read/ReadAll, otherwise it will return an ErrHeaderSet error.
If there are insufficient lines to discard, it will return an io.EOF error.
func (*Reader) Read ¶
Read reads one record (a map of fields to values). If the record has an unexpected number of fields, Read returns a map of the values present, along with a csv.ErrFieldCount error. Except for that case, Read always returns either a non-nil record or a non-nil error, but not both. If there is no data left to be read, Read returns nil, io.EOF.
On the first call to Read/ReadAll, if the headers have not already been read, this will be done automatically.
type Writer ¶
type Writer struct {
	Comma   rune // Field delimiter (set to ',' by NewWriter)
	UseCRLF bool // True to use \r\n as the line terminator
	// contains filtered or unexported fields
}
A Writer writes records to a csv-encoded file.
As returned by NewWriter, a Writer writes records terminated by a newline and uses ',' as the field delimiter. The exported fields can be changed to customize the details before the call to WriteHeaders.
Comma is the field delimiter.
If UseCRLF is true, the Writer ends each record with \r\n instead of \n.
Example ¶
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/andrewcharlton/csvmap"
)

func main() {
	headers := []string{"Name", "Alias", "Superpower"}
	data := []map[string]string{
		{"Name": "Logan", "Alias": "Wolverine", "Superpower": "Super healing"},
		{"Name": "Charles Xavier", "Alias": "Professor X", "Superpower": "Telepathy"},
	}

	out := &bytes.Buffer{}
	w := csvmap.NewWriter(out, headers)
	err := w.WriteAll(data)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.String())
}
Output:

    Name,Alias,Superpower
    Logan,Wolverine,Super healing
    Charles Xavier,Professor X,Telepathy
func NewWriter ¶
NewWriter returns a new writer that writes to w.
The file headers must be specified to provide the order in which record fields will be written to the writer. If a header doesn't match any field in a record, that column is left blank.
func (*Writer) Flush ¶
func (w *Writer) Flush()
Flush writes any buffered data to the underlying io.Writer. To check if an error occurred during the Flush, call Error.