Documentation ¶
Index ¶
- Variables
- type Instructions
- func (i Instructions) ConvertClob(clob string) [][]string
- func (in *Instructions) ConvertFile(filename string)
- func (i Instructions) Init()
- func (i Instructions) LearnFile(delimiter string)
- func (in *Instructions) LearningClob(dataflowname, content, delimiter string)
- func (i Instructions) StartFiles()
Constants ¶
This section is empty.
Variables ¶
var (
	// Testing - set this bool to 'true' if you want to test the conversion of flows step by step (in case you spot a bug or something)
	Testing = false
)
Functions ¶
This section is empty.
Types ¶
type Instructions ¶
type Instructions struct {
Dataflow string
Delimiter string
DataItems []string
Spaces []int
Headers []string
Outputname string
// contains filtered or unexported fields
}
Instructions is an exported struct that contains the configuration needed to convert an MRASCO flow into a CSV file. These instructions are stored as JSON. When you generate instructions, you can adjust the header names (the columns of the output CSV) to match the contents of each column.
func (Instructions) ConvertClob ¶
func (i Instructions) ConvertClob(clob string) [][]string
ConvertClob begins the conversion of clob strings into a 2D array which can either be written to CSV or passed to a database/warehouse. Before using ConvertClob, you'll want to use the LearningClob function.
func (*Instructions) ConvertFile ¶
func (in *Instructions) ConvertFile(filename string)
ConvertFile begins the process of converting the dataflow (file) into a CSV file, depending on which instruction has been loaded. Usually you just need StartFiles to convert all dataflows, but ConvertFile is exported so you can target one file if necessary.
func (Instructions) Init ¶
func (i Instructions) Init()
Init ensures the three necessary folders are created in the same folder as the executable/main package. Always start with instructions.Init().
- flowcrunch_instructions - where JSON instructions are kept
- flowcrunch_inputfiles - where any dataflows that need to be converted are kept
- flowcrunch_outputfiles - where any generated CSVs are placed
Init also clears any files contained within the 'flowcrunch_outputfiles' folder every time it runs.
func (Instructions) LearnFile ¶
func (i Instructions) LearnFile(delimiter string)
LearnFile takes in all dataflows with no duplicate items but at least one of every significant item, so that it can learn the dataflow structure for converting to CSV. All you need to do is save one energy industry dataflow, with the dataflow identifier as the filename (e.g. D0150001.txt), in the flowcrunch_learn folder. The instructions are then saved to a JSON file in a folder named 'flowcrunch_instructions'. You must supply the delimiter of the files to learn, e.g. a pipe '|' or a comma.
func (*Instructions) LearningClob ¶
func (in *Instructions) LearningClob(dataflowname, content, delimiter string)
LearningClob takes in a dataflow name, e.g. D0150 (string), a dataflow clob (string), and a delimiter (string), and creates instructions on how to convert that clob into a CSV version of the dataflow. The instructions are saved to the flowcrunch_instructions folder as JSON.
func (Instructions) StartFiles ¶
func (i Instructions) StartFiles()
StartFiles begins the conversion of energy industry dataflows into CSV files. Before using StartFiles, you'll want to use the LearnFile function.