Documentation ¶
Overview ¶
A simple, non-performant recursive descent parser for the purposes of sqlcode; currently it only supports the special @Enum declarations used by sqlcode. These are only allowed at the top of a file, and parsing stops, without raising any errors, at the point where anything else is encountered.
Index ¶
- Variables
- func AdvanceAndCopy(s *Scanner, target *[]Unparsed)
- func CopyToken(s *Scanner, target *[]Unparsed)
- func IsSqlcodeConstVariable(varname string) bool
- func NextTokenCopyingWhitespace(s *Scanner, target *[]Unparsed)
- func Parse(s *Scanner, result *Document)
- func TopologicalSort(input []Create) (output []Create, errpos Pos, err error)
- type Create
- func (c Create) DependsOnStrings() (result []string)
- func (c Create) DocstringAsString() string
- func (c Create) DocstringYamldoc() (string, error)
- func (c Create) ParseYamlInDocstring(out any) error
- func (c Create) Serialize(w io.StringWriter) error
- func (c Create) SerializeBytes(w io.Writer) error
- func (c Create) String() string
- func (c Create) WithoutPos() Create
- type Declare
- type Document
- type Error
- type FileRef
- type NotFoundError
- type Pos
- type PosString
- type Scanner
- func (s Scanner) Clone() *Scanner
- func (s *Scanner) NextNonWhitespaceCommentToken() TokenType
- func (s *Scanner) NextNonWhitespaceToken() TokenType
- func (s *Scanner) NextToken() TokenType
- func (s *Scanner) ReservedWord() string
- func (s *Scanner) SkipWhitespace()
- func (s *Scanner) SkipWhitespaceComments()
- func (s *Scanner) Start() Pos
- func (s *Scanner) Stop() Pos
- func (s *Scanner) Token() string
- func (s *Scanner) TokenLower() string
- func (s *Scanner) TokenType() TokenType
- type TokenType
- type Type
- type Unparsed
Constants ¶
This section is empty.
Variables ¶
var CycleError = errors.New("Detected a dependency cycle")
Functions ¶
func AdvanceAndCopy ¶
AdvanceAndCopy is like NextToken; it advances to the next token that is not whitespace, copying tokens into target along the way. Note: the 'go' and EOF tokens are *not* copied.
func IsSqlcodeConstVariable ¶
func NextTokenCopyingWhitespace ¶
NextTokenCopyingWhitespace is like s.NextToken(), but if whitespace is encountered it is simply copied into `target`. Upon return, the scanner is located at a non-whitespace token, and target is either unmodified or filled with some whitespace nodes.
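The copy-while-advancing pattern described above can be sketched in a self-contained way: skip whitespace runs, but append them to a target slice instead of discarding them, so the original text can later be reproduced verbatim. The Token type and nextTokenCopyingWhitespace function below are illustrative stand-ins, not the package's Unparsed type or actual implementation.

```go
package main

import (
	"fmt"
	"unicode"
)

// Token is a hypothetical stand-in for an Unparsed node.
type Token struct {
	Kind string // "ws" or "word"
	Text string
}

// nextTokenCopyingWhitespace advances through src starting at *pos,
// appending any whitespace runs to target, and returns the next
// non-whitespace token (ok=false at end of input).
func nextTokenCopyingWhitespace(src []rune, pos *int, target *[]Token) (tok Token, ok bool) {
	if *pos < len(src) && unicode.IsSpace(src[*pos]) {
		start := *pos
		for *pos < len(src) && unicode.IsSpace(src[*pos]) {
			*pos++
		}
		// copy, rather than discard, the skipped whitespace
		*target = append(*target, Token{Kind: "ws", Text: string(src[start:*pos])})
	}
	if *pos >= len(src) {
		return Token{}, false
	}
	start := *pos
	for *pos < len(src) && !unicode.IsSpace(src[*pos]) {
		*pos++
	}
	return Token{Kind: "word", Text: string(src[start:*pos])}, true
}

func main() {
	src := []rune("  create   procedure")
	var copied []Token
	pos := 0
	for {
		tok, ok := nextTokenCopyingWhitespace(src, &pos, &copied)
		if !ok {
			break
		}
		fmt.Printf("token %q\n", tok.Text)
	}
	fmt.Println("whitespace nodes copied:", len(copied))
}
```

On return the cursor always sits at a non-whitespace token (or end of input), which matches the contract stated above: target is either unmodified or extended with whitespace nodes.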
Types ¶
type Create ¶
type Create struct {
	CreateType string    // "procedure", "function" or "type"
	QuotedName PosString // proc/func/type name, including []
	Body       []Unparsed
	DependsOn  []PosString
	Docstring  []PosString // comment lines before the create statement. Note: this is also part of Body
}
func (Create) DependsOnStrings ¶
func (Create) DocstringAsString ¶ added in v0.4.0
func (Create) DocstringYamldoc ¶ added in v0.4.0
func (Create) ParseYamlInDocstring ¶ added in v0.4.0
func (Create) WithoutPos ¶
type Declare ¶
func (Declare) WithoutPos ¶
type Document ¶
type Document struct {
	PragmaIncludeIf []string
	Creates         []Create
	Declares        []Declare
	Errors          []Error
}
func ParseFilesystems ¶
func ParseFilesystems(fslst []fs.FS, includeTags []string) (filenames []string, result Document, err error)
ParseFilesystems iterates through a list of filesystems and parses all files matching `*.sql`, determines which ones are sqlcode files from their contents, and returns the combination of all of them.
err will only return errors related to filesystems/reading. Errors related to parsing/sorting will be in result.Errors.
ParseFilesystems will also sort create statements topologically.
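The topological sort orders create statements so that each one comes after everything it depends on, returning CycleError if the dependency graph contains a cycle. Below is a self-contained sketch of that idea using Kahn's algorithm over plain strings; it is not the package's actual TopologicalSort (which operates on []Create and reports an error position), and the names topoSort/errCycle are illustrative.

```go
package main

import (
	"errors"
	"fmt"
)

var errCycle = errors.New("detected a dependency cycle")

// topoSort orders names so that every name appears after all of its
// dependencies. deps maps a name to the names it depends on.
func topoSort(names []string, deps map[string][]string) ([]string, error) {
	indegree := make(map[string]int, len(names))
	dependents := make(map[string][]string) // dep -> names that depend on it
	for _, n := range names {
		indegree[n] = 0
	}
	for _, n := range names {
		for _, d := range deps[n] {
			if _, known := indegree[d]; !known {
				continue // dependency outside this set; ignore it
			}
			indegree[n]++
			dependents[d] = append(dependents[d], n)
		}
	}
	var queue, out []string
	for _, n := range names {
		if indegree[n] == 0 {
			queue = append(queue, n)
		}
	}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		out = append(out, n)
		for _, m := range dependents[n] {
			indegree[m]--
			if indegree[m] == 0 {
				queue = append(queue, m)
			}
		}
	}
	if len(out) != len(names) {
		return nil, errCycle // some node never reached indegree zero
	}
	return out, nil
}

func main() {
	order, err := topoSort(
		[]string{"GetOrder", "OrderType", "LogCall"},
		map[string][]string{"GetOrder": {"OrderType", "LogCall"}},
	)
	fmt.Println(order, err) // [OrderType LogCall GetOrder] <nil>
}
```

A cycle surfaces as nodes whose in-degree never reaches zero, which is why the length check at the end suffices for detection.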
func ParseString ¶
func (Document) WithoutPos ¶
WithoutPos transforms a Document to remove all position information; this is used to 'unclutter' a DOM so that it is easier to write assertions on it.
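The value of stripping positions shows up in tests: two syntactically identical fragments found at different places in a file compare equal once their positions are cleared. The following is a minimal self-contained sketch of that pattern; Pos and PosString here are stand-ins with hypothetical fields, not the package's actual definitions.

```go
package main

import "fmt"

// Pos is a stand-in position type (hypothetical fields).
type Pos struct {
	Line, Col int
}

// PosString pairs a string with the position it was found at.
type PosString struct {
	Pos Pos
	S   string
}

// WithoutPos returns a copy with position information zeroed, so that
// tests can compare values without caring where in the file each
// token came from.
func (p PosString) WithoutPos() PosString {
	p.Pos = Pos{}
	return p
}

func main() {
	a := PosString{Pos: Pos{Line: 3, Col: 7}, S: "[MyProc]"}
	b := PosString{Pos: Pos{Line: 9, Col: 1}, S: "[MyProc]"}
	fmt.Println(a.WithoutPos() == b.WithoutPos()) // true: positions no longer differ
}
```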
type Error ¶
func (Error) WithoutPos ¶
type FileRef ¶
type FileRef string
FileRef is a dedicated type for a reference to a file, in case we need to refactor this later.
type NotFoundError ¶
type NotFoundError struct {
Name string
}
func (NotFoundError) Error ¶
func (n NotFoundError) Error() string
type Scanner ¶
type Scanner struct {
// contains filtered or unexported fields
}
We don't do the lexer/parser split with a separate token stream, but simply use the Scanner directly from the recursive descent parser; it is just a cursor into the buffer with associated utility methods.
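The cursor-in-buffer approach can be sketched as follows: a position that each scan call advances, with the bounds of the most recent token kept on the struct so Token() can slice it back out. This is a self-contained illustration of the design, not the package's Scanner; it only splits on spaces, whereas the real scanner classifies many token types.

```go
package main

import "fmt"

// scanner is a minimal cursor into a buffer: no separate lexer
// producing a token stream, just a position advanced by each Next
// call, plus the last token's bounds.
type scanner struct {
	src         string
	start, stop int // bounds of the most recently scanned token
}

// Next advances past spaces to the next token; it reports false at
// end of input.
func (s *scanner) Next() bool {
	i := s.stop
	for i < len(s.src) && s.src[i] == ' ' {
		i++
	}
	if i >= len(s.src) {
		return false
	}
	s.start = i
	for i < len(s.src) && s.src[i] != ' ' {
		i++
	}
	s.stop = i
	return true
}

// Token returns the text of the most recently scanned token.
func (s *scanner) Token() string { return s.src[s.start:s.stop] }

func main() {
	s := &scanner{src: "create procedure [X]"}
	for s.Next() {
		fmt.Println(s.Token())
	}
}
```

A nice property of keeping all state in one small struct is that backtracking becomes trivial: copying the struct (as the package's Clone method suggests) snapshots the entire scanner.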
func (*Scanner) NextNonWhitespaceCommentToken ¶ added in v0.4.0
func (*Scanner) NextNonWhitespaceToken ¶
func (*Scanner) NextToken ¶
NextToken scans the next token and advances the Scanner's position to just after that token.
func (*Scanner) ReservedWord ¶
func (*Scanner) SkipWhitespace ¶
func (s *Scanner) SkipWhitespace()
func (*Scanner) SkipWhitespaceComments ¶ added in v0.4.0
func (s *Scanner) SkipWhitespaceComments()
func (*Scanner) TokenLower ¶
type TokenType ¶
type TokenType int
const (
	WhitespaceToken TokenType = iota + 1
	LeftParenToken
	RightParenToken
	SemicolonToken
	EqualToken
	CommaToken
	DotToken
	VarcharLiteralToken
	NVarcharLiteralToken
	MultilineCommentToken
	SinglelineCommentToken

	// PragmaToken is like SinglelineCommentToken but starting with `--sqlcode:`.
	// It is useful to scan this as a separate token type because then this comment
	// anywhere else than the top of the file will not be treated as whitespace,
	// but give an error.
	PragmaToken

	NumberToken

	// Note: A lot of stuff passes as identifiers that should really have been
	// reserved words.
	ReservedWordToken
	VariableIdentifierToken
	QuotedIdentifierToken
	UnquotedIdentifierToken
	OtherToken

	UnterminatedVarcharLiteralErrorToken
	UnterminatedQuotedIdentifierErrorToken

	// DoubleQuoteErrorToken: we don't want to support double quotes, for
	// simplicity, so that is an error and stops parsing.
	DoubleQuoteErrorToken

	UnexpectedCharacterToken
	NonUTF8ErrorToken
	BatchSeparatorToken
	MalformedBatchSeparatorToken
	EOFToken
)
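A typical consumer of such an enum switches on the token type in a scan loop. The sketch below shows that pattern with a tiny, self-contained classifier; the constant values and the classification rules (@ for variables, [ for quoted identifiers) are illustrative assumptions, not the package's actual scanner logic.

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// TokenType mirrors the shape of the enum above; the set of values
// here is deliberately reduced and illustrative.
type TokenType int

const (
	WhitespaceToken TokenType = iota + 1
	VariableIdentifierToken
	QuotedIdentifierToken
	NumberToken
	OtherToken
	EOFToken
)

// classify maps a token's text to a TokenType based on its first
// character, in the spirit of a hand-written scanner.
func classify(tok string) TokenType {
	switch {
	case tok == "":
		return EOFToken
	case strings.TrimSpace(tok) == "":
		return WhitespaceToken
	case strings.HasPrefix(tok, "@"):
		return VariableIdentifierToken // T-SQL variables start with @
	case strings.HasPrefix(tok, "["):
		return QuotedIdentifierToken // [bracketed] names
	case unicode.IsDigit(rune(tok[0])):
		return NumberToken
	default:
		return OtherToken
	}
}

func main() {
	for _, tok := range []string{"@EnumVal", "[MyProc]", "42", "create"} {
		fmt.Printf("%-10s -> %d\n", tok, classify(tok))
	}
}
```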