Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
var TokenKindNames = map[TokenKind]string{
	TokenKindIllegal:        "Illegal",
	TokenKindEOF:            "EOF",
	TokenKindPunctuator:     "Punctuator",
	TokenKindName:           "Name",
	TokenKindIntValue:       "IntValue",
	TokenKindFloatValue:     "FloatValue",
	TokenKindStringValue:    "StringValue",
	TokenKindUnicodeBOM:     "UnicodeBOM",
	TokenKindWhiteSpace:     "WhiteSpace",
	TokenKindLineTerminator: "LineTerminator",
	TokenKindComment:        "Comment",
	TokenKindComma:          "Comma",
}
TokenKindNames is a map of token types to their names as strings.
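Because the map's keys and values are exported, it can be used directly to render token kinds for diagnostics. The sketch below copies the package's exported `TokenKind` constants and `TokenKindNames` declarations so it is self-contained and runnable; the `kindName` fallback helper is this sketch's own addition, not part of the package.

```go
package main

import "fmt"

// TokenKind and TokenKindNames are copied from the package's exported
// declarations so this lookup sketch is self-contained.
type TokenKind int

const (
	TokenKindIllegal TokenKind = iota - 1
	TokenKindEOF
	TokenKindPunctuator
	TokenKindName
	TokenKindIntValue
	TokenKindFloatValue
	TokenKindStringValue
	TokenKindUnicodeBOM
	TokenKindWhiteSpace
	TokenKindLineTerminator
	TokenKindComment
	TokenKindComma
)

var TokenKindNames = map[TokenKind]string{
	TokenKindIllegal:        "Illegal",
	TokenKindEOF:            "EOF",
	TokenKindPunctuator:     "Punctuator",
	TokenKindName:           "Name",
	TokenKindIntValue:       "IntValue",
	TokenKindFloatValue:     "FloatValue",
	TokenKindStringValue:    "StringValue",
	TokenKindUnicodeBOM:     "UnicodeBOM",
	TokenKindWhiteSpace:     "WhiteSpace",
	TokenKindLineTerminator: "LineTerminator",
	TokenKindComment:        "Comment",
	TokenKindComma:          "Comma",
}

// kindName returns a readable name for k, with a fallback for any kind
// the map does not cover (the fallback is this sketch's addition).
func kindName(k TokenKind) string {
	if name, ok := TokenKindNames[k]; ok {
		return name
	}
	return "Unknown"
}

func main() {
	fmt.Println(kindName(TokenKindFloatValue)) // FloatValue
}
```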
Functions ¶
This section is empty.
Types ¶
type Lexer ¶
type Lexer struct {
// contains filtered or unexported fields
}
Lexer holds the state of a state machine for lexically analysing GraphQL queries.
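Lexer's fields are unexported and this listing shows no exported constructor, so the package's own usage cannot be demonstrated here. As a hypothetical, self-contained sketch of the state-machine idea only (the `lex` function and its classification rules are inventions for illustration, not this package's API), a rune-walking scanner that emits `Token` values might look like:

```go
package main

import (
	"fmt"
	"unicode"
)

// Token and TokenKind mirror the package's exported types; the lexer body
// below is a hypothetical illustration, not this package's implementation.
type TokenKind int

const (
	TokenKindEOF TokenKind = iota
	TokenKindPunctuator
	TokenKindName
	TokenKindIntValue
)

type Token struct {
	Kind    TokenKind
	Literal string
	Line    int
	Column  int
}

// lex walks the input rune by rune, skipping whitespace and commas
// (ignored in GraphQL) and emitting one Token per lexeme.
func lex(input string) []Token {
	var toks []Token
	runes := []rune(input)
	line, col := 1, 1
	for i := 0; i < len(runes); {
		r := runes[i]
		switch {
		case r == '\n':
			line++
			col = 1
			i++
		case unicode.IsSpace(r) || r == ',':
			col++
			i++
		case unicode.IsLetter(r) || r == '_':
			start, startCol := i, col
			for i < len(runes) && (unicode.IsLetter(runes[i]) || unicode.IsDigit(runes[i]) || runes[i] == '_') {
				i++
				col++
			}
			toks = append(toks, Token{TokenKindName, string(runes[start:i]), line, startCol})
		case unicode.IsDigit(r):
			start, startCol := i, col
			for i < len(runes) && unicode.IsDigit(runes[i]) {
				i++
				col++
			}
			toks = append(toks, Token{TokenKindIntValue, string(runes[start:i]), line, startCol})
		default:
			toks = append(toks, Token{TokenKindPunctuator, string(r), line, col})
			i++
			col++
		}
	}
	// A trailing EOF token lets the parser detect end of input uniformly.
	toks = append(toks, Token{Kind: TokenKindEOF, Line: line, Column: col})
	return toks
}

func main() {
	for _, t := range lex("{ user(id: 4) }") {
		fmt.Printf("%d %q (line %d, col %d)\n", t.Kind, t.Literal, t.Line, t.Column)
	}
}
```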
type Parser ¶
type Parser struct {
// contains filtered or unexported fields
}
Parser is a parser for GraphQL documents.
type Token ¶
type Token struct {
	Kind    TokenKind // The token type.
	Literal string    // The literal value consumed.
	Line    int       // The line number at the start of this item.
	Column  int       // The starting position, in runes, of this token on this line.
}
Token represents a small, easily categorisable data structure that is fed to the parser to produce the abstract syntax tree (AST).
type TokenKind ¶
type TokenKind int
TokenKind represents a type of token. The types are predefined as constants.
const (
	TokenKindIllegal TokenKind = iota - 1
	TokenKindEOF

	// Lexical Tokens.
	TokenKindPunctuator
	TokenKindName
	TokenKindIntValue
	TokenKindFloatValue
	TokenKindStringValue

	// Ignored tokens.
	TokenKindUnicodeBOM
	TokenKindWhiteSpace
	TokenKindLineTerminator
	TokenKindComment
	TokenKindComma
)
All different lexical token types.
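The `iota - 1` expression shifts the sequence so that TokenKindIllegal is -1 and TokenKindEOF is 0, the zero value of TokenKind. The const block below is reproduced from the declarations above with the resulting values annotated; whether the zero-value choice is intentional in the package is an assumption of this sketch.

```go
package main

import "fmt"

type TokenKind int

// Reproduced from the package's const block; comments show the values
// produced by the iota - 1 shift.
const (
	TokenKindIllegal TokenKind = iota - 1 // -1
	TokenKindEOF                          // 0 (the zero value of TokenKind)

	// Lexical Tokens.
	TokenKindPunctuator  // 1
	TokenKindName        // 2
	TokenKindIntValue    // 3
	TokenKindFloatValue  // 4
	TokenKindStringValue // 5

	// Ignored tokens.
	TokenKindUnicodeBOM     // 6
	TokenKindWhiteSpace     // 7
	TokenKindLineTerminator // 8
	TokenKindComment        // 9
	TokenKindComma          // 10
)

func main() {
	var zero TokenKind // an uninitialised TokenKind defaults to TokenKindEOF
	fmt.Println(TokenKindIllegal, TokenKindEOF, TokenKindComma, zero == TokenKindEOF) // -1 0 10 true
}
```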