Documentation ¶
Overview ¶
Package csslexer provides a lexer for CSS (Cascading Style Sheets) files.
It implements a tokenizer algorithm inspired by Blink, closely mirroring the parsing logic used by modern browsers.
For more information, see the README.md file.
GitHub repository: https://github.com/renbaoshuo/go-css-lexer
Index ¶
- Constants
- type Input
- func NewInputBytes(input []byte) *Input
- func NewInputReader(r io.Reader) *Input
- func NewInputRunes(input []rune) *Input
- func (z *Input) Current() []rune
- func (z *Input) CurrentOffset() int
- func (z *Input) CurrentString() string
- func (z *Input) CurrentSuffix(offset int) []rune
- func (z *Input) CurrentSuffixString(offset int) string
- func (z *Input) Err() error
- func (z *Input) Move(n int)
- func (z *Input) MoveWhilePredicate(pred func(rune) bool)
- func (z *Input) Peek(n int) rune
- func (z *Input) PeekErr(pos int) error
- func (z *Input) Shift()
- func (z *Input) State() InputState
- type InputState
- func (s *InputState) Pos() int
- func (s *InputState) Restore()
- func (s *InputState) Start() int
- type Lexer
- type Token
- type TokenType
Constants ¶
const EOF = rune(0)
EOF is a special rune that represents the end of the input.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Input ¶
type Input struct {
// contains filtered or unexported fields
}
Input represents a stream of runes read from a source.
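To illustrate the rune-stream design described above, here is a minimal, self-contained sketch (not the package's actual implementation): a cursor `pos` advances as runes are consumed, `start` marks the beginning of the current token, and peeking past the end yields the `EOF` sentinel. All names here are local stand-ins.

```go
package main

import "fmt"

// eof mirrors the package's EOF sentinel: rune(0) marks end of input.
const eof = rune(0)

// input is a simplified sketch of the Input rune stream.
type input struct {
	runes []rune
	pos   int // current position
	start int // start of the current token
}

// peek returns the rune n positions ahead without consuming it.
func (z *input) peek(n int) rune {
	if z.pos+n >= len(z.runes) {
		return eof
	}
	return z.runes[z.pos+n]
}

// move advances the position by n runes.
func (z *input) move(n int) { z.pos += n }

// current returns the runes between start and the current position.
func (z *input) current() []rune { return z.runes[z.start:z.pos] }

// shift resets start to the current position, discarding the current token.
func (z *input) shift() { z.start = z.pos }

// scanName consumes runes up to a '{' or end of input and
// returns the token accumulated between start and pos.
func scanName(s string) string {
	z := &input{runes: []rune(s)}
	for z.peek(0) != eof && z.peek(0) != '{' {
		z.move(1)
	}
	tok := string(z.current())
	z.shift() // begin the next token here
	return tok
}

func main() {
	fmt.Println(scanName("div{}")) // div
}
```

The real Input exposes the same shape of operations (Peek, Move, Current, Shift), with buffering and error handling that this sketch omits.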
func NewInputBytes ¶
NewInputBytes creates a new Input instance from the given byte slice.
func NewInputReader ¶
NewInputReader creates a new Input instance from the given io.Reader.
func NewInputRunes ¶
NewInputRunes creates a new Input instance from the given slice of runes.
func (*Input) CurrentOffset ¶ added in v0.1.0
func (z *Input) CurrentOffset() int
CurrentOffset returns the current offset in the input stream.
It calculates the offset as the difference between the current position and the start position.
func (*Input) CurrentString ¶ added in v0.1.0
func (z *Input) CurrentString() string
CurrentString returns the current token as a string.
func (*Input) CurrentSuffix ¶ added in v0.1.0
func (z *Input) CurrentSuffix(offset int) []rune
CurrentSuffix returns the current token after applying the offset.
If the offset is greater than the current position, it returns an empty slice.
func (*Input) CurrentSuffixString ¶ added in v0.1.0
func (z *Input) CurrentSuffixString(offset int) string
CurrentSuffixString returns the current token as a string after applying the offset.
func (*Input) MoveWhilePredicate ¶
func (z *Input) MoveWhilePredicate(pred func(rune) bool)
MoveWhilePredicate advances the position while the predicate function returns true for the current rune.
func (*Input) PeekErr ¶
func (z *Input) PeekErr(pos int) error
PeekErr returns the error, if any, at the current position plus the specified offset.
func (*Input) Shift ¶
func (z *Input) Shift()
Shift resets the start position to the current position.
func (*Input) State ¶ added in v0.0.7
func (z *Input) State() InputState
State returns the current input state. It captures the current position and start position in the input stream. This is useful for saving the state of the lexer and restoring it later.
type InputState ¶ added in v0.0.7
type InputState struct {
// contains filtered or unexported fields
}
func (*InputState) Pos ¶ added in v0.0.7
func (s *InputState) Pos() int
Pos returns the current position in the input stream.
func (*InputState) Restore ¶ added in v0.1.0
func (s *InputState) Restore()
Restore restores the input stream to this saved state.
This allows the lexer to backtrack to a previous state if needed, for example after a failed attempt to parse a token.
func (*InputState) Start ¶ added in v0.0.7
func (s *InputState) Start() int
Start returns the start position of the current token being read.
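The State/Restore pair enables speculative parsing: capture the state, attempt a parse, and rewind on failure. A self-contained sketch of that save/restore pattern (the `stream`, `state`, and `tryConsume` names are local stand-ins, not the package's API):

```go
package main

import "fmt"

// stream is a minimal stand-in for Input.
type stream struct {
	runes []rune
	pos   int
	start int
}

// state captures a position and token start, mirroring InputState.
type state struct {
	z          *stream
	pos, start int
}

// saveState captures the current position, like Input.State.
func (z *stream) saveState() state { return state{z, z.pos, z.start} }

// restore rewinds the stream to the captured state, like InputState.Restore.
func (s state) restore() { s.z.pos, s.z.start = s.pos, s.start }

// tryConsume advances past prefix if the stream starts with it,
// restoring the original position on failure so no runes are consumed.
func tryConsume(z *stream, prefix string) bool {
	st := z.saveState()
	for _, r := range prefix {
		if z.pos >= len(z.runes) || z.runes[z.pos] != r {
			st.restore() // backtrack
			return false
		}
		z.pos++
	}
	return true
}

func main() {
	z := &stream{runes: []rune("<!--")}
	fmt.Println(tryConsume(z, "<!-x"), z.pos) // false 0
	fmt.Println(tryConsume(z, "<!--"), z.pos) // true 4
}
```

A CSS lexer needs exactly this when distinguishing multi-rune tokens such as CDO (`<!--`) from a plain `<` delimiter.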
type Lexer ¶
type Lexer struct {
// contains filtered or unexported fields
}
Lexer holds the state of the CSS lexer.
type Token ¶ added in v0.0.8
type Token struct {
Type TokenType // Type of the token
Value string // Value of the token (unescaped string data)
Raw []rune // Raw rune data of the token
}
Token represents a token in the CSS lexer.
type TokenType ¶
type TokenType int
TokenType represents the type of a token in the CSS lexer.
const (
	// DefaultToken is the default token type, used when no
	// specific type is matched.
	//
	// It is not used by the lexer.
	DefaultToken TokenType = iota
	IdentToken            // <ident-token>
	FunctionToken         // <function-token>
	AtKeywordToken        // <at-keyword-token>
	HashToken             // <hash-token>
	StringToken           // <string-token>
	BadStringToken        // <bad-string-token>
	UrlToken              // <url-token>
	BadUrlToken           // <bad-url-token>
	DelimiterToken        // <delim-token>
	NumberToken           // <number-token>
	PercentageToken       // <percentage-token>
	DimensionToken        // <dimension-token>
	WhitespaceToken       // <whitespace-token>
	CDOToken              // <CDO-token>
	CDCToken              // <CDC-token>
	ColonToken            // <colon-token>
	SemicolonToken        // <semicolon-token>
	CommaToken            // <comma-token>
	LeftParenthesisToken  // <(-token>
	RightParenthesisToken // <)-token>
	LeftBracketToken      // <[-token>
	RightBracketToken     // <]-token>
	LeftBraceToken        // <{-token>
	RightBraceToken       // <}-token>
	EOFToken              // <EOF-token>
	CommentToken
	IncludeMatchToken   // ~=
	DashMatchToken      // |=
	PrefixMatchToken    // ^= (starts with)
	SuffixMatchToken    // $= (ends with)
	SubstringMatchToken // *= (contains)
	ColumnToken         // ||
	UnicodeRangeToken
)
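The constants above follow Go's standard iota enumeration pattern. As an illustrative sketch (the real package may or may not define a String method; the lowercase names below are local stand-ins for a few of the constants), a Stringer mapping token types to their spec-style names could look like:

```go
package main

import "fmt"

// tokenType mirrors a handful of the package's TokenType constants
// to show the iota enumeration pattern.
type tokenType int

const (
	identToken tokenType = iota
	colonToken
	semicolonToken
	eofToken
)

// String maps a token type to its CSS-spec-style name.
func (t tokenType) String() string {
	switch t {
	case identToken:
		return "<ident-token>"
	case colonToken:
		return "<colon-token>"
	case semicolonToken:
		return "<semicolon-token>"
	case eofToken:
		return "<EOF-token>"
	}
	return "<unknown>"
}

func main() {
	// fmt picks up the Stringer automatically.
	fmt.Println(identToken, semicolonToken) // <ident-token> <semicolon-token>
}
```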