csslexer

package module

v0.1.0
Published: Aug 16, 2025 License: MIT Imports: 5 Imported by: 0

README

Go CSS Lexer

A lexer for CSS (Cascading Style Sheets) files written in Go.

The library implements a tokenizer algorithm inspired by Blink, closely mirroring the parsing logic used by modern browsers.

Targets CSS Syntax Module Level 3 (W3C Candidate Recommendation Draft, 24 December 2021), the latest stable version of the CSS syntax specification as of July 2025.

Installation

go get go.baoshuo.dev/csslexer

API

Input

You can create an Input instance using one of the following constructors:

  • NewInput(input string) *Input
  • NewInputRunes(runes []rune) *Input
  • NewInputBytes(input []byte) *Input
  • NewInputReader(r io.Reader) *Input

The lexer requires an Input instance to read the CSS content.

Lexer

Create a lexer:

lexer := csslexer.NewLexer(input)

Read next token:

token := lexer.Next()

The available token types are enumerated by csslexer.TokenType, and each type is defined in token.go.

Author

go-css-lexer © Baoshuo, Released under the MIT License.

Personal Homepage · Blog · GitHub @renbaoshuo

Documentation

Overview

Package csslexer provides a lexer for CSS (Cascading Style Sheets) files.

It implements a tokenizer algorithm inspired by Blink, closely mirroring the parsing logic used by modern browsers.

For more information, see the README.md file.

GitHub repository: https://github.com/renbaoshuo/go-css-lexer

Index

Constants

View Source
const EOF = rune(0)

EOF is a special rune that represents the end of the input.

Variables

This section is empty.

Functions

This section is empty.

Types

type Input

type Input struct {
	// contains filtered or unexported fields
}

Input represents a stream of runes read from a source.

func NewInput

func NewInput(input string) *Input

NewInput creates a new Input instance from the given string.

func NewInputBytes

func NewInputBytes(input []byte) *Input

NewInputBytes creates a new Input instance from the given byte slice.

func NewInputReader

func NewInputReader(r io.Reader) *Input

NewInputReader creates a new Input instance from the given io.Reader.

func NewInputRunes

func NewInputRunes(runes []rune) *Input

NewInputRunes creates a new Input instance from the given slice of runes.

func (*Input) Current

func (z *Input) Current() []rune

Current returns the current token as a slice of runes.

func (*Input) CurrentOffset added in v0.1.0

func (z *Input) CurrentOffset() int

CurrentOffset returns the current offset in the input stream.

It calculates the offset as the difference between the current position and the start position.

func (*Input) CurrentString added in v0.1.0

func (z *Input) CurrentString() string

CurrentString returns the current token as a string.

func (*Input) CurrentSuffix added in v0.1.0

func (z *Input) CurrentSuffix(offset int) []rune

CurrentSuffix returns the current token after applying the offset.

If the offset is greater than the current position, it returns an empty slice.

func (*Input) CurrentSuffixString added in v0.1.0

func (z *Input) CurrentSuffixString(offset int) string

CurrentSuffixString returns the current token as a string after applying the offset.

func (*Input) Err

func (z *Input) Err() error

Err returns the error at the current position.

func (*Input) Move

func (z *Input) Move(n int)

Move advances the position by the specified number of runes.

func (*Input) MoveWhilePredicate

func (z *Input) MoveWhilePredicate(pred func(rune) bool)

MoveWhilePredicate advances the position while the predicate function returns true for the current rune.

func (*Input) Peek

func (z *Input) Peek(n int) rune

Peek returns the next rune in the input stream without advancing the position.

func (*Input) PeekErr

func (z *Input) PeekErr(pos int) error

PeekErr checks if there is an error at the current position plus the specified offset.

func (*Input) Shift

func (z *Input) Shift()

Shift resets the start position to the current position.

func (*Input) State added in v0.0.7

func (z *Input) State() InputState

State returns the current input state. It captures the current position and start position in the input stream. This is useful for saving the state of the lexer and restoring it later.

type InputState added in v0.0.7

type InputState struct {
	// contains filtered or unexported fields
}

func (*InputState) Pos added in v0.0.7

func (s *InputState) Pos() int

Pos returns the current position in the input stream.

func (*InputState) Restore added in v0.1.0

func (s *InputState) Restore()

Restore restores the input to the position and token start captured in this InputState.

This method is used to restore the input state after parsing a token. It allows the lexer to backtrack to a previous state if needed.

func (*InputState) Start added in v0.0.7

func (s *InputState) Start() int

Start returns the start position of the current token being read.

type Lexer

type Lexer struct {
	// contains filtered or unexported fields
}

Lexer is the state for the CSS lexer.

func NewLexer

func NewLexer(r *Input) *Lexer

NewLexer creates a new Lexer instance with the given Input.

func (*Lexer) Next

func (l *Lexer) Next() Token

Next reads the next token from the input stream.

func (*Lexer) Peek added in v0.0.6

func (l *Lexer) Peek() Token

Peek returns the next token without advancing the position. It returns a copy of the token.

type Token added in v0.0.8

type Token struct {
	Type  TokenType // Type of the token
	Value string    // Value of the token (unescaped string data)
	Raw   []rune    // Raw rune data of the token
}

Token represents a token in the CSS lexer.

type TokenType

type TokenType int

TokenType represents the type of a token in the CSS lexer.

const (
	// DefaultToken is the default token type, used when no
	// specific type is matched.
	//
	// It is never emitted by the lexer.
	DefaultToken TokenType = iota

	IdentToken            // <ident-token>
	FunctionToken         // <function-token>
	AtKeywordToken        // <at-keyword-token>
	HashToken             // <hash-token>
	StringToken           // <string-token>
	BadStringToken        // <bad-string-token>
	UrlToken              // <url-token>
	BadUrlToken           // <bad-url-token>
	DelimiterToken        // <delim-token>
	NumberToken           // <number-token>
	PercentageToken       // <percentage-token>
	DimensionToken        // <dimension-token>
	WhitespaceToken       // <whitespace-token>
	CDOToken              // <CDO-token>
	CDCToken              // <CDC-token>
	ColonToken            // <colon-token>
	SemicolonToken        // <semicolon-token>
	CommaToken            // <comma-token>
	LeftParenthesisToken  // <(-token>
	RightParenthesisToken // <)-token>
	LeftBracketToken      // <[-token>
	RightBracketToken     // <]-token>
	LeftBraceToken        // <{-token>
	RightBraceToken       // <}-token>
	EOFToken              // <EOF-token>

	CommentToken
	IncludeMatchToken   // ~=
	DashMatchToken      // |=
	PrefixMatchToken    // ^= (starts with)
	SuffixMatchToken    // $= (ends with)
	SubstringMatchToken // *= (contains)
	ColumnToken         // ||
	UnicodeRangeToken
)

func (TokenType) String

func (tt TokenType) String() string
