
Interface: LexerConfig

The lexer config of `@theemo/sync` lets you describe what your tokens mean to you and how to process them further.
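
The hooks below run in the order normalize → classify → filter (as their parameter lists suggest). Here is a minimal sketch of how they might fit together; the `purpose-` naming convention is an assumption for illustration, not part of the API:

```js
const lexerConfig = {
  // 1. Normalize raw tokens, e.g. strip whitespace from names
  normalizeToken(token) {
    return { ...token, name: token.name.replace(/\s/g, '') };
  },

  // 2. Classify normalized tokens: describe what they mean to you
  //    (the `purpose-` prefix is a hypothetical naming convention)
  classifyToken(token) {
    return token.name.startsWith('purpose-')
      ? { ...token, type: 'purpose' }
      : token;
  },

  // 3. Filter classified tokens: keep only the ones you need
  filterToken(token) {
    return token.type === 'purpose';
  }
};
```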

Properties

classifyToken()?

```ts
optional classifyToken: (token, tokens) => Token<"unknown">;
```

Describe your tokens (see the example below):

- What's the type?
- What's the color scheme?

Parameters

| Parameter | Type |
| --- | --- |
| `token` | `Token<"unknown">` |
| `tokens` | `object` |
| `tokens.normalized` | `TokenCollection<Token<"unknown">>` |
| `tokens.raw` | `TokenCollection<Token<"unknown">>` |

Returns

Token<"unknown">
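
Example

A possible sketch for classification; the `purpose-` prefix, the `.$dark` suffix, and the `colorScheme` field are assumptions about your naming conventions and token shape, not part of the documented API:

```js
classifyToken(token) {
  const classified = { ...token };

  // Hypothetical convention: names starting with `purpose-` are purpose tokens
  if (token.name.startsWith('purpose-')) {
    classified.type = 'purpose';
  }

  // Hypothetical convention: a `.$dark` suffix marks the dark color scheme
  if (token.name.endsWith('.$dark')) {
    classified.colorScheme = 'dark';
  }

  return classified;
}
```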

Defined in

lexer/config.ts:33


filterToken()?

```ts
optional filterToken: (token, tokens) => boolean;
```

Filter callback to only keep the tokens you need.

Parameters

| Parameter | Type |
| --- | --- |
| `token` | `Token<"unknown">` |
| `tokens` | `object` |
| `tokens.classified` | `TokenCollection<Token<"unknown">>` |
| `tokens.normalized` | `TokenCollection<Token<"unknown">>` |
| `tokens.raw` | `TokenCollection<Token<"unknown">>` |

Returns

boolean

Example

If you want to keep only purpose tokens, use this:

```js
filterToken(token) {
  return token.type === 'purpose';
}
```

Defined in

lexer/config.ts:51


normalizeToken()?

```ts
optional normalizeToken: (token, tokens) => Token<"unknown">;
```

This normalizes tokens and strips gibberish off of them. It comes with a default if you don't provide one (see the example).

Parameters

| Parameter | Type |
| --- | --- |
| `token` | `Token<"unknown">` |
| `tokens` | `object` |
| `tokens.raw` | `TokenCollection<Token<"unknown">>` |

Returns

Token<"unknown">

Example

Here is how to remove any whitespace from token names:

```ts
normalizeToken(token: Token): Token {
  return {
    ...token,
    name: token.name.replace(/\s/g, '')
  };
}
```

Defined in

lexer/config.ts:25