# Module: processTargets/modifiers/ItemStage/tokenizeRange
## Functions
### joinLexemesBySkippingMatchingPairs

▸ **joinLexemesBySkippingMatchingPairs**(`lexemes`): `string`[]
Takes a list of lexemes and joins them into a list of alternating items and separators, skipping matching pairs `()`, `{}`, etc.
#### Parameters

| Name | Type | Description |
| --- | --- | --- |
| `lexemes` | `string`[] | List of lexemes to operate on |
#### Returns

`string`[]

List of merged lexemes. Note that its length will be less than or equal to that of {@link lexemes}.
#### Defined in

processTargets/modifiers/ItemStage/tokenizeRange.ts:99
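The pair-skipping merge described above can be sketched as follows. This is a hypothetical re-implementation for illustration, not the actual source of `joinLexemesBySkippingMatchingPairs`; the delimiter set and the use of `,` as the separator are assumptions:

```typescript
// Hypothetical sketch, assuming a fixed delimiter set and "," as the
// separator; the real implementation may differ.
const LEFT_TO_RIGHT: Record<string, string> = { "(": ")", "[": "]", "{": "}" };
const RIGHT_DELIMITERS = new Set(Object.values(LEFT_TO_RIGHT));

function joinLexemesBySkippingMatchingPairs(lexemes: string[]): string[] {
  const result: string[] = [];
  const stack: string[] = []; // closing delimiters we still expect to see
  let current = ""; // lexemes merged into the item built so far

  for (const lexeme of lexemes) {
    if (lexeme in LEFT_TO_RIGHT) {
      // Opening delimiter: descend one nesting level
      stack.push(LEFT_TO_RIGHT[lexeme]);
      current += lexeme;
    } else if (
      RIGHT_DELIMITERS.has(lexeme) &&
      stack[stack.length - 1] === lexeme
    ) {
      // Matching closing delimiter: ascend one nesting level
      stack.pop();
      current += lexeme;
    } else if (lexeme === "," && stack.length === 0) {
      // Top-level separator: flush the current item, then emit the separator
      if (current !== "") {
        result.push(current);
        current = "";
      }
      result.push(lexeme);
    } else {
      // Anything else (including separators inside a matched pair) is
      // merged into the current item
      current += lexeme;
    }
  }
  if (current !== "") {
    result.push(current);
  }
  return result;
}

// The separator inside the parentheses does not split the item:
joinLexemesBySkippingMatchingPairs(
  ["foo", "(", "hello", ",", "world", ")", ",", "bar"],
);
// → ["foo(hello,world)", ",", "bar"]
```

Note how the output alternates items and separators, and is never longer than the input, matching the contract stated above.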
### tokenizeRange

▸ **tokenizeRange**(`editor`, `interior`, `boundary?`): `Token`[]
Given the iteration scope, returns a list of "tokens" within that collection.
In this context, we define a "token" to be either an item in the collection,
a delimiter, or a separator. For example, if {@link interior} is a range
containing `foo(hello), bar, whatever`, and {@link boundary} consists of
two ranges containing `(` and `)`, then we'd return the following:
```
[
  { range: "(", type: "boundary" },
  { range: "foo(hello)", type: "item" },
  { range: ",", type: "separator" },
  { range: "bar", type: "item" },
  { range: ",", type: "separator" },
  { range: "whatever", type: "item" },
  { range: ")", type: "boundary" },
]
```
Where each `range` isn't actually a string, but a range whose text is the
given string.
#### Parameters

| Name | Type | Description |
| --- | --- | --- |
| `editor` | `TextEditor` | The editor containing the range |
| `interior` | `Range` | The range to look for tokens within |
| `boundary?` | [`Range`, `Range`] | Optional boundary ranges for the collection, e.g. the delimiters of `[]` or `{}` |
#### Returns

`Token`[]

List of tokens
#### Defined in

processTargets/modifiers/ItemStage/tokenizeRange.ts:29
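To make the tokenization idea concrete, here is a simplified, string-based sketch. The real `tokenizeRange` operates on editor `Range`s rather than strings; the name `tokenizeText`, the `SimpleToken` shape, and the hard-coded delimiter characters below are all illustrative assumptions:

```typescript
// Simplified, string-based sketch of the tokenization idea. The real
// function returns Tokens carrying editor Ranges; here we carry plain text.
type TokenType = "item" | "separator" | "boundary";

interface SimpleToken {
  text: string;
  type: TokenType;
}

function tokenizeText(
  interior: string,
  boundary?: [string, string],
): SimpleToken[] {
  const tokens: SimpleToken[] = [];
  if (boundary != null) {
    tokens.push({ text: boundary[0], type: "boundary" });
  }

  let depth = 0; // nesting depth of (), [], {} pairs
  let start = 0; // start offset of the item currently being scanned
  const pushItem = (end: number) => {
    const text = interior.slice(start, end).trim();
    if (text !== "") {
      tokens.push({ text, type: "item" });
    }
  };

  for (let i = 0; i < interior.length; i++) {
    const ch = interior[i];
    if ("([{".includes(ch)) {
      depth++;
    } else if (")]}".includes(ch)) {
      depth--;
    } else if (ch === "," && depth === 0) {
      // Top-level separator: close the current item, emit the separator
      pushItem(i);
      tokens.push({ text: ",", type: "separator" });
      start = i + 1;
    }
  }
  pushItem(interior.length);

  if (boundary != null) {
    tokens.push({ text: boundary[1], type: "boundary" });
  }
  return tokens;
}

// Reproduces the example from the docs above:
tokenizeText("foo(hello), bar, whatever", ["(", ")"]);
// → boundary "(", item "foo(hello)", separator ",", item "bar",
//   separator ",", item "whatever", boundary ")"
```

The depth counter is what keeps the comma inside `foo(hello)` from being treated as a separator: only commas at depth zero split items.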