Add length and rawError as public parameters of ParseError to allow for richer error UIs in editors. This information was already there, but it was only encoded into the error's message. This change is purely additive; the existing parameters have not changed.
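A minimal sketch of how an editor might consume the new fields (the `position` field and the `katex.ParseError` export are assumptions here, and the exact meaning of `rawError` may differ; treat this as illustrative only):

    // Illustrative only: report the offending span so an editor can underline it.
    const katex = require("katex");

    function renderWithDiagnostics(input, element) {
        try {
            katex.render(input, element);
            return null;                        // nothing to report
        } catch (e) {
            if (e instanceof katex.ParseError) {
                return {
                    start: e.position,          // offset where the bad input begins (assumed field)
                    end: e.position + e.length, // new `length` field gives the span to highlight
                    message: e.rawError,        // new `rawError` field: the information previously
                                                // only embedded in the formatted message
                };
            }
            throw e;                            // not a parse error; rethrow
        }
    }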
* Define the nested version of the ParseNode structs explicitly.
Passes test:jest, but fails test:flow.
* Fix additional type errors reported by flow.
* Migrate rebased code from master.
* Rename ParseNode.js to parseNode.js.
* Update defineEnvironment output type to fix the flow errors in environment/array.js.
* Replace ParseNode<*> with a more accurate AnyParseNode and fix flow errors.
* Allow "array" environment type spec to use any all symbol type.
Before this commit, it was constrained to use "mathord" and "textord", but a
recent change in HEAD resulted in a "rel" node being used in the spec for, e.g.
\begin{array}{|l||c:r::}\end{array}
* Address reviewer comments: rename `lastRow` to `row` in array.js.
* Make ParseNode `value` payload type-safe.
* Make defineFunction handlers aware of ParseNode data types.
* Add `type` to all function definitions to help determine handler return type.
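A hedged sketch of what a function definition looks like with the new `type` field (the `\sqrt` handler shown here is illustrative; the handler signature and node fields may not match the real source):

    // Sketch: declaring `type` up front lets Flow tie the handler's return
    // value to one specific ParseNode payload instead of a generic one.
    defineFunction({
        type: "sqrt",               // the ParseNode type this handler produces
        names: ["\\sqrt"],
        props: {numArgs: 1, numOptionalArgs: 1},
        handler(context, args, optArgs) {
            return {
                type: "sqrt",       // must agree with the declared type above
                body: args[0],      // the mandatory argument
                index: optArgs[0],  // the optional root index, if any
            };
        },
    });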
* Add a unit test for a case previously caught only by the screenshot tests, and fix the issue.
* Rename some symbol `Group`s to avoid conflicts with `ParseNode` groups.
Symbol `Group`s are also used as `ParseNode` types. However, `ParseNode`s of
these types always contain a raw text token as opposed to any structured
content. These `ParseNode`s are passed as arguments into function handlers,
which build more structured, semantic `ParseNode`s from them.
Before this change, "accent" and "op" were both symbol `Group`s and `ParseNode`
types. With this change, the two kinds of node (the raw token `ParseNode` and
the structured semantic `ParseNode`) are separated, for better type safety on
the `ParseNode` payload.
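Roughly, the separation looks like this (a Flow-style sketch; the renamed group name and the node fields are assumptions, not the actual definitions):

    // Sketch only.  The raw symbol group gets its own type name (here
    // "op-token"), so the structured "op" ParseNode keeps a well-typed payload.
    type Mode = "math" | "text";

    type OpTokenNode = {|
        type: "op-token",   // raw symbol group: nothing but the token text
        mode: Mode,
        text: string,
    |};

    type OpNode = {|
        type: "op",         // structured node built by a function handler
        mode: Mode,
        limits: boolean,    // illustrative payload fields
        symbol: boolean,
        body: string,
    |};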
* stretchy: Remove FlowFixMe for a forced typecast that's no longer needed.
* Initial webpack config. Moving to ES6 modules. Some module cleanup.
* WIP
* WIP
* Removing commented out code.
* Removing old deps.
* Removing the build script (used for testing).
* Working tests.
* Switching to the Node API over the CLI.
* Updating per comments. Still need to fix server.js to properly run the Selenium tests.
* Cleaning up the config.
* More cleanup.
* Bringing back server.js for Selenium tests.
* Bringing back old dependencies.
* Adding back eslint rules for webpack config. Final cleanup for webpack config.
* Pointing to correct pre-existing module versions. Adding some extra logic to server.js to ensure it gets transpiled properly.
* Getting make build to work again. Updating package.json with some shortcut scripts.
* Resolving conflict.
* Reverting back to CommonJS modules.
* Removing extra spaces in .babelrc.
* Add SourceLocation to encapsulate Token/ParseNode debug information.
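A rough sketch of the shape this suggests (the constructor fields and the range helper are educated guesses from the commit description, not a copy of the real class):

    // Sketch: one small object that records where a Token or ParseNode came from.
    class SourceLocation {
        constructor(lexer, start, end) {
            this.lexer = lexer;   // the Lexer holding the original input string
            this.start = start;   // start offset into that input
            this.end = end;       // end offset into that input
        }

        // Combine the locations of the first and last token of a construct into
        // one spanning location; give up if they come from different lexers.
        static range(first, second) {
            if (!second) {
                return first && first.loc;
            }
            if (!first || !first.loc || !second.loc ||
                    first.loc.lexer !== second.loc.lexer) {
                return null;
            }
            return new SourceLocation(
                first.loc.lexer, first.loc.start, second.loc.end);
        }
    }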
* Specify a concrete Token text type, as it catches type mismatches.
* Responded to comments.
* Add eslint-plugin-flowtype. Fix #844
Using the recommended settings for the plugin.
This involved adding some spaces to some existing union types.
* Upgrade to eslint@4, fix spotted bugs
Switched to indent-legacy to allow e.g. comments to have extra indents.
* Add babel transform-class-properties to have static class properties
* Upgrade Lexer and Parser files to use ES6 classes
* Update eslint max line length to 90 characters (more indentation because of the ES6 classes)
* Upgrade eslint and jasmine to support ES stage-2 features
* Use static properties to place constants near their functions
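For instance (purely illustrative; the regex and fields are placeholders, not the real Lexer internals):

    // With babel-plugin-transform-class-properties, a constant can live as a
    // static property right next to the method that uses it.
    class Lexer {
        static tokenRegex = /([ \t]+)|(\\[a-zA-Z@]+|.)/g;   // placeholder pattern

        constructor(input) {
            this.input = input;
            this.pos = 0;
        }

        lex() {
            Lexer.tokenRegex.lastIndex = this.pos;
            const match = Lexer.tokenRegex.exec(this.input);
            if (match === null) {
                return null;                 // end of input
            }
            this.pos = Lexer.tokenRegex.lastIndex;
            return match[0];                 // the matched token text
        }
    }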
* Migrate all remaining sources to ES6 syntax
* Increase eslint max line length to 84
* Remove non-babelified endpoint in dev server.js
* Clean up server.js functions after removing the browserified endpoint
* Make the screenshotter not use the babel endpoint, as we babelify everything now
* Introduce MacroExpander
The job of the MacroExpander is to turn a stream of possibly expandable
tokens, as obtained from the Lexer, into a stream of non-expandable tokens
(non-expandable in KaTeX, that is; they may well be expandable in TeX) which
can be processed by the Parser. The challenge here is that we don't have
mode-specific lexer implementations any more, so we need to do everything on
the token level, including reassembly of sizes and colors.
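A very rough sketch of the expansion loop described above (the method names, the macro table shape, and the "EOF" sentinel token are assumptions for illustration):

    // Sketch: keep pulling tokens; whenever one names a known macro, lex its
    // expansion and push those tokens back so they can expand in turn.
    class MacroExpander {
        constructor(input, macros) {
            this.lexer = new Lexer(input);
            this.macros = macros;     // e.g. {"\\foo": "bar"}
            this.stack = [];          // pushed-back tokens awaiting re-examination
        }

        // Return the next token that KaTeX cannot expand any further.
        nextToken() {
            for (;;) {
                const token = this.stack.pop() || this.lexer.lex();
                const expansion = this.macros[token.text];
                if (expansion == null) {
                    return token;     // not a macro we know about
                }
                // Lex the replacement text and queue its tokens in order.
                const bodyLexer = new Lexer(expansion);
                const tokens = [];
                let tok = bodyLexer.lex();
                while (tok.text !== "EOF") {
                    tokens.push(tok);
                    tok = bodyLexer.lex();
                }
                this.stack.push(...tokens.reverse());
            }
        }
    }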
* Make macros available in development server
Now one can specify macro definitions like \foo=bar as part of the query
string and use these macros in the formula being typeset.
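The idea, sketched (the query-parsing details are assumptions; only the `\foo=bar` convention comes from the commit above):

    // Sketch: any query parameter whose name starts with a backslash is
    // treated as a macro definition and handed to the renderer.
    function macrosFromQuery(query) {
        const macros = {};
        for (const key of Object.keys(query)) {
            if (key.charAt(0) === "\\") {
                macros[key] = query[key];   // e.g. macros["\\foo"] = "bar"
            }
        }
        return macros;
    }

So a request carrying `\foo=bar` in its query string can use `\foo` inside the formula, and it renders as `bar`.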
* Add tests for macro expansions
* Handle end of input in special groups
This avoids an infinite loop if input ends prematurely.
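The fix amounts to something like this (illustrative only; ParseError is KaTeX's error class, and the real loop and message differ):

    // Sketch: stop with a ParseError instead of looping forever when the
    // closing brace never arrives.
    function readSpecialGroupText(nextToken) {
        let text = "";
        let token = nextToken();
        while (token.text !== "}") {
            if (token.text === "EOF") {
                throw new ParseError("Unexpected end of input in a special group");
            }
            text += token.text;
            token = nextToken();
        }
        return text;
    }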
* Simplify parseSpecialGroup
The parseSpecialGroup method now returns a single token spanning the whole
special group, and leaves matching that string against a suitable regular
expression to whoever is calling the method. Suggested by @cbreeden.
* Incorporate review suggestions
Add improvements suggested by Kevin Barabash during review.
* Input range sanity checks
Ensure that both tokens of a token range come from the same lexer,
and that the range has a non-negative length.
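In sketch form (the token field names are assumed from the description above):

    // Sketch of the two checks: same lexer, non-negative span.
    function checkTokenRange(firstToken, lastToken) {
        if (firstToken.lexer !== lastToken.lexer) {
            throw new Error("Tokens in a range must come from the same lexer");
        }
        if (firstToken.start > lastToken.end) {
            throw new Error("Token range must have non-negative length");
        }
    }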
* Improved wording of two comments