Performance Opportunities #6366
Replies: 2 comments
---
It's worth mentioning that we can and should leverage CPU profiling more in our investigations:

```shell
$ node --cpu-prof ./node_modules/.bin/eslint .
```

Or you can isolate to a specific lint rule:

```shell
$ node --cpu-prof ./node_modules/.bin/eslint --no-eslintrc --parser="@typescript-eslint/parser" --plugin "@typescript-eslint" --rule "@typescript-eslint/no-unused-vars: [error]" --ext=".ts,.tsx" .
```

Or with a slightly more complicated config in a new config file:

```shell
$ touch .temp.eslintrc.js
# edit file
$ node --cpu-prof ./node_modules/.bin/eslint --no-eslintrc -c .temp.eslintrc.js .
```

This will generate a `.cpuprofile` file that you can load into a flamegraph viewer. I've found that switching the graph to "left heavy" mode is best for holistically analysing the performance.
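For illustration, such a throwaway config might look like the following - the rule shown is just an example, and the contents would be whatever you're investigating:

```javascript
// .temp.eslintrc.js - a minimal throwaway config for isolating the
// rule(s) under investigation. This exact content is illustrative only.
module.exports = {
  parser: '@typescript-eslint/parser',
  plugins: ['@typescript-eslint'],
  rules: {
    '@typescript-eslint/no-unused-vars': 'error',
  },
};
```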
---
Took a CPU profile off of the latest v8 commit (56ef573):

CPU.20240728.151034.53658.0.001.cpuprofile.zip

At a glance we can quickly see that most of the parse time is spent within TS's functions -- ~20% (~2.9s) of it is spent in our […].

Here are some disorganised notes from poking through the profile:

- It's easy to see that we spend a whole lot of time in TS's APIs across rules. There's probably some opportunities here to make our rules use TS's APIs less (eg by answering more questions with the AST ahead of time).
- In our codebase the biggest rule is our internal plugin formatting rule - which is understandable considering it uses prettier to check each and every test case in the codebase to ensure its formatting. Let's ignore that for now as well.
- The second biggest block of time is spent in […]. Looking at the code, the function is specifically looking for cases like […].
  - First: if the variable doesn't have an annotation - then the type of the variable will be the type of the initialiser, so the case is impossible. We can simply exit early in that case.
  - Second: there's a number of types we can detect syntactically and can guarantee are never function types (for example […]).

#9656 is the result of adding these checks, and this is the resulting profile: […]

Sadly we can see the impact of what I mentioned above - even though we've managed to reduce the time spent in […]. That being said, it's still a good thing to do! If we can do this everywhere then we can slowly move the needle and reduce the cost of the lint run.
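A minimal sketch of the shape of those syntactic early exits - the node shapes mimic a simplified subset of the TSESTree AST, and this is illustrative only, not the actual rule code:

```javascript
// Initialiser node types that can syntactically never produce a function
// type - a literal, template string, or array expression is never callable.
const NEVER_FUNCTION_INITIALIZERS = new Set([
  'Literal',
  'TemplateLiteral',
  'ArrayExpression',
]);

// Decide whether we need to consult the (expensive) type checker for a
// `const x: SomeType = init;` variable declarator.
function needsTypeCheck(declarator) {
  // 1. No type annotation: the variable's type is exactly the
  //    initialiser's type, so the case being searched for is impossible.
  if (!declarator.id.typeAnnotation) {
    return false;
  }
  // 2. Some initialisers can be ruled out purely syntactically.
  if (
    declarator.init &&
    NEVER_FUNCTION_INITIALIZERS.has(declarator.init.type)
  ) {
    return false;
  }
  // Otherwise we have to fall back to type information.
  return true;
}
```

The point of structuring the check this way is that both exits are answered from the AST alone, so the type checker is only invoked for the cases that genuinely need it.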
---
I just wanted to have a publicly visible, centralised list of these somewhere that I can easily reference and add to without having to file an in-depth issue straight away.
## Parsing

- Stop building a `ts.Program` for non-type-aware parses (feat: remove partial type-information program #6066 - will be released in v6).
  - We only built one for the `no-unused-vars-experimental` experiment. But it's wasted effort because a single-file `ts.Program` is literally useless. We can save time by just building a `ts.SourceFile` directly.
- Cache `parserOptions.project` glob resolution for a period of time (feat(typescript-estree): cache project glob resolution #6367).
  - Glob resolution can get expensive - especially with `**` globs or a lot of matches! If we can cache this computation for a period of time, we can save a bunch of time in the type-aware parse.
  - For the persistent case we should be able to use a relatively short-lived cache.
  - For the single-run case we should be able to use an infinite cache.
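As a rough sketch of that caching scheme - `resolveProjectGlobs` is a hypothetical stand-in for the real (expensive) glob resolution, and the TTL values are arbitrary:

```javascript
// A tiny time-bounded cache: persistent runs would use a short TTL so
// config changes are eventually picked up; single runs use an infinite
// TTL because nothing can change mid-run.
const globCache = new Map(); // pattern -> { value, resolvedAt }

function resolveWithCache(pattern, resolveProjectGlobs, ttlMs, now = Date.now()) {
  const hit = globCache.get(pattern);
  if (hit && (ttlMs === Infinity || now - hit.resolvedAt < ttlMs)) {
    return hit.value; // fresh enough - skip the expensive resolution
  }
  const value = resolveProjectGlobs(pattern);
  globCache.set(pattern, { value, resolvedAt: now });
  return value;
}
```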
- […] `.ts` files instead of `.d.ts` files - then we can save a lot of time and memory.
- Investigate using a `ts.DocumentRegistry` to share resources between `ts.Program`s.
  - If two programs both include the same module (e.g. `node_modules/foo/index.d.ts`) then currently there's no sharing done and we'll get two parses and two copies of the types + `ts.SourceFile` for that module.
  - `ts.DocumentRegistry` is intended to at least share `ts.SourceFile`s - which could save a significant amount of memory for highly interdependent codebases. (It's designed to be used alongside a `ts.LanguageService`.)

## CLI Runs ("one-and-done" runs)
- Turn on `parserOptions.allowAutomaticSingleRunInference` by default for all users.
  - Currently this relies on fallible `process.argv` detection (causing issues like Support pnpm for single run detection #3811). feat: parsing session objects eslint/rfcs#102 would allow us to be infallible!

## IDE Runs ("persistent" runs)
- […] `vscode-eslint`. Examples of possible optimisations: […] `fs.stat`s.
- Use a `ts.LanguageService` to manage the program instead of managing a builder/watch program (feat(typescript-estree): add experimental mode for type-aware linting that uses a language service instead of a builder #6172).
  - `ts.LanguageService` is designed to be as efficient as possible and share as much memory as possible so that IDEs are quick and low-memory. We might be able to leverage this data structure ourselves for the same result in persistent runs.

## Rules
- `naming-convention` is a really slow lint rule. We need to profile it to see where the bottlenecks are.

### eslint-plugin-import

There's a lot of things we don't recommend using from this plugin, but perhaps we can improve its performance so people don't have to manually opt out of things to avoid perf pitfalls.
- Use `context.parserServices` to resolve an import to a module based on the file's type information.
  - Currently resolution is done with resolvers like `eslint-import-resolver-typescript`, or for more basic usecases the built-in `node` resolver. These resolvers rely on disk operations, which can slow things down.
  - If we've got type info - then we've already got all of the required info - so we can potentially help them save time!
- Use `context.parserServices` to resolve the exports for a file if there is type information.
  - Many `eslint-plugin-import` rules do out-of-band parsing of modules to determine their exports. If we've got type information then we don't need to parse - we already have that info.
- Use `context.parserServices` to quickly parse a file if there is type information.
  - If we've got type info then we could just directly fetch the `ts.SourceFile` and convert it without going through any of our other pre-parse logic, potentially saving a chunk of time.

### eslint-plugin-prettier

This is another case where we generally recommend against using this, however again we might be able to improve the status quo.

- Instead of parsing with the `@typescript-eslint/typescript-estree` version included in the package, it can just reuse the existing AST that ESLint gives if the parser versions align.

## TypeScript
One major path we're yet to leverage is building within TypeScript itself. Currently we're using the generic APIs that TS provides for general-purpose usecases, however there's an opportunity to work closely with the TS team to create a bespoke API for our usecase that more closely matches our constraints and requirements. By building into the TS codebase directly we could leverage many of the internal APIs and data structures that we cannot access externally.
## ESLint

ESLint itself is unfortunately not well designed for our stateful parsing. There's opportunity to work closely with the ESLint team to improve the design so that the core logic can better support our stateful nature, which would in turn unlock other potential performance improvements internally for us.

There are two paths for us to take with this collaboration: […]

These paths are not mutually exclusive - we can and should improve the current state whilst also looking to the future.