Help prevent DOS attacks on graphql servers #2549
Merged
Conversation
bbakerman (Member, Author) commented on Sep 15, 2021
```java
public ParserOptions getParserOptions() {
    return parserOptions;
}
```
@andimarek - we will want to be careful here with our Nadel hacks.
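The getter above exposes the new `ParserOptions` that carry the token ceiling. A minimal sketch of wiring it up, assuming the builder-style `ParserOptions` API this PR introduces (`newParserOptions()`, `maxTokens`, `setDefaultParserOptions`); the exact names and the default limit may differ between graphql-java versions:
```java
import graphql.parser.ParserOptions;

public class ParserLimits {
    public static void main(String[] args) {
        // Sketch: cap the number of tokens the parser will accept before
        // it cancels parsing. 15_000 mirrors the default this PR discusses,
        // but treat the number as an assumption, not a documented constant.
        ParserOptions options = ParserOptions.newParserOptions()
                .maxTokens(15_000) // reject documents with more tokens than this
                .build();

        // Install it as the JVM-wide default used when no explicit options
        // are supplied to a Parser call.
        ParserOptions.setDefaultParserOptions(options);
    }
}
```
Installing the options as the JVM-wide default means every parse is covered, including ones triggered indirectly, which is why subclass hooks around `getParserOptions()` (as in the Nadel case above) need care.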
jord1e (Contributor) reviewed on Sep 16, 2021 and left a comment
Saw this
Co-authored-by: Jordie <[email protected]>
andimarek approved these changes on Sep 18, 2021
IvanGoncharov added a commit to IvanGoncharov/graphql-js that referenced this pull request on Jul 28, 2022
Motivation: Parser CPU and memory usage is linear in the number of tokens in a
document; however, in extreme cases it becomes quadratic due to memory exhaustion.
On my machine this happens on queries with 2k tokens.
For example:
```
{ a a <repeat 2k times> a }
```
It takes 741ms on my machine.
But if we create a document of the same size with a smaller number of
tokens, it is a lot faster.
Example:
```
{ a(arg: "a <repeat 2k times> a") }
```
Now it takes only 17ms to process, which is 43 times faster.
That means that if we limit document size, we have to make the limit small,
since it takes only two bytes to create a token, e.g. ` a`.
But that would hurt legitimate documents that contain long tokens
(comments, descriptions, strings, long names, etc.).
That's why this PR adds a mechanism to limit the number of tokens in a
parsed document.
The exact same mechanism is implemented in graphql-java, see:
graphql-java/graphql-java#2549
I also tried the alternative approach of counting nodes, and it gives a
slightly better approximation of how many resources would be consumed.
However, compared to tokens, AST nodes are an implementation detail of graphql-js,
so they are impossible to replicate in other implementations (e.g. to count
this number on a client).
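To make the mechanism described in this commit message concrete, here is a self-contained sketch of the idea: count tokens while lexing and cancel as soon as a configurable ceiling is crossed, so a hostile document costs at most O(maxTokens) work. The tokenizer, class, and method names below are hypothetical stand-ins; the real implementations hook the counter into their generated lexers.
```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MaxTokenGuard {
    // Crude token shapes for illustration: punctuation, quoted strings,
    // identifiers, then any other non-space character.
    private static final Pattern TOKEN =
            Pattern.compile("[{}()]|\"[^\"]*\"|[A-Za-z_][A-Za-z0-9_]*|\\S");

    static int checkTokenCount(String document, int maxTokens) {
        Matcher m = TOKEN.matcher(document);
        int count = 0;
        while (m.find()) {
            if (++count > maxTokens) {
                // Fail fast: the attacker pays for maxTokens tokens at most,
                // not for the full document.
                throw new IllegalStateException(
                        "More than " + maxTokens + " parse tokens");
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // A small legitimate document passes the check.
        System.out.println(checkTokenCount("{ a(arg: \"hello\") }", 100));

        // The pathological document from the commit message: ~2k one-letter fields.
        String attack = "{ " + "a ".repeat(2000) + "}";
        try {
            checkTokenCount(attack, 1_000);
        } catch (IllegalStateException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
```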
IvanGoncharov added a second commit to IvanGoncharov/graphql-js that referenced this pull request on Jul 28, 2022 (same commit message as above).
IvanGoncharov added a commit to graphql/graphql-js that referenced this pull request on Aug 8, 2022
* parser: limit maximum number of tokens
(Same commit message as above.)
* Apply suggestions from code review
Co-authored-by: Yaacov Rydzinski <[email protected]>
IvanGoncharov added a commit to IvanGoncharov/graphql-js that referenced this pull request on Aug 16, 2022
Backport of graphql#3684 (parser: limit maximum number of tokens; same commit message and code-review credits as above).
IvanGoncharov added a second commit to IvanGoncharov/graphql-js that referenced this pull request on Aug 16, 2022, with the same backport message.
PR for limiting parsing tokens
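Putting it together, a hedged end-to-end sketch of what this PR enables in graphql-java: install a default token ceiling, and the pathological query from the commit message is rejected at parse time rather than exhausting memory. The `Parser` and `ParserOptions` calls follow the API added here, but the exact exception type thrown on cancellation may vary by version, so the sketch catches `RuntimeException`.
```java
import graphql.parser.Parser;
import graphql.parser.ParserOptions;

public class RejectHostileDocument {
    public static void main(String[] args) {
        // Install a deliberately low ceiling for the demonstration.
        ParserOptions.setDefaultParserOptions(
                ParserOptions.newParserOptions().maxTokens(1_000).build());

        // ~2k tokens of two bytes each, as in the commit message.
        String attack = "{ " + "a ".repeat(2000) + "}";
        try {
            new Parser().parseDocument(attack);
        } catch (RuntimeException e) {
            // graphql-java cancels the parse once the ceiling is crossed.
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
```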