
Enable quant op to share quantization parameter between input and output #12408

Merged
yufenglee merged 6 commits into master from yufeng/gather_qdq on Aug 4, 2022

Conversation

@yufenglee
Member

Description:
Quantization parameters for some ops' input and output need to be the same, e.g. Gather, Transpose, Split. This PR adds support for sharing quantization parameters between input and output.
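A minimal sketch (not the PR's code; `quantize` and `gather` are illustrative helpers) of why these ops can share one set of quantization parameters: Gather, Transpose, and Split only select or rearrange values, so quantizing and then gathering yields exactly the same quantized values as gathering first and quantizing with the same scale and zero point.

```python
# Hedged illustration: data-movement ops commute with per-tensor linear
# quantization, so input and output can share (scale, zero_point).

def quantize(values, scale, zero_point):
    """Linear uint8 quantization: q = clamp(round(v / scale) + zero_point, 0, 255)."""
    return [max(0, min(255, round(v / scale) + zero_point)) for v in values]

def gather(values, indices):
    """Select elements by index, like ONNX Gather on axis 0."""
    return [values[i] for i in indices]

data = [0.5, -1.25, 3.0, 0.0]
scale, zero_point = 0.05, 128
indices = [2, 0, 2]

# Quantize-then-gather vs gather-then-quantize with shared parameters.
q_then_g = gather(quantize(data, scale, zero_point), indices)
g_then_q = quantize(gather(data, indices), scale, zero_point)
assert q_then_g == g_then_q  # identical, so one (scale, zero_point) suffices
print(q_then_g)  # [188, 138, 188]
```

Because the two orders agree, the quantizer can propagate the input tensor's parameters to the output instead of calibrating the output separately.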

@lgtm-com

lgtm-com bot commented Aug 1, 2022

This pull request introduces 2 alerts and fixes 19 when merging 455b394 into 6bb807e - view on LGTM.com

new alerts:

  • 2 for Unused local variable

fixed alerts:

  • 17 for Unused import
  • 1 for First parameter of a method is not named 'self'
  • 1 for Unused local variable

@lgtm-com

lgtm-com bot commented Aug 1, 2022

This pull request introduces 2 alerts and fixes 19 when merging d36c456 into 62922f4 - view on LGTM.com

new alerts:

  • 2 for Unused local variable

fixed alerts:

  • 17 for Unused import
  • 1 for First parameter of a method is not named 'self'
  • 1 for Unused local variable

@lgtm-com

lgtm-com bot commented Aug 1, 2022

This pull request introduces 2 alerts and fixes 19 when merging 9e084ea into 315e006 - view on LGTM.com

new alerts:

  • 2 for Unused local variable

fixed alerts:

  • 17 for Unused import
  • 1 for First parameter of a method is not named 'self'
  • 1 for Unused local variable

@lgtm-com

lgtm-com bot commented Aug 2, 2022

This pull request introduces 2 alerts and fixes 19 when merging e74de32 into b39257a - view on LGTM.com

new alerts:

  • 2 for Unused local variable

fixed alerts:

  • 17 for Unused import
  • 1 for First parameter of a method is not named 'self'
  • 1 for Unused local variable

jchen351 previously approved these changes Aug 3, 2022
Comment on lines 377 to 381
"""
Clean unused initializers including which is caused by quantizing the model.
return cleaned graph, and list of tensor names from this graph and all its subgraphes
that can not be found in this graph and its subgraphes
"""
Contributor

Suggested change
"""
Clean unused initializers including which is caused by quantizing the model.
return cleaned graph, and list of tensor names from this graph and all its subgraphes
that can not be found in this graph and its subgraphes
"""
"""Clean unused initializers including which is caused by quantizing the model.
Returns:
A cleaned graph, and a list of tensor names from this graph and all its subgraphs
that cannot be found in this graph and its subgraphs
"""

nit: formatting, wording

return ONNXModel.clean_initializers_helper(self.graph(), self.model)

@staticmethod
def clean_initializers_helper(graph, model):
Contributor

Prefer adding type annotations for new functions for clarity (for people and type checkers)
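The reviewer's suggestion, sketched as an annotated stub. In the real code the parameters would be `onnx.GraphProto` and `onnx.ModelProto`; `Any` stands in here so the sketch stays standalone, and the return contract is an assumption taken from the docstring above.

```python
# Hypothetical annotated signature for clean_initializers_helper; the
# body below is a placeholder, not the merged implementation.
from typing import Any, List, Tuple

def clean_initializers_helper(graph: Any, model: Any) -> Tuple[Any, List[str]]:
    """Return the cleaned graph plus tensor names that could not be
    resolved in this graph or any of its subgraphs."""
    requesting_tensor_names: List[str] = []  # placeholder result
    return graph, requesting_tensor_names
```

Annotations like these let type checkers (mypy, pyright) catch caller mistakes without changing runtime behavior.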


new_nodes = []
for node in graph.node:
node_2_add = node
Contributor

naming: Is there a name for node_2_add that conveys what it is better to readers?

graph_attrs = [
    attr
    for attr in node.attribute
    if attr.type == onnx.AttributeProto.GRAPH or attr.type == onnx.AttributeProto.GRAPHS
]
if len(graph_attrs) > 0:
Contributor

Suggested change
if len(graph_attrs) > 0:
if graph_attrs:

pythonic checks
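The suggestion relies on Python truthiness: empty sequences are falsy, so for a list `if graph_attrs:` behaves exactly like `if len(graph_attrs) > 0:` while reading more idiomatically (PEP 8's recommended form).

```python
# Truthiness check vs explicit length check: equivalent for lists.
graph_attrs = []
assert bool(graph_attrs) is (len(graph_attrs) > 0)  # both False when empty

graph_attrs.append("branch_graph")  # hypothetical graph attribute
assert bool(graph_attrs) is (len(graph_attrs) > 0)  # both True when non-empty
```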

if len(graph_attrs) > 0:
    kwargs = {}
    for attr in node.attribute:
        kv = {}
Contributor

@justinchuby commented Aug 3, 2022

readability: Avoid abbreviations - they add cognitive load to readers. Here and elsewhere

Do not use abbreviations that are ambiguous or unfamiliar to readers outside the project, and do not abbreviate by deleting letters within a word.

https://google.github.io/styleguide/pyguide.html#316-naming
https://google.github.io/styleguide/cppguide.html#General_Naming_Rules

(
    cleaned_sub_graph,
    sub_requesting_tensor_names,
) = ONNXModel.clean_initializers_helper(attr.g, model)
kv = {attr.name: cleaned_sub_graph}
requesting_tensor_names.update({gn: 1 for gn in sub_requesting_tensor_names})
Contributor

readability: Avoid abbreviations - they add cognitive load to readers.


generated_names = {}
generated_names.update({output_name: 1 for node in graph.node for output_name in node.output if output_name})
for gn in generated_names:
Contributor

readability: Avoid abbreviations - they add cognitive load to readers.



class QDQGather(QDQOperatorBase):
def __init__(self, onnx_quantizer, onnx_node):
Contributor

@justinchuby commented Aug 3, 2022

Prefer adding type annotations to new code for clarity and correctness. Here and elsewhere

)
self.tensors_to_quantize = []
self.tensors_to_quantize_per_channel = []
self.tensors_to_quantize = dict()
Contributor

R1735: Consider using {} instead of dict() (use-dict-literal)
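Pylint's R1735 in action: a dict literal `{}` builds the same object as `dict()`, but the literal is idiomatic and avoids a global name lookup plus a function call (and `dict` can even be shadowed, which the literal is immune to).

```python
# Literal vs constructor: same result, different bytecode.
import dis

kv_literal = {}    # preferred
kv_call = dict()   # flagged by pylint R1735
assert kv_literal == kv_call == {}

# The literal compiles to BUILD_MAP; dict() needs a name lookup and a call.
literal_ops = [ins.opname for ins in dis.get_instructions("{}")]
call_ops = [ins.opname for ins in dis.get_instructions("dict()")]
assert "BUILD_MAP" in literal_ops
assert any("CALL" in op for op in call_ops)
```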

)

def quantize_tensor(self, tensor_name):
def _is_tensor_quantizable(self, tensor_name):
Contributor

Suggested change
def _is_tensor_quantizable(self, tensor_name):
def _is_tensor_quantizable(self, tensor_name: str) -> bool:

Member Author

will have a separate PR to update the type annotations for all functions under quantization.


def _quantize_sharing_param_tensors(self):
    while self.tensors_to_quantize:
        has_update = False
Contributor

nit: naming: updated, has_updated

)
self.quantized_value_map[tensor_name] = quantized_value
if not has_update:
raise ValueError("There is acyclic dependence in quantization parameter shared mode")
Contributor

Just making sure: do you mean a cyclic dependency (a cycle)?

Contributor

It'd be helpful to also include an actionable suggestion in the error message
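The loop under discussion can be sketched as a fixed-point pass (names here are illustrative, not the PR's): each tensor that shares parameters waits until its source tensor's parameters are known; if a full pass over the pending set resolves nothing, the remaining dependencies must form a cycle, which is exactly when the error should fire, ideally naming the offending tensors as the reviewer suggests.

```python
# Hedged sketch of cycle detection via a no-progress fixed-point pass.

def resolve_shared_params(pending, known_params):
    """pending: {tensor_name: source_tensor_it_copies_params_from}."""
    while pending:
        has_updated = False
        for tensor, source in list(pending.items()):
            if source in known_params:
                # Propagate the source's (scale, zero_point) to this tensor.
                known_params[tensor] = known_params[source]
                del pending[tensor]
                has_updated = True
        if not has_updated:
            # A full pass resolved nothing: the rest depend on each other.
            raise ValueError(
                "Cyclic dependency in shared quantization parameters: "
                + ", ".join(sorted(pending))
            )
    return known_params

params = resolve_shared_params(
    {"gather_out": "gather_in", "transpose_out": "gather_out"},
    {"gather_in": (0.05, 128)},
)
print(params["transpose_out"])  # (0.05, 128)
```

Listing the unresolved tensor names in the message gives users the actionable detail requested above.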

@lgtm-com

lgtm-com bot commented Aug 3, 2022

This pull request fixes 20 alerts when merging 2b5df48 into 77cab7a - view on LGTM.com

fixed alerts:

  • 17 for Unused import
  • 2 for Unused local variable
  • 1 for First parameter of a method is not named 'self'

"""Find out if a node exists in a graph or a node is in the
new set of nodes created during quantization.

Return:
Contributor

# Quantize the input
initializer = find_by_name(tensor_name, self.model.initializer())
Suggested change
if initializer is not None:
if initializer:
Contributor

justinchuby previously approved these changes Aug 3, 2022
Contributor

@justinchuby left a comment

thanks!

@lgtm-com

lgtm-com bot commented Aug 3, 2022

This pull request fixes 20 alerts when merging a897e79 into a3de1bb - view on LGTM.com

fixed alerts:

  • 17 for Unused import
  • 2 for Unused local variable
  • 1 for First parameter of a method is not named 'self'

@yufenglee yufenglee merged commit ac10f33 into master Aug 4, 2022
@yufenglee yufenglee deleted the yufeng/gather_qdq branch August 4, 2022 04:25