Support bool tensor #11

Merged
Honry merged 1 commit into stable-diffusion from support-uint8
May 18, 2023

Conversation

Honry (Owner) commented May 17, 2023

No description provided.

Honry (Owner, Author) commented May 17, 2023

@fdwr, @BruceDai, PTAL, thanks!


BruceDai commented:
LGTM, thanks

```cpp
std::string operand_type;
switch (to_type) {
  case ONNX_NAMESPACE::TensorProto_DataType_BOOL:
    operand_type = "uint8";
```
fdwr commented May 17, 2023:

What do you all think about adding a boolean data type to the spec? I used uint8 because there was no bool8 in the WebNN spec, and uint8 is what DirectML uses for logical operators like Xor and ElementwiseIf (because there's really no point at the low level for a distinction between them). This is not a blocking comment for the CR (please continue), just a discussion point, as I'm content either way.

Honry (Owner, Author) replied:

Yeah, I agree, this is worth a discussion. Why not just create an issue on the spec? :)

fdwr replied:

> Yeah, I agree, this is worth a discussion. Why not just create an issue on the spec? :)

Just thought I'd ask at least one other person their opinion first before creating more work for the spec authors 😅.

```cpp
    break;
  case ONNX_NAMESPACE::TensorProto_DataType_FLOAT16:
    view = emscripten::val{emscripten::typed_memory_view(
        size / sizeof(uint16_t), reinterpret_cast<const uint16_t*>(dest))};
```
fdwr commented May 17, 2023:
Consider also adding int32/uint32 sooner than later, because it can be used in Slice and Gather (Tind : tensor(int32), tensor(int64)). It need not be in this CR, but I figured since we're updating this switch statement right now anyway?

https://github.com/onnx/onnx/blob/main/docs/Operators.md#inputs-3---5-1
https://github.com/onnx/onnx/blob/main/docs/Operators.md#outputs-53

Honry (Owner, Author) replied:
Yeah sure, I will add int32/uint32 later.
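The mapping under discussion can be sketched as a small self-contained helper. This is only an illustration, not the actual code in this PR: the enum below is a hypothetical stand-in for `ONNX_NAMESPACE::TensorProto_DataType`, and `ToWebnnOperandType` is a made-up name. It shows bool lowered to `"uint8"` (since WebNN has no boolean operand type) plus the int32/uint32 cases suggested above.

```cpp
#include <string>

// Hypothetical stand-in for ONNX_NAMESPACE::TensorProto_DataType;
// the real values come from the ONNX protobuf headers.
enum TensorDataType { kBool, kFloat16, kFloat, kInt32, kUint32 };

// Map an ONNX tensor element type to a WebNN operand type string.
// WebNN has no bool8, so bool tensors are represented as uint8.
std::string ToWebnnOperandType(TensorDataType to_type) {
  switch (to_type) {
    case kBool:
      return "uint8";  // no boolean operand type in the WebNN spec (yet)
    case kFloat16:
      return "float16";
    case kFloat:
      return "float32";
    case kInt32:
      return "int32";
    case kUint32:
      return "uint32";
    default:
      return "";  // unsupported type; real code would report an error
  }
}
```

Keeping the mapping in one switch makes it easy to extend later, which is why adding int32/uint32 now is cheap.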

fdwr left a review comment:
Thanks Honry. Just considerations, nothing blocking.

@Honry Honry merged commit 384710a into stable-diffusion May 18, 2023
Honry pushed a commit that referenced this pull request Aug 7, 2024
… transient connection exceptions. (microsoft#21612)

### Description
Improve the docker commands so that docker image layer caching works. This makes docker builds faster and more stable. So far, the A100 pool's system disk has been too small to use the docker cache. We won't use the pipeline cache for docker images, and some legacy code is removed.

### Motivation and Context
The pipeline often hits an exception such as:
```
64.58 + curl https://nodejs.org/dist/v18.17.1/node-v18.17.1-linux-x64.tar.gz -sSL --retry 5 --retry-delay 30 --create-dirs -o /tmp/src/node-v18.17.1-linux-x64.tar.gz --fail
286.4 curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)
```
because the Onnxruntime pipelines have been sending too many requests to download Node.js during docker builds. This is the major reason the pipeline is failing now.

In fact, docker image layer caching has never worked. We can always see that the scripts are still running:
```
#9 [3/5] RUN cd /tmp/scripts && /tmp/scripts/install_centos.sh && /tmp/scripts/install_deps.sh && rm -rf /tmp/scripts
#9 0.234 /bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
#9 0.235 /bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
#9 0.235 /tmp/scripts/install_centos.sh: line 1: !/bin/bash: No such file or directory
#9 0.235 ++ '[' '!' -f /etc/yum.repos.d/microsoft-prod.repo ']'
#9 0.236 +++ tr -dc 0-9.
#9 0.236 +++ cut -d . -f1
#9 0.238 ++ os_major_version=8
....
#9 60.41 + curl https://nodejs.org/dist/v18.17.1/node-v18.17.1-linux-x64.tar.gz -sSL --retry 5 --retry-delay 30 --create-dirs -o /tmp/src/node-v18.17.1-linux-x64.tar.gz --fail
#9 60.59 + return 0
...
```

This PR improves the docker commands so that image layer caching works. Thus, CI won't send so many redundant requests to download Node.js.
```
#9 [2/5] ADD scripts /tmp/scripts
#9 CACHED

#10 [3/5] RUN cd /tmp/scripts && /tmp/scripts/install_centos.sh && /tmp/scripts/install_deps.sh && rm -rf /tmp/scripts
#10 CACHED

#11 [4/5] RUN adduser --uid 1000 onnxruntimedev
#11 CACHED

#12 [5/5] WORKDIR /home/onnxruntimedev
#12 CACHED
```

### Reference
https://docs.docker.com/build/drivers/

---------

Co-authored-by: Yi Zhang <[email protected]>
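The caching behavior described above hinges on layer ordering: the scripts are added in their own layer before the RUN that executes them, so an unchanged `scripts/` directory yields cache hits for every subsequent layer. A minimal Dockerfile sketch reconstructed from the build log above (the base image is elided, as it is not shown in the log):

```dockerfile
FROM <base-image>  # placeholder; the actual base image is not shown in the log

# Copy the install scripts in their own layer; if scripts/ is unchanged,
# this ADD and every later layer can be served from cache.
ADD scripts /tmp/scripts

# Re-runs only when /tmp/scripts changed, so Node.js is not re-downloaded
# on every CI build.
RUN cd /tmp/scripts && /tmp/scripts/install_centos.sh && \
    /tmp/scripts/install_deps.sh && rm -rf /tmp/scripts

RUN adduser --uid 1000 onnxruntimedev
WORKDIR /home/onnxruntimedev
```

Docker invalidates a layer (and all layers after it) only when its inputs change, which is why the `#10 CACHED` through `#12 CACHED` lines appear once the ordering is right.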
Honry pushed a commit that referenced this pull request Nov 6, 2025
(1) Do not install cudnn9-cuda-12, since the base image already includes cuDNN, so there is no need to install it again.
(2) Upgrade Ubuntu to 24.04.

This fixes the following error:
```
#11 315.1  cudnn9-cuda-12 : Depends: cudnn9-cuda-12-9 (>= 9.10.2.21) but it is not going to be installed
#11 315.1 E: Unable to correct problems, you have held broken packages.
```
