Commit f0494db

Update README
1 parent ab32a12 commit f0494db

2 files changed, +2 -2 lines changed


README.md

Lines changed: 1 addition & 1 deletion

@@ -40,7 +40,7 @@ dabnn_bireal18_imagenet 61809506 ns 61056865 ns 10 <--- Bi-
 dabnn_bireal18_imagenet_stem 43279353 ns 41533009 ns 14 <--- Bi-Real Net 18 with stem module (the network structure will be described in detail in the coming paper), 56.4% top-1 on ImageNet
 ```

-The following is a comparison between our dabnn and [Caffe](http://caffe.berkeleyvision.org) (full precision), [TensorFlow Lite](https://www.tensorflow.org/lite) (full precision) and [BMXNet](https://github.com/hpi-xnor/BMXNet) (binary). Note that "Conv 64", "Conv 128", "Conv 256" and "Conv 512" have the same meaning as in the benchmark above. To our surprise, BMXNet is even slower than the full-precision TensorFlow Lite, which suggests that the potential of binary neural networks was far from fully exploited before dabnn was released.
+The following is a comparison between our dabnn and [Caffe](http://caffe.berkeleyvision.org) (full precision), [TensorFlow Lite](https://www.tensorflow.org/lite) (full precision) and [BMXNet](https://github.com/hpi-xnor/BMXNet) (binary). To our surprise, BMXNet is even slower than the full-precision TensorFlow Lite, which suggests that the potential of binary neural networks was far from fully exploited before dabnn was released.

 ![Comparison](images/comparison_en.png)
README_CN.md

Lines changed: 1 addition & 1 deletion

@@ -42,7 +42,7 @@ dabnn_bireal18_imagenet 61809506 ns 61056865 ns 10 <--- B
 dabnn_bireal18_imagenet_stem 43279353 ns 41533009 ns 14 <--- Bi-Real Net 18 with stem module (to be described in the paper), 56.4% top-1 on ImageNet
 ```

-The following is a comparison on a Google Pixel 1 with [Caffe](http://caffe.berkeleyvision.org) (full precision), [TensorFlow Lite](https://www.tensorflow.org/lite) (full precision) and [BMXNet](https://github.com/hpi-xnor/BMXNet) (binary), where Conv 64, Conv 128, Conv 256 and Conv 512 have the same meaning as in the benchmark above. We were surprised to find that the existing binary inference framework BMXNet is even slower than the full-precision TensorFlow Lite, which shows that the potential of binary networks was far from fully exploited before dabnn was released.
+The following is a comparison on a Google Pixel 1 with [Caffe](http://caffe.berkeleyvision.org) (full precision), [TensorFlow Lite](https://www.tensorflow.org/lite) (full precision) and [BMXNet](https://github.com/hpi-xnor/BMXNet) (binary). We were surprised to find that the existing binary inference framework BMXNet is even slower than the full-precision TensorFlow Lite, which shows that the potential of binary networks was far from fully exploited before dabnn was released.

 ![Comparison](images/comparison_cn.png)
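Both READMEs attribute dabnn's speed advantage to binary convolutions. As a rough illustration of why binary networks can outrun full-precision ones (this is a hypothetical sketch, not dabnn's actual implementation, which uses bit-packed SIMD kernels), a dot product over {-1, +1} values can be computed with XNOR and popcount instead of floating-point multiply-adds:

```python
def pack_bits(values):
    """Pack a sequence of +1/-1 values into an integer bitmask (+1 -> bit 1, -1 -> bit 0)."""
    word = 0
    for i, v in enumerate(values):
        if v == 1:
            word |= 1 << i
    return word

def binary_dot(a_bits, b_bits, n):
    """Dot product of two n-element {-1, +1} vectors given as bitmasks."""
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask       # bit is 1 where the signs agree
    matches = bin(xnor).count("1")         # popcount of agreements
    return 2 * matches - n                 # agreements minus disagreements

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
print(binary_dot(pack_bits(a), pack_bits(b), 4))  # 0, same as sum(x * y for x, y in zip(a, b))
```

On real hardware one XNOR plus one popcount instruction replaces a whole word's worth of multiply-adds, which is the headroom the benchmark above suggests BMXNet left unexploited.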
