Why is it faster to perform float by float matrix multiplication compared to int by int?

Date: 2023-09-18


Problem description

Having two int matrices A and B, with more than 1000 rows and 10K columns, I often need to convert them to float matrices to gain speedup (4x or more).
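For illustration, the conversion described here might look like the following in Eigen; this sketch is not part of the original question, and the helper name is made up, only the .cast<float>() calls matter:

    #include <Eigen/Core>

    // Illustrative helper: multiply two integer matrices by first casting them
    // to float, trading exact integer arithmetic for the faster float product.
    Eigen::MatrixXf mult_via_float(const Eigen::MatrixXi& A, const Eigen::MatrixXi& B)
    {
        // .cast<float>() produces float copies; for large matrices the cast cost
        // is small next to the O(n^3) cost of the product itself.
        return A.cast<float>() * B.cast<float>();
    }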

I'm wondering why this is the case. I realize that there is a lot of optimization and vectorization, such as AVX, going on with float matrix multiplication. But there are also instructions such as AVX2 for integers (if I'm not mistaken). And can't one make use of SSE and AVX for integers?

Why isn't there a heuristic underneath matrix algebra libraries such as Numpy or Eigen to capture this and perform integer matrix multiplication faster just like float?

About the accepted answer: while @sascha's answer is very informative and relevant, @chatz's answer gives the actual reason why the int by int multiplication is slow, irrespective of whether BLAS integer matrix operations exist.

Accepted answer

If you compile these two simple functions, which essentially just calculate a product (using the Eigen library),

    #include <Eigen/Core>

    // Integer version: the packed multiplies compile to vpmulld with AVX2.
    int mult_int(const Eigen::MatrixXi& A, Eigen::MatrixXi& B)
    {
        Eigen::MatrixXi C = A * B;
        return C(0, 0);
    }

    // Float version: the packed multiplies compile to vmulps.
    int mult_float(const Eigen::MatrixXf& A, Eigen::MatrixXf& B)
    {
        Eigen::MatrixXf C = A * B;
        return C(0, 0);
    }

using the flags -mavx2 -S -O3, you will see very similar assembler code for the integer and the float version. The main difference, however, is that vpmulld has 2-3 times the latency and just 1/2 or 1/4 the throughput of vmulps (on recent Intel architectures).
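The instructions named above correspond to intrinsics that can be inspected in isolation. A minimal sketch (not part of the original answer), assuming an AVX2-capable compiler invoked with -mavx2, with made-up function names:

    #include <immintrin.h>

    // Eight packed 32-bit integer multiplies: compiles to vpmulld (requires AVX2).
    __m256i mul_int_lanes(__m256i a, __m256i b)
    {
        return _mm256_mullo_epi32(a, b);
    }

    // Eight packed single-precision multiplies: compiles to vmulps (requires AVX).
    __m256 mul_float_lanes(__m256 a, __m256 b)
    {
        return _mm256_mul_ps(a, b);
    }

Looking up these two intrinsics in the guide cited below shows the latency and throughput gap described above.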

Reference: Intel Intrinsics Guide. "Throughput" here means the reciprocal throughput, i.e., how many clock cycles are used per operation if no latencies occur (somewhat simplified).
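To see the effect end to end rather than per instruction, one could time both products directly. The following is only a sketch, not part of the original answer; the matrix size, the timing approach, and the names are arbitrary choices:

    #include <Eigen/Core>
    #include <chrono>
    #include <iostream>

    // Time one dense product C = A * B and return the elapsed time in seconds.
    template <class Mat>
    double time_product(const Mat& A, const Mat& B)
    {
        auto t0 = std::chrono::steady_clock::now();
        Mat C = A * B;
        auto t1 = std::chrono::steady_clock::now();
        volatile auto keep = C(0, 0);  // keep the product from being optimized away
        (void)keep;
        return std::chrono::duration<double>(t1 - t0).count();
    }

    int main()
    {
        const int n = 1000;
        Eigen::MatrixXi Ai = Eigen::MatrixXi::Random(n, n);
        Eigen::MatrixXi Bi = Eigen::MatrixXi::Random(n, n);
        Eigen::MatrixXf Af = Ai.cast<float>();
        Eigen::MatrixXf Bf = Bi.cast<float>();

        std::cout << "int   product: " << time_product(Ai, Bi) << " s\n"
                  << "float product: " << time_product(Af, Bf) << " s\n";
    }

Built with the same -mavx2 -O3 flags, the float product typically comes out several times faster, in line with the 4x figure mentioned in the question.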
