Suppose a1, b1, c1, and d1 point to heap memory, and my numerical code has the following core loop.
const int n = 100000;
for (int j = 0; j < n; j++) {
a1[j] += b1[j];
c1[j] += d1[j];
}
This loop is executed 10,000 times via another outer for loop. To speed it up, I changed the code to:
for (int j = 0; j < n; j++) {
a1[j] += b1[j];
}
for (int j = 0; j < n; j++) {
c1[j] += d1[j];
}
Compiled on Microsoft Visual C++ 10.0 with full optimization and SSE2 enabled for 32-bit on an Intel Core 2 Duo (x64), the first example takes 5.5 seconds and the double-loop example takes only 1.9 seconds.
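For reference, the build described above should correspond to something like the following MSVC command line (my reconstruction; the post only names the settings, not the exact command):
cl /Ox /arch:SSE2 main.cpp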
Disassembly for the first loop basically looks like this (this block is repeated about five times in the full program):
movsd xmm0,mmword ptr [edx+18h]
addsd xmm0,mmword ptr [ecx+20h]
movsd mmword ptr [ecx+20h],xmm0
movsd xmm0,mmword ptr [esi+10h]
addsd xmm0,mmword ptr [eax+30h]
movsd mmword ptr [eax+30h],xmm0
movsd xmm0,mmword ptr [edx+20h]
addsd xmm0,mmword ptr [ecx+28h]
movsd mmword ptr [ecx+28h],xmm0
movsd xmm0,mmword ptr [esi+18h]
addsd xmm0,mmword ptr [eax+38h]
Each loop of the double loop example produces this code (the following block is repeated about three times):
addsd xmm0,mmword ptr [eax+28h]
movsd mmword ptr [eax+28h],xmm0
movsd xmm0,mmword ptr [ecx+20h]
addsd xmm0,mmword ptr [eax+30h]
movsd mmword ptr [eax+30h],xmm0
movsd xmm0,mmword ptr [ecx+28h]
addsd xmm0,mmword ptr [eax+38h]
movsd mmword ptr [eax+38h],xmm0
movsd xmm0,mmword ptr [ecx+30h]
addsd xmm0,mmword ptr [eax+40h]
movsd mmword ptr [eax+40h],xmm0
The original question turned out to be of little relevance, as the behavior depends heavily on the sizes of the arrays (n) and the CPU cache. So, if there is further interest, I rephrase the question:
Could you provide some solid insight into the details that lead to the different cache behaviors as illustrated by the five regions on the following graph?
It might also be interesting to point out the differences between CPU/cache architectures by providing a similar graph for those CPUs.
Here is the full code. It uses TBB tick_count for higher-resolution timing, which can be disabled by not defining the TBB_TIMING macro:
#include <iostream>
#include <iomanip>
#include <cmath>
#include <string>
#include <algorithm> // std::max
#include <cstdio>    // freopen
//#define TBB_TIMING
#ifdef TBB_TIMING
#include <tbb/tick_count.h>
using tbb::tick_count;
#else
#include <time.h>
#endif
using namespace std;
//#define preallocate_memory new_cont
enum { new_cont, new_sep };
double *a1, *b1, *c1, *d1;
void allo(int cont, int n)
{
switch(cont) {
case new_cont:
a1 = new double[n*4];
b1 = a1 + n;
c1 = b1 + n;
d1 = c1 + n;
break;
case new_sep:
a1 = new double[n];
b1 = new double[n];
c1 = new double[n];
d1 = new double[n];
break;
}
for (int i = 0; i < n; i++) {
a1[i] = 1.0;
d1[i] = 1.0;
c1[i] = 1.0;
b1[i] = 1.0;
}
}
void ff(int cont)
{
switch(cont){
case new_sep:
delete[] b1;
delete[] c1;
delete[] d1;
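// fall through: in the new_cont case only a1 owns an allocation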
case new_cont:
delete[] a1;
}
}
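// Runs m outer iterations over arrays of length n, either fused
// (loops == 1) or as two separate loops; returns achieved FLOP/s.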
double plain(int n, int m, int cont, int loops)
{
#ifndef preallocate_memory
allo(cont,n);
#endif
#ifdef TBB_TIMING
tick_count t0 = tick_count::now();
#else
clock_t start = clock();
#endif
if (loops == 1) {
for (int i = 0; i < m; i++) {
for (int j = 0; j < n; j++){
a1[j] += b1[j];
c1[j] += d1[j];
}
}
} else {
for (int i = 0; i < m; i++) {
for (int j = 0; j < n; j++) {
a1[j] += b1[j];
}
for (int j = 0; j < n; j++) {
c1[j] += d1[j];
}
}
}
double ret;
#ifdef TBB_TIMING
tick_count t1 = tick_count::now();
ret = 2.0*double(n)*double(m)/(t1-t0).seconds();
#else
clock_t end = clock();
ret = 2.0*double(n)*double(m)/(double)(end - start) *double(CLOCKS_PER_SEC);
#endif
#ifndef preallocate_memory
ff(cont);
#endif
return ret;
}
int main()
{
freopen("C:\\test.csv", "w", stdout); // "\\" so "\t" is not parsed as a tab escape
const char *s = " ";
string na[2] ={"new_cont", "new_sep"};
cout << "n";
for (int j = 0; j < 2; j++)
for (int i = 1; i <= 2; i++)
#ifdef preallocate_memory
cout << s << i << "_loops_" << na[preallocate_memory];
#else
cout << s << i << "_loops_" << na[j];
#endif
cout << endl;
long long nmax = 1000000;
#ifdef preallocate_memory
allo(preallocate_memory, nmax);
#endif
for (long long n = 1L; n < nmax; n = max(n+1, (long long)(n*1.2)))
{
const long long m = 10000000/n;
cout << n;
for (int j = 0; j < 2; j++)
for (int i = 1; i <= 2; i++)
cout << s << plain(n, m, j, i);
cout << endl;
}
}
It shows FLOP/s for different values of n. (Each pass over the arrays performs 2·n double-precision additions, hence the 2.0·n·m in the FLOP/s formula above.)
Upon further analysis, I believe this is (at least partially) caused by the data alignment of the four pointers, which leads to some level of cache bank/way conflicts.
If I've guessed correctly on how you are allocating your arrays, they are likely to be aligned to the page line.
This means that all your accesses in each loop will fall into the same cache set. However, Intel processors have had 8-way L1 cache associativity for a while. But in reality, the performance isn't completely uniform: accessing 4 ways is still slower than, say, 2 ways.
EDIT: It does in fact look like you are allocating all the arrays separately. Usually, when such large allocations are requested, the allocator will request fresh pages from the OS. Therefore, there is a high chance that large allocations will appear at the same offset from a page boundary.
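To make the set-conflict argument concrete, here is a minimal sketch of how an address maps to an L1 set, assuming a typical Core 2 L1D geometry (32 KB, 8-way, 64-byte lines); the geometry and the example addresses are my assumptions, mirroring the addresses printed further below:
#include <cstdint>
#include <iostream>
using namespace std;
int main()
{
    // Assumed Core 2 L1D geometry: 32 KB, 8-way, 64-byte lines.
    // 32 KB / 8 ways = 4 KB per way, so address bits [11:6] pick the set.
    const uintptr_t line_size = 64;
    const uintptr_t num_sets = 64;
    // Hypothetical bases with the same 0x020 page offset, mimicking the
    // separately allocated addresses printed by the benchmark below.
    const uintptr_t bases[4] = { 0x00600020, 0x006D0020, 0x007A0020, 0x00870020 };
    for (int i = 0; i < 4; i++)
        cout << hex << bases[i] << " -> set "
             << dec << ((bases[i] / line_size) % num_sets) << endl;
    // All four land in set 0; as j advances, the four streams always map to
    // the same set as each other - the conflict pattern described above.
    return 0;
}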
Here's the test code:
#include <cstdlib>   // malloc, system
#include <cstring>   // memset
#include <ctime>     // clock
#include <iostream>
using namespace std;
int main(){
const int n = 100000;
#ifdef ALLOCATE_SEPERATE
double *a1 = (double*)malloc(n * sizeof(double));
double *b1 = (double*)malloc(n * sizeof(double));
double *c1 = (double*)malloc(n * sizeof(double));
double *d1 = (double*)malloc(n * sizeof(double));
#else
double *a1 = (double*)malloc(n * sizeof(double) * 4);
double *b1 = a1 + n;
double *c1 = b1 + n;
double *d1 = c1 + n;
#endif
// Zero the data to prevent any chance of denormals.
memset(a1,0,n * sizeof(double));
memset(b1,0,n * sizeof(double));
memset(c1,0,n * sizeof(double));
memset(d1,0,n * sizeof(double));
// Print the addresses
cout << a1 << endl;
cout << b1 << endl;
cout << c1 << endl;
cout << d1 << endl;
clock_t start = clock();
int c = 0;
while (c++ < 10000){
#if ONE_LOOP
for(int j=0;j<n;j++){
a1[j] += b1[j];
c1[j] += d1[j];
}
#else
for(int j=0;j<n;j++){
a1[j] += b1[j];
}
for(int j=0;j<n;j++){
c1[j] += d1[j];
}
#endif
}
clock_t end = clock();
cout << "seconds = " << (double)(end - start) / CLOCKS_PER_SEC << endl;
system("pause");
return 0;
}
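To reproduce the four configurations reported below, the two macros are toggled at compile time; with the MSVC toolchain this could look like the following (the exact command lines are my assumption, not from the original):
cl /Ox /DALLOCATE_SEPERATE /DONE_LOOP=1 test.cpp
cl /Ox /DALLOCATE_SEPERATE test.cpp
cl /Ox /DONE_LOOP=1 test.cpp
cl /Ox test.cpp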
Benchmark Results:
2 x Intel Xeon X5482 Harpertown @ 3.2 GHz:
#define ALLOCATE_SEPERATE
#define ONE_LOOP
00600020
006D0020
007A0020
00870020
seconds = 6.206
#define ALLOCATE_SEPERATE
//#define ONE_LOOP
005E0020
006B0020
00780020
00850020
seconds = 2.116
//#define ALLOCATE_SEPERATE
#define ONE_LOOP
00570020
00633520
006F6A20
007B9F20
seconds = 1.894
//#define ALLOCATE_SEPERATE
//#define ONE_LOOP
008C0020
00983520
00A46A20
00B09F20
seconds = 1.993
Observations:
6.206 seconds with one loop and 2.116 seconds with two loops. This reproduces the OP's results exactly.
In the first two tests, the arrays are allocated separately. You'll notice that they all have the same alignment relative to the page.
In the last two tests, the arrays are packed together to break that alignment. Here you'll notice that both loops are faster. Furthermore, the second (double-loop) version is now the slower one, as you would normally expect.
As @Stephen Cannon points out in the comments, it is very likely that this alignment causes false aliasing in the load/store units or the cache. I Googled around and found that Intel actually has a hardware counter for partial address aliasing stalls:
http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/~amplifierxe/pmw_dp/events/partial_address_alias.html
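A quick way to see why the separately allocated case is prone to this (a sketch based on my understanding of 4K aliasing, not on anything in the original post): the load/store unit disambiguates memory operations using only the low 12 bits of the address, so two accesses whose addresses differ by a multiple of 4096 look like a conflict even when they are not.
#include <cstdint>
#include <iostream>
using namespace std;
// True when two distinct addresses match in their low 12 bits - the
// partial comparison behind 4K / false aliasing stalls on these CPUs.
bool partial_alias(uintptr_t a, uintptr_t b)
{
    return a != b && ((a ^ b) & 0xFFF) == 0;
}
int main()
{
    // Separate allocations: every base shares the 0x020 page offset.
    cout << partial_alias(0x00600020, 0x006D0020) << endl; // prints 1
    // Packed allocation: page offsets differ, so no false conflict.
    cout << partial_alias(0x00570020, 0x00633520) << endl; // prints 0
    return 0;
}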
Region 1:
This one is easy. The dataset is so small that the performance is dominated by overhead like looping and branching.
Region 2:
Here, as the data size increases, the relative overhead goes down and the performance "saturates". Two loops are slower here because they have twice as much loop and branch overhead.
I'm not sure exactly what's going on here... Alignment could still have an effect; Agner Fog mentions cache bank conflicts as a possibility. (That link is about Sandy Bridge, but the idea should still apply to Core 2.)
Region 3:
At this point, the data no longer fits in the L1 cache. So performance is capped by the L1 <-> L2 cache bandwidth.
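As a rough sanity check on where regions 3 and 5 should begin (a sketch; the cache sizes are assumed typical values for this hardware, not measurements from the post), the working set is 4 arrays × 8 bytes × n:
#include <iostream>
using namespace std;
int main()
{
    // Assumed cache sizes for this generation: 32 KB L1D per core,
    // 6 MB L2 shared per core pair on Harpertown.
    const long long l1d = 32LL * 1024;
    const long long l2 = 6LL * 1024 * 1024;
    const long long bytes_per_n = 4 * sizeof(double); // 4 arrays of doubles
    cout << "exceeds L1 around n = " << l1d / bytes_per_n << endl; // ~1024
    cout << "exceeds L2 around n = " << l2 / bytes_per_n << endl;  // ~196608
    return 0;
}
These thresholds line up with where the graph's plateaus end.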
Region 4:
The performance drop for the single loop is what we are observing here. As mentioned, this is due to the alignment that (most likely) causes false aliasing stalls in the processor's load/store units.
However, in order for false aliasing to occur, there must be a large enough stride between the datasets. This is why you don't see this in region 3.
Region 5:
At this point, nothing fits in the cache. So you're bound by memory bandwidth.