
        Segmentation fault before main() when using glut and std::string?

        Date: 2023-09-18


                  Problem Description

                  On 64-bit Ubuntu 14.04 LTS, I am trying to compile a simple OpenGL program that uses glut. I am getting a segmentation fault (SIGSEGV) before any line of code is executed in main, even in a very stripped-down test program. What could cause this?

                  My command line:

                  g++ -Wall -g main.cpp -lglut -lGL -lGLU -o main

                  My simple test case:

                  #include <GL/gl.h>                                                                                                                                         
                  #include <GL/glu.h>
                  #include <GL/glut.h>
                  
                  #include <string>
                  #include <cstdio>
                  
                  int main(int argc, char** argv){
                      printf("Started
                  ");                                                                                                   
                      std::string dummy = "hello";
                      glutInit(&argc, argv);
                      return 0;
                  }
                  

                  When I run the program, the printf at the beginning of main doesn't get to execute before the segfault. Under GDB, I get this backtrace after the segfault:

                  #0  0x0000000000000000 in ?? ()
                  #1  0x00007ffff3488291 in init () at dlerror.c:177
                  #2  0x00007ffff34886d7 in _dlerror_run (operate=operate@entry=0x7ffff3488130 <dlsym_doit>, args=args@entry=0x7fffffffddf0) at dlerror.c:129
                  #3  0x00007ffff3488198 in __dlsym (handle=<optimized out>, name=<optimized out>) at dlsym.c:70
                  #4  0x00007ffff702628e in ?? () from /usr/lib/nvidia-352/libGL.so.1
                  #5  0x00007ffff6fd1aa7 in ?? () from /usr/lib/nvidia-352/libGL.so.1
                  #6  0x00007ffff7dea0fd in call_init (l=0x7ffff7fd39c8, argc=argc@entry=1, argv=argv@entry=0x7fffffffdf48, env=env@entry=0x7fffffffdf58) at dl-init.c:64
                  #7  0x00007ffff7dea223 in call_init (env=<optimized out>, argv=<optimized out>, argc=<optimized out>, l=<optimized out>) at dl-init.c:36
                  #8  _dl_init (main_map=0x7ffff7ffe1c8, argc=1, argv=0x7fffffffdf48, env=0x7fffffffdf58) at dl-init.c:126
                  #9  0x00007ffff7ddb30a in _dl_start_user () from /lib64/ld-linux-x86-64.so.2
                  #10 0x0000000000000001 in ?? ()
                  #11 0x00007fffffffe2ba in ?? ()
                  #12 0x0000000000000000 in ?? ()
                  

                  And here's the kicker: if I comment out either the glutInit line or the std::string dummy line, the program compiles and runs just fine. Until I noticed this, I assumed there was something wrong with my GLUT (though I have tried the original program I'm debugging, which I stripped down to this example, on several systems with no success). I am at a bit of a loss here.

                  I have tried gmbeard's suggestions. Turning off optimizations (-O0) didn't change anything about the call stack produced by gdb.

                  Running ldd on the program gives me:

                  linux-vdso.so.1 =>  (0x00007ffe3b7f1000)
                  libglut.so.3 => /usr/lib/x86_64-linux-gnu/libglut.so.3 (0x00007f04978fa000)
                  libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f04975f6000)
                  libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f04973e0000)
                  libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f049701b000)
                  libGL.so.1 => /usr/lib/nvidia-352/libGL.so.1 (0x00007f0496cec000)
                  libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f04969b7000)
                  libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f04966b1000)
                  libXi.so.6 => /usr/lib/x86_64-linux-gnu/libXi.so.6 (0x00007f04964a1000)
                  libXxf86vm.so.1 => /usr/lib/x86_64-linux-gnu/libXxf86vm.so.1 (0x00007f049629b000)
                  /lib64/ld-linux-x86-64.so.2 (0x00007f0497b44000)
                  libnvidia-tls.so.352.21 => /usr/lib/nvidia-352/tls/libnvidia-tls.so.352.21 (0x00007f0496098000)
                  libnvidia-glcore.so.352.21 => /usr/lib/nvidia-352/libnvidia-glcore.so.352.21 (0x00007f0493607000)
                  libXext.so.6 => /usr/lib/x86_64-linux-gnu/libXext.so.6 (0x00007f04933f5000)
                  libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f04931f1000)
                  libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f0492fd2000)
                  libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007f0492dce000)
                  libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f0492bc8000)
                  

                  And then, having identified which libGL I am using, I ran ldd on it:

                  linux-vdso.so.1 =>  (0x00007ffc55df8000)
                  libnvidia-tls.so.352.21 => /usr/lib/nvidia-352/tls/libnvidia-tls.so.352.21 (0x00007faa60d83000)
                  libnvidia-glcore.so.352.21 => /usr/lib/nvidia-352/libnvidia-glcore.so.352.21 (0x00007faa5e2f2000)
                  libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007faa5dfbd000)
                  libXext.so.6 => /usr/lib/x86_64-linux-gnu/libXext.so.6 (0x00007faa5ddab000)
                  libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007faa5d9e6000)
                  libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007faa5d7e2000)
                  libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007faa5d4dc000)
                  libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007faa5d2bd000)
                  /lib64/ld-linux-x86-64.so.2 (0x00007faa612b5000)
                  libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007faa5d0b9000)
                  libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007faa5ceb3000)
                  

                  But a quick glance doesn't reveal anything amiss.

                  Recommended Answer

                  So you see in the LD_DEBUG output:

                  The last thing it prints out is:

                      20863:  symbol=__pthread_key_create;  lookup in file=/usr/lib/x86_64-linux-gnu/libXdmcp.so.6 [0]
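
                  For reference, a loader trace like the line above is typically produced by running the binary with glibc's LD_DEBUG environment variable set. A minimal sketch of such an invocation (the log file name is only an example, not something from the original post):

                  LD_DEBUG=symbols,bindings ./main 2> ld-debug.log
                  tail -n 20 ld-debug.log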

                  This means that ld.so is looking for __pthread_key_create because it is needed by one of your libraries [and you had better find out which library needs this symbol; that will probably answer which library needs libpthread.so].
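
                  One way to find out which of the loaded libraries refers to __pthread_key_create is to scan the dynamic symbol table of everything ldd lists. This is only a sketch of that idea, not part of the original answer:

                  for lib in $(ldd ./main | awk '/=>/ { print $3 }'); do
                      [ -f "$lib" ] || continue   # skip the vdso pseudo-entry, which has no file path
                      if readelf --dyn-syms "$lib" | grep -q 'UND.*__pthread_key_create'; then
                          echo "$lib has an undefined reference to __pthread_key_create"
                      fi
                  done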

                  So __pthread_key_create must be in libpthread.so, but there is no libpthread.so in your ldd output. As you can see below, your program possibly crashes while using __pthread_key_create in init(). By the way, you can also try

                  LD_PRELOAD=/lib64/libpthread.so.0 ./main
                  

                  in order to make sure that pthread_key_create is loaded before other symbols.
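
                  If you want to double-check that libpthread really exports this symbol before preloading it, you could look at its dynamic symbol table. This is just a sketch; the path follows the LD_PRELOAD line above, and on Ubuntu the library may instead live at /lib/x86_64-linux-gnu/libpthread.so.0:

                  nm -D /lib64/libpthread.so.0 | grep pthread_key_create
                  # a defined symbol is listed as 'T' (or 'W'); 'U' would mean it is merely referenced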

                  So glut is unlikely to be the problem. It just calls dlsym during initialization, and that is absolutely correct behaviour. But the program crashes:

                  #0  0x0000000000000000 in ?? ()
                  #1  0x00007ffff3488291 in init () at dlerror.c:177
                  #2  0x00007ffff34886d7 in _dlerror_run (operate=operate@entry=0x7ffff3488130 <dlsym_doit>, args=args@entry=0x7fffffffddf0) at dlerror.c:129
                  

                  This backtrace shows that a function at address 0x00000000 was called (my guess is that it is the still-unresolved address of __pthread_key_create), and that is the error. What function was called? Look at the sources:

                  This is dlerror.c:129 (frame #2):

                  int
                  internal_function
                  _dlerror_run (void (*operate) (void *), void *args)
                  {
                    struct dl_action_result *result;
                  
                    /* If we have not yet initialized the buffer do it now.  */
                    __libc_once (once, init);
                  

                  And this is frame #1:

                  /* Initialize buffers for results.  */
                  static void
                  init (void)
                  {
                    if (__libc_key_create (&key, free_key_mem))
                      /* Creating the key failed.  This means something really went
                         wrong.  In any case use a static buffer which is better than
                         nothing.  */
                      static_buf = &last_result;
                  }
                  

                  It must be __libc_key_create, which is a macro that has different definitions in glibc. If you build for POSIX it is defined as:

                  /* Create thread-specific key.  */
                  #define __libc_key_create(KEY, DESTRUCTOR) \
                    __libc_ptf_call (__pthread_key_create, (KEY, DESTRUCTOR), 1)
                  

                  I suggest you use:

                  g++ -pthread -Wall -g main.cpp -lpthread -lglut -lGL -lGLU -o main
                  

                  in order to make sure that __libc_key_create in fact calls __pthread_key_create and that libpthread is initialized before -lglut. But if you do not want to use -pthread, then you may need to analyze frame #1:

                  #1  0x00007ffff3488291 in init () at dlerror.c:177
                  

                  For example, you could add the disassembly of frame #1 to your question.
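
                  A minimal gdb session for obtaining that disassembly might look like this (assuming the binary was built with -g, as in the compile command above):

                  gdb ./main
                  (gdb) run
                  (gdb) bt
                  (gdb) frame 1
                  (gdb) disassemble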
