I get this error when I try to source a large SQL file (a big INSERT query).
mysql> source file.sql
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 2
Current database: *** NONE ***
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 3
Current database: *** NONE ***
Nothing in the table is updated. I've tried deleting and undeleting the table/database, as well as restarting MySQL. None of these things resolves the problem.
Here is my max-packet size:
+--------------------+---------+
| Variable_name | Value |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+
Here is the file size:
$ ls -s file.sql
79512 file.sql
When I try the other method...
$ ./mysql -u root -p my_db < file.sql
Enter password:
ERROR 2006 (HY000) at line 1: MySQL server has gone away
max_allowed_packet=64M
Adding this line to the my.cnf file solved my problem.
This is useful when the columns have large values, which is what causes the issue; you can find the explanation here.
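For reference, a minimal sketch of where the setting typically goes in my.cnf (the [mysqld] section is the standard place for server-side settings; adjust the value so it is comfortably larger than your dump file):

[mysqld]
max_allowed_packet=64M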
On Windows this file is located at: "C:\ProgramData\MySQL\MySQL Server 5.6"
On Linux (Ubuntu): /etc/mysql
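If you would rather not edit the file and restart the server, the value can also be inspected and raised at runtime from the mysql client, assuming your account has the privilege to set global variables. Note that SET GLOBAL only affects connections opened after the change (so reconnect before sourcing the file again) and does not persist across a server restart, so the my.cnf edit above remains the durable fix.

mysql> SHOW VARIABLES LIKE 'max_allowed_packet';
mysql> SET GLOBAL max_allowed_packet = 67108864; -- 64M, value given in bytes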