I am having a problem with BLOB fields in my MySQL database: when uploading files larger than about 1 MB I get the error "Packets larger than max_allowed_packet are not allowed".
Here is what I have tried:
In MySQL Query Browser I ran show variables like 'max_allowed_packet', which gave me 1048576.
Then I executed the query set global max_allowed_packet=33554432, followed by show variables like 'max_allowed_packet' again, and it gave me 33554432 as expected.
But when I restart the MySQL server, it magically goes back to 1048576. What am I doing wrong here?
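For reference, the sequence above as SQL, written here with explicit GLOBAL scope so the behaviour is unambiguous (a restatement of the steps described, not new commands):

SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';   -- 1048576 (1 MB)
SET GLOBAL max_allowed_packet = 33554432;          -- 32 MB, applies to new connections
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';   -- 33554432, but only until the server restarts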
Bonus question: is it possible to compress a BLOB field?
SET GLOBAL only changes the value in the running server; at startup mysqld reads max_allowed_packet from its configuration file again, which is why the change disappears after a restart. To make it permanent, edit the my.ini or ~/.my.cnf file and add a single line under the [mysqld] or [client] section:
max_allowed_packet=500M
Then restart the MySQL service and you are done.
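After the restart you can confirm that the new value is in effect. The expected number below assumes the 500M value above (500 * 1024 * 1024 bytes):

SELECT @@global.max_allowed_packet;   -- expected: 524288000

On MySQL 8.0 and later, SET PERSIST max_allowed_packet = 524288000; would also survive restarts without editing the file, but the config-file approach above works on all versions.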
See the documentation for further information.
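As for the bonus question: MySQL provides COMPRESS() and UNCOMPRESS() functions, so you can store compressed bytes in a BLOB column yourself. A minimal sketch, with a made-up table name and user variable for illustration:

CREATE TABLE uploads (id INT AUTO_INCREMENT PRIMARY KEY, data LONGBLOB);
INSERT INTO uploads (data) VALUES (COMPRESS(@file_contents));   -- store compressed bytes
SELECT UNCOMPRESS(data) FROM uploads WHERE id = 1;              -- read back the original bytes

Note that the packet-size limit still applies to whatever is actually sent over the wire, and COMPRESS() requires a server built with a compression library such as zlib (the common case), otherwise it returns NULL.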