I'm using GROUP_CONCAT() in a MySQL query to convert multiple rows into a single string.
However, the result of this function is capped at 1024 characters by default.
I'm well aware that I can change the parameter group_concat_max_len to increase this limit:
SET SESSION group_concat_max_len = 1000000;
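(Side note, assuming you can at least read server variables: the limit in effect for your session can be checked like this.)

-- Show the current cap on GROUP_CONCAT() results
SHOW SESSION VARIABLES LIKE 'group_concat_max_len';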
However, on the server I'm using, I can't change any parameters, neither by running the query above nor by editing a configuration file.
So my question is: Is there any other way to get the output of a multi-row query into a single string?
CREATE TABLE `some_table` (
  `field1` int(11) NOT NULL AUTO_INCREMENT,
  `field2` varchar(10) NOT NULL,
  `field3` varchar(10) NOT NULL,
  PRIMARY KEY (`field1`)
);
INSERT INTO `some_table` (field1, field2, field3) VALUES
(1, 'text one', 'foo'),
(2, 'text two', 'bar'),
(3, 'text three', 'data'),
(4, 'text four', 'magic');
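For reference, the plain GROUP_CONCAT() query being replaced here would look something like this; it is the version that runs into the 1024-character cap on long result sets:

SELECT GROUP_CONCAT(field2 ORDER BY field1 SEPARATOR ';') AS result
FROM some_table;
-- result: 'text one;text two;text three;text four'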
This query is a bit strange, but it doesn't need a separate query to initialize the variable, so it can be embedded in a more complex query. It returns all the field2 values separated by semicolons.
SELECT result
FROM (SELECT @result := NULL, -- seed with NULL rather than '': CONCAT_WS() skips
                              -- NULLs, so the result gets no leading separator
             (SELECT result
              FROM (SELECT @result := CONCAT_WS(';', @result, field2) AS result,
                           LENGTH(@result) AS blength
                    FROM some_table
                    ORDER BY blength DESC -- the longest value is the full concatenation
                    LIMIT 1) AS sub1) AS result) AS sub2;
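On the sample data above this should return 'text one;text two;text three;text four', assuming the select list is evaluated left to right and the rows are scanned in primary-key order, which is how MySQL behaves in practice. Be aware that assigning user variables inside expressions is deprecated as of MySQL 8.0, so treat this as a workaround for servers where group_concat_max_len really cannot be raised.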