Let's say I have three different MySQL tables:
Table products:

id | name
1  | Product A
2  | Product B
Table partners:

id | name
1  | Partner A
2  | Partner B
Table sales:

partners_id | products_id
1           | 2
2           | 5
1           | 5
1           | 3
1           | 4
1           | 5
2           | 2
2           | 4
2           | 3
1           | 1
I would like to get a table with partners in the rows and products as columns. So far I have been able to get output like this:
name      | name      | COUNT(*)
Partner A | Product A | 1
Partner A | Product B | 1
Partner A | Product C | 1
Partner A | Product D | 1
Partner A | Product E | 2
Partner B | Product B | 1
Partner B | Product C | 1
Partner B | Product D | 1
Partner B | Product E | 1
using this query:
SELECT partners.name, products.name, COUNT(*)
FROM sales
JOIN products ON sales.products_id = products.id
JOIN partners ON sales.partners_id = partners.id
GROUP BY sales.partners_id, sales.products_id
LIMIT 0, 30
But instead I would like to get output like this:
partner_name | Product A | Product B | Product C | Product D | Product E
Partner A    | 1         | 1         | 1         | 1         | 2
Partner B    | 0         | 1         | 1         | 1         | 1
The problem is that I cannot tell in advance how many products there will be, so the number of columns needs to change dynamically depending on the rows in the products table.
This very good answer does not seem to work with MySQL: T-SQL Pivot? Possibility of creating table columns from row values
Unfortunately, MySQL does not have a PIVOT function, which is basically what you are trying to do. So you will need to use an aggregate function with a CASE expression:
-- Static pivot: one COUNT(CASE ...) column per known product.
-- COUNT ignores NULLs, so each column counts only the matching product.
select pt.name as partner_name,
       count(case when pd.name = 'Product A' then 1 end) as ProductA,
       count(case when pd.name = 'Product B' then 1 end) as ProductB,
       count(case when pd.name = 'Product C' then 1 end) as ProductC,
       count(case when pd.name = 'Product D' then 1 end) as ProductD,
       count(case when pd.name = 'Product E' then 1 end) as ProductE
from partners pt
left join sales s
  on pt.id = s.partners_id
left join products pd
  on s.products_id = pd.id
group by pt.name
See SQL Demo.
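The same pivot can also be written with SUM instead of COUNT; this is a common alternative form, not part of the original answer. A minimal sketch for a single column:

-- Equivalent SUM form for one pivot column: the ELSE 0 branch adds an
-- explicit zero for non-matching rows, so every group yields a number.
select pt.name as partner_name,
       sum(case when pd.name = 'Product A' then 1 else 0 end) as ProductA
from partners pt
left join sales s   on pt.id = s.partners_id
left join products pd on s.products_id = pd.id
group by pt.name;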
Since you do not know the products in advance, you will probably want to perform this dynamically. This can be done using prepared statements.
With a dynamic pivot table (transforming rows to columns), your code would look like this:
-- Build the list of COUNT(CASE ...) columns from the products table,
-- then run the generated query as a prepared statement.
SET @sql = NULL;

SELECT
  GROUP_CONCAT(DISTINCT
    CONCAT(
      'count(case when pd.name = ''',
      name,
      ''' then 1 end) AS ',
      replace(name, ' ', '')
    )
  ) INTO @sql
FROM products;

SET @sql = CONCAT('SELECT pt.name AS partner_name, ', @sql, '
                   FROM partners pt
                   LEFT JOIN sales s
                     ON pt.id = s.partners_id
                   LEFT JOIN products pd
                     ON s.products_id = pd.id
                   GROUP BY pt.name');

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
See SQL Demo.
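If the generated query ever fails to prepare, it can help to look at the SQL string that was built before handing it to PREPARE; this quick check is not part of the original answer:

-- Print the generated pivot query for inspection before PREPARE runs.
SELECT @sql;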
It's probably worth noting that GROUP_CONCAT is limited to 1024 bytes by default, so a long product list can truncate the generated column list. You can work around this by setting it higher for the duration of your procedure, i.e. SET @@group_concat_max_len = 32000;
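Assuming the same session runs both statements, the limit can be raised just before building the column list; the 32000 figure is illustrative, not a hard requirement:

-- Raise the per-session GROUP_CONCAT limit before building @sql,
-- so the concatenated COUNT(CASE ...) list is not cut off at 1024 bytes.
SET @@group_concat_max_len = 32000;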