I have a SQL statement that requests data from the database:
SELECT `ID`, `To`, `Poster`, `Content`, `Time`, IFNULL(`Aura`, 0) AS `Aura` FROM (
    SELECT * FROM (
        SELECT DISTINCT * FROM messages m
        INNER JOIN (
            SELECT Friend2 AS Friend FROM friends WHERE Friend1 = '1'
            UNION ALL
            SELECT Friend1 AS Friend FROM friends WHERE Friend2 = '1'
        ) friends ON m.Poster = friends.`Friend`
        UNION ALL SELECT DISTINCT *, '1' FROM messages WHERE `Poster` = '1'
    ) var
    LEFT JOIN (
        SELECT `ID` AS `AuraID`, `Status` AS `AuraStatus`, COUNT(*) AS `Aura`
        FROM messages_aura
    ) aura ON (var.Poster = aura.AuraID AND var.ID = aura.AuraStatus)
) final
GROUP BY `ID`, `Poster`
ORDER BY `Time` DESC LIMIT 10
Here is my messages_aura table layout (screenshot omitted); it has the columns ID, Status and UserID.

Here is the output of the statement above (screenshot omitted).

(The ID in the table screenshot corresponds to Poster in the output, and the Status in the table screenshot corresponds to ID in the output.)
The statement should give the bottom row an Aura count of 1 and the top row an Aura count of 2. What's wrong?
You're missing a GROUP BY in the aura subquery, so COUNT(*) counts every row in messages_aura instead of counting per (AuraID, AuraStatus) group:
LEFT JOIN (
    SELECT `ID` AS `AuraID`, `Status` AS `AuraStatus`, COUNT(*) AS `Aura`
    FROM messages_aura
    GROUP BY `AuraID`, `AuraStatus`
) aura ON (var.Poster = aura.AuraID AND var.ID = aura.AuraStatus)
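To see the difference, here is a minimal, self-contained sketch (the sample rows and UserID values are made up for illustration; only the ID/Status column layout follows the question):

-- Hypothetical sample data: poster 2's message 5 has two aura rows,
-- poster 2's message 6 has one.
CREATE TABLE messages_aura (`ID` INT, `Status` INT, `UserID` INT);
INSERT INTO messages_aura VALUES (2, 5, 100), (2, 5, 101), (2, 6, 100);

-- Without GROUP BY the aggregate collapses the whole table into a single
-- row, so Aura = 3 for everything (and under ONLY_FULL_GROUP_BY this
-- query is rejected outright):
SELECT `ID` AS `AuraID`, `Status` AS `AuraStatus`, COUNT(*) AS `Aura`
FROM messages_aura;

-- With GROUP BY you get one count per (ID, Status) pair: 2 for Status 5
-- and 1 for Status 6, which is what the join needs:
SELECT `ID` AS `AuraID`, `Status` AS `AuraStatus`, COUNT(*) AS `Aura`
FROM messages_aura
GROUP BY `ID`, `Status`;

Once the derived table returns one row per message instead of one grand total, the LEFT JOIN matches each message to its own count, and the outer IFNULL(`Aura`, 0) fills in 0 for messages with no aura rows.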