I have this MySQL query:
SELECT DAYOFYEAR(`date`) AS d, COUNT(*)
FROM `orders`
WHERE `hasPaid` > 0
GROUP BY d
ORDER BY d
which returns:
d  | COUNT(*)
20 | 5
21 | 7
22 | 12
23 | 4
What I'd really like is another column on the end to show the running total:
d  | COUNT(*) | ???
20 | 5        | 5
21 | 7        | 12
22 | 12       | 24
23 | 4        | 28
Is this possible?
This may be a simpler solution for you, and it saves the database from having to run a ton of queries. It executes just one query and then does a little math on the results in a single pass.
-- Initialize the running-total variable first, or rt will be NULL.
SET @runtot := 0;

SELECT
    q1.d,
    q1.c,
    (@runtot := @runtot + q1.c) AS rt   -- accumulate each day's count into the running total
FROM
    (SELECT
         DAYOFYEAR(`date`) AS d,
         COUNT(*) AS c
     FROM `orders`
     WHERE `hasPaid` > 0
     GROUP BY d
     ORDER BY d) AS q1;
This will give you an additional rt (running total) column. Don't miss the SET statement at the top, which initializes the running-total variable; without it you will just get a column of NULL values.
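If you'd rather not depend on a separate SET statement, a common variant initializes the variable in a derived table so everything runs as a single statement. This is a sketch of the same idea, against the same orders/hasPaid schema as above:

SELECT
    q1.d,
    q1.c,
    (@runtot := @runtot + q1.c) AS rt
FROM
    (SELECT @runtot := 0) AS vars,      -- initialize the variable inline
    (SELECT
         DAYOFYEAR(`date`) AS d,
         COUNT(*) AS c
     FROM `orders`
     WHERE `hasPaid` > 0
     GROUP BY d
     ORDER BY d) AS q1;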
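Note that assigning user variables inside expressions is deprecated as of MySQL 8.0.13. On MySQL 8.0 and later, a window function gives the same running total without any variables; a minimal sketch against the same table:

SELECT
    DAYOFYEAR(`date`) AS d,
    COUNT(*) AS c,
    -- running total: sum the per-day counts in day order
    SUM(COUNT(*)) OVER (ORDER BY DAYOFYEAR(`date`)) AS rt
FROM `orders`
WHERE `hasPaid` > 0
GROUP BY d
ORDER BY d;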