Why do you need to place columns you create yourself (for example select 1 as "number") after HAVING and not WHERE in MySQL?
And are there any downsides to doing WHERE 1 instead (writing out the whole definition rather than the column name)?
Why is it that you need to place columns you create yourself (for example "select 1 as number") after HAVING and not WHERE in MySQL?
WHERE is applied before GROUP BY, HAVING is applied after (and can filter on aggregates).
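For example, with a hypothetical orders table holding customer_id and amount columns (not part of the original question), the two clauses filter at different stages of the query:

    -- WHERE is evaluated first: rows failing amount > 0 never reach GROUP BY.
    -- It cannot see SUM(amount), which only exists after grouping.
    -- HAVING is evaluated on the grouped result, so it can use the aggregate.
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    WHERE amount > 0              -- row-level filter, before grouping
    GROUP BY customer_id
    HAVING SUM(amount) > 100;     -- group-level filter, after grouping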
In standard SQL you cannot reference SELECT-list aliases in either of these clauses, but MySQL allows referencing SELECT-level aliases in GROUP BY, ORDER BY and HAVING.
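Concretely (same hypothetical orders table), MySQL accepts the alias in HAVING, GROUP BY and ORDER BY but rejects it in WHERE, because WHERE is evaluated before the SELECT list exists:

    -- Works in MySQL: HAVING and ORDER BY may reference the alias "total".
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    GROUP BY customer_id
    HAVING total > 100
    ORDER BY total DESC;

    -- Fails: ERROR 1054 (42S22): Unknown column 'total' in 'where clause'.
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    WHERE total > 100
    GROUP BY customer_id;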
And are there any downsides to doing "WHERE 1" instead (writing the whole definition instead of a column name)?
If your calculated expression does not contain any aggregates, putting it into the WHERE clause will most probably be more efficient.
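As a sketch (again assuming the hypothetical orders table, here with an id column), both queries below return the same rows, but the WHERE version discards rows before any further processing, while the HAVING version computes the expression for every row first and filters only afterwards:

    -- Filter via the alias: MySQL allows this, but every row is read
    -- and the expression evaluated before HAVING throws rows away.
    SELECT id, amount * 1.2 AS gross
    FROM orders
    HAVING gross > 100;

    -- Filter in WHERE: rows are eliminated early, and leaving the column
    -- unwrapped (amount > 100 / 1.2 rather than amount * 1.2 > 100)
    -- lets an index on amount be used.
    SELECT id, amount * 1.2 AS gross
    FROM orders
    WHERE amount > 100 / 1.2;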