It seems that to combine two or more tables, we can use either JOIN or WHERE. What are the advantages of one over the other?
Any query involving more than one table requires some form of association to link the results from table "A" to table "B". The traditional (ANSI-89) means of doing this is to:
Write the association between the tables in the WHERE clause:
SELECT *
FROM TABLE_A a,
TABLE_B b
WHERE a.id = b.id
Here's the same query rewritten using ANSI-92 JOIN syntax:
SELECT *
FROM TABLE_A a
JOIN TABLE_B b ON b.id = a.id
Where supported (Oracle 9i+, PostgreSQL 7.2+, MySQL 3.23+, SQL Server 2000+), there is no performance benefit to using one syntax over the other: the optimizer sees them as the same query. But more complex queries can benefit from using ANSI-92 syntax.
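The equivalence claim above can be checked directly. A minimal sketch using Python's built-in sqlite3 module (the table names and data are hypothetical, chosen to mirror the examples in this answer): both syntaxes return the same rows, and only the rows with a matching id survive the inner join.

```python
import sqlite3

# Two small hypothetical tables mirroring TABLE_A / TABLE_B above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TABLE_A (id INTEGER, name TEXT);
    CREATE TABLE TABLE_B (id INTEGER, value TEXT);
    INSERT INTO TABLE_A VALUES (1, 'alpha'), (2, 'beta'), (3, 'gamma');
    INSERT INTO TABLE_B VALUES (1, 'one'), (2, 'two'), (4, 'four');
""")

# ANSI-89: the join condition lives in the WHERE clause.
ansi89 = "SELECT * FROM TABLE_A a, TABLE_B b WHERE a.id = b.id"
# ANSI-92: the join condition lives in the ON clause.
ansi92 = "SELECT * FROM TABLE_A a JOIN TABLE_B b ON b.id = a.id"

rows89 = conn.execute(ansi89).fetchall()
rows92 = conn.execute(ansi92).fetchall()
print(rows89 == rows92)  # both queries produce identical results
```

On engines that expose query plans (e.g. `EXPLAIN QUERY PLAN` in SQLite, `EXPLAIN` in MySQL/PostgreSQL), the two forms typically produce the same plan as well.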
There are numerous reasons to use ANSI-92 JOIN syntax over ANSI-89:
ANSI-92 JOIN syntax is a pattern, not an anti-pattern:
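One classic reason, sketched below with hypothetical tables in sqlite3: with ANSI-89, forgetting the join condition in the WHERE clause silently produces a Cartesian product, whereas ANSI-92 keeps the condition in an ON clause attached to the join itself, so it is much harder to omit by accident.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TABLE_A (id INTEGER);
    CREATE TABLE TABLE_B (id INTEGER);
    INSERT INTO TABLE_A VALUES (1), (2), (3);
    INSERT INTO TABLE_B VALUES (1), (2), (3);
""")

# ANSI-89 with the WHERE condition accidentally dropped:
# every row of TABLE_A pairs with every row of TABLE_B (3 x 3 = 9 rows).
cartesian = conn.execute("SELECT * FROM TABLE_A a, TABLE_B b").fetchall()

# ANSI-92: the ON clause is part of the join, so the condition
# stays next to the table it applies to (3 matching rows).
joined = conn.execute(
    "SELECT * FROM TABLE_A a JOIN TABLE_B b ON b.id = a.id"
).fetchall()

print(len(cartesian), len(joined))  # 9 vs 3
```

The same separation also matters for outer joins: `LEFT JOIN ... ON` is standard and portable, while the old vendor-specific ANSI-89 outer-join notations (such as `(+)` in Oracle or `*=` in SQL Server) are deprecated or removed.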
Short of familiarity and/or comfort, I don't see any benefit to continuing to use the ANSI-89 WHERE clause instead of ANSI-92 JOIN syntax. Some might complain that ANSI-92 syntax is more verbose, but that verbosity is what makes it explicit. The more explicit the query, the easier it is to understand and maintain.