For simplicity, assume all relevant fields are NOT NULL.
You can do this:
SELECT
table1.this, table2.that, table2.somethingelse
FROM
table1, table2
WHERE
table1.foreignkey = table2.primarykey
AND (some other conditions)
Or else:
SELECT
table1.this, table2.that, table2.somethingelse
FROM
table1 INNER JOIN table2
ON table1.foreignkey = table2.primarykey
WHERE
(some other conditions)
Do these two work the same way in MySQL?
INNER JOIN is the ANSI syntax, and it is the one you should use.
It is generally considered more readable, especially when you join lots of tables.
It can also be easily replaced with an OUTER JOIN whenever a need arises.
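For illustration only, and assuming the same hypothetical table1/table2 schema from the question, the OUTER JOIN variant would keep rows from table1 that have no match in table2 (the table2 columns come back as NULL for those rows):

SELECT
    table1.this, table2.that, table2.somethingelse
FROM
    table1 LEFT OUTER JOIN table2
    ON table1.foreignkey = table2.primarykey

Note how only the join keyword and the ON clause change; the comma-and-WHERE form has no equally direct outer-join equivalent.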
The WHERE syntax is more relational-model oriented.
The result of JOINing two tables is the Cartesian product of the tables with a filter applied that keeps only the rows whose join columns match.
It's easier to see this with the WHERE syntax.
As for your example, in MySQL (and in SQL generally) these two queries are synonyms.
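One way to check this on your own data is to compare the optimizer's plans for both forms; a minimal sketch, assuming the tables from the question exist:

EXPLAIN SELECT table1.this, table2.that, table2.somethingelse
FROM table1, table2
WHERE table1.foreignkey = table2.primarykey;

EXPLAIN SELECT table1.this, table2.that, table2.somethingelse
FROM table1 INNER JOIN table2
ON table1.foreignkey = table2.primarykey;

If the two forms really are treated as synonyms, the EXPLAIN output should be identical.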
Also, note that MySQL has a STRAIGHT_JOIN clause.
Using this clause, you can control the JOIN order: which table is scanned in the outer loop and which one is in the inner loop.
You cannot control this in MySQL using the WHERE syntax.