I have two tables. The first is reservation:
 id | some_other_column
----+-------------------
  1 | value
  2 | value
  3 | value
And a second table, reservation_log:
 id | reservation_id | change_type
----+----------------+-------------
  1 |              1 | create
  2 |              2 | create
  3 |              3 | create
  4 |              1 | cancel
  5 |              2 | cancel
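Not part of the original question, but for reference, a minimal setup that reproduces this example might look like the following (PostgreSQL syntax; the column types and constraints are assumptions, they are not given above):

-- Hypothetical schema and sample data matching the tables shown above.
CREATE TABLE reservation (
    id                serial PRIMARY KEY,
    some_other_column text
);

CREATE TABLE reservation_log (
    id             serial PRIMARY KEY,
    reservation_id integer REFERENCES reservation (id),
    change_type    text
);

INSERT INTO reservation (some_other_column) VALUES ('value'), ('value'), ('value');
INSERT INTO reservation_log (reservation_id, change_type)
VALUES (1, 'create'), (2, 'create'), (3, 'create'),
       (1, 'cancel'), (2, 'cancel');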
I need to select only the reservations that have NOT been cancelled (only ID 3 in this example).
I can easily select the cancelled ones with a simple WHERE change_type = 'cancel' condition, but I'm struggling with the NOT-cancelled ones, since a simple WHERE doesn't work here.
SELECT *
FROM reservation
WHERE id NOT IN (SELECT reservation_id
                 FROM reservation_log
                 WHERE change_type = 'cancel');
Or:
SELECT r.*
FROM reservation r
LEFT JOIN reservation_log l ON r.id = l.reservation_id AND l.change_type = 'cancel'
WHERE l.id IS NULL;
The first version is more intuitive, but the second usually performs better (assuming you have indexes on the columns used in the join).
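For example, a composite index covering the join and filter columns would let the planner resolve the ON condition without scanning the whole log table. This is a sketch; the index name and exact column order are assumptions:

-- Hypothetical index to speed up the anti-join on reservation_log.
CREATE INDEX idx_reservation_log_cancel
    ON reservation_log (reservation_id, change_type);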
The second version works because LEFT JOIN returns a row for every row in the first table. When the ON condition succeeds, those rows include the columns from the second table, just as with an INNER JOIN. When the condition fails, the returned row contains NULL for all of the second table's columns. The WHERE l.id IS NULL test then matches exactly those rows, so it finds all the rows that have no match between the tables.
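A third, equivalent formulation (not part of the original answer, but a common anti-join idiom) is NOT EXISTS, which also avoids the surprising behaviour NOT IN has when the subquery can return NULL values:

SELECT r.*
FROM reservation r
WHERE NOT EXISTS (SELECT 1
                  FROM reservation_log l
                  WHERE l.reservation_id = r.id
                    AND l.change_type = 'cancel');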