I need to check (from the same table) whether there is an association between two events based on their date-times.
One set of data will contain the ending date-time of certain events, and the other set will contain the starting date-time of other events.
If the first event completes before the second event, I would like to link them up.
What I have so far is:
SELECT name AS name_A, `date-time` AS end_DTS, id AS id_A
FROM tableA WHERE criteria = 1;

SELECT name AS name_B, `date-time` AS start_DTS, id AS id_B
FROM tableA WHERE criteria = 2;
Then I join them:
SELECT name_A, name_B, id_A, id_B,
       IF(start_DTS > end_DTS, 'VALID', '') AS validation_check
FROM (SELECT name AS name_A, `date-time` AS end_DTS, id AS id_A
      FROM tableA WHERE criteria = 1) AS a
LEFT JOIN (SELECT name AS name_B, `date-time` AS start_DTS, id AS id_B
           FROM tableA WHERE criteria = 2) AS b
  ON name_A = name_B
Can I then, based on my validation_check field, run an UPDATE query with the SELECT nested?
You can actually do this in one of two ways:

MySQL update join syntax:
UPDATE tableA a
INNER JOIN tableB b ON a.name_A = b.name_B
SET a.validation_check = IF(b.start_DTS > a.end_DTS, 'VALID', '')
-- a WHERE clause can go here
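Not part of the original answer, but before modifying any rows it can help to run the same join as a plain SELECT to preview what would be written. This sketch assumes the tableA/tableB names and columns used above:

SELECT a.id_A, b.id_B,
       IF(b.start_DTS > a.end_DTS, 'VALID', '') AS would_set
FROM tableA a
INNER JOIN tableB b ON a.name_A = b.name_B;

If the row count and would_set values look right, the UPDATE JOIN above will touch exactly those rows.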
ANSI SQL syntax:
UPDATE tableA SET validation_check =
    (SELECT IF(b.start_DTS > a.end_DTS, 'VALID', '') AS validation_check
     FROM tableA a
     INNER JOIN tableB b ON a.name_A = b.name_B
     WHERE a.id_A = tableA.id_A)
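One caveat: MySQL rejects a subquery that reads from the table being updated (ERROR 1093, "You can't specify target table for update in FROM clause"). A common workaround, sketched here under the same assumed schema, is to wrap the target table in a derived table so it is materialized first:

UPDATE tableA SET validation_check =
    (SELECT IF(b.start_DTS > src.end_DTS, 'VALID', '')
     FROM (SELECT id_A, name_A, end_DTS FROM tableA) AS src
     INNER JOIN tableB b ON src.name_A = b.name_B
     WHERE src.id_A = tableA.id_A);

Because src is evaluated before the update runs, the statement no longer reads directly from tableA.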
Pick whichever one seems most natural to you.
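To try either statement end to end, a minimal hypothetical schema along these lines is enough; the column names are inferred from the aliases above and should be adjusted to your real tables:

CREATE TABLE tableA (
    id_A INT PRIMARY KEY,
    name_A VARCHAR(50),
    end_DTS DATETIME,
    validation_check VARCHAR(10)
);

CREATE TABLE tableB (
    id_B INT PRIMARY KEY,
    name_B VARCHAR(50),
    start_DTS DATETIME
);

INSERT INTO tableA VALUES (1, 'event', '2009-01-01 10:00:00', NULL);
INSERT INTO tableB VALUES (1, 'event', '2009-01-01 11:00:00');

-- After running either UPDATE, this should show 'VALID'
SELECT id_A, validation_check FROM tableA;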