I am trying to understand how to UPDATE multiple rows with different values and I just don't get it. The solution is everywhere, but to me it looks difficult to understand.
For instance, three updates in one query:
UPDATE table_users
SET cod_user = '622057'
, date = '12082014'
WHERE user_rol = 'student'
AND cod_office = '17389551';
UPDATE table_users
SET cod_user = '2913659'
, date = '12082014'
WHERE user_rol = 'assistant'
AND cod_office = '17389551';
UPDATE table_users
SET cod_user = '6160230'
, date = '12082014'
WHERE user_rol = 'admin'
AND cod_office = '17389551';
I read an example, but I really don't understand how to construct the query, i.e.:
UPDATE table_to_update
SET cod_user= IF(cod_office = '17389551','622057','2913659','6160230')
,date = IF(cod_office = '17389551','12082014')
WHERE ?? IN (??) ;
I'm not entirely clear how to write the query if there are multiple conditions in the WHERE and in the IF condition. Any ideas?
You can do it like this:
UPDATE table_users
SET cod_user = (case when user_rol = 'student' then '622057'
                     when user_rol = 'assistant' then '2913659'
                     when user_rol = 'admin' then '6160230'
                end),
    date = '12082014'
WHERE user_rol in ('student', 'assistant', 'admin') AND
      cod_office = '17389551';
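Since you also asked about multiple conditions: each WHEN can hold a compound condition, so the same pattern extends to updates keyed on more than one column. A minimal sketch, assuming a second, hypothetical office code '99999999' that needs its own value:

UPDATE table_users
SET cod_user = (case when user_rol = 'student'   and cod_office = '17389551' then '622057'
                     when user_rol = 'assistant' and cod_office = '17389551' then '2913659'
                     when user_rol = 'assistant' and cod_office = '99999999' then '7777777'
                     else cod_user   -- rows matched by the WHERE but by no WHEN keep their current value
                end),
    date = '12082014'
WHERE cod_office in ('17389551', '99999999');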
I don't understand your date format. Dates should be stored in the database using native date and time types.
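A minimal sketch of that suggestion, assuming the column were changed to a native DATE type and that '12082014' means 12 August 2014 (day-month-year); MySQL's STR_TO_DATE converts the string into a real date value:

-- assumes the date column has been altered to the DATE type (hypothetical schema change)
UPDATE table_users
SET date = STR_TO_DATE('12082014', '%d%m%Y')   -- stored as 2014-08-12
WHERE user_rol in ('student', 'assistant', 'admin')
  AND cod_office = '17389551';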