What is wrong with this query:
INSERT INTO Users( weight, desiredWeight ) VALUES ( 160, 145 ) WHERE id = 1;
It works without the WHERE clause. I seem to have forgotten my SQL.
MySQL's INSERT syntax does not support a WHERE clause, so your query as it stands will fail. Assuming your id column is a unique or primary key:
If you're trying to insert a new row with ID 1, you should be using:
INSERT INTO Users(id, weight, desiredWeight) VALUES(1, 160, 145);
If you're trying to change the weight/desiredWeight values for an existing row with ID 1, you should be using:
UPDATE Users SET weight = 160, desiredWeight = 145 WHERE id = 1;
If you want, you can also use the INSERT ... ON DUPLICATE KEY UPDATE syntax, like so:
INSERT INTO Users (id, weight, desiredWeight) VALUES (1, 160, 145) ON DUPLICATE KEY UPDATE weight = 160, desiredWeight = 145;
Or even like this:
INSERT INTO Users SET id = 1, weight = 160, desiredWeight = 145 ON DUPLICATE KEY UPDATE weight = 160, desiredWeight = 145;
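As a small aside, if you'd rather not repeat the literal values in the UPDATE clause, MySQL also lets you refer back to the values from the VALUES list via the VALUES() function. A minimal sketch of the same upsert (the table and columns are the ones from the question):

-- Reuses the attempted insert values instead of repeating the literals
INSERT INTO Users (id, weight, desiredWeight)
VALUES (1, 160, 145)
ON DUPLICATE KEY UPDATE weight = VALUES(weight), desiredWeight = VALUES(desiredWeight);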
It's also important to note that if your id column is an auto-increment column, then you might as well omit it from your INSERT altogether and let MySQL increment it as normal.
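For example, assuming Users defines id as an AUTO_INCREMENT primary key (the question doesn't show the table definition, so this is an assumption), the insert reduces to:

-- id is omitted; MySQL assigns the next auto-increment value automatically
INSERT INTO Users (weight, desiredWeight) VALUES (160, 145);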