I have a MySQL question that I think must be quite easy. I need to return the LAST INSERTED ID from table1 when I run the following MySQL query:
INSERT INTO table1 (title,userid) VALUES ('test',1);
INSERT INTO table2 (parentid,otherid,userid) VALUES (LAST_INSERT_ID(),4,1);
SELECT LAST_INSERT_ID();
As you can see, the current code will just return the LAST_INSERT_ID of table2 instead of table1. How can I get the ID from table1 even though I insert into table2 in between?
You could store the last insert ID in a variable:
INSERT INTO table1 (title,userid) VALUES ('test', 1);
SET @last_id_in_table1 = LAST_INSERT_ID();
INSERT INTO table2 (parentid,otherid,userid) VALUES (@last_id_in_table1, 4, 1);
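If you also need the batch to end by returning table1's ID to the client (as the original SELECT LAST_INSERT_ID(); was meant to), you can simply select the session variable afterwards; a minimal sketch:

SELECT @last_id_in_table1; -- still table1's ID, unaffected by the insert into table2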
Or get the max ID from table1 (warning: see the note in the comments from Rob Starling about possible errors from race conditions when using the max ID):
INSERT INTO table1 (title,userid) VALUES ('test', 1);
INSERT INTO table2 (parentid,otherid,userid) VALUES (LAST_INSERT_ID(), 4, 1);
SELECT MAX(id) FROM table1;
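To see why the MAX(id) approach is racy (the warning above), consider two connections inserting at nearly the same time: the second session's row can become the maximum before the first session reads it. A hedged sketch of the failure, with illustrative values:

-- Session A
INSERT INTO table1 (title,userid) VALUES ('test', 1);

-- Session B inserts another row before session A runs its SELECT
INSERT INTO table1 (title,userid) VALUES ('unrelated', 2);

-- Session A now reads session B's ID, not its own
SELECT MAX(id) FROM table1;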