I'm trying to find out if a row exists in a table. Using MySQL, is it better to do a query like this:
SELECT COUNT(*) AS total FROM table1 WHERE ...
and check to see if the total is non-zero, or is it better to do a query like this:
SELECT * FROM table1 WHERE ... LIMIT 1
and check to see if any rows were returned?
In both queries, the WHERE clause uses an index.
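For concreteness, here is a minimal sketch of the two approaches, assuming a hypothetical table1 with an indexed name column (the column name, the index, and the literal value are illustrative and not part of the original question):

-- Count approach: always returns exactly one row; check whether total > 0
SELECT COUNT(*) AS total FROM table1 WHERE name = 'alice';

-- LIMIT 1 approach: returns at most one row; check whether the result set is empty
SELECT * FROM table1 WHERE name = 'alice' LIMIT 1;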
You could also try EXISTS:
SELECT EXISTS(SELECT * FROM table1 WHERE ...)
and per the documentation, you can SELECT anything.
Traditionally, an EXISTS subquery starts with SELECT *, but it could begin with SELECT 5 or SELECT column1 or anything at all. MySQL ignores the SELECT list in such a subquery, so it makes no difference.
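As a concrete sketch (again with made-up table and column names), the full existence check might look like this; MySQL returns 1 if the subquery matches at least one row and 0 otherwise, so the outer query always returns exactly one row:

-- Returns a single row with row_exists = 1 or 0
SELECT EXISTS(
    SELECT 1 FROM table1 WHERE name = 'alice'
) AS row_exists;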