I am wondering whether there is any performance difference between the following:
SELECT ... FROM ... WHERE someFIELD IN (1, 2, 3, 4)
SELECT ... FROM ... WHERE someFIELD BETWEEN 0 AND 5
SELECT ... FROM ... WHERE someFIELD = 1 OR someFIELD = 2 OR someFIELD = 3 ...
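Note that these are not strictly equivalent: BETWEEN 0 AND 5 is inclusive at both ends, so it also matches 0 and 5, unlike the other two predicates. To see what the optimizer actually does with each form on your own schema, you can compare the plans with EXPLAIN. A minimal sketch, assuming a hypothetical table orders with an index on someFIELD:

EXPLAIN SELECT * FROM orders WHERE someFIELD IN (1, 2, 3, 4);
EXPLAIN SELECT * FROM orders WHERE someFIELD BETWEEN 0 AND 5;
EXPLAIN SELECT * FROM orders WHERE someFIELD = 1 OR someFIELD = 2 OR someFIELD = 3 OR someFIELD = 4;

On an indexed column, all three typically show type: range, meaning the optimizer turns the OR chain and the IN() list into the same kind of range access; the evaluation-cost difference quoted below matters mostly when the predicate is checked row by row.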
or will MySQL optimize the SQL the same way a compiler optimizes code?
I changed the ANDs to ORs for the reason stated in the comments.
The accepted answer doesn't explain the reason.
The following is quoted from High Performance MySQL, 3rd Edition.
In many database servers, IN() is just a synonym for multiple OR clauses, because the two are logically equivalent. Not so in MySQL, which sorts the values in the IN() list and uses a fast binary search to see whether a value is in the list. This is O(log n) in the size of the list, whereas an equivalent series of OR clauses is O(n) in the size of the list (i.e., much slower for large lists).
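You can get a rough feel for the raw evaluation cost, isolated from any table access, with BENCHMARK(). This is a crude micro-benchmark sketch, not a definitive measurement: MySQL may simplify constant expressions, results vary by version, and with only ten values the O(log n) vs O(n) gap is tiny (it matters for lists with hundreds or thousands of values). The user variable @v is used to discourage constant folding:

SET @v = 7;
SELECT BENCHMARK(10000000, @v IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10));
SELECT BENCHMARK(10000000,
       @v = 1 OR @v = 2 OR @v = 3 OR @v = 4 OR @v = 5
    OR @v = 6 OR @v = 7 OR @v = 8 OR @v = 9 OR @v = 10);

Each call returns 0; the number of interest is the execution time the client reports for each statement.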