When you limit the number of rows returned by a SQL query, as is typical for paging, there are two methods to determine the total number of records:
Include the SQL_CALC_FOUND_ROWS option in the original SELECT, and then get the total number of rows by running SELECT FOUND_ROWS():
SELECT SQL_CALC_FOUND_ROWS * FROM `table` WHERE id > 100 LIMIT 10;
SELECT FOUND_ROWS();
Run the query as usual, and then get the total number of rows by running SELECT COUNT(*):
SELECT * FROM `table` WHERE id > 100 LIMIT 10;
SELECT COUNT(*) FROM `table` WHERE id > 100;
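For concreteness, here is how the two approaches might look from application code. This is a minimal sketch, assuming MySQL and the mysql-connector-python package; the connection parameters, database, and table name are hypothetical placeholders. One caveat worth knowing: FOUND_ROWS() reports the count for the most recent SELECT issued on the same connection, so the two statements of the first method must run back to back on one connection.

# Minimal sketch of both counting approaches on a single connection.
# Connection details and the table name are hypothetical placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="mydb"
)
cur = conn.cursor()

# Method 1: SQL_CALC_FOUND_ROWS, then FOUND_ROWS().
# FOUND_ROWS() must run on the SAME connection, immediately after
# the SELECT whose total you want.
cur.execute("SELECT SQL_CALC_FOUND_ROWS * FROM `table` WHERE id > 100 LIMIT 10")
page = cur.fetchall()
cur.execute("SELECT FOUND_ROWS()")
(total,) = cur.fetchone()

# Method 2: run the page query normally, then a separate COUNT(*).
cur.execute("SELECT * FROM `table` WHERE id > 100 LIMIT 10")
page = cur.fetchall()
cur.execute("SELECT COUNT(*) FROM `table` WHERE id > 100")
(total,) = cur.fetchone()

cur.close()
conn.close()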
Which method is best/fastest?
It depends. See the MySQL Performance Blog post on this subject: To SQL_CALC_FOUND_ROWS or not to SQL_CALC_FOUND_ROWS?
Just a quick summary: Peter says that it depends on your indexes and other factors. Many of the comments on the post seem to say that SQL_CALC_FOUND_ROWS is almost always slower - sometimes up to 10x slower - than running two queries.
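Since the outcome depends on your data and indexes, the most reliable answer comes from measuring both approaches against your own tables. Below is a minimal timing sketch, again assuming mysql-connector-python; the repeat count, connection parameters, and statements are arbitrary placeholders, and you should run it several times to smooth out caching effects.

import time

import mysql.connector

def time_statements(cur, statements, repeats=100):
    # Execute each statement `repeats` times and return total elapsed seconds.
    start = time.perf_counter()
    for _ in range(repeats):
        for stmt in statements:
            cur.execute(stmt)
            cur.fetchall()  # drain the result set so row transfer is included
    return time.perf_counter() - start

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="mydb"
)
cur = conn.cursor()

calc_time = time_statements(cur, [
    "SELECT SQL_CALC_FOUND_ROWS * FROM `table` WHERE id > 100 LIMIT 10",
    "SELECT FOUND_ROWS()",
])
two_query_time = time_statements(cur, [
    "SELECT * FROM `table` WHERE id > 100 LIMIT 10",
    "SELECT COUNT(*) FROM `table` WHERE id > 100",
])
print(f"SQL_CALC_FOUND_ROWS: {calc_time:.3f}s, two queries: {two_query_time:.3f}s")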