Query:
SELECT COUNT(online.account_id) cnt FROM online;
But the online table is also modified by an event, so I frequently see locks when running SHOW PROCESSLIST.
Is there any syntax in MySQL that can make a SELECT statement not cause locks?
I forgot to mention above that this is on a MySQL slave database.
After I added transaction-isolation = READ-UNCOMMITTED to my.cnf, the slave fails with this error:
Error 'Binary logging not possible. Message: Transaction level 'READ-UNCOMMITTED' in InnoDB is not safe for binlog mode 'STATEMENT'' on query
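The error arises because the slave writes its binary log in STATEMENT format, which cannot safely record work done at READ-UNCOMMITTED. A minimal sketch of how one might confirm this on the slave (SHOW VARIABLES is standard MySQL; the STATEMENT value shown is inferred from the error text, not from the original post):

SHOW VARIABLES LIKE 'binlog_format';
-- Expected on this server: binlog_format = STATEMENT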
So, is there a compatible way to do this?
Found an article titled "MYSQL WITH NOLOCK":
https://web.archive.org/web/20100814144042/http://sqldba.org/articles/22-mysql-with-nolock.aspx
In MS SQL Server you would do the following:
SELECT * FROM TABLE_NAME WITH (nolock)
and the MySQL equivalent is:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ;
SELECT * FROM TABLE_NAME ;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ ;
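Applied to the counting query from the question, the pattern would look like this (a sketch, assuming the session's default isolation level is REPEATABLE READ, MySQL's out-of-the-box default):

SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(online.account_id) cnt FROM online;  -- dirty read: takes no shared locks
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;  -- restore the assumed default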
Edit
Michael Mior suggested the following (from the comments):
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ;
SELECT * FROM TABLE_NAME ;
COMMIT ;
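Note the difference from the previous block: without the SESSION keyword, SET TRANSACTION ISOLATION LEVEL applies only to the next transaction started in the session, so the COMMIT ends that transaction and the session falls back to its default isolation level with no manual reset needed.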