I have an InnoDB table which records online users. It gets updated on every page refresh to keep track of which page each user is on and their last access date to the site. I then have a cron that runs every 15 minutes to DELETE old records.
I got 'Deadlock found when trying to get lock; try restarting transaction' for about 5 minutes last night, and it appears to happen when running INSERTs into this table. Can someone suggest how to avoid this error?
=== EDIT ===
Here are the queries that are running:
On first visit to the site:
INSERT INTO onlineusers SET
ip = '123.456.789.123',
datetime = now(),
userid = 321,
page = '/thispage',
area = 'thisarea',
type = 3
On every page refresh:
UPDATE onlineusers SET
ip = '123.456.789.123',
datetime = now(),
userid = 321,
page = '/thispage',
area = 'thisarea',
type = 3
WHERE id = 888
Cron every 15 minutes:
DELETE FROM onlineusers WHERE datetime <= now() - INTERVAL 900 SECOND
It then does some counts to log some stats (ie: members online, visitors online).
One easy trick that can help with most deadlocks is sorting the operations in a specific order.
You get a deadlock when two transactions try to take the same two locks in opposite order, ie:

connection 1: locks key(1), then key(2)
connection 2: locks key(2), then key(1)

If both run at the same time, connection 1 will lock key(1), connection 2 will lock key(2), and each connection will wait for the other to release its key -> deadlock.
Now, if you change your queries so that the connections lock the keys in the same order, ie:

connection 1: locks key(1), then key(2)
connection 2: locks key(1), then key(2)

it is impossible to deadlock.
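As an illustration only (not part of the original setup), the same ordering trick can be sketched in Python with threading locks: each worker sorts its locks into one global order before acquiring them, so two workers that name the locks in opposite order still cannot deadlock. This mirrors having every transaction touch rows in ascending primary-key order.

```python
import threading

lock1 = threading.Lock()
lock2 = threading.Lock()
finished = []

def worker(name, first, second):
    # Acquire the two locks in one global order (here: by object id),
    # no matter which order the caller passed them in.
    for lock in sorted((first, second), key=id):
        lock.acquire()
    try:
        finished.append(name)  # "critical section"
    finally:
        first.release()
        second.release()

# The two "connections" name the locks in opposite order on purpose;
# without the sort above, this interleaving could deadlock.
t1 = threading.Thread(target=worker, args=("conn1", lock1, lock2))
t2 = threading.Thread(target=worker, args=("conn2", lock2, lock1))
t1.start(); t2.start()
t1.join(); t2.join()
print(finished)  # both workers complete
```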
So this is what I suggest:
Make sure you have no queries other than the delete statement that lock access to more than one key at a time. If you do (and I suspect you do), order the values in their WHERE ... IN (k1, k2, ..., kn) clauses in ascending order.
Fix your delete statement to work in ascending order:
Change:
DELETE FROM onlineusers
WHERE datetime <= now() - INTERVAL 900 SECOND
to:
DELETE FROM onlineusers
WHERE id IN (
    SELECT id FROM (
        SELECT id FROM onlineusers
        WHERE datetime <= now() - INTERVAL 900 SECOND
        ORDER BY id
    ) u
)
Another thing to keep in mind is that the MySQL documentation suggests that in case of a deadlock the client should retry automatically. You can add this logic to your client code (say, 3 retries on this particular error before giving up).
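A minimal retry wrapper, sketched in Python. The helper name and the error matching are assumptions: MySQL reports deadlocks as error 1213 with the message 'Deadlock found when trying to get lock', but the concrete exception class depends on the client library you use.

```python
import random
import time

def run_with_retry(operation, retries=3, base_delay=0.05):
    """Run `operation` (one transaction), retrying on deadlock.

    Hypothetical helper: detects a deadlock by looking for MySQL
    error code 1213 or its message in the exception text, since the
    exact exception type depends on the driver in use.
    """
    for attempt in range(retries):
        try:
            return operation()
        except Exception as exc:
            message = str(exc)
            if "1213" not in message and "Deadlock found" not in message:
                raise  # not a deadlock -- do not retry
            if attempt == retries - 1:
                raise  # still deadlocking after the last retry
            # short randomized backoff before restarting the transaction
            time.sleep(base_delay * (2 ** attempt) * random.random())
```

Re-running the whole transaction (not just the failed statement) is what the MySQL manual recommends, since the deadlocked transaction has already been rolled back.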