We are creating a data pipeline from MySQL in RDS to Elasticsearch for building search indexes, using Debezium CDC with its MySQL source connector and Elasticsearch sink connector.
Now, since MySQL is in RDS, we had to grant the MySQL user the LOCK TABLES privilege on the two tables we wanted CDC for, as mentioned in the docs.
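For context, the privilege set described in the Debezium MySQL connector documentation looks roughly like this. This is a minimal sketch; the user, password, and database names are placeholders:

    -- Privileges typically required by a Debezium MySQL connector user.
    CREATE USER 'debezium'@'%' IDENTIFIED BY 'change-me';
    GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT
        ON *.* TO 'debezium'@'%';
    -- On RDS a global read lock is not permitted, so Debezium falls back to
    -- table-level locks during the initial snapshot, which needs LOCK TABLES:
    GRANT LOCK TABLES ON mydb.* TO 'debezium'@'%';
    FLUSH PRIVILEGES;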
We also have various other MySQL users performing transactions that may touch either of those two tables.
As soon as we connected the MySQL connector to our production database, a lock was taken and our whole system went down. After realising this we quickly stopped Kafka and removed the connector, but the locks kept piling up, and the problem was only resolved after we stopped all new queries by shutting down our production code and manually killing the processes.
What could be the potential cause of this, and how can we prevent it?
If the locking is the problem and you cannot afford the tradeoff between locking and consistency, then please take a look at the snapshot.locking.mode config option.
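As a hedged illustration, a Kafka Connect registration payload that disables snapshot locking might look like the sketch below. The endpoint, credentials, and table names are placeholders, and exact property names vary between Debezium versions (for example, table.include.list was formerly table.whitelist):

    {
      "name": "mysql-search-source",
      "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "your-rds-endpoint.rds.amazonaws.com",
        "database.port": "3306",
        "database.user": "debezium",
        "database.password": "change-me",
        "database.server.id": "184054",
        "table.include.list": "mydb.table1,mydb.table2",
        "snapshot.locking.mode": "none"
      }
    }

snapshot.locking.mode accepts minimal (the default, which holds locks only while the table schemas are read), extended (which holds locks for the entire snapshot), and none (which takes no table locks at all, at the risk of an inconsistent snapshot if schema changes occur while it runs).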