My solution is:

Is this solution OK? And what's the best strategy to do this job?
You don't need to hack anything ;)
I am not entirely sure why you need the data in MySQL; if I knew, there might be a more suitable answer. In any case, as a generic answer, you can use Redis keyspace notifications.
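Keyspace notifications are disabled by default, so they have to be switched on first. A minimal sketch using Python and the redis-py client (the "Egh" flag combination is my assumption of what you'd want here: "E" for keyevent notifications, "g" for generic commands such as DEL, "h" for hash commands):

```python
import redis

r = redis.Redis()

# Enable keyevent notifications for generic and hash commands.
# This can also be set permanently via the notify-keyspace-events
# directive in redis.conf.
r.config_set('notify-keyspace-events', 'Egh')
```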
You could subscribe to the HSET, HMSET, HDEL and DEL commands on your keys, so you would get a notification every time a key is deleted or a hash field is set or removed.
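A minimal subscriber sketch, assuming the notification config above and database 0; note that HMSET fires the same hset event as HSET, so subscribing to hset covers both. The sync hook at the bottom is a hypothetical placeholder, not part of the original answer:

```python
import redis

r = redis.Redis()

# With "Egh" enabled, Redis publishes one message per command on
# channels named __keyevent@<db>__:<command>; the payload is the key.
p = r.pubsub()
p.psubscribe('__keyevent@0__:hset',
             '__keyevent@0__:hdel',
             '__keyevent@0__:del')

for message in p.listen():
    if message['type'] != 'pmessage':
        continue
    command = message['channel'].decode().rsplit(':', 1)[-1]
    key = message['data'].decode()
    # Hypothetical hook: re-read the hash and upsert/delete it in MySQL.
    print(f'{command} on {key} -> sync this key to MySQL')
```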
Note that if you miss any notification you will have an inconsistency, so once in a while you could use the SCAN command to go through all your keys and check against MySQL whether they need to be updated.
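The reconciliation pass could look roughly like this; the 'user:*' key pattern and compare_and_update_mysql are hypothetical placeholders for your own schema and sync logic:

```python
import redis

r = redis.Redis()

def reconcile(pattern='user:*'):
    """Walk all matching keys with SCAN (incremental and non-blocking,
    unlike KEYS) and reconcile each one against MySQL."""
    for key in r.scan_iter(match=pattern, count=500):
        data = r.hgetall(key)
        compare_and_update_mysql(key, data)  # hypothetical helper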
Another strategy could be maintaining two separate structures: one would be the hash with the values, and the other a ZSET of all the values sorted by update timestamp. The best way to keep both structures up to date would be to write two or three Lua scripts (insert/update and delete) that operate on the hash and the ZSET atomically, as in the sketch below.
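As a rough illustration of those scripts (the key names, the 'updated_idx'/'deleted_idx' ZSETs and the field layout are assumptions, not something from the original answer; multi-field HSET needs Redis >= 4.0):

```python
import time
import redis

r = redis.Redis()

# Atomically write the hash fields and stamp the key in the index ZSET.
# KEYS[1] = hash key, KEYS[2] = index ZSET, ARGV[1] = timestamp,
# ARGV[2..] = alternating field/value pairs.
upsert = r.register_script("""
redis.call('HSET', KEYS[1], unpack(ARGV, 2))
redis.call('ZADD', KEYS[2], ARGV[1], KEYS[1])
return 1
""")

# The matching delete script; the third key is an optional "deleted"
# ZSET so the sync job can also see removals.
delete = r.register_script("""
redis.call('DEL', KEYS[1])
redis.call('ZREM', KEYS[2], KEYS[1])
redis.call('ZADD', KEYS[3], ARGV[1], KEYS[1])
return 1
""")

upsert(keys=['user:42', 'updated_idx'],
       args=[time.time(), 'name', 'alice', 'age', 30])
delete(keys=['user:42', 'updated_idx', 'deleted_idx'], args=[time.time()])
```

Because each script runs atomically inside Redis, the hash and the ZSET can never drift apart between the two writes.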
Then you can periodically query the ZSET for the elements with a timestamp higher than your last sync operation, get all the keys that were updated (this will include deleted keys, unless you keep a second ZSET exclusively for those), and then retrieve all the elements by key and sync them to MySQL.
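The periodic sync job could then be sketched like this, with upsert_into_mysql again a hypothetical placeholder:

```python
import time
import redis

r = redis.Redis()

def sync_to_mysql(last_sync):
    """Fetch every key touched since the previous run and push it to MySQL."""
    now = time.time()
    # '(' makes the lower bound exclusive, so already-synced items are skipped.
    changed = r.zrangebyscore('updated_idx', f'({last_sync}', now)
    for key in changed:
        row = r.hgetall(key)
        upsert_into_mysql(key, row)  # hypothetical helper
    return now  # becomes last_sync for the next run
```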
Hope it works for you!