What is the meaning of a Log Sequence Number? I know that it is of type binary, 10 bytes long, and that it corresponds to the time a transaction happened in the DB. But is this a high-precision date-time value stored in some efficient binary format, or is it a function of the date-time and something else (for example, the serial number of transactions that happen within the same millisecond)? I did a lot of searching but couldn't find a good answer.
Can anyone explain the formula or function that is used to derive the LSN from the date-time or anything else?
Every record in the SQL Server transaction log is uniquely identified by a log sequence number (LSN). LSNs are ordered such that if LSN2 is greater than LSN1, the change described by the log record referred to by LSN2 occurred after the change described by the log record referred to by LSN1.
From here.
You should not be concerned with how these are generated.
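That said, it helps to know that the 10-byte value is a position in the log rather than an encoded timestamp: it packs a virtual log file (VLF) sequence number, a log-block offset within that VLF, and a slot number within the block. As a minimal sketch, the colon-separated hexadecimal display form (as shown, for example, by `fn_dblog`) can be split into those three parts; the example string below is illustrative, not taken from a real log:

```python
def parse_lsn(lsn_hex: str) -> tuple[int, int, int]:
    """Split a colon-separated hex LSN into its three integer parts.

    Assumes the common display format 'VLF:logblock:slot', e.g.
    '0000002d:000001a8:0001'. The parts order records within the
    log file; none of them encodes a wall-clock time.
    """
    vlf, block, slot = (int(part, 16) for part in lsn_hex.split(":"))
    return vlf, block, slot

print(parse_lsn("0000002d:000001a8:0001"))  # (45, 424, 1)
```

Because the three components only grow as the log grows, comparing two LSNs as tuples gives the same ordering the quoted passage describes, with no date-time arithmetic involved.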