Is it possible to automatically move rows that are 3 days old into another table called "Table_Archive" in MySQL, once a week?
TableA, for example:
ID | stringvalue | Timestamp
1 | abc | 2011-10-01
2 | abc2 | 2011-10-02
3 | abc3 | 2011-10-05
4 | abc4 | 2011-10-10
5 | abc5 | 2011-10-11
After the move:
TableA:
ID | stringvalue | Timestamp
4 | abc4 | 2011-10-10
5 | abc5 | 2011-10-11
Table_Archive:
ID | stringvalue | Timestamp
1 | abc | 2011-10-01
2 | abc2 | 2011-10-02
3 | abc3 | 2011-10-05
And when new input comes into TableA, won't there be any problems with the ID (PK) on the next move?
What I've got so far:
CREATE PROCEDURE clean_tables ()
BEGIN
BEGIN TRANSACTION;
DECLARE _now DATETIME;
SET _now := NOW();
INSERT
INTO Table_Archive
SELECT *
FROM TableA
WHERE timestamp < _now - 3;
FOR UPDATE;
DELETE
FROM TableA
WHERE timestamp < _now - 3;
COMMIT;
END
How do I change _now to be the date 3 days ago?
Personally, I would make use of the MySQL Event Scheduler. This is a built-in event scheduler, rather like cron in Linux.
You can set it to call a procedure or function, or to run a bit of SQL, at designated intervals.
Read the MySQL docs, but an example would be:
CREATE EVENT mydatabase.myevent
    ON SCHEDULE EVERY 1 WEEK STARTS CURRENT_TIMESTAMP + INTERVAL 10 MINUTE
    DO
        CALL clean_tables();
So this is saying "call clean_tables() once a week, and make the first call in 10 minutes' time".
One gotcha is that the event scheduler is (I think) disabled by default. To turn it on, run:
SET GLOBAL event_scheduler = ON;
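Note that SET GLOBAL does not survive a server restart. To enable the scheduler permanently, you can also set it in the MySQL configuration file (option name as documented by MySQL; file location depends on your setup):

```
[mysqld]
event_scheduler = ON
```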
You can then run:
SHOW PROCESSLIST;
to see whether the event scheduler thread is running.
As for preserving your TableA ID column (if you must): I would keep the ID on Table_Archive unique to that table, i.e. make it the primary key with auto_increment, and then have an 'Original_TableA_ID' column in which to store the TableA ID. You can put a unique index on it if you want.
So Table_Archive would look like:
create table `Table_Archive` (
    ID int unsigned primary key auto_increment, -- primary key, auto increment
    tableAId int unsigned not null,             -- ID column from TableA
    stringValue varchar(100),
    `timestamp` datetime,
    UNIQUE KEY `archiveUidx1` (`tableAId`)      -- maintain uniqueness of TableA.ID in the archive table
);
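With this schema, an `INSERT INTO Table_Archive SELECT * FROM TableA` would no longer line up, because the archive now has its own auto-increment ID column. A sketch of the adjusted insert, listing the columns explicitly (column names taken from the schemas above):

```sql
INSERT INTO Table_Archive (tableAId, stringValue, `timestamp`)
SELECT ID, stringvalue, `timestamp`
FROM TableA
WHERE `timestamp` < NOW() - INTERVAL 3 DAY;
```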
Nobody seems to have answered your original question, "How do I change _now to be the date 3 days ago?". You do that using INTERVAL:
DELIMITER $$
CREATE PROCEDURE clean_tables ()
BEGIN
    -- DECLARE must come before any other statements in the block
    DECLARE _now DATETIME;
    SET _now := NOW();
    START TRANSACTION;
    INSERT INTO Table_Archive
    SELECT *
    FROM TableA
    WHERE `timestamp` < _now - INTERVAL 3 DAY;
    DELETE
    FROM TableA
    WHERE `timestamp` < _now - INTERVAL 3 DAY;
    COMMIT;
END$$
DELIMITER ;
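Equivalently, you can use DATE_SUB instead of the minus-INTERVAL syntax; the two expressions below return the same value:

```sql
SELECT NOW() - INTERVAL 3 DAY;
SELECT DATE_SUB(NOW(), INTERVAL 3 DAY);
```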
One final point: you should consider creating an index on the timestamp column of TableA to improve the performance of your clean_tables() procedure.
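For example (the index name is just a suggestion):

```sql
CREATE INDEX idx_tablea_timestamp ON TableA (`timestamp`);
```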