Given the following table:
id | value
-----------
 1 |     6
 2 |    70
Is there a way to add a column that is automatically calculated based on another column in the same table? Like a VIEW, but part of the same table. As an example, calculated would be half of value. Calculated should be automatically updated when value changes, just like a VIEW would be.
The result would be:
id | value | calculated
------------------------
 1 |     6 |          3
 2 |    70 |         35
A generated column is a good approach for MySQL version 5.7.6 and above.
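Applied directly to the table in the question, a minimal sketch could look like the following (the table name, the STORED keyword, and the value/2 expression are assumptions based on the example above):

CREATE TABLE example (
  id         INT PRIMARY KEY,
  `value`    INT,
  calculated DOUBLE AS (`value` / 2) STORED  -- recomputed whenever `value` changes
);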
There are two types of generated columns: virtual and stored.
Both types can have NOT NULL restrictions, but only a stored generated column can be part of an index.
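For example, a stored generated column can be indexed directly at table creation time (the table and index names below are illustrative):

CREATE TABLE demo (
  a INT,
  b INT AS (a * 2) STORED,  -- stored generated column
  KEY idx_b (b)             -- index on the generated column
);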
For the current case, we are going to use a stored generated column. For the implementation, I assume that both of the values required for the calculation are already present in the table:
CREATE TABLE order_details (
  price    DOUBLE,
  quantity INT,
  amount   DOUBLE AS (price * quantity) STORED  -- stored generated column
);

INSERT INTO order_details (price, quantity) VALUES (100, 1), (300, 4), (60, 8);
The amount column will automatically appear in the table and you can query it directly. Also note that whenever you update price or quantity, amount will be updated as well.
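As a quick check (the UPDATE below is only an illustration):

SELECT * FROM order_details;
-- amount: 100, 1200, 480

UPDATE order_details SET quantity = 2 WHERE price = 100;
SELECT amount FROM order_details WHERE price = 100;
-- amount is now 200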