I want to be able to pass an "array" of values to my stored procedure, instead of calling an "add value" procedure serially.
Can anyone suggest a way to do this? Am I missing something here?
I will be using PostgreSQL or MySQL; I haven't decided yet.
As Chris pointed out, in PostgreSQL this is no problem - any base type (like int or text) has its own array subtype, and you can also create custom types, including composite ones. For example:
CREATE TYPE test AS (
    n int4,
    m int4
);
Now you can easily create an array of test values:
SELECT ARRAY[
    row(1,2)::test,
    row(3,4)::test,
    row(5,6)::test
];
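If it helps, the same array can also be written with PostgreSQL's array literal syntax, as a single string cast to the array type (a minimal sketch; the values match the example above):

-- Equivalent array, written as a text literal and cast to test[];
-- each "(n,m)" pair is one composite value.
SELECT '{"(1,2)","(3,4)","(5,6)"}'::test[];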
You can write a function that multiplies n*m for each item in the array and returns the sum of the products:
CREATE OR REPLACE FUNCTION test_test(IN work_array test[]) RETURNS INT4 AS $$
DECLARE
    i      INT4;
    result INT4 := 0;
BEGIN
    -- Iterate over the array's subscripts, one index at a time.
    FOR i IN SELECT generate_subscripts(work_array, 1) LOOP
        result := result + work_array[i].n * work_array[i].m;
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql;
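As an aside, the same result can be computed set-based with unnest(), which avoids the explicit loop. This is a minimal sketch under the same type definition; the function name test_test_sql is made up for illustration:

CREATE OR REPLACE FUNCTION test_test_sql(IN work_array test[]) RETURNS INT4 AS $$
    -- unnest() expands the array into rows, so the products can be summed directly.
    SELECT COALESCE(sum(t.n * t.m), 0)::int4 FROM unnest(work_array) AS t;
$$ LANGUAGE sql;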
And run it:
# SELECT test_test(
    ARRAY[
        row(1,2)::test,
        row(3,4)::test,
        row(5,6)::test
    ]
);
 test_test
-----------
        44
(1 row)
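Note that a client application does not need to build the ARRAY[...] expression itself; it can bind the whole array as a single text parameter using the literal syntax shown earlier. A minimal sketch of that form of the call:

-- The same call, with the array passed as one text value and cast to test[];
-- this is the shape a driver would send when binding the array as a parameter.
SELECT test_test('{"(1,2)","(3,4)","(5,6)"}'::test[]);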