I have a CSV file with the following structure:
Alfreds,Centro,Ernst,Island,Bacchus
Germany,Mexico,Austria,UK,Canada
01,02,03,04,05
Now I have to move that data into the database like below:
Name,City,ID
Alfreds,Germany,01
Centro,Mexico,02
Ernst,Austria,03
Island,UK,04
Bacchus,Canada,05
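In other words, the required transformation is a transpose of the three CSV lines into one record per column. A minimal Python sketch of that reshaping, assuming exactly the layout above (the file name is hypothetical):

```python
import csv

# Read the three input lines: names, cities, IDs
# (assumes the layout shown above; "input.csv" is a hypothetical file name)
with open("input.csv", newline="") as f:
    names, cities, ids = list(csv.reader(f))

# zip(...) transposes the column-wise data into one (Name, City, ID) row per record
for record in zip(names, cities, ids):
    print(",".join(record))
# Alfreds,Germany,01
# Centro,Mexico,02
# ... and so on for the remaining records
```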
I tried to map those columns, but I wasn't able to extract the data column-wise.
Here my input data is column-wise, but I need to insert it row-wise into SQL Server.
Can anyone suggest a way to transform column-wise data into row-wise data in SQL Server?
Thanks
@Andy,
It is also possible in NiFi without using ExecuteScript.
I extracted the 3 input rows as input.1, input.2, input.3 in ExtractText. Then I counted the number of columns in "input.1" using AnydelinateValues in Expression Language and stored that in a "TotalCount" attribute.
最初制作Count=1".
Initially made "Count=1".
使用循环概念通过使用Count"获取第一列,然后在RouteOnAttribute中增加Count"检查Count""le(totalcount)"
Using Loop Concept to get the first column by using "Count" and then increment "Count" Check "Count" in RouteOnAttribute "le(totalcount)"
现在使用 "Count" 属性形成插入查询.
Now form insert Query with "Count" Attribute.
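To make the loop concrete, here is a minimal Python sketch of the same Count/TotalCount logic (plain Python standing in for the NiFi attributes and processors; the Customers table and column names are assumptions):

```python
# Standalone illustration of the flow described above (this is plain Python,
# not NiFi configuration).
input_1 = "Alfreds,Centro,Ernst,Island,Bacchus"  # extracted as input.1
input_2 = "Germany,Mexico,Austria,UK,Canada"     # extracted as input.2
input_3 = "01,02,03,04,05"                       # extracted as input.3

total_count = input_1.count(",") + 1  # "TotalCount": number of columns in input.1
count = 1                             # "Count", initially 1

while count <= total_count:  # RouteOnAttribute check: Count le TotalCount
    # Pull the Count-th field from each extracted row
    name = input_1.split(",")[count - 1]
    city = input_2.split(",")[count - 1]
    id_  = input_3.split(",")[count - 1]
    # Form the insert query for this Count (table name is hypothetical)
    print(f"INSERT INTO Customers (Name, City, ID) "
          f"VALUES ('{name}', '{city}', '{id_}');")
    count += 1  # increment Count and route back through the loop
```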
It worked well for me. It could be useful for someone.