I have a table with a column of type LINESTRING, and a row in that table which already has some points in said column.
I have a second set of points in the form of a string, and I would like to append these points to the existing row. Is there any way to do this in MySQL without selecting the points as text, manually merging the strings, and then updating the points in the row?
MySQL's spatial functions do not include any direct way to append to a LINESTRING, but there is a workaround which I have tried for you.
Get the value
set @gval = (select ST_AsText(route) from `spatial` where id = 5);
I named the table 'spatial' and added a column 'route' of datatype LINESTRING. (SPATIAL is a reserved word in MySQL, so the table name is backquoted in the statements.)
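For reference, a minimal setup matching that description might look like the sketch below; the id value and the sample coordinates are only illustrative, and ST_GeomFromText is assumed to be available (MySQL 5.6 and later).

create table `spatial` (
    id int primary key,
    route linestring
);
insert into `spatial` (id, route)
values (5, ST_GeomFromText('LINESTRING(8.56 76.85,8.565 76.851)'));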
Append the string by using the REPLACE function, splicing your required lat/lon (or point) in just before the closing parenthesis:
set @gval = replace(@gval, ')', ',8.5684875 76.8520767)');
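For example, assuming the row had been stored with the illustrative coordinates above, this REPLACE would change @gval from 'LINESTRING(8.56 76.85,8.565 76.851)' to:

LINESTRING(8.56 76.85,8.565 76.851,8.5684875 76.8520767)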
Finally, write the modified text back as geometry:
update `spatial` set route = GEOMFROMTEXT(@gval) where id = 5;
This works well for me.
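As a variant that is not part of the original answer, the same idea can be collapsed into a single statement so the session variable is not needed. This is only a sketch under the same table and column names; it uses ST_GeomFromText, which is the name that remains in MySQL 8.0 (GEOMFROMTEXT was removed there).

update `spatial`
set route = ST_GeomFromText(replace(ST_AsText(route), ')', ',8.5684875 76.8520767)'))
where id = 5;

Either way, the new point is spliced in just before the closing parenthesis of the WKT, so the value stays a single valid LINESTRING.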