OpenCV Python Script for Gimp Procedure - Grass/Hard Surface Edge Detection

Date: 2023-02-06

Question


I would like to develop a Python OpenCV script to duplicate/improve on a Gimp procedure I have developed. The goal of the procedure is to provide an x,y point array that follows the dividing line between grass and hard surfaces. This array will allow me to finish my 500 lb 54" wide pressure washing robot, which has a Raspberry Pi Zero (and camera), so that it can follow that edge at a speed of a couple inches per second. I will be monitoring and/or controlling the bot via its wifi video stream and an iPhone app while I watch TV on my couch.

Here is a sample original image (60x80 pixels):

The Gimp procedure is:

  1. Convert image to indexed 2 colors. Basically grass on one side and bricks or pavement on the other side. DARN SHADOWS oops that's me :)

  1. Of the two colors, take the lower Hue value and magic wand on a pixel of that value with the below wand settings. The Hue setting of 23 is how I remove shadows and the feather setting of 15 is how I remove islands/jaggies (grass in the cracks :).

  1. Do an advanced selection to path with the following advanced settings values (changes from default values are yellow). Basically I want just line segments and my (x,y) point array will be the Yellow path dots.

  1. Next I export the path to an .xml file from which I can parse and isolate the yellow dots in the above image. Here is the .xml file (a short parsing sketch follows after it):

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20010904//EN"
              "http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">

<svg xmlns="http://www.w3.org/2000/svg"
     width="0.833333in" height="1.11111in"
     viewBox="0 0 60 80">
  <path id="Selection"
        fill="none" stroke="black" stroke-width="1"
        d="M 60.00,0.00
           C 60.00,0.00 60.00,80.00 60.00,80.00
             60.00,80.00 29.04,80.00 29.04,80.00
             29.04,80.00 29.04,73.00 29.04,73.00
             29.04,73.00 30.00,61.00 30.00,61.00
             30.00,61.00 30.00,41.00 30.00,41.00
             30.00,41.00 29.00,30.85 29.00,30.85
             29.00,30.85 24.00,30.85 24.00,30.85
             24.00,30.85 0.00,39.00 0.00,39.00
             0.00,39.00 0.00,0.00 0.00,0.00
             0.00,0.00 60.00,0.00 60.00,0.00 Z" />
</svg>
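
For reference, a minimal sketch of one way to parse the exported file back into an (x,y) point array is below. The file name Selection.svg is an assumption, and the regex simply pulls every "x,y" number pair out of the path's d attribute:

import re
import xml.etree.ElementTree as ET

# hypothetical file name for the path exported from Gimp
tree = ET.parse('Selection.svg')
root = tree.getroot()

points = []
for path in root.iter('{http://www.w3.org/2000/svg}path'):
    d = path.get('d')
    # pull every "x,y" number pair out of the path data
    for x, y in re.findall(r'(-?\d+\.?\d*),(-?\d+\.?\d*)', d):
        points.append((float(x), float(y)))

# the C (curve) commands repeat points, so drop consecutive duplicates
points = [p for i, p in enumerate(points) if i == 0 or p != points[i - 1]]
print(points)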

My goal for execution time for this OpenCV procedure on my Pi Zero is about 1-2 seconds or less (currently taking ~0.18 secs).

I have cobbled together something that sort of results in roughly the same points that are in the Gimp xml file. I am not sure at all whether it is doing what Gimp does with regard to the hue range of the mask. I have not yet figured out how to apply the minimum radius to the mask; I am pretty sure I will need that when the mask gets a 'grass' clump on the edge of the hard surface as part of the mask. Here are all the contour points so far (pntscanvas.bmp):

As of 7/6/2018 5:08 pm EST, here is the 'still messy' script that sort of works and found those points:

import numpy as np
import time, sys, cv2

img = cv2.imread('2-60.JPG')
cv2.imshow('Original',img)
# get a blank pntscanvas for drawing points on 
pntscanvas = np.zeros(img.shape, np.uint8)

print (sys.version)  
if sys.version_info[0] < 3:
    raise Exception("Python 3 or a more recent version is required.")

def doredo():
    start_time = time.time()

    # Use kmeans to convert to 2 color image
    hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    Z = hsv_img.reshape((-1,3))
    Z = np.float32(Z)
    # define criteria, number of clusters(K) 
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    K = 2
    ret,label,center=cv2.kmeans(Z,K,None,criteria,10,cv2.KMEANS_RANDOM_CENTERS)

    # Create a mask by selecting a hue range around the lowest hue of the 2 colors
    if center[0,0] < center[1,0]:
        hueofinterest = center[0,0]
    else:
        hueofinterest = center[1,0]
    hsvdelta = 8
    lowv = np.array([hueofinterest - hsvdelta, 0, 0])
    higv = np.array([hueofinterest + hsvdelta, 255, 255])
    mask = cv2.inRange(hsv_img, lowv, higv)

    # Extract contours from the mask
    ret,thresh = cv2.threshold(mask,250,255,cv2.THRESH_BINARY_INV)
    im2,contours,hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    
    # Find the biggest area contour
    cnt = contours[0]
    max_area = cv2.contourArea(cnt)

    for cont in contours:
        if cv2.contourArea(cont) > max_area:
            cnt = cont
            max_area = cv2.contourArea(cont)

    # Make array of all edge points of the largest contour, named allpnts
    perimeter = cv2.arcLength(cnt,True)
    epsilon = 0.01*cv2.arcLength(cnt,True) # 0.0125*cv2.arcLength(cnt,True) seems to work better
    allpnts = cv2.approxPolyDP(cnt,epsilon,True)
    
    end_time = time.time()
    print("Elapsed cv2 time was %g seconds" % (end_time - start_time))

    # Convert back into uint8, and make 2 color image for saving and showing
    center = np.uint8(center)
    res = center[label.flatten()]
    res2 = res.reshape((hsv_img.shape))

    # Save, show and print stuff
    cv2.drawContours(pntscanvas, allpnts, -1, (0, 0, 255), 2)
    cv2.imwrite("pntscanvas.bmp", pntscanvas)
    cv2.imshow("pntscanvas.bmp", pntscanvas)
    print('allpnts')
    print(allpnts)
    print("center")
    print(center)
    print('lowv',lowv)
    print('higv',higv)
    cv2.imwrite('mask.bmp',mask)
    cv2.imshow('mask.bmp',mask)
    cv2.imwrite('CvKmeans2Color.bmp',res2)
    cv2.imshow('CvKmeans2Color.bmp',res2)

print ("Waiting for 'Spacebar' to Do/Redo OR 'Esc' to Exit")
while(1):
    ch = cv2.waitKey(50)
    if ch == 27:
        break
    if ch == ord(' '):
        doredo()
        
cv2.destroyAllWindows()
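
As an aside, the plain (x,y) list the robot ultimately needs can be pulled out of the approxPolyDP result, since it comes back as an (N, 1, 2) array. A minimal sketch, assuming doredo() above is changed to end with "return allpnts":

allpnts = doredo()  # assumes doredo() is modified to return allpnts
xy_points = [(int(x), int(y)) for x, y in allpnts.reshape(-1, 2)]
print(xy_points)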

Left to do:

  1. Add mask radiusing on non-edge pixels to take care of raw masks like this one that Gimp creates before it runs a min radius on the mask:

1a. EDIT: As of July 9, 2018, I have been concentrating on this issue as it seems to be my biggest problem. I am unable to have cv2.findContours smooth out the 'edge grass' as well as Gimp does with its magic wand radius feature. On the left is a 2 color 'problem' mask with the resultant 'Red' points overlaid, found directly using cv2.findContours; on the right, the Gimp-radiused mask has been applied to the left image's 'problem' mask before cv2.findContours is run, resulting in the right image and points:

I have tried looking at Gimp's source code but it is way beyond my comprehension, and I cannot find any OpenCV routine that can do this. Is there a way to apply a minimum-radius smoothing to the 'non-edge' pixels of an edge mask in OpenCV??? By 'non-edge' I mean that, as you can see, Gimp does not radius these 'corners' (inside the Yellow highlight) but only seems to apply the radius smoothing to edges 'inside' the image (Note: Gimp's radiusing algorithm eliminates all the small islands in the mask, which means you don't have to find the largest-area contour after cv2.findContours is applied to get the points of interest). A rough morphological approximation is sketched after this list:

  1. Remove irrelevant array points from allpnts that are on the image edge.
  2. Figure out why the array points that it finds seem to border the green grass instead of the hard surface; I thought I was working with the hard surface hue.
  3. Figure out why the hard surface color in CvKmeans2Color.bmp appears orange and not beige as in Gimp's conversion, AND why doesn't this match pixel for pixel with Gimp's conversion? Here is CvKmeans2Color.bmp and Gimp's:
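
For what it's worth, one OpenCV way to approximate a minimum-radius cleanup (it is not what Gimp's wand actually does internally) is morphological opening and closing with a round structuring element. A minimal sketch, assuming the mask.bmp written by the script above and a radius value that would need tuning:

import cv2

# read the raw 2 color mask written by the script above (assumed file name)
mask = cv2.imread('mask.bmp', cv2.IMREAD_GRAYSCALE)

# round structuring element whose radius plays the role of Gimp's wand radius (a guess, tune it)
radius = 3
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2*radius + 1, 2*radius + 1))

# opening removes small islands (grass specks on the hard surface),
# closing fills small holes and rounds off clumps sticking out of the edge
smoothed = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
smoothed = cv2.morphologyEx(smoothed, cv2.MORPH_CLOSE, kernel)

cv2.imwrite('mask_radiused.bmp', smoothed)

The behaviour at the image border may differ from Gimp's, so the border points may still need to be filtered out afterwards (see item 1 above).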

EDIT: As of 5pm EST July 12, 2018: I have resorted to the language I can most easily create code with, VB6, ughh, I know. Anyway, I have been able to make a line/edge smoothing routine that works at the pixel level to do the min radius mask I want. It works like a PacMan roaming along the right side of an edge as close as it can, leaving behind a breadcrumb trail on the Pac's left side. Not sure I can make a Python script from that code, but at least I have a place to start, as nobody has confirmed that there is an OpenCV alternative way to do it. If anyone is interested, here is a compiled .exe file that should run on most Windows systems without an install (I think). Here is a screenshot from it (Blue/GreenyBlue pixels are the unsmoothed edge and Green/GreenyBlue pixels are the radiused edge):

You can get the gist of my process logic by this VB6 routine:

Sub BeginFollowingEdgePixel()
   Dim lastwasend As Integer
   wasinside = False
   While (1)
      If HitFrontBumper Then
         GoTo Hit
      Else
         Call MoveForward
      End If
      If circr = orgpos(0) And circc = orgpos(1) Then
         orgpixr = -1 'resets Start/Next button to begin at the first found blue edge pixel
         GoTo outnow 'this condition indicates that you have followed all blue edge pixels
      End If
      Call PaintUnderFrontBumperWhite
      Call PaintGreenOutsideLeftBumper
nomove:
      If NoLeftBumperContact Then
         Call MoveLeft
         Call PaintUnderLeftBumperWhite
         Call PaintGreenOutsideLeftBumper
         If NoLeftBumperContact Then
            If BackBumperContact Then
               Call MakeLeftTheNewForward
            End If
         End If
      ElseIf HitFrontBumper Then
Hit:
         Call PaintAheadOfForwardBumperGreen
         Call PaintGreenOutsideLeftSide
         Call MakeRightTheNewForward
         GoTo nomove
      Else
         Call PaintAheadOfForwardBumperGreen
         Call PaintGreenOutsideLeftSide
         Call PaintUnderFrontBumperWhite
      End If
      If (circr = 19 + circrad Or circr = -circrad Or circc = 19 + circrad Or circc = -circrad) Then
         If lastwasend = 0 And wasinside = True Then
            'finished following one edge pixel
            lastwasend = 1
            GoTo outnow
            Call redrawit
         End If
      Else
         If IsCircleInsideImage Then
            wasinside = True
         End If
         lastwasend = 0
      End If
      Pause (pausev) 'seconds between moves - Pressing Esc advances early
   Wend
outnow:
End Sub

Solution

Okay, I finally had time to look at this. I will address each point of yours and then show the changes in the code. Let me know if you have any questions, or suggestions.

  1. Looks like you were able to do this yourself well enough.

    1.a. This can be taken care of by blurring the image before doing any processing to it. The following changes to the code were made to accomplish this;

    ...
    start_time = time.time()                                              
    
    blur_img = cv2.GaussianBlur(img,(5,5),0) #here                        
    
    # Use kmeans to convert to 2 color image                              
    hsv_img = cv2.cvtColor(blur_img, cv2.COLOR_BGR2HSV)
    ...
    

  2. I have changed the code to remove points that are on a line that perfectly follows the side of the image. It should be basically impossible for a grass edge to also coincide with this. (A small worked example of this filter follows after the list of changes.)

    ...
    allpnts = cv2.approxPolyDP(cnt,epsilon,True)                          
    
    new_allpnts = []                                                      
    
    
    for i in range(len(allpnts)):                                         
        a = (i-1) % len(allpnts)                                          
        b = (i+1) % len(allpnts)                                          
    
        if ((allpnts[i,0,0] == 0 or allpnts[i,0,0] == (img.shape[1]-1)) and (allpnts[i,0,1] == 0 or allpnts[i,0,1] == (img.shape[0]-1))):          
            tmp1 = allpnts[a,0] - allpnts[i,0]                            
            tmp2 = allpnts[b,0] - allpnts[i,0]                                                                                                                     
            if not (0 in tmp1 and 0 in tmp2):                             
                new_allpnts.append(allpnts[i])
        else:
            new_allpnts.append(allpnts[i])
    ...
    cv2.drawContours(pntscanvas, new_allpnts, -1, (0, 0, 255), 2)
    ...
    

  3. Due to how the contours are found in the image, we can simply flip the thresholding function and find the contour around the other part of the image. Changes are below;

    ...
    #Extract contours from the mask                                      
    ret,thresh = cv2.threshold(mask,250,255,cv2.THRESH_BINARY) #here      
    im2,contours,hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    ...
    

  4. As for the color differences, you have converted your image into HSV format and before saving you are not switching it back to BGR. This change to HSV does give you better results so I would keep it, but it is a different palette. Changes are below;

    ...
    cv2.imshow('mask.bmp',mask)                                           
    res2 = cv2.cvtColor(res2, cv2.COLOR_HSV2BGR)                          
    cv2.imwrite('CvKmeans2Color.bmp',res2)                                
    cv2.imshow('CvKmeans2Color.bmp',res2)
    ...
    
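For what it's worth, here is a tiny worked example of what the filter in change 2 does, using toy values rather than the real image: with a 60x80 image, a contour point sitting exactly on an image corner whose two neighbours both lie on the border is dropped, and everything else is kept.

import numpy as np

w, h = 60, 80  # image width and height, as in the sample image
# toy contour: the middle point sits on the image corner (59, 79)
allpnts = np.array([[[59, 30]], [[59, 79]], [[20, 79]]])

new_allpnts = []
for i in range(len(allpnts)):
    a = (i - 1) % len(allpnts)
    b = (i + 1) % len(allpnts)
    on_corner = allpnts[i, 0, 0] in (0, w - 1) and allpnts[i, 0, 1] in (0, h - 1)
    if on_corner:
        tmp1 = allpnts[a, 0] - allpnts[i, 0]
        tmp2 = allpnts[b, 0] - allpnts[i, 0]
        # drop the corner point only if both neighbours share a coordinate with it
        if not (0 in tmp1 and 0 in tmp2):
            new_allpnts.append(allpnts[i])
    else:
        new_allpnts.append(allpnts[i])

print([p[0].tolist() for p in new_allpnts])  # [[59, 30], [20, 79]]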

Disclaimer: These changes are based off of the python code from above. Any changes to the python code that are not in the provided code may render my changes ineffective.


