CN109727279B - Automatic registration method of vector data and remote sensing image - Google Patents

Automatic registration method of vector data and remote sensing image

Info

Publication number
CN109727279B
CN109727279B (application number CN201910083298.XA)
Authority
CN
China
Prior art keywords
matrix
remote sensing
edge
vector data
registration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910083298.XA
Other languages
Chinese (zh)
Other versions
CN109727279A
Inventor
张文涵
李安波
李安营
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Normal University
Original Assignee
Nanjing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Normal University filed Critical Nanjing Normal University
Publication of CN109727279A publication Critical patent/CN109727279A/en
Application granted granted Critical
Publication of CN109727279B publication Critical patent/CN109727279B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

Open remote sensing data is released only after a degree of declassification required by security policy, so when a user overlays their own vector data on the remote sensing data, the two do not match. To address this problem, the invention discloses an automatic registration method of vector data and remote sensing images. The method comprises the following steps: 1) extract a ternary matrix A from the vector data using a scan-line algorithm; 2) extract a grayscale matrix G from the remote sensing image and obtain its edge matrix E with the Canny operator; 3) traverse the ternary matrix A with the edge matrix E to generate a normalized edge registration factor matrix F'_edge; 4) traverse the ternary matrix A with the grayscale matrix G to generate a normalized grayscale registration factor matrix F'_gray; 5) automatically register the vector data and the remote sensing image based on the characteristics of the registration factors; 6) extract the remote sensing image data within the specified range of the vector elements.

Description

An automatic registration method of vector data and remote sensing images

Technical Field

The invention belongs to the field of GIS and in particular relates to a method that performs automatic registration by exploiting the consistency between remote sensing image pixel features and vector data element features.

Background

At present, many platforms provide open remote sensing data services, offering good background data support for numerous spatial browsing and retrieval applications. In applications such as land-use approval, environmental assessment, and road planning, it is often necessary to extract the remote sensing imagery within the extent of certain vector elements for separate analysis. However, because the relevant open data has undergone a degree of declassification as required by security policy, the user's own vector data will not match the remote sensing data when the two are overlaid. An efficient, automatic method is therefore urgently needed to correct the geometric deviation between self-owned vector data and public remote sensing image data.

Because vector data and remote sensing images differ in nature, certain rotation, scale, and position changes exist between them. The current mainstream practice is to manually select ground control points and derive the transformation function between the two for calibration. This approach demands high operating precision, is time-consuming and labor-intensive, and suffers from problems such as map data that cannot be downloaded, imprecise naked-eye calibration, and inconsistent zoom levels; it is unsuitable as a routine means of processing massive geographic information data and cannot serve as a rigorous scientific method for high-precision, large-batch data processing and updating.

In recent years, Chinese and foreign scholars have proposed many methods for the automatic registration of remote sensing images and vector data. Joachim Hohle (2008), taking an old orthophoto and an existing vector map as references, derived image templates of road intersections from the orthophoto and vector map and matched them on the new image to generate new control points, achieving automatic exterior orientation of remote sensing images; experiments showed the approach basically meets the requirements of orthophoto and vector map updating. Heiner Hild et al. (2001) studied the registration of SPOT images and vector maps, using manually selected initial control points to determine the approximate transformation between the two and extracting control points on polygon boundaries for aerial triangulation to register image and map. Zhang Xiaodong et al. (2006) performed automatic registration of remote sensing images and vector data based on the polygonal features of areal objects. Liu Zhiqing (2012) used the Canny operator and edge tracking to screen and extract suitable straight-line features from remote sensing images and measured their similarity, realizing automated registration of remote sensing images and vector data.

The processing steps of the above methods generally include feature selection and extraction, vector data preprocessing, feature matching, and vector-raster registration. These methods each target a specific type of problem, most still require considerable manual interaction, and no fully automated mature system has yet appeared, so they can hardly meet the demand for efficient large-scale image data processing.

Summary of the Invention

The invention targets the characteristic that, between self-owned vector data and public remote sensing image data, shape and angular deformation are weak while the positional offset is large. It performs automatic registration by exploiting the consistency between remote sensing image pixel features and vector element features, so as to support automatic cropping and extraction of remote sensing image data within the extent of a given element or element set. On the basis of extracting the registration factors between the two, the relative deviation between the public remote sensing image data and the user's own vector data is computed, achieving accurate and fast automatic extraction of remote sensing imagery within the extent of a given element or element set.

The invention provides an automatic registration method of vector data and remote sensing images, comprising the following steps:

Step 1. Based on the vector data, extract a ternary matrix A using a scan-line algorithm;

Step 2. Extract a grayscale matrix G from the remote sensing image, and obtain the edge matrix E of G using the Canny operator;

Step 3. Traverse the ternary matrix A with the edge matrix E to generate a normalized edge registration factor matrix F'_edge;

Step 4. Traverse the ternary matrix A with the grayscale matrix G to generate a normalized grayscale registration factor matrix F'_gray;

Step 5. Based on the characteristics of the registration factors, automatically register the vector data and the remote sensing image;

Step 6. Extract the remote sensing image data within the specified range of the vector elements.

Step 1 specifically includes:

1.1. Extract the minimum bounding rectangle of the vector data and convert it into a binarized raster matrix Y={y(i,j)|i=0,...,m-1; j=0,...,n-1} according to formula (1), where m is the height of the bounding rectangle of the vector data and n is its width;

[Formula (1): shown as an image in the original document]

1.2. Create a Boolean flag matrix F={f(i,j)|i=0,...,m-1; j=0,...,n-1} indicating whether a cell lies on a vector element edge;

1.3. For each row scan line Lt={(t,j)|j=0,...,n-1} (t∈[0,m-1]), assign values to the flag matrix F using formula (2);

[Formula (2): shown as an image in the original document]

1.4. According to the flag matrix F, mark the boundary and the interior of the vector elements respectively; create a matrix A={a(i,j)|i=0,...,m-1; j=0,...,n-1} with initial default value 0 and assign values according to formula (3) to obtain the ternary matrix A, in which vector data edge points are marked 2, interior points 1, and background points 0;

[Formula (3): shown as an image in the original document]
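The construction of the ternary matrix can be sketched as follows. This is an illustrative reading of step 1, not the patent's implementation: the feature is assumed to be a single polygon with integer grid vertices, the boundary is rasterized edge by edge, and the interior test uses the standard even-odd (scan-line parity) rule; the function name `rasterize_ternary` is our own.

```python
def rasterize_ternary(poly, m, n):
    """Ternary matrix A of a polygon feature: edge cells -> 2,
    interior cells -> 1, background cells -> 0."""
    A = [[0] * n for _ in range(m)]

    # Mark boundary cells by walking each polygon edge (simple DDA rasterization).
    for k in range(len(poly)):
        (x1, y1), (x2, y2) = poly[k], poly[(k + 1) % len(poly)]
        steps = max(abs(x2 - x1), abs(y2 - y1), 1)
        for s in range(steps + 1):
            j = round(x1 + (x2 - x1) * s / steps)
            i = round(y1 + (y2 - y1) * s / steps)
            A[i][j] = 2

    # Even-odd rule: a point is interior when a horizontal ray crosses
    # the polygon boundary an odd number of times.
    def inside(x, y):
        flag = False
        for k in range(len(poly)):
            (x1, y1), (x2, y2) = poly[k], poly[(k + 1) % len(poly)]
            if (y1 > y) != (y2 > y):
                xi = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < xi:
                    flag = not flag
        return flag

    for i in range(m):
        for j in range(n):
            if A[i][j] == 0 and inside(j, i):
                A[i][j] = 1
    return A
```

For a 5x5 square drawn in a 6x6 grid, the border cells come out as 2, the enclosed cells as 1, and the rest as 0.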

Step 2 specifically includes:

2.1. Extract the grayscale matrix from the public remote sensing image: read the remote sensing image data into matrix C={c(i,j)|i=0,...,p-1; j=0,...,q-1} and process it into the grayscale matrix G={g(i,j)|i=0,...,p-1; j=0,...,q-1} according to formula (4);

g(i,j) = 0.299*c_r(i,j) + 0.587*c_g(i,j) + 0.114*c_b(i,j) (4)

where p is the height of the remote sensing image data and q its width, satisfying (p>m) and (q>n); c_r(i,j), c_g(i,j), and c_b(i,j) denote the R, G, and B values of the remote sensing image C at point (i,j), respectively;
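Formula (4) is the common luminance weighting; a minimal sketch, with plain Python lists standing in for the image matrix C:

```python
def to_gray(c):
    """Convert an H x W image of (R, G, B) tuples to the grayscale
    matrix G using the luminance weights of formula (4)."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in c]
```

White maps to 255 and black to 0, since the three weights sum to 1.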

2.2. Smooth the grayscale matrix G with a Gaussian filter;

a) Use formula (5) to compute the Gaussian convolution kernel G'={g'(x,y)|x=-k,...,k; y=-k,...,k} for a user-given size (2k+1)*(2k+1) and variance σ²;

g'(x,y) = (1/(2πσ²)) · exp(−(x²+y²)/(2σ²)) (5)

b) Convolve the kernel G' with the grayscale matrix G of the remote sensing image to obtain the smoothed image matrix S={s(i,j)|i=0,...,p-1; j=0,...,q-1};
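A hedged sketch of step 2.2a: the kernel below follows the standard 2D Gaussian form, and is additionally renormalized to sum to 1 so that smoothing preserves overall brightness, a common practice the patent does not spell out.

```python
import math

def gaussian_kernel(k, sigma2):
    """(2k+1) x (2k+1) Gaussian convolution kernel with variance sigma2,
    renormalized so its entries sum to 1."""
    kern = [[math.exp(-(x * x + y * y) / (2.0 * sigma2)) / (2.0 * math.pi * sigma2)
             for x in range(-k, k + 1)]
            for y in range(-k, k + 1)]
    total = sum(sum(row) for row in kern)
    return [[v / total for v in row] for row in kern]
```

With k=1 and σ²=1 (the values used in the embodiment) this yields a 3x3 kernel whose center entry is the largest and which is symmetric about the center.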

2.3. Compute the gradient magnitude and direction: use formulas (6), (7), and (8) to compute the magnitude and direction of the gradient of the smoothed matrix S;

[Formula (6): shown as an image in the original document]

M(i,j) = sqrt(P_x(i,j)² + P_y(i,j)²) (7)

θ(i,j) = arctan(P_x(i,j)/P_y(i,j)) (8)

where P_x and P_y are the gradient operators of the image in the x and y directions respectively; P_x(i,j) and P_y(i,j) are the products of the gradient operator and the smoothed matrix at point (i,j); arctan denotes the arctangent function; M(i,j) is the gradient magnitude of the smoothed matrix S at (i,j); and θ(i,j) is the gradient direction of S at (i,j);

2.4. Apply non-maximum suppression to the gradient magnitude: according to formula (9), obtain the suppressed gradient matrix Grad={grad(i,j)|i=0,...,p-1; j=0,...,q-1};

grad(i,j) = M(i,j) if M(i,j) ≥ M_front and M(i,j) ≥ M_rear, and 0 otherwise (9)

where M_front denotes the gradient magnitude of the point before (i,j) along the gradient direction, and M_rear the gradient magnitude of the point after (i,j) along the gradient direction;

2.5. Perform edge detection and linking with the double-threshold method: set a high threshold δ_high and a low threshold δ_low; a point (i,j) satisfying condition (10) is judged to be an edge point, and linking these points yields the final edge image matrix E={e(i,j)|i=0,...,p-1; j=0,...,q-1};

g(i,j) > δ_high or (g(i,j) < δ_high and g(i,j) > δ_low and flag(i,j) = true) (10)

[Formula: shown as an image in the original document]
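The double-threshold linking of step 2.5 can be sketched as follows. This is an illustrative reading in which a weak pixel (between the two thresholds) is accepted when it is 8-connected to an already-accepted pixel, which is one common interpretation of the flag(i,j) condition:

```python
def hysteresis(grad, lo, hi):
    """Double-threshold edge linking: pixels above hi are strong edges;
    pixels in (lo, hi] are kept only when 8-connected to an accepted edge."""
    h, w = len(grad), len(grad[0])
    edge = [[grad[i][j] > hi for j in range(w)] for i in range(h)]
    changed = True
    while changed:  # propagate acceptance until no weak pixel changes
        changed = False
        for i in range(h):
            for j in range(w):
                if not edge[i][j] and lo < grad[i][j] <= hi:
                    if any(edge[i + di][j + dj]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1)
                           if 0 <= i + di < h and 0 <= j + dj < w):
                        edge[i][j] = True
                        changed = True
    return edge
```

A weak pixel next to a strong one is kept; an isolated weak pixel is discarded.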

Step 3 specifically includes:

3.1. Take the point at the upper-left corner of the remote sensing image as the origin. Using the edge image matrix E and the ternary matrix A, obtain the set of offset pairs D={(dx,dy)|0≤dx≤p-m-1; 0≤dy≤q-n-1} describing the relative displacement between the upper-left corner of the user's own vector data at its different positions and the image origin. According to formula (11), compute the result matrix T_dxdy={t(i,j)|i=0,...,m-1; j=0,...,n-1} corresponding to any offset (dx,dy) in set D;

[Formula (11): shown as an image in the original document]

3.2. According to formula (12), compute the sum f of the elements of the result matrix T corresponding to that offset;

f = ∑i ∑j t(i,j), i=0,...,m-1; j=0,...,n-1 (12)

3.3. The element sums corresponding to the members of set D form the result matrix F_edge={f_edge(x,y)|x=0,...,p-m-1; y=0,...,q-n-1}, which is the edge registration factor matrix;

3.4. Normalize the obtained edge registration factor matrix F_edge according to formula (13) to obtain the normalized edge registration factor matrix F'_edge={f'_edge(dx,dy)|dx=0,...,p-m-1; dy=0,...,q-n-1};

f'_edge(dx,dy) = (f_edge(dx,dy) − f_edge,min) / (f_edge,max − f_edge,min) (13)

where f_edge,max denotes the maximum value of the edge registration factor matrix and f_edge,min its minimum value.
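Steps 3.1 to 3.4 amount to a sliding sum of edge-image values under the vector boundary cells, followed by min-max normalization. A brute-force sketch (quadratic in the image size; the patent may intend a more optimized traversal, and the exact form of formula (11) is an image in the source, so summing E over the cells with A == 2 is an assumption):

```python
def edge_registration_factors(A, E):
    """For every candidate offset (dx, dy), sum the edge-image values
    that fall under the vector boundary cells (A == 2), then min-max
    normalize the factor matrix to [0, 1] as in formula (13)."""
    m, n = len(A), len(A[0])
    p, q = len(E), len(E[0])
    F = [[sum(E[i + dx][j + dy]
              for i in range(m) for j in range(n) if A[i][j] == 2)
          for dy in range(q - n)]
         for dx in range(p - m)]
    lo = min(min(row) for row in F)
    hi = max(max(row) for row in F)
    return [[(v - lo) / (hi - lo) if hi > lo else 0.0 for v in row] for row in F]
```

The offset where the vector boundary lines up with detected image edges gets the largest normalized factor.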

Step 4 specifically includes:

4.1. Using the grayscale image matrix G and the ternary matrix A, for any offset (dx,dy) in set D compute the corresponding result matrix T'={t'(i,j)|i=0,...,m-1; j=0,...,n-1} according to formula (14);

[Formula (14): shown as an image in the original document]

4.2. According to formulas (15) and (16), compute the variance f' of the non-zero values in the corresponding result matrix;

t̄' = (1/R) ∑ t'(i,j), summed over the non-zero values of T' (15)

f' = (1/R) ∑ (t'(i,j) − t̄')², summed over the non-zero values of T' (16)

where R is the number of non-zero values in the result matrix T';

4.3. The variances corresponding to the members of set D form the result matrix F_gray={f_gray(x,y)|x=0,...,p-m-1; y=0,...,q-n-1}, which is the grayscale registration factor matrix;

4.4. Normalize the obtained grayscale registration factor matrix F_gray according to formula (17) to obtain the normalized grayscale registration factor matrix F'_gray={f'_gray(dx,dy)|dx=0,...,p-m-1; dy=0,...,q-n-1};

f'_gray(dx,dy) = (f_gray(dx,dy) − f_gray,min) / (f_gray,max − f_gray,min) (17)

where f_gray,max denotes the maximum value of the grayscale registration factor matrix and f_gray,min its minimum value.
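Steps 4.1 to 4.4 can likewise be sketched as a sliding variance of the grayscale values under the feature mask (cells with A ≠ 0), followed by min-max normalization. Formula (14) is an image in the source, so masking G with A ≠ 0 is our reading:

```python
def gray_registration_factors(A, G):
    """For every candidate offset, take the grayscale values under the
    feature cells (A != 0), compute their variance (formulas (15)-(16)),
    and min-max normalize over all offsets (formula (17))."""
    m, n = len(A), len(A[0])
    p, q = len(G), len(G[0])
    F = []
    for dx in range(p - m):
        row = []
        for dy in range(q - n):
            vals = [G[i + dx][j + dy]
                    for i in range(m) for j in range(n) if A[i][j] != 0]
            mu = sum(vals) / len(vals)
            row.append(sum((v - mu) ** 2 for v in vals) / len(vals))
        F.append(row)
    lo = min(min(r) for r in F)
    hi = max(max(r) for r in F)
    return [[(v - lo) / (hi - lo) if hi > lo else 0.0 for v in row] for row in F]
```

An offset whose masked window covers a heterogeneous patch gets a large variance; a homogeneous patch gets a small one.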

Step 5 specifically includes:

5.1. From the characteristics of the registration factors it follows that the larger the edge registration factor and the smaller the grayscale registration factor at an offset position, the better the vector data matches the remote sensing image at that offset. Following this principle, use formula (18) to create the comprehensive registration factor (discriminant) matrix F_d={f_d(i,j)|i=0,...,p-m-1; j=0,...,q-n-1};

[Formula (18): shown as an image in the original document]

5.2. Traverse the discriminant value matrix F_d and find the offset (i0,j0) corresponding to its minimum value; this is the optimal offset for registering the vector data with the remote sensing image.
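Formula (18) is only available as an image in the source, so the combination below is an assumption: equal weights (as the embodiment states), with (1 − edge factor) so that a large edge factor and a small gray factor both lower the discriminant, whose minimum gives the optimal offset:

```python
def best_offset(Fe, Fg, w_edge=0.5, w_gray=0.5):
    """One plausible discriminant: smaller is better. Fe and Fg are the
    normalized edge and grayscale registration factor matrices."""
    best, arg = float("inf"), None
    for i in range(len(Fe)):
        for j in range(len(Fe[0])):
            d = w_edge * (1.0 - Fe[i][j]) + w_gray * Fg[i][j]
            if d < best:
                best, arg = d, (i, j)
    return arg
```

As the description notes, the weights could be shifted toward the grayscale factor for homogeneous natural features and toward the edge factor for sharply bounded artificial ones.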

Step 6 specifically includes:

Generate the result matrix R={r(x,y)|x=0,...,m-1; y=0,...,n-1} according to formula (19); the image corresponding to this matrix is the remote sensing imagery within the extent of the vector data element or element set, and the remainder is a black background;

[Formula (19): shown as an image in the original document]
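Step 6 reduces to masking the registered image with the ternary matrix; a minimal sketch (the function name and argument order are our own, and formula (19) itself is an image in the source):

```python
def extract_masked(A, C, dx, dy):
    """Copy registered image values where the ternary mask is non-zero
    (edge or interior) and leave a black background (0) elsewhere.
    (dx, dy) is the optimal offset found in step 5."""
    m, n = len(A), len(A[0])
    return [[C[x + dx][y + dy] if A[x][y] != 0 else 0
             for y in range(n)] for x in range(m)]
```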

Beneficial effects: compared with the prior art, the invention proposes a method that jointly uses the grayscale registration factor matrix obtained from the remote sensing grayscale image and the edge registration factor matrix obtained with the Canny operator, exploits the characteristics of both to solve for and optimize the offset between the remote sensing image and the user's own vector data, and finally obtains the remote sensing imagery within a given extent of vector elements. The method mainly has the following features:

1) It makes full use of the characteristic that the positional offset between the user's own vector data and the remote sensing image is large while shape and angular deformation are negligible;

2) It jointly exploits the consistency between remote sensing image edges and vector elements and the correlation of ground objects within a specific range, realizing automatic registration of vector elements and remote sensing images.

Description of Drawings

Fig. 1 is a flowchart of the method of the invention;

Fig. 2 is a schematic diagram of the offset between the remote sensing image and the vector data;

Fig. 3 shows the remote sensing image data used in the embodiment;

Fig. 4 shows the vector data used in the embodiment;

Fig. 5 is a schematic diagram of the initial relative position of the remote sensing image data and the vector data;

Fig. 6 is a schematic diagram of the ternary matrix of the vector data;

Fig. 7 is the grayscale image of the remote sensing image;

Fig. 8 is the edge image of the remote sensing image;

Fig. 9 is a partial schematic diagram of the pointwise product of the ternary matrix and the edge image at offset (0,0);

Fig. 10 is a schematic diagram of the edge registration factor matrix;

Fig. 11 is a partial schematic diagram of the pointwise product of the ternary matrix and the grayscale image at offset (0,0);

Fig. 12 is a schematic diagram of the grayscale registration factor matrix;

Fig. 13 is a schematic diagram of the comprehensive registration factor matrix;

Fig. 14 is a schematic diagram of the relative position of the remote sensing image and the vector data after registration;

Fig. 15 shows the result of extracting the remote sensing imagery within the extent of the vector elements.

Detailed Description

The present invention is further described below with reference to the accompanying drawings and an embodiment.

This embodiment uses the 2016 remote sensing image base map of Jiangsu Province released by Tianditu (Map World)-Jiangsu as the remote sensing image data (Fig. 3); the user's own vector data is a part of the park element resources of Wuxi City (Fig. 4). Their initial relative positions are shown in Fig. 5. The remote sensing map measures 708*571 pixels and the user's own vector data measures 612*512 pixels.

Step 1. Extract the ternary matrix of the vector data;

1.1. Extract the minimum bounding rectangle of the vector data and convert it into a binarized raster matrix Y={y(i,j)|i=0,...,m-1; j=0,...,n-1} according to formula (1), as shown in Fig. 4; in this embodiment, m=708 and n=571;

1.2. Create a Boolean flag matrix F={f(i,j)|i=0,...,m-1; j=0,...,n-1} indicating whether a cell lies on a vector element edge;

1.3. For each row scan line Lt={(t,j)|j=0,...,n-1} (t∈[0,m-1]), assign values to the flag matrix F using formula (2);

1.4. According to the flag matrix F, mark the boundary and the interior of the vector elements respectively. Create a matrix A={a(i,j)|i=0,...,m-1; j=0,...,n-1} with initial value 0 and assign values according to formula (3) to obtain the ternary matrix A, in which vector data edge points are marked 2, interior points 1, and background points 0. The ternary matrix obtained in this embodiment is shown in Fig. 6; for display purposes, each matrix value is scaled by a factor of 255/2 to serve as an image pixel value.

Step 2. Extract the grayscale matrix and the edge matrix from the remote sensing image;

2.1. Extract the grayscale matrix from the public remote sensing image: read the remote sensing image data into matrix C={c(i,j)|i=0,...,p-1; j=0,...,q-1} and process it into the grayscale matrix G={g(i,j)|i=0,...,p-1; j=0,...,q-1} according to formula (4), as shown in Fig. 7; in this embodiment, p=612 and q=512;

2.2. Smooth the grayscale matrix G with a Gaussian filter;

a) Use formula (5) to compute the Gaussian convolution kernel G'={g'(x,y)|x=-k,...,k; y=-k,...,k} for the given size (2k+1)*(2k+1) and variance σ²; in this embodiment, k=1 and σ²=1;

b) Convolve the kernel G' with the grayscale matrix G of the remote sensing image to obtain the smoothed image matrix S={s(i,j)|i=0,...,p-1; j=0,...,q-1};

2.3. Compute the magnitude and direction of the gradient of the smoothed matrix S using formulas (6), (7), and (8);

2.4. Apply non-maximum suppression to the gradient magnitude: according to formula (9), obtain the suppressed gradient matrix Grad={grad(i,j)|i=0,...,p-1; j=0,...,q-1};

2.5. Perform edge detection and linking with the double-threshold method. Set the high threshold δ_high=150 and the low threshold δ_low=50. A point (i,j) satisfying condition (10) is judged to be an edge point. Linking these points yields the final edge image matrix E={e(i,j)|i=0,...,p-1; j=0,...,q-1}, as shown in Fig. 8.

Step 3. Generate the normalized edge registration factor matrix;

3.1. Take the point at the upper-left corner of the remote sensing image as the origin. Using the edge image matrix E and the ternary matrix A, obtain the set of offset pairs D={(dx,dy)|0≤dx≤p-m-1; 0≤dy≤q-n-1} describing the relative displacement between the upper-left corner of the user's own vector data at its different positions and the image origin. According to formula (11), compute the result matrix T={t(i,j)|i=0,...,m-1; j=0,...,n-1} corresponding to offset (0,0); some of its elements are shown in Fig. 9;

3.2. According to formula (12), compute the corresponding sum of matrix elements, f=550;

3.3. The result matrix F_edge={f_edge(x,y)|x=0,...,p-m-1; y=0,...,q-n-1} formed over set D is the edge registration factor matrix.

3.4. Normalize the obtained edge registration factor matrix F_edge according to formula (13) to obtain the normalized edge registration factor matrix F'_edge={f'_edge(dx,dy)|dx=0,...,p-m-1; dy=0,...,q-n-1}, as shown in Fig. 10.

Step 4. Generate the normalized grayscale registration factor matrix;

4.1. Using the grayscale image matrix G and the ternary matrix A, for offset (0,0) compute the corresponding result matrix T'={t'(i,j)|i=0,...,m-1; j=0,...,n-1} according to formula (14); some of its elements are shown in Fig. 11;

4.2. According to formulas (15) and (16), compute the variance of the non-zero values in the corresponding result matrix, f'=1555;

4.3. The variance matrix F_gray={f_gray(x,y)|x=0,...,p-m-1; y=0,...,q-n-1} formed over set D is the grayscale registration factor matrix.

4.4. Normalize the obtained grayscale registration factor matrix F_gray according to formula (17) to obtain the normalized grayscale registration factor matrix F'_gray={f'_gray(dx,dy)|dx=0,...,p-m-1; dy=0,...,q-n-1}, as shown in Fig. 12.

Step 5. Based on the characteristics of the registration factor matrices, automatically register the vector data and the remote sensing image;

5.1. From the characteristics of the registration factors it follows that the larger the edge registration factor and the smaller the grayscale registration factor at an offset position, the better the vector data matches the remote sensing image at that offset. Following this principle, use formula (18) to create the comprehensive registration factor (discriminant) matrix F_d={f_d(dx,dy)|dx=0,...,p-m-1; dy=0,...,q-n-1}, as shown in Fig. 13;

5.2. Traverse the discriminant value matrix F_d and find the offset (65,47) corresponding to its minimum value. This is the optimal offset for registering the vector data with the remote sensing image; the registration result is shown in Fig. 14.

Step 6. Extract the remote sensing image data within the specified range of the vector elements.

Generate the result image matrix R={r(x,y)|x=0,...,m-1; y=0,...,n-1} according to formula (19). The image corresponding to this matrix (Fig. 15) is the remote sensing imagery corresponding to the vector data elements, and the remainder is a black background.

The above embodiment shows that the method can extract remote sensing image data within a given vector extent automatically and rather accurately. Compared with existing registration methods, this method mainly targets automatic registration between the user's own vector data and public remote sensing image data; registration based on pixel features is convenient and fast, highly automated, and can meet the registration-processing needs of large batches of elements.

In this embodiment, the edge registration factor and the grayscale registration factor are combined with equal weights, which basically satisfies the need to extract remote sensing imagery within the given extent. Different kinds of vector data call for different registration factor weights: if the vector data represents large homogeneous natural features such as forests or lakes, the grayscale registration factor should carry a larger weight; if it represents artificial features such as residential areas, which are heterogeneous but have distinct boundaries, the edge registration factor should carry a larger weight.

Claims (7)

1. An automatic registration method of vector data and remote sensing images, characterized in that the method comprises the following steps:
step 1, extracting a ternary matrix A from the vector data by using a scan-line algorithm;
step 2, extracting a grayscale matrix G from the remote sensing image, and obtaining an edge matrix E of the grayscale matrix G by using the Canny operator;
step 3, traversing the ternary matrix A with the edge matrix E to generate a normalized edge registration factor matrix F′_edge;
step 4, traversing the ternary matrix A with the grayscale matrix G to generate a normalized grayscale registration factor matrix F′_gray;
step 5, automatically registering the vector data and the remote sensing image based on the characteristics of the registration factors;
step 6, extracting the remote sensing image data within the specified range of the vector elements.
2. The automatic registration method of vector data and remote sensing images according to claim 1, characterized in that step 1 specifically comprises:
1.1, extracting the minimum bounding rectangle of the vector data, and converting it into a binary raster matrix Y = {y(i,j) | i = 0,…,m−1; j = 0,…,n−1} according to formula (1), where m is the height and n is the width of the bounding rectangle of the vector data;
Figure FDA0003699712120000011
1.2, creating a Boolean flag matrix F = {f(i,j) | i = 0,…,m−1; j = 0,…,n−1} to judge whether a point lies on a vector element edge;
1.3, for each scan line L_t = {(t,j) | j = 0,…,n−1} (t ∈ [0, m−1]), assigning values to the flag matrix F by using formula (2);
Figure FDA0003699712120000012
1.4, marking the boundary and the interior of the vector elements respectively according to the flag matrix F: creating a matrix A = {a(i,j) | i = 0,…,m−1; j = 0,…,n−1} with an initial default value of 0, and assigning values according to formula (3) to obtain the ternary matrix A, in which edge points of the vector data are identified by 2, interior points by 1, and background points by 0;
a(i,j) = 2, if (i,j) is an edge point of a vector element; a(i,j) = 1, if (i,j) is an interior point; a(i,j) = 0, otherwise   (3)
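As a rough illustration of the ternary coding in step 1, the sketch below classifies grid cells against a single axis-aligned rectangular element. The patent's scan-line algorithm handles arbitrary polygon elements, so this stand-in (including the `classify_cell` helper) is purely hypothetical; only the 0/1/2 coding of formula (3) is taken from the claim.

```python
def classify_cell(i, j, x0, y0, x1, y1):
    """Classify a grid cell against an axis-aligned rectangular element:
    2 = on the boundary, 1 = strictly inside, 0 = background.
    Simplified stand-in for the scan-line fill; the ternary coding
    (0/1/2) matches formula (3)."""
    on_edge = ((i in (x0, x1) and y0 <= j <= y1) or
               (j in (y0, y1) and x0 <= i <= x1))
    inside = x0 < i < x1 and y0 < j < y1
    return 2 if on_edge else (1 if inside else 0)

# Build the ternary matrix A for a 5x5 grid holding a rectangle
# spanning rows 1..3 and columns 1..3.
m, n = 5, 5
A = [[classify_cell(i, j, 1, 1, 3, 3) for j in range(n)] for i in range(m)]
```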
3. The automatic registration method of vector data and remote sensing images according to claim 2, characterized in that step 2 specifically comprises:
2.1, extracting a grayscale matrix from the public remote sensing image: reading the remote sensing image data into a matrix C = {c(i,j) | i = 0,…,p−1; j = 0,…,q−1}, which is processed into a grayscale matrix G = {g(i,j) | i = 0,…,p−1; j = 0,…,q−1} according to formula (4);
g(i,j) = 0.299·c_r(i,j) + 0.587·c_g(i,j) + 0.114·c_b(i,j)   (4)
where p is the height and q is the width of the remote sensing image data, satisfying p > m and q > n; c_r(i,j), c_g(i,j) and c_b(i,j) denote the R, G and B values of the remote sensing image C at point (i,j);
2.2, smoothing the grayscale matrix G with a Gaussian filter;
a) calculating, by formula (5), the Gaussian convolution kernel G′ = {g′(x,y) | x = −k,…,k; y = −k,…,k} for a user-given size (2k+1)×(2k+1) and variance σ²;
g′(x,y) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²))   (5)
b) convolving the kernel G′ with the grayscale matrix G of the remote sensing image to obtain a smoothed image matrix S = {s(i,j) | i = 0,…,p−1; j = 0,…,q−1};
2.3, calculating the magnitude and direction of the gradient of the smoothed matrix S by using formulas (6), (7) and (8);
Figure FDA0003699712120000022
Figure FDA0003699712120000023
θ(i,j) = arctan(P_x(i,j) / P_y(i,j))   (8)
where P_x and P_y are the gradient operators in the x and y directions of the image; P_x(i,j) and P_y(i,j) are the responses of the gradient operators applied to the smoothed matrix at point (i,j); arctan denotes the arctangent function; M(i,j) is the gradient magnitude of the smoothed matrix S at (i,j); θ(i,j) is the gradient direction of the smoothed matrix S at (i,j);
2.4, performing non-maximum suppression on the gradient magnitude: obtaining the gradient matrix Grad = {grad(i,j) | i = 0,…,p−1; j = 0,…,q−1} after non-maximum suppression according to formula (9);
grad(i,j) = M(i,j), if M(i,j) ≥ M_front and M(i,j) ≥ M_rear; grad(i,j) = 0, otherwise   (9)
where M_front denotes the gradient magnitude of the point preceding point (i,j) along the gradient direction, and M_rear denotes the gradient magnitude of the point following point (i,j) along the gradient direction;
2.5, performing edge detection and linking according to a double-threshold method: setting a high threshold δ_high and a low threshold δ_low; a point (i,j) satisfying condition (10) is determined as an edge point, and the edge points are connected to obtain the final edge image matrix E = {e(i,j) | i = 0,…,p−1; j = 0,…,q−1};
g(i,j) > δ_high, or (g(i,j) < δ_high and g(i,j) > δ_low and flag(i,j) = true)   (10)
Figure FDA0003699712120000031
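The grayscale conversion of formula (4) is given explicitly in the claim, while the Gaussian kernel of formula (5) appears only as an image. The sketch below implements the stated conversion and a standard normalized 2-D Gaussian kernel as a plausible reading of (5); the normalization to unit sum is an assumption.

```python
import math

def to_gray(r, g, b):
    """Luminance conversion from formula (4)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def gaussian_kernel(k, sigma):
    """(2k+1)x(2k+1) Gaussian kernel -- the standard 2-D Gaussian is
    assumed for formula (5), which is an image in the source --
    normalized so the weights sum to 1."""
    kern = [[math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
             for y in range(-k, k + 1)] for x in range(-k, k + 1)]
    total = sum(sum(row) for row in kern)
    return [[v / total for v in row] for row in kern]

gray = to_gray(255, 255, 255)   # pure white stays at full intensity
kern = gaussian_kernel(1, 1.0)  # 3x3 smoothing kernel for step 2.2
```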
4. The automatic registration method of vector data and remote sensing images according to claim 3, characterized in that step 3 specifically comprises:
3.1, setting the upper-left corner of the remote sensing image as the origin, and using the edge image matrix E and the ternary matrix A, obtaining the set D = {(d_x,d_y) | 0 ≤ d_x ≤ p−m−1; 0 ≤ d_y ≤ q−n−1} of relative displacement offsets between the upper-left corner of the user's own vector data and the origin of the remote sensing image as the vector data is placed at different positions; for any offset (d_x,d_y) in the set D, calculating the corresponding result matrix T = {t(i,j) | i = 0,…,m−1; j = 0,…,n−1} according to formula (11);
t(i,j) = e(i + d_x, j + d_y), if a(i,j) = 2; t(i,j) = 0, otherwise   (11)
3.2, calculating the sum f of the elements in the result matrix T corresponding to the offset according to formula (12);
f = Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} t(i,j)   (12)
3.3, the matrix F_edge = {f_edge(x,y) | x = 0,…,p−m−1; y = 0,…,q−n−1} formed by the sums f corresponding to the elements of the set D is the edge registration factor matrix;
3.4, normalizing the edge registration factor matrix F_edge according to formula (13) to obtain the normalized edge registration factor matrix F′_edge = {f′_edge(d_x,d_y) | d_x = 0,…,p−m−1; d_y = 0,…,q−n−1};
f′_edge(d_x,d_y) = (f_edge(d_x,d_y) − f_edge,min) / (f_edge,max − f_edge,min)   (13)
where f_edge,max denotes the maximum value and f_edge,min the minimum value of the edge registration factor matrix.
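A minimal sketch of the edge registration factor of step 3, assuming (as the surrounding text suggests) that formula (11) samples the edge image at cells marked 2 in the ternary matrix, formula (12) sums those samples, and formula (13) is min-max normalization. The toy matrices are hypothetical.

```python
def edge_factor(E, A, dx, dy):
    """Sum of edge-image values that fall under vector edge cells
    (a(i,j) == 2) when A is shifted by (dx, dy) -- the idea behind
    formulas (11)-(12)."""
    m, n = len(A), len(A[0])
    return sum(E[i + dx][j + dy]
               for i in range(m) for j in range(n) if A[i][j] == 2)

def normalize(F):
    """Min-max normalization of a factor matrix, as in formula (13)."""
    flat = [v for row in F for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0   # guard against a constant surface
    return [[(v - lo) / span for v in row] for row in F]

# 2x2 vector raster with edge cells on the top row; 3x3 edge image.
A = [[2, 2], [0, 0]]
E = [[0, 1, 0], [1, 1, 0], [0, 0, 0]]
F = [[edge_factor(E, A, dx, dy) for dy in range(2)] for dx in range(2)]
Fn = normalize(F)
```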
5. The automatic registration method of vector data and remote sensing images according to claim 4, characterized in that step 4 specifically comprises:
4.1, using the grayscale image matrix G and the ternary matrix A, for any offset (d_x,d_y) in the set D, calculating the corresponding result matrix T′ = {t′(i,j) | i = 0,…,m−1; j = 0,…,n−1} according to formula (14);
t′(i,j) = g(i + d_x, j + d_y), if a(i,j) ≠ 0; t′(i,j) = 0, otherwise   (14)
4.2, calculating the variance f′ of the non-zero values in the corresponding result matrix according to formulas (15) and (16);
Figure FDA0003699712120000041
Figure FDA0003699712120000042
where R is the number of non-zero values in the result matrix T′;
4.3, the matrix F_gray = {f_gray(x,y) | x = 0,…,p−m−1; y = 0,…,q−n−1} formed by the variances corresponding to the elements of the set D is the grayscale registration factor matrix;
4.4, normalizing the grayscale registration factor matrix F_gray according to formula (17) to obtain the normalized grayscale registration factor matrix F′_gray = {f′_gray(d_x,d_y) | d_x = 0,…,p−m−1; d_y = 0,…,q−n−1};
f′_gray(d_x,d_y) = (f_gray(d_x,d_y) − f_gray,min) / (f_gray,max − f_gray,min)   (17)
where f_gray,max denotes the maximum value and f_gray,min the minimum value of the grayscale registration factor matrix.
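A minimal sketch of the grayscale registration factor of step 4, assuming formulas (14)-(16) amount to the variance of the grayscale values covered by the element (cells with a(i,j) ≠ 0); the exact formulas are images in the source, and the toy matrices are hypothetical.

```python
def gray_factor(G, A, dx, dy):
    """Variance of the grayscale values covered by the vector element
    (a(i,j) != 0) at offset (dx, dy) -- the idea behind formulas
    (14)-(16): a homogeneous patch under the element gives a small
    variance, i.e. a better fit."""
    vals = [G[i + dx][j + dy]
            for i in range(len(A)) for j in range(len(A[0])) if A[i][j] != 0]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

A = [[2, 2], [1, 1]]                       # 2x2 element, all cells covered
G = [[10, 10, 90], [10, 10, 90], [50, 60, 70]]
f00 = gray_factor(G, A, 0, 0)              # uniform patch -> variance 0
f01 = gray_factor(G, A, 0, 1)              # mixed patch -> large variance
```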
6. The automatic registration method of vector data and remote sensing images according to claim 5, characterized in that step 5 specifically comprises:
5.1, according to the characteristics of the registration factors, the larger the edge registration factor and the smaller the grayscale registration factor at an offset, the better the vector data at that offset matches the remote sensing image; on this principle, creating a comprehensive registration factor matrix F_judge = {f_judge(i,j) | i = 0,…,p−m−1; j = 0,…,q−n−1} by using formula (18);
Figure FDA0003699712120000044
5.2, traversing the discriminant matrix F_judge to obtain the offset (i_0, j_0) corresponding to the minimum value, which is the optimal offset for registering the vector data with the remote sensing image.
7. The automatic registration method of vector data and remote sensing images according to claim 6, characterized in that step 6 specifically comprises:
generating the result matrix R = {r(x,y) | x = 0,…,m−1; y = 0,…,n−1} according to formula (19); the image corresponding to this matrix is the remote sensing image within the range of the vector data element or element set, and the rest is a black background;
r(x,y) = g(x + i_0, y + j_0), if a(x,y) ≠ 0; r(x,y) = 0, otherwise   (19)
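Formula (19) is reproduced only as an image in the source, but the surrounding text describes a masked copy: element-covered cells keep the registered grayscale value, everything else becomes black. A sketch under that assumption, with hypothetical toy matrices:

```python
def extract_result(G, A, i0, j0):
    """Cut the registered remote-sensing pixels out of G: cells covered
    by the vector element (a(x,y) != 0) keep their grayscale value at
    the optimal offset (i0, j0); everything else becomes black (0)."""
    m, n = len(A), len(A[0])
    return [[G[x + i0][y + j0] if A[x][y] != 0 else 0
             for y in range(n)] for x in range(m)]

A = [[0, 2], [0, 1]]                      # one edge cell, one interior cell
G = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
R = extract_result(G, A, 1, 0)            # optimal offset found in step 5
```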
CN201910083298.XA 2018-06-04 2019-01-28 Automatic registration method of vector data and remote sensing image Active CN109727279B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2018105622875 2018-06-04
CN201810562287 2018-06-04

Publications (2)

Publication Number Publication Date
CN109727279A CN109727279A (en) 2019-05-07
CN109727279B true CN109727279B (en) 2022-07-29

Family

ID=66301207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910083298.XA Active CN109727279B (en) 2018-06-04 2019-01-28 Automatic registration method of vector data and remote sensing image

Country Status (1)

Country Link
CN (1) CN109727279B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378316B (en) * 2019-07-29 2023-06-27 苏州中科天启遥感科技有限公司 Method and system for extracting ground object identification sample of remote sensing image
CN111696121A (en) * 2020-06-05 2020-09-22 中国人民解放军火箭军工程设计研究院 Planar water area extraction method and system
CN112950680B (en) * 2021-02-20 2022-07-05 哈尔滨学院 Satellite remote sensing image registration method
CN114897659B (en) * 2022-05-09 2023-12-29 南京师范大学 A zero-watermark generation method for vector geographic data and a zero-watermark information detection method
CN115830087B (en) * 2022-12-09 2024-02-20 陕西航天技术应用研究院有限公司 A fast batch registration method for continuous frame image sets of translational motion

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040052409A1 (en) * 2002-09-17 2004-03-18 Ravi Bansal Integrated image registration for cardiac magnetic resonance perfusion data
CN101957991A (en) * 2010-09-17 2011-01-26 中国科学院上海技术物理研究所 Remote sensing image registration method
CN102842137A (en) * 2012-08-14 2012-12-26 中山大学 Automatic registration method for multi-temporal empty spectrum remote sensing image based on space comprehensive mutual information
CN104167003A (en) * 2014-08-29 2014-11-26 福州大学 Method for fast registering remote-sensing image
CN105654423A (en) * 2015-12-28 2016-06-08 西安电子科技大学 Area-based remote sensing image registration method
CN107301661A (en) * 2017-07-10 2017-10-27 中国科学院遥感与数字地球研究所 High-resolution remote sensing image method for registering based on edge point feature


Also Published As

Publication number Publication date
CN109727279A (en) 2019-05-07

Similar Documents

Publication Publication Date Title
CN109727279B (en) Automatic registration method of vector data and remote sensing image
Ahmadi et al. Automatic urban building boundary extraction from high resolution aerial images using an innovative model of active contours
CN110866871A (en) Text image correction method and device, computer equipment and storage medium
CN103400151B (en) The optical remote sensing image of integration and GIS autoregistration and Clean water withdraw method
CN108596055B (en) An airport target detection method for high-resolution remote sensing images under complex background
CN110197157B (en) Pavement crack growth detection method based on historical crack data
Liu et al. Main road extraction from zy-3 grayscale imagery based on directional mathematical morphology and vgi prior knowledge in urban areas
Mayunga et al. A semi‐automated approach for extracting buildings from QuickBird imagery applied to informal settlement mapping
CN116452852A (en) Automatic generation method of high-precision vector map
CN108550174B (en) Coastline super-resolution mapping method and coastline super-resolution mapping system based on semi-global optimization
CN110569861A (en) An Image Matching and Localization Method Based on Fusion of Point Features and Contour Features
CN115731257A (en) Leaf form information extraction method based on image
CN102855628B (en) Automatic matching method for multisource multi-temporal high-resolution satellite remote sensing image
Palenichka et al. Automatic extraction of control points for the registration of optical satellite and LiDAR images
CN111354047B (en) A camera module positioning method and system based on computer vision
Mousa et al. Building detection and regularisation using DSM and imagery information
CN117593465B (en) Three-dimensional visualization to achieve virtual display method and system of smart city
CN113343976B (en) Anti-highlight interference engineering measurement mark extraction method based on color-edge fusion feature growth
CN115937708B (en) A method and device for automatically identifying roof information based on high-definition satellite images
CN102446356A (en) Parallel self-adaptive matching method for obtaining remote sensing images with uniformly distributed matching points
JP6188052B2 (en) Information system and server
CN114926635B (en) Object segmentation method in multi-focus images combined with deep learning method
CN111667429A (en) Target positioning and correcting method for inspection robot
Xia et al. Refined extraction of buildings with the semantic edge-assisted approach from very high-resolution remotely sensed imagery
CN116503756B (en) Method for establishing surface texture datum based on ground control point database

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant