JPH04579A - Method for extracting feature point of graphic - Google Patents

Method for extracting feature point of graphic

Info

Publication number
JPH04579A
JPH04579A (application JP2101315A)
Authority
JP
Japan
Prior art keywords
outline
contour
extracted
points
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2101315A
Other languages
Japanese (ja)
Inventor
Yujiro Kamimura (上村 裕二郎)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Priority to JP2101315A priority Critical patent/JPH04579A/en
Publication of JPH04579A publication Critical patent/JPH04579A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

PURPOSE: To rapidly vectorize a graphic by extracting the straight-line parts in four prescribed directions from the picture-element strings constituting the outline of an image, and taking the end points of each straight-line part as feature points. CONSTITUTION: The outline of binary image data inputted to an image input part 1 is traced by an outline tracing part 2, which outputs its coordinates. Among the picture elements constituting the extracted outline, the end points of straight-line parts in the four prescribed directions are regarded as feature points and extracted by a feature point extracting part 3. A coordinate output part 4 outputs the coordinates of the feature points. With this method, only the coordinate points shown as hatched squares are extracted as feature points, i.e., outline coordinates.

Description

DETAILED DESCRIPTION OF THE INVENTION

[Field of Industrial Application] The present invention relates to a method for extracting feature points of a figure by approximating the contour shape of a binary image with line segments.

[Background Art] In recent years, there has been growing demand for using character recognition devices, figure recognition devices, and the like as input devices for computers. Image recognition systems such as these need to extract the feature points of an input image's contour shape at high speed.

In a conventional figure recognition device, shape extraction from binary image data proceeds in two steps: extracting the outline of the figure and vectorizing the outline. The most common contour extraction method simply tracks the coordinate points of the 8-neighbourhood or 4-neighbourhood in sequence, as follows. Assume the body of the figure is represented by black pixels and the background by white pixels. First, the original image is raster-scanned to find a tracking start point, which is called the point of interest. Next, as shown in FIG. 4, the pixels neighbouring this point of interest are examined counterclockwise in the order 1, 2, 3, ..., and the first point at which a white pixel changes to a black pixel is taken as a contour point coordinate and becomes the next point of interest.

By repeating this search until the tracking returns to the start point, one contour line is obtained. FIG. 5(b) shows the result of extracting the outline of FIG. 5(a) with 8-connectivity by this method; the shaded pixels are the extracted outline.
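The neighbour-search tracing described above can be sketched in code. The following is an illustrative implementation, not the patent's own: the image is assumed to be a list of 0/1 rows (1 = black figure pixel, at least one present), the eight neighbours are examined in a fixed cyclic order of direction codes, and tracing simply stops when it returns to the start point, as in the text.

```python
def trace_contour(img):
    """Trace one contour of a binary image (1 = black figure, 0 = white
    background) by repeated 8-neighbour search, as described for FIG. 4."""
    h, w = len(img), len(img[0])
    # (dy, dx) offsets of the eight direction codes, in cyclic order:
    # E, NE, N, NW, W, SW, S, SE (y grows downward in image coordinates).
    nbrs = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
            (0, -1), (1, -1), (1, 0), (1, 1)]
    # Raster scan for the tracking start point (first black pixel found).
    start = next((y, x) for y in range(h) for x in range(w) if img[y][x])
    contour, cur, prev_dir = [start], start, 4  # we arrived from the west
    while True:
        # Examine neighbours in cyclic order, beginning just past the
        # direction we came from; take the first black pixel found.
        for k in range(8):
            d = (prev_dir + 1 + k) % 8
            y, x = cur[0] + nbrs[d][0], cur[1] + nbrs[d][1]
            if 0 <= y < h and 0 <= x < w and img[y][x]:
                cur, prev_dir = (y, x), (d + 4) % 8
                break
        else:
            return contour          # isolated single pixel
        if cur == start:
            return contour          # closed: back at the start point
        contour.append(cur)

square = [[1, 1],
          [1, 1]]
print(trace_contour(square))  # → [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Tracing the 2x2 all-black image visits each of its four boundary pixels exactly once before returning to the start point.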

Next, feature points are extracted from the coordinate sequence obtained by contour tracking. An example of the vectorization method is explained using FIG. 6. FIG. 6(a) shows a pixel sequence obtained by contour tracing; consider vectorizing it. First, draw a line segment AB connecting the start point and the end point. Compute the distance from each pixel constituting the contour to AB, and find the point P farthest from AB (FIG. 6(b)). If this distance exceeds the allowable error, split AB into AP and PB and repeat the same process for each segment (FIG. 6(c)). Vectorization of AB is complete when all pixels fall within the allowable error (FIG. 6(d)).
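The split-at-the-farthest-point procedure of FIG. 6 is the classic Ramer-Douglas-Peucker polyline simplification. A minimal sketch follows; the function names and the sample polyline are illustrative assumptions, not taken from the patent.

```python
import math

def rdp(points, epsilon):
    """Recursively split the polyline at the point P farthest from the
    segment AB until every point lies within epsilon of some segment."""
    if len(points) < 3:
        return list(points)
    ax, ay = points[0]
    bx, by = points[-1]
    def dist(p):
        # Perpendicular distance from point p to the segment AB.
        px, py = p
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return math.hypot(px - ax, py - ay)
        return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)
    # Find the interior point farthest from AB (the point P of FIG. 6(b)).
    i_max = max(range(1, len(points) - 1), key=lambda i: dist(points[i]))
    if dist(points[i_max]) > epsilon:
        # Split AB into AP and PB and process each half the same way.
        left = rdp(points[:i_max + 1], epsilon)
        right = rdp(points[i_max:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]  # all points within tolerance

# A staircase-like contour collapses to its two end points at epsilon = 1.0.
pts = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2), (6, 3)]
print(rdp(pts, 1.0))  # → [(0, 0), (6, 3)]
```

Note that the recursion computes a distance for every interior pixel at every level, which is exactly the computational cost the invention aims to reduce.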

[Problems to be Solved by the Invention] In the conventional configuration described above, however, vectorization is applied to the entire pixel sequence obtained by simple contour tracing, so the distance from the segment AB to every contour pixel must be computed. The amount of computation is therefore large, and processing takes a long time.

[Means for Solving the Problem] To solve this problem, the present invention extracts the pixels constituting the contour of a stored image, extracts from them the straight-line portions running in four predetermined directions, and takes the end points of the extracted straight-line portions as feature points.

[Operation] With the configuration described above, the present invention makes it possible to extract the feature points of a figure's contour at high speed.

[Embodiment] FIG. 1 is a block diagram showing the configuration of an apparatus using the figure feature point extraction method in one embodiment of the present invention. In FIG. 1, reference numeral 1 denotes an image input unit that receives binary image data as the original image; 2 denotes a contour tracing unit that traces the contour of the input binary image and outputs its coordinates; 3 denotes a feature point extraction unit that extracts feature points from the contour coordinates obtained by the tracing; and 4 denotes a coordinate output unit that outputs the coordinates extracted by the feature point extraction unit 3.

The operation of the apparatus using the figure feature point extraction method of this embodiment, configured as described above, is explained below with reference to the flowchart in FIG. 2.

First, in step S1, the original image is raster-scanned to find a starting point for contour tracing. Step S2 checks whether a starting point was found in step S1; if so, the process proceeds to the next step, and if not, contour tracing ends. Step S3 checks whether the starting point found in step S1 has already been traced; if it has not, the process proceeds to the next step, and if it has, the process returns to step S1. In step S4 the starting point is stored in memory, and in step S5 the contour point adjacent to it is found. Next, in step S6, the contour point adjacent to the previously found contour point is found.

Step S7 checks whether the direction code found this time is the same as the direction code found last time; if they differ, the process proceeds to step S8, and if they are the same, to step S9. In step S8 the previously found contour point is stored in memory. Step S9 checks whether the contour point found this time equals the tracing start point; if not, the process returns to step S6 and tracing continues; if so, the final point is stored in memory in step S10, and the process returns to step S1 to search for the next contour-tracing start point.
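In effect, steps S6 to S8 record a contour point only when the direction code of the step changes, which keeps exactly the end points of the straight runs along the contour. A sketch of this pruning applied to an already-traced contour follows, assuming an 8-direction chain-code convention; the names and the sample contour are illustrative, not from the patent.

```python
def direction_code(p, q):
    """Chain code of the unit step from contour point p to its successor q
    (8 possible steps between adjacent contour pixels; (dy, dx) keys)."""
    steps = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
             (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}
    return steps[(q[0] - p[0], q[1] - p[1])]

def feature_points(contour):
    """Keep only the points where the direction code changes, i.e. the
    end points of straight runs (the pruning of steps S7-S8)."""
    if len(contour) < 3:
        return list(contour)
    feats = [contour[0]]  # start point, stored in step S4
    for i in range(1, len(contour) - 1):
        if direction_code(contour[i - 1], contour[i]) != \
           direction_code(contour[i], contour[i + 1]):
            feats.append(contour[i])  # direction changed: keep this point
    feats.append(contour[-1])  # final point, stored in step S10
    return feats

# A contour running east, then north-east: only the corner survives.
c = [(5, 0), (5, 1), (5, 2), (4, 3), (3, 4)]
print(feature_points(c))  # → [(5, 0), (5, 2), (3, 4)]
```

Because later vectorization then operates only on these end points rather than on every contour pixel, the number of distance computations drops accordingly.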

FIG. 3 shows the result of this embodiment when the same figure used to explain the conventional example (FIG. 5(a)) is input as the original image. In this embodiment, only the coordinates represented by hatched squares in FIG. 3 are extracted as feature points (contour coordinates). In this example, the number of feature points is reduced from 54 in the conventional example (FIG. 5(b)) to 26. Vectorizing the contour coordinates extracted in this embodiment by the conventional method poses no problem at all.

As described above, according to this embodiment, reducing the amount of contour information makes high-speed vectorization of the contour possible. When the contour information is stored in memory, the required memory capacity is also reduced.

[Effects of the Invention] As described above, the present invention reduces the number of calculations needed for feature point extraction, so figures can be vectorized at high speed. The memory capacity for storing contour-tracing results is also reduced.

[Brief Description of the Drawings]

FIG. 1 is a block diagram of an apparatus using the figure feature point detection method in one embodiment of the present invention; FIG. 2 is a flowchart showing the control procedure in this embodiment; FIG. 3 is a diagram showing the feature points of a figure extracted by the method of the present invention; FIG. 4 is a diagram showing the conventional contour-tracing method; FIG. 5 is an explanatory diagram of the conventional contour-tracing process; and FIG. 6 is a diagram showing the conventional vectorization method. 1: image input unit; 2: contour tracing unit; 3: vectorization unit; 4: coordinate output unit.

Claims (1)

[Scope of Claims] A method for extracting feature points of a figure, comprising: storing read image data; extracting the pixel sequence constituting the contour of the stored image; extracting, from the extracted pixel sequence, straight-line portions running in four predetermined directions; and taking the end points of the extracted straight-line portions as feature points.
JP2101315A 1990-04-17 1990-04-17 Method for extracting feature point of graphic Pending JPH04579A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2101315A JPH04579A (en) 1990-04-17 1990-04-17 Method for extracting feature point of graphic

Publications (1)

Publication Number Publication Date
JPH04579A true JPH04579A (en) 1992-01-06

Family

ID=14297384

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2101315A Pending JPH04579A (en) 1990-04-17 1990-04-17 Method for extracting feature point of graphic

Country Status (1)

Country Link
JP (1) JPH04579A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001017229A1 (en) * 1999-08-27 2001-03-08 Celartem Technology Inc. Image compressing method
US7031514B1 (en) 1999-08-27 2006-04-18 Celartem Technology Inc. Image compression method
JP2007178166A (en) * 2005-12-27 2007-07-12 Rkc Instrument Inc Heater disconnection detection method

Similar Documents

Publication Publication Date Title
Hori et al. Raster-to-vector conversion by line fitting based on contours and skeletons
US6404921B1 (en) Contour extracting method and apparatus
JPH07244738A (en) Line extraction Hough transform image processor
US20180204090A1 (en) Coarse-to-fine search method and image processing device
Arcelli et al. Computing Voronoi diagrams in digital pictures
JPH02263277A (en) Line image vectorization method
JPH04579A (en) Method for extracting feature point of graphic
JPH11134509A (en) Drawing recognition processing method and architectural drawing recognition processing method
JPS62131382A (en) Vector conversion method for binary images
Ramel et al. Automatic reading of handwritten chemical formulas from a structural representation of the image
JP2910344B2 (en) Image processing method
JPS6341107B2 (en)
JP3582734B2 (en) Table vectorizer
JPH04255080A (en) image input device
JPH0628476A (en) Processor for image signal
JP3183949B2 (en) Pattern recognition processing method
JPS6174075A (en) Line differentiation pattern dictionary
JPH0362269A (en) Method and device for approximating line image
JP3037504B2 (en) Image processing method and apparatus
JPH02264373A (en) Shape recognition device
JPS61260235A (en) Picture positioning system
JPH0312348B2 (en)
JPS62108382A (en) Approximating system for polygonal line graphic
JPH0434668A (en) Image processor
JPS63300365A (en) Vector transforming device for line drawing information