CN106295679A - Color image light source color estimation method based on classification correction - Google Patents

Color image light source color estimation method based on classification correction

Info

Publication number
CN106295679A
Authority
CN
China
Prior art keywords
light source
image
correction
training
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610606092.7A
Other languages
Chinese (zh)
Other versions
CN106295679B (en)
Inventor
李永杰
张明
高绍兵
任燕泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201610606092.7A priority Critical patent/CN106295679B/en
Publication of CN106295679A publication Critical patent/CN106295679A/en
Application granted granted Critical
Publication of CN106295679B publication Critical patent/CN106295679B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]


Abstract

The invention discloses a color image light source color estimation method based on classification correction. Edge features are first extracted from a set of images with known light source colors, and a correction matrix between the edge features and the light sources is learned by the least squares method. Edge features are then extracted from the test image to be processed and multiplied by the correction matrix to obtain a rough light source estimate. Next, by finding the K nearest images in feature space, a class of training images whose features are close to those of the test image is identified, and the correction matrix is relearned on this class to obtain an accurate light source estimate. The method involves few parameters, and because the extracted features are simple and few in number, it is also computationally simple and fast. Moreover, as a learning-based method it achieves good results with high accuracy, making it well suited to applications that demand highly accurate estimation of the light source color.

Description

Color Image Light Source Color Estimation Method Based on Classification Correction

Technical Field

The invention belongs to the technical field of computer vision and image processing, and in particular relates to the design of a color image light source color estimation method based on classification correction.

Background

In a natural environment, the same object appears in different colors under illumination of different colors: green leaves, for example, look yellowish in morning light but bluish at dusk. The human visual system can discount such changes in illuminant color and perceive object colors as constant, a property known as color constancy. Machines, however, lack this ability owing to technical limitations: images captured by physical devices such as cameras suffer severe color casts as the illuminant color changes. It is therefore particularly important to accurately estimate the illuminant color of a scene from the available image information and remove it, so as to recover the colors objects would have under standard white light.

Computational color constancy addresses exactly this problem. Its main goal is to compute the color of the unknown light source contained in an arbitrary image, then use the computed illuminant color to correct the original input image so that it can be displayed under standard white light, yielding a so-called standard image. Because the standard image is free of the illuminant's influence, subsequent computing tasks such as color-based scene classification and image retrieval no longer suffer misclassification or misretrieval caused by color casts.

Computational color constancy methods fall into two categories: learning-based methods and traditional static methods. Traditional static methods extract simple features from the image for illuminant estimation; because their estimation error is large, they cannot satisfy engineering needs well. Learning-based methods were developed on top of these non-learning methods to address this shortcoming. A typical learning-based method is the one proposed by G. D. Finlayson in 2013 (G. D. Finlayson, "Corrected-moment illuminant estimation," in Proc. IEEE Int. Conf. Comput. Vis., 2013, pp. 1904-1911), which extracts features and uses regression to find the relationship between features and illuminants. Thanks to the regression step, its estimation accuracy is relatively high, but it applies the same correction matrix to all images, so the estimated illuminant can be far off for some images. It therefore cannot meet applications that require highly accurate illuminant color estimation, such as the image-receiving front ends of intelligent robots or autonomous-driving systems. Implementing a method that learns different correction matrices for different images is thus particularly important.

Summary of the Invention

The purpose of the present invention is to solve the problem that existing image-scene illuminant color estimation methods cannot meet applications with high requirements on the accuracy of the estimated illuminant color, by proposing a color image light source color estimation method based on classification correction.

The technical solution of the present invention is a color image light source color estimation method based on classification correction, comprising the following steps:

S1. Extract edge features of the training images: take N color images with known light sources as the original training set T, convolve each with a template G obtained by differentiating a Gaussian distribution to obtain the edge value corresponding to each pixel of the image, and extract edge features, yielding the edge feature matrix M of the N training images.

S2. Learn the correction matrix: using the least squares method, learn the correction matrix C between the feature matrix M computed in step S1 and the standard light sources L of the N training images.

S3. Rough light source estimation: extract the edge features of the test image with the method of step S1 and multiply them by the correction matrix C learned in step S2 to obtain a rough light source estimate L1.

S4. Find the training images corresponding to the test image: remove the light source from the test image and from the original training set T respectively, then extract edge features with the method of S1 to form a feature space; in the feature space, find the K training images whose features are closest to those of the test image and take them as a new training set TN.

S5. Accurate light source estimation: repeat steps S1-S4, each time replacing the training set T in step S1 with the new training set TN obtained in step S4, the number of training images correspondingly changing from N to K, until the TN obtained in step S4 is identical to the TN obtained in step S4 of the previous iteration; the light source estimate L1 obtained in step S3 of the last iteration is taken as the final light source estimate.

Further, the template G obtained by differentiating the Gaussian distribution in step S1 is the Gaussian gradient operator.

Further, the formula for extracting edge features in step S1 is:

$$M_{xyz} = \left( \frac{\sum_{i=1}^{N_1} R_i^x G_i^y B_i^z}{N_1} \right)^{1/(x+y+z)} \quad (1)$$

where R_i, G_i, B_i respectively denote the edge value of each pixel in the R, G, B channels, N_1 denotes the number of pixels in the image, and M_xyz is the value of the edge feature for a given x, y, z; x, y, z run over all combinations satisfying x ≥ 0, y ≥ 0, z ≥ 0 and 1 ≤ x + y + z ≤ 3.

Further, the value range of K in step S4 is

Further, step S4 specifically comprises the following sub-steps:

S41. Remove the standard light sources L from the original N training images and extract edge features with the method of step S1.

S42. Remove the roughly estimated light source L1 of step S3 from the test image and extract its edge features with the method of step S1; together with the edge features extracted from the N training images in step S41, these form the feature space.

S43. Find the K images in the feature space whose feature distance to the test image is smallest, and take them as the new training image set TN for the test image.

Further, the feature distance in step S43 is the Euclidean distance.

The beneficial effects of the present invention are as follows. The invention first extracts edge features from a set of images with known light source colors and learns, by the least squares method, a correction matrix between the edge features and the light sources; it then extracts edge features from the test image to be processed and multiplies them by the correction matrix to obtain a rough light source estimate. Next, by finding the K nearest images in feature space, it identifies a class of training images whose features are close to those of the test image, relearns on this class, and obtains an accurate light source estimate. Since different test images lie at different distances from the training images in feature space, appropriately adjusting K, the number of corresponding training images, yields results better suited to different types of training images; K is the only parameter. The invention involves few parameters (only K), and because the extracted features are simple and few, it is also computationally simple and fast. Moreover, as a learning-based method it achieves good results with high accuracy, making it well suited to applications with high requirements on the accuracy of light source color estimation, for example built into the image-receiving front end of an intelligent robot or an autonomous-driving system.

Brief Description of the Drawings

FIG. 1 is a flow chart of the color image light source color estimation method based on classification correction provided by the present invention.

FIG. 2 shows the test image tools_ph-ulm.GIF to be processed in Embodiment 1 of the present invention.

FIG. 3 is a schematic diagram of the error values between the light sources estimated at each step and the real light source in Embodiment 1 of the present invention.

FIG. 4 is a schematic comparison between the final light source estimate and the real light source in Embodiment 1 of the present invention.

FIG. 5 shows the result of tone-correcting the original test image with the light source color values computed in step S5, in Embodiment 2 of the present invention.

Detailed Description

Embodiments of the present invention are further described below with reference to the accompanying drawings.

The present invention provides a color image light source color estimation method based on classification correction, which, as shown in FIG. 1, comprises the following steps:

S1. Extract edge features of the training images: take N color images with known light sources as the original training set T, convolve each with a template G obtained by differentiating a Gaussian distribution to obtain the edge value corresponding to each pixel of the image, and extract edge features, yielding the edge feature matrix M of the N training images.

Here, the template G obtained by differentiating the Gaussian distribution is the Gaussian gradient operator.
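
For concreteness, the convolution of step S1 can be sketched as below. This is a minimal illustration, not the patented implementation: the Gaussian width `sigma`, the kernel radius, and the use of the gradient magnitude as the per-pixel edge value are choices of this sketch, not values fixed by the text.

```python
import numpy as np

def gauss_deriv_kernel(sigma=1.0):
    """1-D derivative-of-a-Gaussian kernel (the 'template G' of step S1).
    The width sigma is a free parameter, not specified in the text."""
    r = int(np.ceil(3 * sigma))
    x = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()                       # normalize the Gaussian part
    return -x / sigma**2 * g           # differentiate the Gaussian

def edge_magnitude(channel, sigma=1.0):
    """Per-pixel edge value of one color channel: magnitude of the
    horizontal and vertical Gaussian-derivative responses."""
    k = gauss_deriv_kernel(sigma)
    conv = lambda v: np.convolve(v, k, mode="same")
    gx = np.apply_along_axis(conv, 1, channel)   # derivative along rows
    gy = np.apply_along_axis(conv, 0, channel)   # derivative along columns
    return np.hypot(gx, gy)
```

Applied to the R, G and B planes of a training image, `edge_magnitude` yields the three per-pixel edge-value maps from which the edge features are then extracted.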

The formula for extracting edge features is:

$$M_{xyz} = \left( \frac{\sum_{i=1}^{N_1} R_i^x G_i^y B_i^z}{N_1} \right)^{1/(x+y+z)} \quad (1)$$

where R_i, G_i, B_i respectively denote the edge value of each pixel in the R, G, B channels, N_1 denotes the number of pixels in the image, and M_xyz is the value of the edge feature for a given x, y, z; x, y, z run over all combinations satisfying x ≥ 0, y ≥ 0, z ≥ 0 and 1 ≤ x + y + z ≤ 3. The total number of such combinations is 3 + 6 + 10 = 19, so 19 edge features are obtained here.
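
The 19 edge-moment features of formula (1) can be computed as in the sketch below. It assumes the exponent triples (x, y, z) range over 1 ≤ x + y + z ≤ 3, which yields exactly the 19 combinations consistent with the 19-dimensional features used throughout the embodiments, and that R, G, B are arrays of per-pixel edge values from the Gaussian-gradient convolution.

```python
import numpy as np

def edge_moments(R, G, B):
    """Edge-moment features M_xyz = (mean(R^x * G^y * B^z))^(1/(x+y+z)).

    R, G, B: arrays of per-pixel edge values for the three channels.
    Exponent triples with 1 <= x+y+z <= 3 give 3 + 6 + 10 = 19 features
    (an assumption matching the 19-dimensional features in the text).
    """
    feats = []
    for degree in (1, 2, 3):
        for x in range(degree, -1, -1):
            for y in range(degree - x, -1, -1):
                z = degree - x - y
                moment = np.mean(R**x * G**y * B**z)
                feats.append(moment ** (1.0 / degree))
    return np.asarray(feats)
```

Stacking the 19-vector of each of the N training images row by row gives the N x 19 feature matrix M of step S1.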

S2. Learn the correction matrix: using the least squares method, learn the correction matrix C between the feature matrix M computed in step S1 and the standard light sources L of the N training images.

S3. Rough light source estimation: extract the edge features of the test image with the method of step S1 and multiply them by the correction matrix C learned in step S2 to obtain a rough light source estimate L1.
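
Steps S2 and S3 reduce to one least-squares fit and one matrix product. A minimal sketch, assuming M is the N x 19 training feature matrix, L the N x 3 matrix of standard light sources, and m_test the 1 x 19 feature vector of the test image (the function names are illustrative, not from the patent):

```python
import numpy as np

def learn_correction_matrix(M, L):
    """Step S2: least-squares solution C (19 x 3) minimizing ||M @ C - L||^2."""
    C, *_ = np.linalg.lstsq(M, L, rcond=None)
    return C

def rough_estimate(m_test, C):
    """Step S3: rough light source estimate L1 = m_test @ C."""
    return m_test @ C
```

This is the same linear-regression structure as Finlayson's corrected-moment method cited in the background; the classification step below is what distinguishes the present method.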

S4. Find the training images corresponding to the test image: remove the light source from the test image and from the original training set T respectively, then extract edge features with the method of S1 to form a feature space; in the feature space, find the K training images whose features are closest to those of the test image (the value range of K is ) and take them as the new training set TN.

This step specifically comprises the following sub-steps:

S41. Remove the standard light sources L from the original N training images and extract edge features with the method of step S1.

S42. Remove the roughly estimated light source L1 of step S3 from the test image and extract its edge features with the method of step S1; together with the edge features extracted from the N training images in step S41, these form the feature space.

S43. Find the K images in the feature space whose feature distance to the test image is smallest, and take them as the new training image set TN for the test image.

Here, the feature distance in step S43 is the Euclidean distance.
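
The neighbour search of step S43 is a plain K-nearest-neighbour query under the Euclidean distance; a minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def k_nearest_indices(m_test, M0, k):
    """Indices of the k rows of M0 (training features after light-source
    removal) closest to m_test in Euclidean distance (step S43)."""
    dists = np.linalg.norm(M0 - m_test, axis=1)
    return np.argsort(dists)[:k]
```

The returned indices select the rows of the training set T that form the new training set TN.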

S5. Accurate light source estimation: repeat steps S1-S4, each time replacing the training set T in step S1 with the new training set TN obtained in step S4, the number of training images correspondingly changing from N to K, until the TN obtained in step S4 is identical to the TN obtained in step S4 of the previous iteration; the light source estimate L1 obtained in step S3 of the last iteration is taken as the final light source estimate.
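
Putting the steps together, the refinement loop of step S5 can be sketched as below. This is a simplified illustration only: the re-extraction of features after light-source removal (steps S41-S42) is abstracted into a single precomputed feature matrix, and all names are placeholders rather than the patent's notation.

```python
import numpy as np

def classification_corrected_estimate(m_test, M, L, k, max_iter=20):
    """Iteratively relearn the correction matrix on the K training images
    nearest to the test image, stopping once the neighbour set is stable.

    M: N x d training feature matrix, L: N x 3 standard light sources,
    m_test: length-d test feature vector. Simplification: the same
    features are reused for the regression and the neighbour search.
    """
    idx = np.arange(len(M))      # start from the full training set (step S1)
    prev = None
    L1 = None
    for _ in range(max_iter):
        # steps S2-S3: least-squares fit on the current subset, then estimate
        C, *_ = np.linalg.lstsq(M[idx], L[idx], rcond=None)
        L1 = m_test @ C
        # step S4: K nearest training images in feature space
        dists = np.linalg.norm(M - m_test, axis=1)
        new_idx = np.argsort(dists)[:k]
        if prev is not None and np.array_equal(np.sort(new_idx), np.sort(prev)):
            break                # step S5: TN unchanged, estimate is final
        prev, idx = new_idx, new_idx
    return L1
```

In the embodiment below the loop is simply run a fixed number of times (twice) instead of until convergence.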

The final light source estimate L1 computed after step S5 can be used directly in subsequent computer-vision applications: for example, dividing each color channel of the original input color image by the corresponding component of L1 removes the light source color from the color image. Image white balancing and color correction likewise require the final light source estimate L1 computed in S5.

The color image light source color estimation method based on classification correction provided by the present invention is further illustrated below with a specific embodiment:

Embodiment 1:

Download all 321 images of the SFU object image set, an internationally recognized benchmark for estimating scene light source colors, together with their corresponding real light source colors (standard light sources) L; every image is 468×637. The first 214 images of the set are used as the training set, and one of the remaining images, tools_ph-ulm.GIF (shown in FIG. 2), is selected as the test image to be processed. None of the images has undergone any in-camera preprocessing (such as tone correction or gamma correction). The detailed steps of the invention are then as follows:

S1. Extract edge features of the training images: take the 214 color images with known light sources as the original training set T, convolve each with the template G (Gaussian gradient operator) obtained by differentiating a Gaussian distribution to obtain the edge value corresponding to each pixel of the image, then extract the 19-dimensional edge features, finally obtaining the 214×19 edge feature matrix M of the training set images.

S2. Learn the correction matrix: using the least squares method, learn the correction matrix between the feature matrix M computed in step S1 and the standard light sources L of the 214 training images, obtaining the 19×3 correction matrix C:

C = [-150.0689, -30.1462, -21.5186; -96.5582, -196.1642, -348.5298; 52.6551, 76.4461, 115.5982; -200.5289, -240.3650, -179.6495; -79.6311, 72.4539, 125.1126; -56.1276, -130.2963, -226.1518; 683.9180, 552.9035, 366.8781; 214.1444, -15.5379, -52.8198; -149.3407, 138.0260, 397.1888; 154.6218, 240.3336, 128.2161; 156.5752, -50.6503, 69.4182; 22.7103, 90.3730, 274.5781; -65.9786, -384.7642, -66.2556; -112.7044, -104.0913, -12.8868; -349.7427, 81.5115, -215.8972; -79.0109, -48.0727, -32.2072; -98.2723, -22.7039, -51.2091; 108.6481, -52.0896, -265.9989; 172.6056, 171.2726, 95.1991].

S3. Rough light source estimation: extract the 19-dimensional edge features of the test image with the method of step S1, obtaining the 1×19 edge feature matrix M1 of the test image:

M1 = [0.0002, 0.0004, 0.0002, 0.0014, 0.0017, 0.0012, 0.0015, 0.0013, 0.0014, 0.0036, 0.0040, 0.0031, 0.0037, 0.0034, 0.0039, 0.0037, 0.0032, 0.0034, 0.0035].

Then multiply M1 by the correction matrix C learned in step S2 to obtain the rough light source estimate L1 = [0.1985, 0.2151, 0.2360].

S4. Find the training images corresponding to the test image: remove the light source from the test image and from the original training set T of 214 images respectively, then extract the 19-dimensional edge features with the method of S1 to form the feature space. In the feature space, find the K training images whose features are closest to those of the test image, thereby obtaining a class of images with similar features, and take these K images as the new training set TN. In this embodiment, K = 100.

This step specifically comprises the following sub-steps:

S41. Remove the standard light sources L from the original 214 training images and extract the 19-dimensional edge features with the method of step S1, obtaining the 214×19 feature matrix M0.

S42. Remove the roughly estimated light source L1 of step S3 from the test image and extract its edge features with the method of step S1, obtaining the 1×19 feature matrix M2:

M2 = [0.0060, 0.0089, 0.0038, 0.0366, 0.0371, 0.0224, 0.0365, 0.0282, 0.0285, 0.0937, 0.0900, 0.0581, 0.0921, 0.0790, 0.0909, 0.0768, 0.0673, 0.0664, 0.0777].

M2 then forms the feature space together with the edge features M0 extracted from the 214 training images in step S41.

S43. Find the 100 images in the feature space whose feature distance to the test image is smallest, and take them as the new training image set TN for the test image.

S5. Accurate light source estimation: repeat steps S1-S4, each time replacing the training set T in step S1 with the new training set TN obtained in step S4, the number of training images correspondingly changing from N to K, until the TN obtained in step S4 is identical to the TN obtained in step S4 of the previous iteration; the light source estimate L1 obtained in step S3 of the last iteration is taken as the final light source estimate.

In this embodiment, to save time, the procedure is repeated only twice. The light source estimate after one repetition is L1 = [0.3412, 0.3591, 0.3168]; after two repetitions it is L1 = [0.3312, 0.3365, 0.3430]. The estimate obtained after the second repetition, L1 = [0.3312, 0.3365, 0.3430], is taken as the final light source estimate.

As shown in FIG. 3, the first bar is the angular error between the roughly estimated light source of step S3 and the real light source; the second bar is the angular error between the estimated and real light sources after one repetition in step S5; and the third bar is the angular error after two repetitions in step S5. The broken line connecting the three bars reflects the downward trend of the estimation error, showing that the estimated light source becomes increasingly accurate.

FIG. 4 shows the directions of the responses of the red and green components, in the three-primary color space, finally computed in step S5, alongside the directions of the red and green components of the real light source; it indicates that the response values computed in step S5 are very close to the color of the real scene light source.

Taking image tone correction as an example, a specific embodiment below gives a simple demonstration of a practical application of the light source estimate finally obtained by the present invention:

Embodiment 2:

Using the light source color value of each color component computed in step S5, correct the pixel value of each color component of the original input image. Taking a pixel (0.335, 0.538, 0.601) of the test image input in step S3 as an example, the corrected result is (0.335/0.3312, 0.538/0.3365, 0.601/0.3430) = (1.0115, 1.5988, 1.7522), which after normalization becomes (0.2319, 0.3665, 0.4016); the corrected values are then multiplied by the standard white light coefficient to obtain (0.1339, 0.2116, 0.2319) as the pixel value of the final output corrected image. Similar computations are performed for the other pixels of the original input image, finally yielding the corrected color image shown in FIG. 5.
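
The worked example above can be reproduced in a few lines. The "standard white light coefficient" is not reproduced in this text; the value 1/sqrt(3) used below is inferred from the quoted numbers (0.2319 × 1/√3 ≈ 0.1339) and should be treated as an assumption.

```python
import math

def tone_correct_pixel(pixel, L1, white_coef=1 / math.sqrt(3)):
    """Divide out the estimated light source, normalize the result to sum
    to 1, then scale by the standard white light coefficient
    (1/sqrt(3) is an inference from the worked numbers, not a quoted value)."""
    corrected = [p / l for p, l in zip(pixel, L1)]
    s = sum(corrected)
    normalized = [c / s for c in corrected]
    return [white_coef * c for c in normalized]

out = tone_correct_pixel((0.335, 0.538, 0.601), (0.3312, 0.3365, 0.3430))
# close to the (0.1339, 0.2116, 0.2319) quoted in the embodiment
```

Applying the same function to every pixel of the input image yields the corrected color image.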

Those of ordinary skill in the art will appreciate that the embodiments described here are intended to help readers understand the principles of the present invention, and it should be understood that the protection scope of the invention is not limited to these particular statements and embodiments. Those of ordinary skill in the art can, based on the technical teachings disclosed herein, make various other specific modifications and combinations that do not depart from the essence of the invention; such modifications and combinations remain within the protection scope of the invention.

Claims (6)

1. A color-image light-source color estimation method based on category correction, characterized in that it comprises the following steps:
S1. Extract edge features of the training images: take N color images with known illuminants as the original training set T; convolve each with the derivative-of-Gaussian template G to obtain the edge value at every pixel of the image, and extract edge features to obtain the edge-feature matrix M of the N training images;
S2. Learn the correction matrix: using least squares, learn the correction matrix C between the feature matrix M computed in step S1 and the standard illuminants L of the N training images;
S3. Rough light-source estimation: extract the edge features of the test image with the method of step S1 and multiply them by the correction matrix C learned in step S2 to obtain a rough light-source estimate L1;
S4. Find the training images corresponding to the test image: remove the illuminant from the test image and from the original training set T respectively, then extract edge features from each with the method of step S1 to form a feature space; in this feature space, find the K training images whose features are closest to those of the test image, and use them as the new training set TN;
S5. Accurate light-source estimation: repeat steps S1-S4, each time replacing the training set T in step S1 with the new training set TN obtained in step S4, the number of training images correspondingly changing from N to K, until the TN obtained in step S4 is identical to the TN obtained in step S4 of the previous iteration; the light-source estimate L1 obtained in step S3 of the last iteration is taken as the final light-source estimate.
2. The color-image light-source color estimation method based on category correction according to claim 1, characterized in that the derivative-of-Gaussian template G in step S1 is a Gaussian gradient operator.
3. The color-image light-source color estimation method based on category correction according to claim 1, characterized in that the edge features in step S1 are computed as:
M_{xyz} = \left( \frac{\sum_{i=1}^{N_1} R_i^{x} G_i^{y} B_i^{z}}{N_1} \right)^{1/(x+y+z)} \qquad (1)
where R_i, G_i and B_i denote the edge value of each pixel in the R, G and B channels respectively, N_1 is the number of pixels in the image, and M_{xyz} is the edge-feature value for a given (x, y, z); x, y and z take all combinations satisfying x ≥ 0, y ≥ 0, z ≥ 0 and x + y + z = 3.
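With the constraint x + y + z = 3, equation (1) yields a fixed set of ten cross-channel moments. A sketch of this computation (function and variable names are illustrative assumptions):

```python
from itertools import product

def edge_features(R, G, B):
    """Compute M_xyz of eq. (1) for all (x, y, z) with x + y + z = 3.

    R, G, B: sequences of per-pixel edge values in the three channels.
    """
    n1 = len(R)
    feats = {}
    for x, y, z in product(range(4), repeat=3):
        if x + y + z != 3:
            continue  # keep only combinations satisfying x + y + z = 3
        # Mean of the cross-channel product of edge values over all pixels
        mean = sum((r ** x) * (g ** y) * (b ** z)
                   for r, g, b in zip(R, G, B)) / n1
        feats[(x, y, z)] = mean ** (1.0 / (x + y + z))
    return feats
```

There are exactly ten valid (x, y, z) triples, so each image contributes a ten-dimensional feature vector to the matrix M.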
4. The color-image light-source color estimation method based on category correction according to claim 1, characterized in that the value range of K in step S4 is
5. The color-image light-source color estimation method based on category correction according to claim 1, characterized in that step S4 specifically comprises the following sub-steps:
S41. Remove the standard illuminant L from the original N training images, and extract edge features with the method of step S1;
S42. Remove the roughly estimated illuminant L1 of step S3 from the test image, and extract its edge features with the method of step S1; together with the edge features of the N training images from step S41, these form the feature space;
S43. In the feature space, find the K images whose feature distance to the test image is smallest, and use them as the new training-image set TN for the test image.
6. The color-image light-source color estimation method based on category correction according to claim 5, characterized in that the feature distance in step S43 is the Euclidean distance.
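The selection in step S43 under claim 6 is a K-nearest-neighbour search with the Euclidean metric. A minimal sketch (names and array shapes are illustrative assumptions):

```python
import numpy as np

def nearest_training_images(test_feat, train_feats, k):
    """Return indices of the K training images closest to the test image.

    test_feat: (F,) feature vector of the illuminant-removed test image.
    train_feats: (N, F) feature vectors of the illuminant-removed training images.
    """
    # Euclidean distance from the test feature vector to every training image
    dists = np.linalg.norm(train_feats - test_feat, axis=1)
    return np.argsort(dists)[:k]  # indices forming the new training set TN
```

The K selected indices define TN, which replaces T in the next pass of steps S1-S4.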
CN201610606092.7A 2016-07-28 2016-07-28 A kind of color image light source colour estimation method based on category correction Active CN106295679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610606092.7A CN106295679B (en) 2016-07-28 2016-07-28 A kind of color image light source colour estimation method based on category correction

Publications (2)

Publication Number Publication Date
CN106295679A true CN106295679A (en) 2017-01-04
CN106295679B CN106295679B (en) 2019-06-25

Family

ID=57663052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610606092.7A Active CN106295679B (en) 2016-07-28 2016-07-28 A kind of color image light source colour estimation method based on category correction

Country Status (1)

Country Link
CN (1) CN106295679B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130243085A1 (en) * 2012-03-15 2013-09-19 Samsung Electronics Co., Ltd. Method of multi-view video coding and decoding based on local illumination and contrast compensation of reference frames without extra bitrate overhead
CN103258334A (en) * 2013-05-08 2013-08-21 电子科技大学 Method of estimating scene light source colors of color image

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060308A (en) * 2019-03-28 2019-07-26 杭州电子科技大学 A kind of color constancy method based on light source colour distribution limitation
CN110060308B (en) * 2019-03-28 2021-02-02 杭州电子科技大学 Color constancy method based on light source color distribution limitation
CN112995634A (en) * 2021-04-21 2021-06-18 贝壳找房(北京)科技有限公司 Image white balance processing method and device, electronic equipment and storage medium
CN112995634B (en) * 2021-04-21 2021-07-20 贝壳找房(北京)科技有限公司 Image white balance processing method and device, electronic equipment and storage medium
CN116188797A (en) * 2022-12-09 2023-05-30 齐鲁工业大学 Scene light source color estimation method capable of being effectively embedded into image signal processor
CN116188797B (en) * 2022-12-09 2024-03-26 齐鲁工业大学 Scene light source color estimation method capable of being effectively embedded into image signal processor

Also Published As

Publication number Publication date
CN106295679B (en) 2019-06-25

Similar Documents

Publication Publication Date Title
US20190266435A1 (en) Method and device for extracting information in histogram
CN109348731B (en) A method and device for image matching
US11967040B2 (en) Information processing apparatus, control method thereof, imaging device, and storage medium
CN118154603A (en) Display screen defect detection method and system based on cascaded multi-layer feature fusion network
CN109520706B (en) Screw hole coordinate extraction method of automobile fuse box
WO2015074521A1 (en) Devices and methods for positioning based on image detection
CN110569774B (en) Automatic line graph image digitalization method based on image processing and pattern recognition
CN102779157B (en) Method and device for searching images
CN113083804A (en) Laser intelligent derusting method and system and readable medium
Banić et al. Color cat: Remembering colors for illumination estimation
CN114998097A (en) Image alignment method, apparatus, computer equipment and storage medium
CN109255390A (en) Preprocess method and module, discriminator, the readable storage medium storing program for executing of training image
CN105046701A (en) Multi-scale salient target detection method based on construction graph
CN106295679B (en) A kind of color image light source colour estimation method based on category correction
CN106529549B (en) Vision significance detection method based on self-adaptive features and discrete cosine transform
CN102567969A (en) Color image edge detection method
CN113223098A (en) Preprocessing optimization method for image color classification
CN106204500B (en) A method of realizing that different cameral shooting Same Scene color of image remains unchanged
CN112381751A (en) Online intelligent detection system and method based on image processing algorithm
CN106296658B (en) A kind of scene light source estimation accuracy method for improving based on camera response function
CN109377524B (en) Method and system for recovering depth of single image
LU500193B1 (en) Low-illumination image enhancement method and system based on multi-expression fusion
CN111178229A (en) Vein imaging method and device based on deep learning
CN109993690A (en) A high-precision grayscale method for color images based on structural similarity
CN105844260A (en) Multifunctional smart cleaning robot apparatus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant