WO2012177015A2 - Image encoding/decoding method and device - Google Patents
Image encoding/decoding method and device
- Publication number
- WO2012177015A2 (PCT/KR2012/004694)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- filter
- depth
- picture
- filtering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/59—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/625—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]
Definitions
- the present invention relates to image processing, and more particularly, to a resolution reduction update method and apparatus thereof.
- 3D video may provide realism and immersion by using a plurality of view channels, and 3D video providing a stereoscopic effect may include binocular 3D video, multiview 3D video, holograms, and the like.
- unlike conventional 2D video, in 3D video not only a texture video but also a depth video may be encoded and/or decoded together.
- a depth image may include depth information of objects in a screen.
- the three-dimensional effect can be represented by the above-described depth information.
- An object of the present invention is to provide an image encoding method and apparatus capable of improving image encoding efficiency and reducing complexity.
- Another object of the present invention is to provide an image decoding method and apparatus capable of improving image encoding efficiency and reducing complexity.
- Another technical problem of the present invention is to provide a resolution reduction update method and apparatus capable of improving image encoding efficiency and reducing complexity.
- Another technical problem of the present invention is to provide a filtering method and apparatus for improving image encoding efficiency.
- An embodiment of the present invention is a video decoding method.
- the method includes receiving a bitstream; generating a prediction block and a residual block for a current picture based on the bitstream; performing up-sampling on the residual block based on reduced-resolution update (RRU) type information derived from the bitstream; generating a reconstruction block based on the prediction block and the up-sampled residual block; and performing filtering on the reconstruction block based on the RRU type information.
- the RRU type information includes sampling ratio information and filter type information; up-sampling is performed based on the sampling ratio indicated by the sampling ratio information, and in the filtering step, filtering is performed based on the filter indicated by the filter type information.
- the sampling ratio information may indicate a higher sampling ratio as the frequency of the region of the current picture corresponding to the residual block is lower.
- the performing of the filtering may include determining a window of a predetermined size and shape for the pixel to be filtered in the residual block, and performing filtering on the pixel to be filtered based on the pixels in the window.
- the filter indicated by the filter type information may be a bi-laterally weighted filter that performs weighted averaging based on the pixels in the window.
- the bidirectional weighted filter may perform weak smoothing when the pixel to be filtered is a pixel in an edge region including an edge, and may perform strong smoothing when the pixel to be filtered is a pixel in a non-edge region that does not include an edge.
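The edge-adaptive bilateral weighting described above can be sketched in Python. This is an illustrative sketch only: the Gaussian weight form, the edge test, and every parameter value (sigmas, threshold, window size) are assumptions, not values from the patent.

```python
import numpy as np

def bilateral_weighted_filter(block, y, x, half=1, sigma_s=1.0,
                              sigma_r_edge=2.0, sigma_r_flat=10.0,
                              edge_threshold=8.0):
    # Bilateral weighted averaging over a (2*half+1)^2 window.
    # Assumed behavior: edge pixels get a small range sigma (weak
    # smoothing); flat pixels get a large one (strong smoothing).
    h, w = block.shape
    y0, y1 = max(0, y - half), min(h, y + half + 1)
    x0, x1 = max(0, x - half), min(w, x + half + 1)
    center = float(block[y, x])
    # Crude edge test: large max deviation from the center -> edge region.
    is_edge = np.max(np.abs(block[y0:y1, x0:x1] - center)) > edge_threshold
    sigma_r = sigma_r_edge if is_edge else sigma_r_flat
    num = den = 0.0
    for j in range(y0, y1):
        for i in range(x0, x1):
            d2 = (j - y) ** 2 + (i - x) ** 2         # squared spatial distance
            r2 = (float(block[j, i]) - center) ** 2  # squared range distance
            wgt = np.exp(-d2 / (2 * sigma_s ** 2)) * np.exp(-r2 / (2 * sigma_r ** 2))
            num += wgt * float(block[j, i])
            den += wgt
    return num / den

flat = np.full((3, 3), 10.0)
flat[1, 1] = 12.0
smoothed = bilateral_weighted_filter(flat, 1, 1)   # flat region: strong smoothing

step = np.zeros((3, 3))
step[:, 2] = 100.0
edge_out = bilateral_weighted_filter(step, 1, 1)   # edge region: weak smoothing
```

In the flat example the isolated center value is pulled toward its neighbors, while in the edge example the small range sigma suppresses contributions from across the edge, so the step is preserved.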
- the filtering may be performed based on texture pixels in an area corresponding to the window in the texture picture.
- the filter indicated by the filter type information may be a bi-laterally weighted winner-take-all (WTA) filter.
- the bidirectional weighted WTA filter may derive a weighting score for each depth pixel in the window based on the texture pixels, and the pixel value of the depth pixel having the highest weight score among the depth pixels may be determined as the output value for the pixel to be filtered, where the depth pixels are pixels in the depth picture.
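A minimal Python sketch of a texture-guided WTA filter follows. Note an interpretation choice: here the scores of window pixels sharing the same depth value are accumulated per value before taking the winner (as is common in joint bilateral depth filtering), since a literal per-pixel score would trivially always select the center pixel. The weighting form and parameters are assumptions.

```python
from collections import defaultdict
import numpy as np

def bilateral_wta_filter(depth, texture, y, x, half=1, sigma_s=1.0, sigma_r=10.0):
    # Each depth value in the window accumulates a vote weighted by
    # spatial closeness and texture similarity to the center pixel;
    # the depth value with the largest total weight "wins".
    h, w = depth.shape
    t_center = float(texture[y, x])
    votes = defaultdict(float)
    for j in range(max(0, y - half), min(h, y + half + 1)):
        for i in range(max(0, x - half), min(w, x + half + 1)):
            d2 = (j - y) ** 2 + (i - x) ** 2
            r2 = (float(texture[j, i]) - t_center) ** 2
            wgt = np.exp(-d2 / (2 * sigma_s ** 2)) * np.exp(-r2 / (2 * sigma_r ** 2))
            votes[int(depth[j, i])] += wgt
    return max(votes, key=votes.get)

depth = np.array([[5, 5, 5],
                  [5, 9, 5],
                  [5, 5, 5]])
texture = np.full((3, 3), 50)  # uniform texture: weights are spatial only
out = bilateral_wta_filter(depth, texture, 1, 1)
```

With a uniform texture, the eight surrounding depth samples outvote the isolated center value, so the outlier is replaced by the dominant depth.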
- the filter indicated by the filter type information may be a bi-laterally weighted median filter.
- the bidirectional weighted median filter may derive a weighting score for each depth pixel in the window based on the texture pixels, derive a median value among the depth pixels based on the weight scores, and determine the pixel value of the depth pixel corresponding to the derived median value as the output value for the pixel to be filtered, where the depth pixels are pixels in the depth picture.
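The texture-guided weighted median above can be sketched as follows. The weighted median is computed in the standard way (sort by depth value, return the first value at which the cumulative weight reaches half the total); the weighting form and all parameters are assumptions for illustration.

```python
import numpy as np

def bilateral_weighted_median(depth, texture, y, x, half=1, sigma_s=1.0, sigma_r=10.0):
    h, w = depth.shape
    t_center = float(texture[y, x])
    pairs = []
    for j in range(max(0, y - half), min(h, y + half + 1)):
        for i in range(max(0, x - half), min(w, x + half + 1)):
            d2 = (j - y) ** 2 + (i - x) ** 2
            r2 = (float(texture[j, i]) - t_center) ** 2
            wgt = np.exp(-d2 / (2 * sigma_s ** 2)) * np.exp(-r2 / (2 * sigma_r ** 2))
            pairs.append((float(depth[j, i]), wgt))
    pairs.sort()                       # sort by depth value
    total = sum(w for _, w in pairs)
    acc = 0.0
    for val, wgt in pairs:
        acc += wgt
        if acc >= total / 2.0:         # weighted median crossing point
            return val

depth = np.array([[5, 5, 5],
                  [5, 100, 5],         # single outlier at the center
                  [5, 5, 5]])
texture = np.full((3, 3), 50)
out = bilateral_weighted_median(depth, texture, 1, 1)
```

Like an ordinary median, the weighted median rejects the isolated outlier entirely rather than averaging it in, which is why median variants are attractive for depth maps with sharp discontinuities.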
- the filter indicated by the filter type information may be a bidirectional weighted median filter that derives a weighting score for each of the depth pixels based on the depth pixels in the window, derives a median value among the depth pixels based on the weight scores, and determines the pixel value of the depth pixel corresponding to the derived median value as the output value for the pixel to be filtered, where the depth pixels are pixels in the depth picture.
- the bidirectional weighted median filter may perform weak smoothing when the pixel to be filtered is a pixel in an edge region including an edge, and may perform strong smoothing when the pixel to be filtered is a pixel in a non-edge region that does not include an edge.
- the apparatus includes: a receiver for receiving a bitstream; a prediction block and residual block generator for generating a prediction block and a residual block for a current picture based on the bitstream; an up-sampler for performing up-sampling on the residual block based on RRU (Reduced-Resolution Update) type information derived from the bitstream; a reconstruction block generator for generating a reconstruction block based on the prediction block and the up-sampled residual block; and a filter unit configured to perform filtering on the reconstruction block based on the RRU type information.
- the RRU type information includes sampling ratio information and filter type information; the up-sampler performs up-sampling based on the sampling ratio indicated by the sampling ratio information, and the filter unit performs filtering based on the filter indicated by the filter type information.
- the sampling ratio information may indicate a higher sampling ratio as the frequency of the region of the current picture corresponding to the residual block is lower.
- the filter unit may determine a window having a predetermined size and shape with respect to the filtering target pixel in the residual block, and perform filtering on the filtering target pixel based on the pixels in the window.
- the filter indicated by the filter type information may be a bi-laterally weighted filter that performs weighted averaging based on the pixels in the window.
- the bidirectional weighted filter may perform weak smoothing when the pixel to be filtered is a pixel in an edge region including an edge, and may perform strong smoothing when the pixel to be filtered is a pixel in a non-edge region that does not include an edge.
- the filter unit may perform filtering based on texture pixels in an area corresponding to the window in the texture picture.
- the filter indicated by the filter type information may be a bi-laterally weighted winner-take-all (WTA) filter, wherein the bidirectional weighted WTA filter may derive a weighting score for each depth pixel in the window based on the texture pixels, and the pixel value of the depth pixel having the highest weight score among the depth pixels may be determined as the output value for the pixel to be filtered, where the depth pixels are pixels in the depth picture.
- the filter indicated by the filter type information may be a bi-laterally weighted median filter, wherein the bidirectional weighted median filter may derive a weighting score for each depth pixel in the window based on the texture pixels, derive a median value among the depth pixels based on the weight scores, and determine the pixel value of the depth pixel corresponding to the derived median value as the output value for the pixel to be filtered, where the depth pixels are pixels in the depth picture.
- the filter indicated by the filter type information may be a bidirectional weighted median filter that derives a weighting score for each of the depth pixels based on the depth pixels in the window, derives a median value among the depth pixels based on the weight scores, and determines the pixel value of the depth pixel corresponding to the derived median value as the output value for the pixel to be filtered, where the depth pixels are pixels in the depth picture.
- image encoding efficiency may be improved and complexity may be reduced.
- image decoding efficiency may be improved and complexity may be reduced.
- image encoding efficiency may be improved and complexity may be reduced.
- image encoding efficiency may be improved.
- FIG. 1 is a block diagram schematically illustrating an image encoding apparatus according to an embodiment of the present invention.
- FIG. 2 is a block diagram schematically illustrating an image decoding apparatus according to an embodiment of the present invention.
- FIG. 3 is a block diagram schematically illustrating an image encoding apparatus according to another embodiment of the present invention.
- FIG. 4 is a block diagram schematically illustrating an image decoding apparatus according to another embodiment of the present invention.
- FIG. 5 is a diagram illustrating an upsampling process and a filtering process according to the present invention.
- FIG. 6 is a flowchart schematically showing an embodiment of a decoding method according to the present invention.
- FIG. 7 is a flowchart schematically showing another embodiment of a decoding method according to the present invention.
- the components in the drawings described in the present invention are shown independently for convenience of description of their different characteristic functions in the image encoding/decoding apparatus; this does not mean that each component is implemented as separate hardware or separate software.
- two or more of each configuration may be combined to form one configuration, or one configuration may be divided into a plurality of configurations.
- Embodiments in which each configuration is integrated and / or separated are also included in the scope of the present invention without departing from the spirit of the present invention.
- some components may not be essential components for performing the essential functions of the present invention, but may be optional components merely for improving performance.
- the present invention may be implemented including only the components essential for implementing the essence of the present invention, excluding the components used merely for improving performance, and a structure including only the essential components, excluding the optional components used for improving performance, is also included within the scope of the present invention.
- FIG. 1 is a block diagram schematically illustrating an image encoding apparatus according to an embodiment of the present invention.
- the image encoding apparatus may be applied to both a texture video and a depth video.
- the apparatus for encoding an image may include a motion estimator (ME) 110, a motion compensator (MC) 120, a down-sampler 130, a transformer (T) 135, a quantizer (Q) 140, an entropy coder 150, an inverse quantizer (IQ) 155, an inverse transformer (IT) 160, an up-sampler 170, a filter unit 180, and a reference picture buffer 190.
- the picture may be divided into at least one processing unit.
- the processing unit may be a macroblock, a prediction unit (PU), a transform unit (TU), or a coding unit (CU).
- the embodiments described below are based on processing units generated by dividing a picture to be encoded/decoded (or the current picture), and such a unit may be referred to as a block in some cases.
- the texture image and the depth image may include low-frequency regions, which are flat and monotonous, and high-frequency regions, in which detail characteristics are important.
- if the encoding process is performed on all pixel values of a low-frequency region where detail is not important, complexity may be excessively high relative to the coding efficiency gained. Since coding efficiency is not significantly reduced even when a low-frequency region is processed at low resolution, a method may be used in which the resolution of the low-frequency region is reduced through a down-sampler before encoding and restored through an up-sampler during reconstruction and/or decoding.
- the sampling ratio of the down sampler and the up sampler may be determined differently depending on the degree of monotony of the image, the degree of importance of the detail of the image, and / or the frequency of the image.
- the down sampling rate and the up sampling rate may be the same. A specific embodiment of the down / up sampling rate determination method will be described later.
- a method of reducing the resolution in the encoding process and performing up-sampling in the reconstruction and/or decoding process according to the frequency of the image may be referred to as a reduced-resolution update (RRU); hereinafter, "resolution reduction update" and "RRU" have the same meaning, and a resolution reduction update may also be referred to as a reduced resolution update. In the embodiment of FIG. 1, the above-described resolution reduction update method may be used.
- the motion predictor 110 may obtain a motion vector based on a reference picture stored in the reference picture buffer 190.
- the motion compensator 120 may generate a prediction block by performing motion compensation using the motion vector.
- the motion information used for prediction (e.g., a motion vector and a reference picture index) may be encoded by the entropy encoder 150 and transmitted to the decoder.
- the down sampler 130 may perform down sampling on a residual value (residual block) between the generated prediction block and the original block.
- the down-sampled residual block may be input to the transformer 135.
- the residual block may also be input directly to the transformer 135 without passing through the down-sampler 130. For example, when the region corresponding to the current block is a high-frequency region, the detail characteristics of the image are important, so the residual block may be input directly to the transformer 135 without down-sampling.
- the transformer 135 may transform the residual block (or the down-sampled residual block) in transform units and generate transform coefficients, and the quantizer 140 may quantize the transformed residual values to generate quantized coefficients.
- the entropy encoder 150 may output a bitstream by performing entropy encoding on the quantization coefficients.
- the inverse quantization unit 155 may inverse quantize values quantized by the quantization unit 140.
- the inverse transformer 160 may inversely transform the inverse quantized values by the inverse quantizer 155.
- the up sampler 170 may perform upsampling on the residual value and / or the residual block generated by the inverse quantizer 155 and the inverse transformer 160.
- the upsampled residual block may be combined with the prediction block generated by the motion compensator 120 to generate a reconstructed block.
- the filter unit 180 may perform filtering according to the present invention on the reconstruction block. Specific embodiments of the filter unit will be described later.
- the residual block generated by the inverse transformer 160 may be combined with the prediction block generated by the motion compensator 120 without passing through the up sampler 170. In this case, for example, the filter unit 180 may not perform filtering on the generated reconstruction block.
- the reference picture buffer 190 may store the generated reconstruction block and / or the reconstruction block on which the filtering is performed.
- the reconstructed blocks and / or pictures stored in the reference picture buffer 190 may be provided to the motion predictor 110 and the motion compensator 120.
- referring to FIG. 2, the image decoding apparatus includes an entropy decoder 210, an inverse quantizer (IQ) 220, an inverse transformer (IT) 230, an up-sampler 240, a motion compensator (MC) 250, a filter unit 260, and a reference picture buffer 270.
- the image decoding apparatus according to the embodiment of FIG. 2 may be applied to both texture video and depth video, similarly to the image encoding apparatus.
- the above-described resolution reduction update method may be used.
- the entropy decoder 210 may perform entropy decoding on the input bitstream.
- the motion information for generating the prediction block among the information decoded by the entropy decoder 210 may be provided to the motion compensator 250, and the residual value on which entropy decoding is performed may be input to the inverse quantizer 220.
- the inverse quantizer 220 may perform inverse quantization on the entropy-decoded residual values, and the inverse transformer 230 may perform inverse transformation on the result of the inverse quantization to derive the residual block.
- the up sampler 240 may increase the resolution reduced by the down sampler 130 of the image encoding apparatus by performing up sampling on the residual block derived from the inverse transformer 230. If down sampling is not performed on the residual block in the image encoding apparatus, upsampling may not be performed on the residual block derived by the inverse transformer 230.
- the motion compensator 250 may generate a prediction block based on the motion information provided by the entropy decoder 210 and previously decoded blocks and / or picture information provided by the reference picture buffer 270.
- the reconstruction block may be generated by adding the residual block derived from the inverse transform unit 230 and / or the residual block provided by the up sampler 240 to the prediction block.
- the filter unit 260 may perform filtering according to the present invention on the reconstructed block. Specific embodiments of the filtering will be described later.
- the reference picture buffer 270 may store the generated reconstructed picture (or reconstruction block) and/or the reconstructed picture (or reconstruction block) on which filtering has been performed, so that it may be used as a reference picture or a reference block.
- the reconstruction block may be provided to an output unit.
- the apparatus for encoding an image includes a predictor 310, a down-sampler 320, a transformer (T) 330, a quantizer (Q) 340, an entropy encoder 350, an inverse quantizer (IQ) 360, an inverse transformer (IT) 370, an up-sampler 380, a filter unit 385, and a reference picture buffer 390.
- the image encoding apparatus may be applied to both a texture image and a depth image.
- the above-described resolution reduction update (RRU) method may be used in the embodiment of FIG. 3.
- in the embodiment of FIG. 1, an encoding method for the case in which inter prediction is performed is described, but the present invention is not limited thereto.
- the present invention can be applied in the same or similar manner when not only inter prediction but also intra prediction is performed.
- the predictor 310 may include an inter predictor that performs inter prediction and an intra predictor that performs intra prediction.
- the inter prediction unit may generate a prediction block by performing prediction based on information of at least one picture of a previous picture or a subsequent picture of the current picture.
- the detailed operation of the inter prediction unit is the same as in the embodiment of FIG. 1, and a description thereof is omitted.
- the intra predictor may generate a prediction block by performing prediction based on pixel information in the current picture. Prediction mode information and motion information used for prediction may be encoded by the entropy encoder 350 together with the residual value and transmitted to the decoder.
- the image decoding apparatus includes an entropy decoder 410, an inverse quantizer (IQ) 420, an inverse transformer (IT) 430, an up-sampler 440, a predictor 450, a filter unit 455, and a reference picture buffer 460.
- the image decoding apparatus according to the embodiment of FIG. 4 may be applied to both the texture image and the depth image similarly to the image encoding apparatus of FIG. 3.
- the above-described resolution reduction update (RRU) method may be used in the embodiment of FIG. 4.
- the present invention is not limited thereto.
- the present invention can be applied in the same or similar manner when not only inter prediction but also intra prediction is performed.
- the predictor 450 may include an inter predictor that performs inter prediction and an intra predictor that performs intra prediction.
- the inter prediction unit may generate the prediction block based on the motion information provided by the entropy decoder 410 and previously decoded blocks and / or picture information provided by the reference picture buffer 460.
- the intra predictor may generate the predictive block by performing prediction based on the prediction block generation related information provided by the entropy decoder 410 and the pixel information in the current picture.
- FIG. 5 is a diagram illustrating an upsampling process and a filtering process according to the present invention.
- FIG. 5 is a diagram for describing the operation of the up-sampler illustrated in the embodiments of FIGS. 1 to 4 and of the filter applied to the reconstruction block. FIG. 5 shows the locations of samples to be generated or interpolated by the up-sampling process and of the down-sampled low-resolution samples.
- the sample may have the same meaning as the pixel in some cases, and this distinction may be easily made by those skilled in the art.
- the sample generated by the upsampling process is referred to as interpolation sample.
- in the description below, an arbitrary position within the current picture may refer to the location of an interpolation sample and/or of the reconstruction sample corresponding to that interpolation sample.
- the upsampler may perform upsampling on the input residual block. That is, interpolation samples at any location in the residual block (eg, (m, n), where m and n are integers greater than or equal to 0) can be generated by upsampling.
- the up sampler may perform a zero-padding process on the position of the sample to be interpolated.
- the zero padding process may mean a process of filling a value of 0 at the position of the sample to be interpolated.
- the up sampler may generate interpolated samples by performing copy or simple linear filtering based on the low resolution samples in the window.
- the reconstruction block may be generated by adding a prediction block and a residual block on which upsampling is performed.
- the filter unit may derive the final reconstructed sample of the (m, n) position by performing filtering on the reconstructed sample of the (m, n) position in the generated reconstruction block based on the low resolution sample in the window.
- the filtering may correspond to, for example, low-pass filtering.
- the upsampler may determine or set a window N (m, n) of a predetermined size including the position of (m, n) for generating an interpolation sample of the position (m, n).
- 520 of FIG. 5 shows the window N (m, n).
- the window N(m, n) may be a 2×2 square window, and the window may include 2×2 low-resolution samples.
- the size, shape and / or position of the window is not limited to the embodiment of FIG. 5 and may be determined or set differently according to implementation and / or needs.
- the upsampler may generate an interpolated sample at the position (m, n) by performing upsampling on the residual block.
- the up-sampler may perform a zero padding procedure on the location of the sample to be interpolated.
- the image encoding / decoding apparatus may generate a reconstruction block by adding a prediction block and a residual block on which upsampling is performed.
- the filter unit may derive the final reconstructed sample at position (m, n) by performing filtering (e.g., low-pass filtering) on the reconstructed sample 510 at position (m, n) in the generated reconstruction block, based on the low-resolution samples in the window 520.
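As a concrete illustration of the up-sampling and reconstruction steps above, the following Python sketch up-samples a down-sampled residual block by simple copying (one of the options mentioned in the text; zero-padding followed by interpolation is the other) and adds it to the prediction block. The block sizes and sample values are hypothetical.

```python
import numpy as np

def rru_upsample(low_res, ratio=2):
    # Copy each low-resolution residual sample into the ratio x ratio
    # area it covers (nearest-neighbor interpolation via Kronecker product).
    return np.kron(low_res, np.ones((ratio, ratio), dtype=low_res.dtype))

def reconstruct(pred, low_res_residual, ratio=2):
    # reconstruction block = prediction block + up-sampled residual block
    return pred + rru_upsample(low_res_residual, ratio)

pred = np.full((4, 4), 100, dtype=np.int64)             # hypothetical prediction block
low_res_residual = np.array([[2, -3],
                             [0,  1]], dtype=np.int64)  # 2x down-sampled residual
recon = reconstruct(pred, low_res_residual)
```

Each residual sample corrects the whole 2×2 area it covers; the subsequent filtering step described in the text would then smooth the resulting block within the window.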
- the above upsampling process and the filtering process may be performed based on upsampling ratio information and filter type information, respectively.
- the down sampling rate and the up sampling rate may be the same.
- Embodiments described later are described based on upsampling, but an upsampling rate value applied to upsampling may also be used as a downsampling rate value.
- the up / down sampling ratio may be determined differently depending on the degree of monotony of the image, the degree of importance of the detail of the image, and / or the degree of frequency of the image.
- when the image signal being encoded/decoded corresponds to a low-frequency region characterized by a continuous series of monotonous values, rather than a high-frequency region where detail is important, it may be advantageous to use a high sampling ratio in view of rate-distortion optimization (RDO).
- conversely, when the image signal being encoded/decoded corresponds to a high-frequency region where detail is important, it may be advantageous to use a low sampling ratio in view of rate-distortion optimization.
- the filter type may be determined differently according to the characteristics of the image.
- the filter unit may apply a filter that performs weak smoothing on an edge region including an edge, and may apply a filter that performs strong smoothing on a non-edge region that does not include an edge. Applying such an adaptive filter type allows salient features, such as edges in the image signal, to be better preserved.
- the image encoding apparatus may derive an optimal combination of upsampling (and / or downsampling) ratio and filter type in terms of rate-distortion optimization.
- the combination of upsampling (and / or downsampling) ratio and filter type may be represented by a reduced-resolution update type (RRU type) and / or a reduced-resolution update mode. That is, one RRU type may indicate and / or include one upsampling (and / or downsampling) rate and one filter type.
- the upsampling process and the filtering process for one block may be performed based on one RRU type information and / or one RRU mode information.
- the above-described RRU type may be independently applied to each of the texture image and the depth image. Therefore, the filter type applied to the texture image and the filter type applied to the depth image may be different from each other, and the sampling rate applied to the texture image and the sampling rate applied to the depth image may also be different.
- Table 1 below shows an embodiment of an RRU type according to the present invention.
- the embodiment of the RRU type is not limited to Table 1 described below, and a value, an upsampling rate value, and a filter type assigned to the RRU type may be determined differently from those of the embodiment of Table 1 as necessary.
- one RRU type may indicate one upsampling ratio and one filter type. For example, when the RRU type is 0, the upsampling ratio is 1, so neither upsampling nor filtering is performed. As another example, when the RRU type is 1, the upsampling ratio is 2, so upsampling by a factor of 2 is performed and filter A is applied. Specific examples of filter A and filter B shown in Table 1 will be described later.
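The joint signaling of sampling ratio and filter type can be sketched as a simple lookup. Only types 0 and 1 are fixed by the description of Table 1 above; the filter-B entry below is an assumed example, not taken from the document.

```python
# Hypothetical RRU type table modeled on the description of Table 1:
# type 0 -> ratio 1, no filtering; type 1 -> ratio 2, filter A.
# The type-2 entry (filter B) is an assumption for illustration.
RRU_TYPES = {
    0: {"upsampling_ratio": 1, "filter": None},
    1: {"upsampling_ratio": 2, "filter": "A"},
    2: {"upsampling_ratio": 2, "filter": "B"},  # assumed entry
}

def decode_rru_type(rru_type):
    # One RRU type jointly indicates the sampling ratio and the filter type.
    cfg = RRU_TYPES[rru_type]
    return cfg["upsampling_ratio"], cfg["filter"]

ratio, filt = decode_rru_type(1)
```

Because one code word carries both decisions, the decoder can switch sampling ratio and filter per block without separate syntax elements for each.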
- the image encoding apparatus may determine an optimal RRU type and/or RRU mode in view of rate-distortion optimization. In this case, by performing the resolution reduction update based on the determined RRU type and/or RRU mode, the image encoding apparatus can better preserve local characteristics of the image, such as salient features (e.g., edge regions) and high-frequency texture. That is, when the RRU type information is used, the combination of sampling ratio and filter type may be adaptively determined according to the characteristics of the image, enabling an adaptive resolution reduction update.
- the cost function for determining the RRU type may be applied on a macroblock basis.
- the present invention is not limited thereto, and the cost function for determining the RRU type may be applied based on a CU, a PU, and / or a TU.
- a rate-distortion optimization based RRU type determination method may be represented by Equation 1 according to an embodiment.
- J represents a cost function and λ represents a Lagrange multiplier.
- rru_type may indicate an RRU type
- mb_type may indicate a general macroblock type such as inter_16x16, skip, and the like.
- the cost function is calculated in units of CUs, PUs, or TUs instead of macroblocks
- the mb_type may be replaced with cu_type, pu_type, or tu_type, respectively.
- cu_type, pu_type, and tu_type may indicate the type of CU, the type of PU, and the type of TU, respectively.
- R_MV denotes the bit rate for encoding a motion vector (MV) and/or a reference picture index (e.g., ref_idx), and R_mb_type denotes the bit rate for encoding mb_type.
- when the cost function is calculated in units of CUs, PUs, or TUs instead of macroblocks, R_mb_type may be replaced with R_cu_type, R_pu_type, or R_tu_type, respectively.
- in this case, R_cu_type, R_pu_type, and R_tu_type may denote the bit rates for encoding cu_type, pu_type, and tu_type, respectively.
- R_rru_type denotes the bit rate for encoding the upsampling ratio and the filter type.
- D in Equation 1 may be calculated as Equation 2 below.
- MB denotes a macroblock
- X denotes the position of a sample existing in the input image
- X_p denotes the sample position in the reference image corresponding to the sample at position X.
- the cost function is calculated in units of CUs, PUs, or TUs instead of macroblocks
- the MBs may be replaced with CUs, PUs, or TUs.
- MV represents a motion vector
- X_p may refer to the position of the prediction sample in the reference picture indicated by the MV.
- I may represent a pixel value in the input image
- I_p may represent the predicted pixel value in the reference image.
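The bodies of Equations 1 and 2 appear only as images in the original publication and are missing here. From the symbol definitions above, they presumably take the standard Lagrangian rate-distortion form below; the choice of a squared-error distortion (rather than, e.g., absolute error) is an assumption:

```latex
J = D + \lambda \left( R_{\mathrm{MV}} + R_{\mathrm{mb\_type}} + R_{\mathrm{rru\_type}} \right)
\tag{1, reconstructed}
```

```latex
D = \sum_{X \in \mathrm{MB}} \bigl( I(X) - I_{p}(X_{p}) \bigr)^{2},
\qquad X_{p} = X + \mathrm{MV}
\tag{2, reconstructed}
```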
- the image encoding apparatus may find an upsampling ratio and a filter type that minimize the aforementioned cost function.
- the apparatus for encoding an image may determine an RRU type that minimizes a cost function as an optimal RRU type for the current block.
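The selection step can be sketched as an argmin of the Lagrangian cost over candidate RRU types. All numeric values below are made-up placeholders, and `select_rru_type` is a hypothetical helper, not code from the patent:

```python
# Hedged sketch: choose the RRU type minimizing J = D + lambda * R.
# Candidate distortions and rates are illustrative placeholders only.

def select_rru_type(candidates, lam):
    """candidates: dict mapping rru_type -> (distortion D, total rate R)."""
    best_type, best_cost = None, float("inf")
    for rru_type, (d, r) in candidates.items():
        j = d + lam * r  # Lagrangian rate-distortion cost
        if j < best_cost:
            best_type, best_cost = rru_type, j
    return best_type, best_cost

candidates = {0: (120.0, 10), 1: (80.0, 14), 2: (75.0, 20)}  # hypothetical
best, cost = select_rru_type(candidates, lam=2.0)
```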
- FIG. 6 is a flowchart schematically showing an embodiment of a decoding method according to the present invention.
- the decoding process according to the embodiment of FIG. 6 may be applied to both the texture image and the depth image.
- the encoder may encode and transmit information about the RRU type to the decoder.
- the decoder may receive and decode information about the RRU type from the encoder. In this case, the decoder may perform upsampling and filtering based on the information about the decoded RRU type.
- the decoder may not perform a resolution reduction update process (S610).
- the upsampling ratio is 1, so upsampling may not be performed and filtering may not be applied according to the filter type information.
- the decoder may perform upsampling on the residual block (S620).
- the decoder may perform upsampling at a rate of twice the width and height directions of the residual block, respectively.
- the decoder may generate a reconstructed block by adding the residual block on which upsampling has been performed with the prediction block (S630).
- the decoder may perform filtering on each reconstruction sample in the generated reconstruction block.
- the decoder may apply filter A to the reconstructed samples in the reconstruction block (S640).
- the decoder may apply filter B to the reconstructed samples in the reconstruction block (S650).
- specific examples of the filter A and the filter B will be described later.
- the decoder may perform upsampling on the residual block (S660).
- the decoder can perform upsampling at a ratio of four times in each of the width and height directions of the residual block.
- the decoder may generate a reconstructed block by adding the residual block on which upsampling has been performed with the prediction block (S670).
- the decoder may perform filtering on each reconstruction sample in the generated reconstruction block.
- the decoder may apply filter A to the reconstructed samples in the reconstruction block (S680).
- the decoder may apply filter B to the reconstructed samples in the reconstruction block (S690).
- specific examples of the filter A and the filter B will be described later.
- the encoder may perform the upsampling process and the filtering process as in the decoder, and thus the above-described upsampling method and the filtering method may be applied to the encoder in the same manner.
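The branches of FIG. 6 (S610-S690) can be sketched as a small dispatch on the RRU type. The type-to-(ratio, filter) mapping below is an assumed reading of Table 1 (which is not reproduced in this text), and nearest-neighbor upsampling is an illustrative placeholder, not the patent's actual interpolation:

```python
# Sketch of the FIG. 6 decode flow. RRU_TABLE and the nearest-neighbor
# upsampler are assumptions for illustration only.

RRU_TABLE = {0: (1, None), 1: (2, "A"), 2: (2, "B"), 3: (4, "A"), 4: (4, "B")}

def upsample_nn(block, ratio):
    """Nearest-neighbor upsampling by `ratio` in both width and height."""
    return [[v for v in row for _ in range(ratio)]
            for row in block for _ in range(ratio)]

def decode_block(residual, prediction, rru_type):
    """Upsample the residual per the RRU type, add the prediction block."""
    ratio, filter_type = RRU_TABLE[rru_type]
    if ratio > 1:
        residual = upsample_nn(residual, ratio)
    recon = [[r + p for r, p in zip(rr, pr)]
             for rr, pr in zip(residual, prediction)]
    # Filtering (filter A / filter B) would be applied to `recon` here.
    return recon, filter_type

residual = [[1, 2], [3, 4]]
prediction = [[10] * 4 for _ in range(4)]
recon, ftype = decode_block(residual, prediction, rru_type=1)
```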
- in the embodiments described below, two filter types, filter A and filter B, are used as reference examples.
- however, the filter types according to the present invention are not limited thereto, and the order in which filter types are allocated according to the RRU type and/or the number and kinds of filter types used may be determined differently according to implementation and/or need.
- note that the above-described embodiments may be applied not only to the texture image but also to the depth image, and that the filter applied to the texture image and the filter applied to the depth image may differ from each other. Therefore, in the embodiments described below, embodiments of the filter applied to the texture image and of the filter applied to the depth image are described independently.
- the filter unit may apply a bi-laterally weighted filter to the reconstructed image (or reconstructed block) corresponding to the texture image.
- the bidirectional weight filter may perform a weighted average based on pixel values in a window corresponding to the pixel to be filtered.
- the bidirectional weight filter applied to the reconstructed image (or reconstructed block) corresponding to the texture image may be represented by Equation 3 below.
- (m, n) may indicate the position of the pixel to be filtered and N (m, n) may indicate a window corresponding to the pixel to be filtered. Since the embodiment of the window has been described above, it will be omitted here.
- I (i, j) may represent the input pixel values of the filter
- (i, j) may represent the position of the input pixel present in the window N (m, n).
- I '(m, n) may indicate an output value of the filter for the pixel to be filtered at the position (m, n).
- in Equation 3, w(i, j) represents a weighting kernel; it may correspond to a monotonically decreasing function based on the geometric distance between (m, n) and (i, j) and the photometric distance between the two points.
- the geometric distance may mean the physical distance between the two points, and the photometric distance may indicate the difference in intensity and/or pixel value between the pixels at the two points. That is, the weight kernel of the filter applied to the texture image may be determined based not only on the geometric distance between the pixel to be filtered and the surrounding pixels in the window, but also on the intensity difference (and/or luma pixel value difference) between the two pixels. In this case, the filter unit may apply a greater weight to a pixel more similar to the pixel to be filtered. Equation 4 below shows an embodiment of the above-described weight kernel.
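The bodies of Equations 3 and 4 are not reproduced in this text. From the definitions above they presumably take the standard normalized bilateral form; the Gaussian shape of the kernel is an assumption consistent with the spreading parameters defined next:

```latex
I'(m,n) = \frac{\displaystyle\sum_{(i,j)\in N(m,n)} w(i,j)\, I(i,j)}
               {\displaystyle\sum_{(i,j)\in N(m,n)} w(i,j)}
\tag{3, reconstructed}
```

```latex
w(i,j) = \exp\!\left(-\frac{(i-m)^2+(j-n)^2}{2\sigma_s^2}\right)
         \exp\!\left(-\frac{\bigl(I(i,j)-I(m,n)\bigr)^2}{2\sigma_I^2}\right)
\tag{4, reconstructed}
```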
- I (i, j) may represent a pixel input value at position (i, j)
- I (m, n) may represent a pixel input value at position (m, n).
- sigma s is a spreading parameter indicating a smoothing degree in a spatial domain
- sigma I is a spreading parameter indicating a smoothing degree in a pixel-intensity region.
- the bidirectional weight filter described above may have a non-linear characteristic and may perform smoothing on the image while preserving the edge of the image.
- filter A and filter B may correspond to the bidirectional weight filter described above.
- filter A may be a filter applied to a textureless region (or block) and/or a region (or block) in which no edge exists
- filter B may be a filter applied to a highly textured region (or block) and/or an edge region (or block) in which an edge exists.
- a bidirectional weighting filter that performs strong smoothing may be applied to the textureless region (or block) and / or the region (or block) where no edge exists.
- the spreading parameters sigma_s and sigma_I applied to filter A may be spreading parameters having relatively large values.
- a bidirectional weighting filter that performs weak-smoothing may be applied for a highly-textured region (or block) and / or an edge region (or block) in which an edge exists.
- the spreading parameters sigma_s and sigma_I applied to filter B may be spreading parameters having relatively small values.
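The role of the spreading parameters can be illustrated with a minimal bilateral filter in the Equation 3/4 style. The square window, the Gaussian kernel form, and the example sigma values are assumptions for illustration; `bilateral_filter` is a hypothetical helper, not code from the patent:

```python
import math

def bilateral_filter(img, sigma_s, sigma_i, radius=1):
    """Bilateral weighted filtering of a 2D list `img` (Equation 3/4 style).

    Larger sigmas give stronger smoothing (the "filter A" role); smaller
    sigmas give weaker, more edge-preserving smoothing ("filter B").
    The square (2*radius+1) window is an illustrative assumption.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for m in range(h):
        for n in range(w):
            num = den = 0.0
            for i in range(max(0, m - radius), min(h, m + radius + 1)):
                for j in range(max(0, n - radius), min(w, n + radius + 1)):
                    # spatial (geometric) term times range (photometric) term
                    wgt = math.exp(-((i - m) ** 2 + (j - n) ** 2) / (2 * sigma_s ** 2)) \
                        * math.exp(-((img[i][j] - img[m][n]) ** 2) / (2 * sigma_i ** 2))
                    num += wgt * img[i][j]
                    den += wgt
            out[m][n] = num / den
    return out

# A step edge: a small sigma_i leaves the edge almost intact,
# while a very large sigma_i blurs it toward a plain spatial average.
edge = [[0, 0, 100, 100] for _ in range(4)]
weak = bilateral_filter(edge, sigma_s=1.0, sigma_i=5.0)
strong = bilateral_filter(edge, sigma_s=1.0, sigma_i=1000.0)
```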
- the filter unit may perform filtering on the reconstructed image corresponding to the depth image.
- the texture image and the depth image may differ from each other but may have similar tendencies. Therefore, when the texture image (or picture or block) corresponding to the depth image (or picture or block) to be filtered is valid, the filter unit may perform filtering on the reconstructed image corresponding to the depth image based on the pixel values of the decoded texture image. In this case, for example, the filter unit may perform filtering on the filtering target pixel based on those pixels in the texture picture that lie in the region corresponding to the window of the filtering target pixel.
- the filter applied to the reconstructed image corresponding to the depth image may perform filtering using information related to the texture image.
- pixels in the depth image are referred to as depth pixels
- pixels in the texture image are referred to as texture pixels.
- compared to the texture image, the depth image has the characteristic that details are unimportant in the remaining regions except the edge region; in the depth image, the sharpness of the edge portion is important while the remaining portions may not be. Therefore, a filter that considers the characteristics of the depth image needs to be provided.
- the filter A and the filter B applied to the reconstructed image (or block) corresponding to the depth image may be a bidirectional weighting filter.
- the filter A may be a bidirectional weighted Winner-Take-All (WTA) filter
- the filter B may be a bidirectional weighted median filter.
- the pixel value of the depth pixel having the highest weight score (and/or weight frequency) in the window corresponding to the pixel to be filtered may be determined as the filter output value (the filtered pixel value).
- the bidirectional weighted WTA filter applied to the reconstructed image corresponding to the depth image may be represented by Equation 5 below.
- (m, n) may represent the position of the depth pixel to be filtered
- N (m, n) may represent a window corresponding to the depth pixel to be filtered. Since the embodiment of the window has been described above, it will be omitted here.
- D (i, j) may represent input depth pixel values of the filter, and (i, j) may represent a position of an input pixel existing in the window N (m, n).
- D ′ (m, n) may indicate an output value of the filter for the depth pixel to be filtered at the position (m, n).
- in Equation 5, w(i, j) denotes a weighting kernel and may correspond to a monotonically decreasing function based on the geometric distance between (m, n) and (i, j) and the photometric distance between the two points.
- the geometric distance may mean a physical distance between two points
- the luminance (photometric) distance may correspond to the intensity difference and/or texture pixel value difference between the texture pixels I(m, n) and I(i, j) existing at the same positions as the two points. That is, the weighting kernel of the filter applied to the depth image may be determined based not only on the geometric distance between the depth pixel to be filtered and the neighboring pixels in the window, but also on the intensity difference (and/or luma pixel value difference) between the texture pixels at the same positions as those two pixels. In this case, the filter unit may apply a greater weight to a pixel more similar to the pixel to be filtered.
- sigma s is a spreading parameter indicating a smoothing degree in a spatial domain
- sigma I is a spreading parameter indicating a smoothing degree in a pixel-intensity region.
- the bidirectional weighted WTA filter may derive a weight score for each depth pixel in the window based on the texture pixel.
- the kernel term in Equation 5 described above may correspond to the weighting score.
- the filter unit may determine the depth pixel value having the highest weight score among the depth pixels in the window as the output value for the depth pixel to be filtered. That is, according to the bidirectional weighted WTA filter described above, the filter unit may select the value having the highest weight score (and/or weight frequency) among the pixels in the window corresponding to the depth pixel to be filtered, and derive it as the filtered pixel value for that depth pixel.
- the filter part does not introduce or derive new depth values that do not exist prior to filtering, so that the object boundary in the depth image and / or the characteristics of the boundary can be better maintained.
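A minimal sketch of the bilateral-weighted WTA idea: accumulate a bilateral weight for each candidate depth value in the window, with the weights driven by the co-located texture pixels, and output the value with the highest accumulated score. The Gaussian kernel form is assumed by analogy with Equation 4 (Equation 5 is not reproduced in this text), and `wta_filter_pixel` is a hypothetical helper:

```python
import math
from collections import defaultdict

def wta_filter_pixel(depth, texture, m, n, sigma_s, sigma_i, radius=1):
    """Bilateral-weighted WTA output for the depth pixel at (m, n).

    Scores each candidate depth value by the summed bilateral weight of the
    window pixels carrying it; the range term uses the co-located *texture*
    pixels. Returns an already-existing depth value, never a new one.
    """
    h, w = len(depth), len(depth[0])
    score = defaultdict(float)
    for i in range(max(0, m - radius), min(h, m + radius + 1)):
        for j in range(max(0, n - radius), min(w, n + radius + 1)):
            wgt = math.exp(-((i - m) ** 2 + (j - n) ** 2) / (2 * sigma_s ** 2)) \
                * math.exp(-((texture[i][j] - texture[m][n]) ** 2) / (2 * sigma_i ** 2))
            score[depth[i][j]] += wgt
    return max(score, key=score.get)  # winner takes all

depth = [[10, 10, 80], [10, 99, 80], [10, 10, 80]]    # 99 is a noisy sample
texture = [[0, 0, 200], [0, 0, 200], [0, 0, 200]]     # co-located texture
filtered = wta_filter_pixel(depth, texture, 1, 1, sigma_s=2.0, sigma_i=10.0)
```

The noisy center sample 99 loses to the well-supported value 10, and the output is always one of the depth values already present in the window.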
- the depth pixel value corresponding to the median based on the weight score (and/or weight frequency) within the window corresponding to the pixel to be filtered may be determined as the filter output value (the filtered pixel value).
- the bidirectional weighted median filter applied to the reconstructed image corresponding to the depth image may be represented by Equation 6 below.
- (m, n) may represent the position of the depth pixel to be filtered
- N (m, n) may represent a window corresponding to the depth pixel to be filtered. Since the embodiment of the window has been described above, it will be omitted here.
- D (i, j) may represent input depth pixel values of the filter, and (i, j) may represent a position of an input pixel existing in the window N (m, n).
- D ′ (m, n) may indicate an output value of the filter for the depth pixel to be filtered at the position (m, n).
- in Equation 6, w(i, j) denotes a weighting kernel and may correspond to a monotonically decreasing function based on the geometric distance between (m, n) and (i, j) and the photometric distance between the two points.
- the geometric distance may mean a physical distance between two points
- the luminance (photometric) distance may correspond to the intensity difference and/or texture pixel value difference between the 'texture pixels' I(m, n) and I(i, j) existing at the same positions as the two points.
- that is, the weighting kernel of the filter applied to the depth image may be determined based not only on the geometric distance between the depth pixel to be filtered and the surrounding pixels in the window, but also on the intensity difference (and/or luma pixel value difference) between the texture pixels at the same positions as those two pixels.
- the filter unit may apply a greater weight to a pixel similar to the pixel to be filtered.
- sigma s is a spreading parameter indicating a smoothing degree in a spatial domain
- sigma I is a spreading parameter indicating a smoothing degree in a pixel-intensity region.
- 'Scale' is a constant, and represents a value multiplied so that C_k has a value within a predetermined appropriate range (for example, 1 to 100).
- ⌊x⌋ in Equation 6 may represent the largest integer value less than or equal to x.
- C_i corresponding to D_i means that the value D_i is listed C_i times.
- D_i may have a larger value as i becomes larger, and D_tot may represent the largest depth pixel value in the window.
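The body of Equation 6 is not reproduced in this text. A plausible reconstruction, consistent with the descriptions of Scale, the floor operation, C_k, and the ordered values D_1 < ... < D_tot above, is the quantized weighted-median rule below; the exact form is an assumption:

```latex
C_k = \left\lfloor \mathrm{Scale} \cdot
      \sum_{\substack{(i,j)\in N(m,n)\\ D(i,j)=D_k}} w(i,j) \right\rfloor,
\qquad
D'(m,n) = D_{k^{*}},\quad
k^{*} = \min\Bigl\{\, k : \sum_{i=1}^{k} C_i \;\ge\;
        \tfrac{1}{2}\sum_{i=1}^{\mathrm{tot}} C_i \,\Bigr\}
\tag{6, reconstructed}
```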
- the bidirectional weighted median filter may derive a weight score for each depth pixel in the window based on the texture pixel.
- the weight score may correspond to C k .
- the filter unit may find a depth pixel corresponding to an intermediate value among depth pixels in the window based on the derived weight score.
- the pixel value of the depth pixel corresponding to the derived intermediate value may be determined as an output value for the pixel to be filtered.
- the filter unit may derive the median based on the weight score (and/or weight frequency) among the pixels in the window corresponding to the depth pixel to be filtered, and determine the derived median as the filter output value (the filtered pixel value) for the depth pixel to be filtered.
- the filter part does not introduce or derive a new depth value that does not exist prior to filtering, so that the object boundary in the depth image and / or the characteristics of the boundary can be better maintained.
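A minimal sketch of the bilateral-weighted median: collect (depth value, bilateral weight) pairs over the window, again with texture-driven weights, and return the smallest depth value whose cumulative weight reaches half of the total. The Scale/floor quantization of the scores is omitted, the kernel form is assumed, and `weighted_median_pixel` is a hypothetical helper:

```python
import math

def weighted_median_pixel(depth, texture, m, n, sigma_s, sigma_i, radius=1):
    """Bilateral-weighted median output for the depth pixel at (m, n).

    The weighted median is the smallest depth value whose cumulative
    weight reaches half of the total window weight. Only depth values
    already present in the window can be returned.
    """
    h, w = len(depth), len(depth[0])
    pairs = []
    for i in range(max(0, m - radius), min(h, m + radius + 1)):
        for j in range(max(0, n - radius), min(w, n + radius + 1)):
            wgt = math.exp(-((i - m) ** 2 + (j - n) ** 2) / (2 * sigma_s ** 2)) \
                * math.exp(-((texture[i][j] - texture[m][n]) ** 2) / (2 * sigma_i ** 2))
            pairs.append((depth[i][j], wgt))
    pairs.sort()                         # ascending depth value
    total = sum(wgt for _, wgt in pairs)
    acc = 0.0
    for value, wgt in pairs:
        acc += wgt
        if acc >= total / 2:
            return value                 # an existing depth value only
    return pairs[-1][0]

depth = [[10, 10, 80], [10, 99, 80], [10, 10, 80]]
texture = [[0, 0, 200], [0, 0, 200], [0, 0, 200]]
filtered = weighted_median_pixel(depth, texture, 1, 1, sigma_s=2.0, sigma_i=10.0)
```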
- the filter unit may perform filtering on the reconstructed image corresponding to the depth image.
- the filter unit may perform filtering based on pixel values of the decoded texture image.
- the texture image (or picture or block) corresponding to the filtering target depth image (or picture or block) may not be valid.
- in that case, the filter unit may not use texture-image-related information when filtering the filtering target depth image.
- when the decoding processes of the filtering target depth image and the corresponding texture image are performed in parallel, the decoding of the texture image may not be completed at the time the filtering target image is filtered. Even in this case, the filter unit may not use texture-image-related information when performing filtering of the filtering target depth image.
- accordingly, the filter unit may perform filtering based on the pixels in the filtering target depth image itself, not on the texture image.
- compared to the texture image, the depth image has the characteristic that details are unimportant in the remaining regions except the edge region; in the depth image, the sharpness of the edge portion is important while the remaining portions may not be. Therefore, a filter that considers the characteristics of the depth image needs to be provided.
- the filter unit may apply a bidirectional weighted median filter to the reconstructed image (or block) corresponding to the depth image.
- the filter unit may derive the median based on the weight score (and/or weight frequency) among the pixels in the window corresponding to the depth pixel to be filtered, and the derived median may be determined as the filter output value (the filtered pixel value).
- the bidirectional weighted median filter applied to the reconstructed image corresponding to the depth image may be represented by Equation 7 below.
- (m, n) may represent the position of the depth pixel to be filtered
- N (m, n) may represent a window corresponding to the depth pixel to be filtered. Since the embodiment of the window has been described above, it will be omitted here.
- D (i, j) may represent input depth pixel values of the filter, and (i, j) may represent a position of an input pixel existing in the window N (m, n).
- D ′ (m, n) may indicate an output value of the filter for the depth pixel to be filtered at the position (m, n).
- in Equation 7, w(i, j) denotes a weighting kernel and may correspond to a monotonically decreasing function based on the geometric distance between (m, n) and (i, j) and the photometric distance between the two points.
- the geometric distance may mean a physical distance between the two points
- the luminosity (photometric) distance may represent the intensity difference and/or depth pixel value difference between the 'depth pixels' D(m, n) and D(i, j) present at the two points. That is, the weight kernel of the filter applied to the depth image may be determined based not only on the geometric distance between the depth pixel to be filtered and the surrounding pixels in the window, but also on the intensity difference (and/or depth pixel value difference) between those two pixels. In this case, the filter unit may apply a greater weight to a pixel more similar to the pixel to be filtered.
- sigma s is a spreading parameter indicating a smoothing degree in a spatial domain
- sigma D is a spreading parameter indicating a smoothing degree in a pixel-intensity region.
- 'Scale' is a constant, and represents a value multiplied so that C_k has a value within a predetermined appropriate range (for example, 1 to 100).
- C_i corresponding to D_i means that the value D_i is listed C_i times.
- D i may have a larger value as i is larger, and D tot may represent the largest depth pixel value in the window.
- the bi-directional weighted median filter may derive a weight score for each depth pixel in the window based on the depth pixel in the window.
- the weight score may correspond to C k .
- the filter unit may find a depth pixel corresponding to an intermediate value among depth pixels in the window based on the derived weight score.
- the pixel value of the depth pixel corresponding to the derived intermediate value may be determined as an output value for the pixel to be filtered.
- the filter unit may derive the median based on the weight score (and/or weight frequency) among the pixels in the window corresponding to the depth pixel to be filtered, and determine the derived median as the filter output value (the filtered pixel value) for the depth pixel to be filtered.
- the filter part does not introduce or derive a new depth value that does not exist prior to filtering, so that the object boundary in the depth image and / or the characteristics of the boundary can be better maintained.
- filter A and filter B may correspond to the bidirectional weighted median filter described above.
- filter A may be a filter applied to a non-edge region (or block) in which no edge exists
- filter B may be a filter applied to an edge region (or block) in which an edge exists.
- a bidirectional weighted median filter that performs strong smoothing may be applied for a non-edge region (or block) where no edge exists. Therefore, the spreading parameters sigma s and sigma D applied to filter A may be spreading parameters having a relatively large value. Also, for an edge region (or block) in which an edge exists, a bidirectional weighted median filter that performs weak smoothing may be applied. Therefore, the spreading parameters sigma s and sigma D applied to the filter B may be spreading parameters having a relatively small value.
- FIG. 7 is a flowchart schematically showing another embodiment of a decoding method according to the present invention.
- the decoding process according to the embodiment of FIG. 7 may be applied to both the texture image and the depth image.
- the above-described resolution reduction update (RRU) method may be applied to the embodiment of FIG. 7.
- the decoder may generate a prediction block (S710).
- the decoder may generate a predictive block based on previously decoded blocks and / or picture information provided in the reference picture buffer.
- intra prediction the decoder may generate a prediction block by performing prediction based on pixel information in the current picture.
- the decoder may generate a residual block (S720). Since a specific embodiment of the residual block generation process has been described above with reference to FIG. 2, it will be omitted here.
- the decoder may receive and decode RRU type information from the encoder (S730).
- the encoder may determine the RRU type based on the rate-distortion optimization and then encode and transmit the RRU type information to the decoder.
- the RRU type information may include sampling rate information and / or filter type information.
- since this filter is applied to the reconstructed block generated based on the residual block on which upsampling has been performed, it may also be called an RRU filter or an upsampling filter.
- the encoder may encode and transmit sampling rate information and / or RRU filter information for each residual block to the decoder.
- the sampling rate information may be represented by the syntax element sampling_ratio
- the RRU filter type information may be represented by the syntax element rru_filter_type.
- the sampling rate information and / or the RRU filter information may be included in a residual block level and / or a residual block header (eg, residual ()) and transmitted from the encoder to the decoder through a bitstream.
- the decoder may receive and decode the sampling rate information and / or the RRU filter information.
- the sampling rate information and the RRU filter information transmitted to the decoder may be defined by syntax elements as included in Table 2 according to an embodiment.
- sampling_ratio may represent up-sampling ratio information.
- sampling_ratio may represent an upsampling rate value itself, or may represent a value used to derive an actual upsampling ratio.
- the decoder may derive the actual upsampling ratio by the following Equation (8).
- Up-sampling ratio = sampling_ratio ≪ 1
- the up-sampling ratio may indicate an actual upsampling ratio applied to the upsampling process.
- for example, when sampling_ratio is 2, the actual upsampling ratio value applied to the upsampling process may be 4.
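The operator in Equation 8 is garbled in this text. Reading it as a one-bit left shift is consistent with a sampling_ratio of 2 yielding an actual ratio of 4, but the exact formula (e.g., sampling_ratio ≪ 1 versus 1 ≪ sampling_ratio) is an assumption:

```python
# Assumed reading of Equation 8: the actual ratio is derived from
# sampling_ratio by a one-bit left shift. This is a guess consistent
# with the sampling_ratio = 2 -> ratio 4 example, not a confirmed formula.

def derive_upsampling_ratio(sampling_ratio):
    return sampling_ratio << 1  # equivalent to sampling_ratio * 2

ratio = derive_upsampling_ratio(2)
```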
- rru_filter_type may indicate an upsampling filter type. That is, rru_filter_type may indicate the type of upsampling filter used for filtering the reconstructed block generated based on the residual block on which upsampling is performed.
- the encoder may transmit the RRU type information to the decoder through one syntax element indicating the RRU type, unlike in the above-described embodiment.
- the RRU type information may be represented by a syntax element rru_type .
- the encoder and the decoder may store the same look-up table indicating the relationship between the RRU type, the upsampling rate, and the RRU filter type.
- the encoder can determine the optimal upsampling rate and the optimal RRU filter type based on the rate-distortion optimization.
- the encoder may determine the RRU type corresponding to the determined up-sampling rate and the RRU filter type based on the lookup table, and may transmit information about the determined RRU type to the decoder through one syntax element rru_type.
- the decoder may receive and decode the RRU type information transmitted from the encoder.
- the decoder may determine the RRU type for the current block based on the decoded RRU type information, and may derive an upsampling rate and an RRU filter type based on the determined RRU type and the stored lookup table.
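The single-syntax-element signaling can be sketched as a lookup table shared by encoder and decoder. The table contents below are hypothetical placeholders (the patent's actual table is not reproduced in this text):

```python
# Sketch of rru_type signaling via one shared lookup table mapping
# rru_type -> (upsampling ratio, RRU filter type). Table contents are
# illustrative assumptions only.

RRU_LOOKUP = {0: (1, None), 1: (2, "A"), 2: (2, "B"), 3: (4, "A"), 4: (4, "B")}
INVERSE = {v: k for k, v in RRU_LOOKUP.items()}

def encode_rru_type(ratio, filter_type):
    """Encoder side: map the RD-optimal (ratio, filter) to an rru_type index."""
    return INVERSE[(ratio, filter_type)]

def decode_rru_type(rru_type):
    """Decoder side: recover (ratio, filter) from the signaled index."""
    return RRU_LOOKUP[rru_type]

sent = encode_rru_type(4, "B")        # encoder signals a single index
ratio, ftype = decode_rru_type(sent)  # decoder restores both values
```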
- the decoder may perform upsampling on the generated residual block based on the decoded RRU type information (S740). Since the RRU type information includes upsampling rate information, the decoder may perform upsampling on the residual block according to the upsampling rate indicated by the upsampling rate information. However, when the upsampling ratio for the residual block is 1, upsampling may not be performed for the residual block.
- the decoder may generate a reconstruction block based on the prediction block and the residual block on which upsampling is performed (S750). In this case, if upsampling is not performed on the residual block, the residual block used to generate the reconstructed block may be a residual block on which upsampling is not performed.
- the decoder may perform filtering on each reconstruction sample in the generated reconstruction block (S760). Since the above-described RRU type information includes filter type information, the decoder may perform filtering based on the filter type information. Since specific embodiments of the filter applied to the texture image and the filter applied to the depth image have been described above, a description thereof will be omitted.
- the present invention may perform resolution reduction update based on the RRU type and / or RRU mode, so that characteristics of an image such as an edge may be better maintained. That is, when the RRU type information is used, the sampling rate or the combination of the filter types may be changed according to the characteristics of the image, so that adaptive resolution reduction updating may be possible.
- the depth image has a feature that the detail is less important than the texture image in most regions except the edge region.
- the upscaling method and / or resolution reduction update method described above will be particularly efficient when applied to depth images in terms of rate-distortion optimization.
Abstract
According to the present invention, an image decoding method comprises: receiving a bitstream; generating a prediction block and a residual block for a current picture based on the bitstream; performing upsampling on the residual block according to reduced-resolution update (RRU) type information from the bitstream; generating a reconstruction block based on the prediction block and the upsampled residual block; and performing filtering on the reconstruction block according to the RRU type information. According to the present invention, image coding efficiency is increased and complexity is reduced.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201161499128P | 2011-06-20 | 2011-06-20 | |
| US61/499,128 | 2011-06-20 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2012177015A2 true WO2012177015A2 (fr) | 2012-12-27 |
| WO2012177015A3 WO2012177015A3 (fr) | 2013-04-04 |
Family
ID=47423059
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2012/004694 Ceased WO2012177015A2 (fr) | 2011-06-20 | 2012-06-14 | Procédé et dispositif de codage/décodage d'image |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2012177015A2 (fr) |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1730968A2 (fr) * | 2004-04-02 | 2006-12-13 | THOMSON Licensing | Procede et systeme de codage video echelonnable avance |
| WO2007129840A1 (fr) * | 2006-05-04 | 2007-11-15 | Electronics And Telecommunications Research Institute | Procédé et dispositif de codage d'images en mouvement multivision |
| WO2009001793A1 (fr) * | 2007-06-26 | 2008-12-31 | Kabushiki Kaisha Toshiba | Procédé et appareil de codage et décodage d'image |
| KR101418095B1 (ko) * | 2008-07-21 | 2014-07-11 | 에스케이 텔레콤주식회사 | 예측 방향을 이용한 영상 부호화/복호화 장치 및 방법 |
| WO2010146771A1 (fr) * | 2009-06-19 | 2010-12-23 | 三菱電機株式会社 | Dispositif de codage d'image, dispositif de décodage d'image, procédé de codage d'image et procédé de décodage d'image |
- 2012-06-14: WO PCT/KR2012/004694 patent/WO2012177015A2/fr not_active Ceased
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019054561A1 (fr) * | 2017-09-15 | 2019-03-21 | 서울과학기술대학교 산학협력단 | Dispositif et procédé de codage d'images à 360 degrés, et support d'enregistrement pour la mise en œuvre de ce procédé |
| US10986342B2 (en) | 2017-09-15 | 2021-04-20 | Foundation For Research And Business, Seoul National University Of Science And Technology | 360-degree image encoding apparatus and method, and recording medium for performing the same |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2012177015A3 (fr) | 2013-04-04 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12802516 Country of ref document: EP Kind code of ref document: A2 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 12802516 Country of ref document: EP Kind code of ref document: A2 |