WO2024258110A1 - Image encoding/decoding method and device, and recording medium storing a bitstream - Google Patents
- Publication number
- WO2024258110A1 (PCT/KR2024/007749)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- block vector
- vector
- merge candidate
- prediction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/513—Processing of motion vectors
- H04N19/70—Characterised by syntax aspects related to video coding, e.g. related to compression standards
- H04N19/88—Pre-processing or post-processing involving rearrangement of data among different coding units, e.g. shuffling, interleaving, scrambling or permutation of pixel data or permutation of transform coefficient data among different blocks
Definitions
- the present invention relates to a video encoding/decoding method, a device, and a recording medium storing a bitstream. Specifically, the present invention relates to a video encoding/decoding method, a device, and a recording medium storing a bitstream based on improved intra block copy merge mode prediction.
- Intra block copy prediction offers high prediction accuracy for screen content containing repeated similar shapes.
- encoding efficiency can therefore be improved by applying intra block copy prediction, and various tools for intra block copy prediction are being discussed to improve encoding efficiency when only intra prediction is applicable.
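The copy operation underlying intra block copy can be sketched as follows: a prediction block is formed by copying already-reconstructed samples of the current picture, displaced by a block vector. The function name, list-of-lists picture layout, and the toy 8x8 repeated pattern are illustrative assumptions, not part of the publication.

```python
def ibc_predict(recon, x, y, bw, bh, bv):
    """Copy a bw x bh prediction block from the already-reconstructed
    region of the current picture, displaced by block vector bv = (bvx, bvy)."""
    bvx, bvy = bv
    rx, ry = x + bvx, y + bvy
    # IBC block vectors must point into the already-reconstructed area.
    assert rx >= 0 and ry >= 0, "block vector points outside the picture"
    return [row[rx:rx + bw] for row in recon[ry:ry + bh]]

# 8x8 screen-content picture whose top-left 4x4 pattern repeats at (4, 4);
# a block vector of (-4, -4) predicts the second copy exactly from the first.
pic = [[(r % 4) * 4 + (c % 4) for c in range(8)] for r in range(8)]
pred = ibc_predict(pic, 4, 4, 4, 4, bv=(-4, -4))
```

Because the reference block is an exact repetition, the residual here would be all zeros, which is why IBC is effective for screen content.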
- the purpose of the present invention is to provide a video encoding/decoding method and device with improved encoding/decoding efficiency.
- the present invention aims to provide a recording medium storing a bitstream generated by an image decoding method or device according to the present invention.
- the present invention aims to provide a method for correcting a block vector derived in an intra block copy merge mode to solve the above problems.
- a video decoding method may include a step of determining a prediction mode of a current block as an intra block copy merge mode, a step of determining a block vector merge candidate list of the current block, a step of deriving a block vector of the current block based on the block vector merge candidate list, a step of correcting the block vector using differential block vector information, and a step of generating a prediction block of the current block based on the corrected block vector.
- the block vector merge candidate list can be determined using block vector information of blocks surrounding the current block.
- the block vector merge candidates in the determined block vector merge candidate list can be rearranged.
- the block vector merge candidates can be reordered based on the similarity between the template of the reference block indicated by the block vector merge candidates and the template of the current block.
- a block vector merge candidate corresponding to a template of a reference block having a high similarity to the template of the current block may be rearranged to have a high priority in the block vector merge candidate list.
- the similarity can be determined by either the SAD (sum of absolute differences) method or the SSE (sum of square error) method.
- the derived block vector can be derived from one of a predetermined number of block vector merge candidates corresponding to a high priority in the block vector merge candidate list.
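The template-cost reordering described above can be sketched as follows; the candidate whose reference template is most similar (lowest SAD or SSE) to the current block's template is moved to the front of the list. The function names and the toy templates are illustrative assumptions.

```python
def sad(a, b):
    """Sum of absolute differences between two sample lists."""
    return sum(abs(p - q) for p, q in zip(a, b))

def sse(a, b):
    """Sum of squared errors between two sample lists."""
    return sum((p - q) ** 2 for p, q in zip(a, b))

def reorder_bv_candidates(candidates, cur_template, ref_template_of, cost=sad):
    """Sort block vector merge candidates so that the candidate whose reference
    template best matches the current block's template gets the highest priority."""
    return sorted(candidates, key=lambda bv: cost(cur_template, ref_template_of(bv)))

# Toy example: candidate (-4, 0) points at a nearly identical template.
cur_template = [10, 12, 14, 16]
ref_templates = {(-4, 0): [10, 12, 15, 16], (-8, 0): [0, 0, 0, 0]}
ranked = reorder_bv_candidates(list(ref_templates), cur_template, ref_templates.get)
```

Only a predetermined number of the highest-priority candidates would then be kept as candidates for the derived block vector.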
- the differential block vector information may include direction information and distance information.
- the direction information may include horizontal direction information and vertical direction information.
- the distance information may include vertical distance information and horizontal distance information.
- the method further comprises a step of determining whether to correct the derived block vector, and a step of obtaining the differential block vector information when it is determined that the derived block vector is corrected, wherein the derived block vector can be corrected based on the differential block vector information according to the determination that the derived block vector is corrected.
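The correction step can be sketched as adding a signaled differential (direction times distance) to the merged block vector when the correction flag is set. The index-to-direction table and the power-of-two distance table below are illustrative assumptions in the spirit of MMVD-style signaling, not the publication's normative tables.

```python
# Hypothetical signaling tables (values are illustrative, not normative):
DIRECTIONS = {0: (1, 0), 1: (-1, 0), 2: (0, 1), 3: (0, -1)}   # right, left, down, up
DISTANCES = [1, 2, 4, 8, 16, 32]

def correct_block_vector(bv, refine_flag, dir_idx=0, dist_idx=0):
    """If correction is signaled, add direction * distance to the derived BV;
    otherwise return the derived BV unchanged."""
    if not refine_flag:
        return bv
    dx, dy = DIRECTIONS[dir_idx]
    d = DISTANCES[dist_idx]
    return (bv[0] + dx * d, bv[1] + dy * d)
```

For example, a derived block vector of (-16, -8) with direction index 0 (right) and distance index 2 (step 4) would be corrected to (-12, -8).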
- a video encoding method may include a step of determining a prediction mode of a current block as an intra block copy merge mode, a step of determining a block vector merge candidate list of the current block, a step of deriving a block vector of the current block based on the block vector merge candidate list, a step of correcting the block vector using differential block vector information, and a step of generating a prediction block of the current block based on the corrected block vector.
- a non-transitory computer-readable recording medium can store a bitstream generated by the image encoding method.
- a bitstream transmission method can transmit a bitstream generated by the image encoding method.
- a video encoding/decoding method and device with improved encoding/decoding efficiency can be provided.
- a method for correcting a block vector derived by an intra block copy merge mode can be provided.
- prediction accuracy can be improved by generating a prediction block based on a corrected block vector.
- Figure 1 is a block diagram showing the configuration according to one embodiment of an encoding device to which the present invention is applied.
- FIG. 2 is a block diagram showing the configuration of one embodiment of a decoding device to which the present invention is applied.
- FIG. 3 is a diagram schematically showing a video coding system to which the present invention can be applied.
- FIG. 4 is a drawing for explaining an intra block copy method according to one embodiment of the present invention.
- FIG. 5 is a diagram for explaining a method for rearranging block vector merge candidates in a block vector merge candidate list according to an embodiment of the present invention.
- FIG. 6 is a diagram for explaining four-directional information included in differential block vector information according to one embodiment of the present invention.
- FIG. 7 is a diagram for explaining eight-directional information included in differential block vector information according to one embodiment of the present invention.
- FIG. 8 is a diagram for explaining 16-direction information included in differential block vector information according to one embodiment of the present invention.
- FIG. 9 is a flowchart illustrating a block vector correction method according to an embodiment of the present invention.
- FIG. 10 is a drawing exemplarily showing a content streaming system to which an embodiment according to the present invention can be applied.
- a video decoding method may include a step of determining a prediction mode of a current block as an intra block copy merge mode, a step of determining a block vector merge candidate list of the current block, a step of deriving a block vector of the current block based on the block vector merge candidate list, a step of correcting the block vector using differential block vector information, and a step of generating a prediction block of the current block based on the corrected block vector.
- first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms are only used for the purpose of distinguishing one component from another.
- the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
- the term "and/or" includes any combination of a plurality of related listed items or any one of a plurality of related listed items.
- each component shown in the embodiments of the present invention is depicted independently to indicate different characteristic functions; this does not mean that each component is formed as a separate hardware or software unit. That is, the components are listed separately for convenience of explanation, and at least two of the components may be combined into a single component, or one component may be divided into multiple components that each perform part of its function. Such integrated embodiments and separated embodiments of each component are also included in the scope of the present invention as long as they do not deviate from the essence of the present invention.
- the terminology used in the present invention is only used to describe specific embodiments and is not intended to limit the present invention.
- the singular expression includes the plural expression unless the context clearly indicates otherwise.
- some components of the present invention are not essential components that perform essential functions in the present invention and may be optional components that merely enhance performance.
- the present invention may be implemented by including only essential components for implementing the essence of the present invention excluding components used only for enhancing performance, and a structure including only essential components excluding optional components used only for enhancing performance is also included in the scope of the present invention.
- the term "at least one" can mean a number greater than or equal to 1, such as 1, 2, 3, or 4.
- the term "a plurality of" can mean a number greater than or equal to 2, such as 2, 3, or 4.
- image may mean one picture constituting a video, and may also represent the video itself.
- encoding and/or decoding of an image may mean "encoding and/or decoding of a video," and may also mean "encoding and/or decoding of one of the images constituting the video."
- the target image may be an encoding target image that is a target of encoding and/or a decoding target image that is a target of decoding.
- the target image may be an input image input to an encoding device and may be an input image input to a decoding device.
- the target image may have the same meaning as the current image.
- encoder and image encoding device may be used interchangeably and have the same meaning.
- decoder and image decoding device may be used interchangeably and have the same meaning.
- video and image may be used with the same meaning and may be used interchangeably.
- target block may be an encoding target block that is a target of encoding and/or a decoding target block that is a target of decoding.
- target block may be a current block that is a target of current encoding and/or decoding.
- target block and current block may be used with the same meaning and may be used interchangeably.
- a coding tree unit may be composed of one luma component (Y) coding tree block (CTB) and two chroma component (Cb, Cr) coding tree blocks related to it.
- sample may represent a basic unit constituting a block.
- Figure 1 is a block diagram showing the configuration according to one embodiment of an encoding device to which the present invention is applied.
- an encoding device (100) may include an image segmentation unit (110), an intra prediction unit (120), a motion prediction unit (121), a motion compensation unit (122), a switch (115), a subtractor (113), a transformation unit (130), a quantization unit (140), an entropy encoding unit (150), an inverse quantization unit (160), an inverse transformation unit (170), an adder (117), a filter unit (180), and a reference picture buffer (190).
- since multiple sub-pictures can be individually restored, sub-pictures have the advantage of being easy to edit in applications that compose multi-channel input into one picture.
- tiles can be segmented horizontally to generate bricks.
- a brick can be utilized as a basic unit of intra-picture parallel processing.
- one CTU can be recursively split into a quad tree (QT: Quadtree), and the terminal node of the split can be defined as a CU (Coding Unit).
- the CU can be split into a prediction unit (PU) and a transformation unit (TU) to perform prediction and transformation. Meanwhile, the CU can be utilized as a prediction unit and/or a transformation unit itself.
- the minimum block size (MinQTSize) of the quad tree of the luma block during splitting can be set to 16x16
- the maximum block size (MaxBtSize) of the binary tree can be set to 128x128, and the maximum block size (MaxTtSize) of the triple tree can be set to 64x64.
- the minimum block size (MinBtSize) of the binary tree and the minimum block size (MinTtSize) of the triple tree can be set to 4x4
- the maximum depth (MaxMttDepth) of the multi-type tree can be set to 4.
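The size and depth limits quoted above can be combined into a simple split-legality check. The helper below is a simplified illustration under the stated limits, not the normative partitioning condition of any standard; the function name and the exact inequalities are assumptions.

```python
# Split limits quoted above (illustrative, simplified legality check).
MIN_QT_SIZE = 16
MAX_BT_SIZE, MIN_BT_SIZE = 128, 4
MAX_TT_SIZE, MIN_TT_SIZE = 64, 4
MAX_MTT_DEPTH = 4

def allowed_splits(w, h, mtt_depth):
    """Which of quad / binary / ternary splits remain permitted for a w x h CU."""
    splits = []
    if w == h and w // 2 >= MIN_QT_SIZE:          # quad split needs a square block
        splits.append("QT")
    if mtt_depth < MAX_MTT_DEPTH:
        # binary split halves one dimension; ternary split quarters it at the edges
        if max(w, h) <= MAX_BT_SIZE and max(w, h) >= 2 * MIN_BT_SIZE:
            splits.append("BT")
        if max(w, h) <= MAX_TT_SIZE and max(w, h) >= 4 * MIN_TT_SIZE:
            splits.append("TT")
    return splits
```

Under these limits a 64x64 CU can still be split by all three tree types, while a 4x4 CU is terminal.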
- a dual tree that uses different CTU split structures for luma and chrominance components can be applied to improve the encoding efficiency of the I slice.
- the luminance and chrominance CTBs (Coding Tree Blocks) within the CTU can be split into a single tree sharing the coding tree structure.
- the encoding device (100) may perform encoding on the input image in the intra mode and/or the inter mode.
- the encoding device (100) may perform encoding on the input image in a third mode (e.g., IBC mode, Palette mode, etc.) other than the intra mode and the inter mode.
- the third mode may be classified as the intra mode or the inter mode for convenience of explanation. In the present invention, the third mode will be classified and described separately only when a specific explanation is required.
- when the intra mode is used as the prediction mode, the switch (115) can be switched to intra, and when the inter mode is used as the prediction mode, the switch (115) can be switched to inter.
- the intra mode can mean an intra-screen prediction mode
- the inter mode can mean an inter-screen prediction mode.
- the encoding device (100) can generate a prediction block for an input block of an input image.
- the encoding device (100) can encode a residual block using a residual of the input block and the prediction block.
- the input image can be referred to as a current image which is a current encoding target.
- the input block can be referred to as a current block which is a current encoding target or an encoding target block.
- the intra prediction unit (120) can use samples of blocks already encoded/decoded around the current block as reference samples.
- the intra prediction unit (120) can perform spatial prediction on the current block using the reference sample, and can generate prediction samples for the input block through spatial prediction.
- intra prediction can mean prediction within the screen.
- non-directional prediction modes such as DC mode and Planar mode and directional prediction modes (e.g., 65 directions) can be applied.
- the intra prediction method can be expressed as an intra prediction mode or an intra-screen prediction mode.
- the motion prediction unit (121) can search the reference image for an area that best matches the input block during the motion prediction process, and can derive a motion vector using the searched area. At this time, a designated search region of the reference image can be used as the area to search.
- the reference image can be stored in the reference picture buffer (190).
- when encoding/decoding of the reference image has been processed, the restored image can be stored in the reference picture buffer (190).
- the above motion prediction unit (121) and motion compensation unit (122) can generate a prediction block by applying an interpolation filter to a portion of an area within a reference image when the value of a motion vector does not have an integer value.
- for inter prediction, sub-PU based prediction modes such as the AFFINE mode and the SbTMVP (Subblock-based Temporal Motion Vector Prediction) mode, as well as PU based prediction modes such as the MMVD (Merge with MVD) mode and the GPM (Geometric Partitioning Mode) mode, can be applied.
- the subtractor (113) can generate a residual block using the difference between the input block and the predicted block.
- the residual block may also be referred to as a residual signal.
- the residual signal may mean the difference between the original signal and the predicted signal.
- the residual signal may be a signal generated by transforming, quantizing, or transforming and quantizing the difference between the original signal and the predicted signal.
- the residual block may be a residual signal in block units.
- a 4x4 luminance residual block generated through within-screen prediction can be transformed using a basis vector based on DST (Discrete Sine Transform), and a basis vector based on DCT (Discrete Cosine Transform) can be used to transform the remaining residual blocks.
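The transform-selection rule above (DST basis for 4x4 intra luma residuals, DCT basis otherwise) can be sketched with the standard DCT-II and DST-VII basis formulas. The function names are assumptions; the basis vectors are shown unnormalized for clarity.

```python
import math

def dct2_basis(N):
    """Unnormalized DCT-II basis: row k evaluated at sample n."""
    return [[math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N)]
            for k in range(N)]

def dst7_basis(N):
    """Unnormalized DST-VII basis, the DST variant commonly used
    for small intra residual blocks."""
    return [[math.sin(math.pi * (2 * n + 1) * (k + 1) / (2 * N + 1)) for n in range(N)]
            for k in range(N)]

def pick_transform(w, h, is_intra, is_luma):
    """Apply the rule above: 4x4 intra luma residuals use DST, the rest DCT."""
    return "DST" if (is_intra and is_luma and (w, h) == (4, 4)) else "DCT"
```

Note that the k = 0 row of the DCT-II basis is constant, which is what makes it efficient for smooth residuals, while the DST-VII rows grow from zero at the left edge, matching the typical shape of intra prediction error.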
- a transform block can be divided into a quad tree shape for one block using RQT (Residual Quad Tree) technology, and after performing transformation and quantization on each transform block divided through RQT, a coded block flag (cbf) can be transmitted to increase encoding efficiency when all coefficients become 0.
- the Multiple Transform Selection (MTS) technique can be applied to perform transformation by selectively using multiple transformation bases. That is, instead of dividing the CU into TUs through the RQT, a function similar to TU division can be performed through the Sub-block Transform (SBT) technique.
- the SBT is applied only to inter-screen prediction blocks, and unlike the RQT, the current block can be divided into 1 ⁇ 2 or 1 ⁇ 4 sizes in the vertical or horizontal direction, and then the transformation can be performed on only one of the blocks. For example, if it is divided vertically, the transformation can be performed on the leftmost or rightmost block, and if it is divided horizontally, the transformation can be performed on the topmost or bottommost block.
- LFNST (Low Frequency Non-Separable Transform), a secondary transform technique that additionally transforms the residual signal converted to the frequency domain through DCT or DST, can be applied.
- LFNST additionally performs a transform on the low-frequency region of 4x4 or 8x8 in the upper left, so that the residual coefficients can be concentrated in the upper left.
- a quantizer using QP values of 0 to 51 can be used; alternatively, QP values of 0 to 63 can be used.
- DQ (Dependent Quantization) performs quantization using two quantizers (e.g., Q0 and Q1); even without signaling information about which quantizer is used, the quantizer to be used for the next transform coefficient can be selected based on the current state through a state transition model.
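The state-driven quantizer selection can be sketched as follows. The 4-state transition table shown is the one commonly described for VVC-style dependent quantization and is included here as an assumption for illustration; the key property is that the decoder can replay the same state sequence from the coded levels alone.

```python
# Next state depends only on the current state and the parity of the coded level.
NEXT_STATE = [[0, 2], [2, 0], [1, 3], [3, 1]]

def quantizer_sequence(levels):
    """Which quantizer (Q0 or Q1) each coefficient uses, with no extra signaling."""
    state, used = 0, []
    for level in levels:
        used.append("Q0" if state < 2 else "Q1")  # states 0,1 -> Q0; states 2,3 -> Q1
        state = NEXT_STATE[state][level & 1]
    return used
```

Running the same loop in encoder and decoder keeps both sides synchronized on which quantizer applies to each coefficient.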
- the entropy encoding unit (150) can generate a bitstream by performing entropy encoding according to a probability distribution on values produced by the quantization unit (140) or coding parameter values produced in the encoding process, and can output the bitstream.
- the entropy encoding unit (150) can perform entropy encoding on information about image samples and information for decoding the image. For example, information for decoding the image can include syntax elements, etc.
- the entropy encoding unit (150) can use an encoding method such as exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), or Context-Adaptive Binary Arithmetic Coding (CABAC) for entropy encoding.
- the entropy encoding unit (150) can perform entropy encoding using a Variable Length Coding/Code (VLC) table.
- when applying CABAC, in order to reduce the size of the probability table stored in the decoding device, the table-based probability update method can be changed to a probability update method using a simple formula and applied.
- two different probability models can be used to obtain more accurate symbol probability values.
- the entropy encoding unit (150) can change a two-dimensional block form coefficient into a one-dimensional vector form through a transform coefficient scanning method to encode a transform coefficient level (quantized level).
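The 2-D-to-1-D coefficient scan can be sketched with a simple up-right diagonal scan; this is a simplified illustration, since real codecs fix the exact scan order per block size and coding mode.

```python
def diagonal_scan(block):
    """Flatten an NxN coefficient block along anti-diagonals (a simplified
    up-right diagonal scan; actual codec scan orders may differ)."""
    n = len(block)
    out = []
    for s in range(2 * n - 1):          # s = x + y indexes one anti-diagonal
        for y in range(min(s, n - 1), max(0, s - n + 1) - 1, -1):
            out.append(block[y][s - y])
    return out
```

Scanning diagonally groups the low-frequency coefficients (upper-left corner) at the front of the 1-D vector, which benefits run-length and context modeling in entropy coding.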
- Coding parameters may include information (flags, indexes, etc.) encoded in an encoding device (100) and signaled to a decoding device (200), such as syntax elements, as well as information derived during an encoding process or a decoding process, and may mean information necessary when encoding or decoding an image.
- signaling a flag or index may mean that the encoder entropy encodes the flag or index and includes it in the bitstream, and that the decoder entropy decodes the flag or index from the bitstream.
- the encoded current image can be used as a reference image for other images to be processed later. Therefore, the encoding device (100) can restore or decode the encoded current image again, and store the restored or decoded image as a reference image in the reference picture buffer (190).
- a sample adaptive offset can be used to add an appropriate offset value to the sample value to compensate for the encoding error.
- the sample adaptive offset can correct the offset from the original image on a sample basis for the image on which deblocking has been performed.
- a method can be used in which the samples included in the image are divided into a certain number of regions, and then the region to be offset is determined and the offset is applied to the region, or a method can be used in which the offset is applied by considering the edge information of each sample.
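The first (region/band-based) flavor of sample adaptive offset described above can be sketched as follows; the edge-based flavor is omitted. The 32-band split and the function name are illustrative assumptions in the spirit of HEVC-style SAO band offset.

```python
def sao_band_offset(samples, band_offsets, bit_depth=8):
    """Band-offset SAO (simplified): the sample range is split into 32 equal
    bands, and each band may carry a signaled offset added to its samples."""
    shift = bit_depth - 5               # 2^bit_depth / 32 samples per band
    return [s + band_offsets.get(s >> shift, 0) for s in samples]
```

Here only samples falling in a band with a signaled offset are adjusted; all other samples pass through unchanged.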
- Bilateral filter can also compensate for the offset from the original image on a sample-by-sample basis for the deblocked image.
- An adaptive loop filter can perform filtering based on a comparison value between a restored image and an original image. After dividing samples included in an image into a predetermined group, a filter to be applied to each group can be determined, and filtering can be performed differentially for each group. Information related to whether to apply an adaptive loop filter can be signaled for each coding unit (CU), and the shape and filter coefficients of the adaptive loop filter to be applied can vary for each block.
- LMCS (Luma Mapping with Chroma Scaling) consists of luma mapping (LM) and chroma scaling (CS), and can be utilized as an HDR correction technique that reflects the characteristics of HDR (High Dynamic Range) images.
- the restored block or restored image that has passed through the filter unit (180) may be stored in the reference picture buffer (190).
- the restored block that has passed through the filter unit (180) may be a part of the reference image.
- the reference image may be a restored image composed of restored blocks that have passed through the filter unit (180).
- the stored reference image may be used for inter-screen prediction or motion compensation thereafter.
- FIG. 2 is a block diagram showing the configuration of one embodiment of a decoding device to which the present invention is applied.
- the decoding device (200) may be a decoder, a video decoding device, or an image decoding device.
- the decoding device (200) may include an entropy decoding unit (210), an inverse quantization unit (220), an inverse transformation unit (230), an intra prediction unit (240), a motion compensation unit (250), an adder (201), a switch (203), a filter unit (260), and a reference picture buffer (270).
- the decoding device (200) can receive a bitstream output from the encoding device (100).
- the decoding device (200) can receive a bitstream stored in a computer-readable recording medium, or can receive a bitstream streamed through a wired/wireless transmission medium.
- the decoding device (200) can perform decoding on the bitstream in an intra mode or an inter mode.
- the decoding device (200) can generate a restored image or a decoded image through decoding, and can output the restored image or the decoded image.
- when the prediction mode used for decoding is the intra mode, the switch (203) can be switched to intra. If the prediction mode used for decoding is the inter mode, the switch (203) can be switched to inter.
- the decoding device (200) can obtain a reconstructed residual block by decoding the input bitstream and can generate a prediction block. When the reconstructed residual block and the prediction block are obtained, the decoding device (200) can generate a reconstructed block to be decoded by adding the reconstructed residual block and the prediction block.
- the decoding target block can be referred to as a current block.
- the entropy decoding unit (210) can change a one-dimensional vector-shaped coefficient into a two-dimensional block-shaped coefficient through a transform coefficient scanning method to decode a transform coefficient level (quantized level).
- the quantized level can be inverse-quantized in the inverse quantization unit (220) and inverse-transformed in the inverse transformation unit (230).
- a restored residual block can be generated as a result of the inverse quantization and/or inverse transformation of the quantized level.
- the inverse quantization unit (220) can apply a quantization matrix to the quantized level.
- the inverse quantization unit (220) and the inverse transformation unit (230) applied to the decoding device can apply the same technology as the inverse quantization unit (160) and the inverse transformation unit (170) applied to the encoding device described above.
- the intra prediction unit (240) can generate a prediction block by performing spatial prediction on the current block using sample values of already decoded blocks surrounding the block to be decoded.
- the intra prediction unit (240) applied to the decoding device can apply the same technology as the intra prediction unit (120) applied to the encoding device described above.
- the motion compensation unit (250) can perform motion compensation using a motion vector and a reference image stored in the reference picture buffer (270) for the current block to generate a prediction block.
- the motion compensation unit (250) can apply an interpolation filter to a part of the reference image to generate a prediction block when the value of the motion vector does not have an integer value.
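Fractional-sample interpolation can be sketched in one dimension with an 8-tap filter whose coefficients sum to 64. The coefficients below are illustrative (codecs define their own taps per fractional position), and edge handling by sample repetition is an assumption of this sketch.

```python
def interp_half_pel(row, taps=(-1, 4, -11, 40, 40, -11, 4, -1)):
    """Half-sample interpolation of a 1-D sample row with an 8-tap filter
    (illustrative coefficients with gain 64). Edges use sample repetition."""
    n_pad = len(taps) // 2 - 1                       # 3 samples of left padding
    padded = [row[0]] * n_pad + list(row) + [row[-1]] * (len(taps) - n_pad - 1)
    out = []
    for i in range(len(row) - 1):                    # one half-pel between neighbors
        acc = sum(t * padded[i + j] for j, t in enumerate(taps))
        out.append((acc + 32) >> 6)                  # round and divide by the gain 64
    return out
```

A constant row is preserved exactly, and the half-pel between 0 and 64 lands near 32, as expected of an interpolating filter.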
- the motion compensation unit (250) applied to the decoding device can apply the same technology as the motion compensation unit (122) applied to the encoding device described above.
- the adder (201) can add the restored residual block and the prediction block to generate a restored block.
- the filter unit (260) can apply at least one of an Inverse-LMCS, a deblocking filter, a sample adaptive offset, and an adaptive loop filter to the restored block or the restored image.
- the filter unit (260) applied to the decoding device can apply the same filtering technology as that applied to the filter unit (180) applied to the encoding device described above.
- the filter unit (260) can output a restored image.
- the restored block or restored image can be stored in the reference picture buffer (270) and used for inter prediction.
- the restored block that has passed through the filter unit (260) can be a part of the reference image.
- the reference image can be a restored image composed of restored blocks that have passed through the filter unit (260).
- the stored reference image can thereafter be used for inter prediction or motion compensation.
- FIG. 3 is a diagram schematically showing a video coding system to which the present invention can be applied.
- a video coding system may include an encoding device (10) and a decoding device (20).
- the encoding device (10) may transmit encoded video and/or image information or data to the decoding device (20) in the form of a file or streaming through a digital storage medium or a network.
- An encoding device (10) may include a video source generating unit (11), an encoding unit (12), and a transmitting unit (13).
- a decoding device (20) may include a receiving unit (21), a decoding unit (22), and a rendering unit (23).
- the encoding unit (12) may be called a video/image encoding unit, and the decoding unit (22) may be called a video/image decoding unit.
- the transmitting unit (13) may be included in the encoding unit (12).
- the receiving unit (21) may be included in the decoding unit (22).
- the rendering unit (23) may include a display unit, and the display unit may be configured as a separate device or an external component.
- the video source generation unit (11) can obtain a video/image through a process of capturing, synthesizing, or generating a video/image.
- the video source generation unit (11) can include a video/image capture device and/or a video/image generation device.
- the video/image capture device can include, for example, one or more cameras, a video/image archive including previously captured video/image, etc.
- the video/image generation device can include, for example, a computer, a tablet, a smartphone, etc., and can (electronically) generate a video/image.
- a virtual video/image can be generated through a computer, etc., and in this case, the video/image capture process can be replaced with a process of generating related data.
- the encoding unit (12) can encode the input video/image.
- the encoding unit (12) can perform a series of procedures such as prediction, transformation, and quantization for compression and encoding efficiency.
- the encoding unit (12) can output encoded data (encoded video/image information) in the form of a bitstream.
- the detailed configuration of the encoding unit (12) can also be configured in the same manner as the encoding device (100) of FIG. 1 described above.
- the transmission unit (13) can transmit encoded video/image information or data output in the form of a bitstream to the reception unit (21) of the decoding device (20) through a digital storage medium or a network in the form of a file or streaming.
- the digital storage medium can include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, SSD, etc.
- the transmission unit (13) can include an element for generating a media file through a predetermined file format and can include an element for transmission through a broadcasting/communication network.
- the reception unit (21) can extract/receive the bitstream from the storage medium or network and transmit it to the decoding unit (22).
- the decoding unit (22) can decode video/image by performing a series of procedures such as inverse quantization, inverse transformation, and prediction corresponding to the operation of the encoding unit (12).
- the detailed configuration of the decoding unit (22) can also be configured in the same manner as the decoding device (200) of FIG. 2 described above.
- the rendering unit (23) can render the decoded video/image.
- the rendered video/image can be displayed through the display unit.
- a block vector derived in an intra block copy prediction merge mode can be corrected.
- the intra block copy prediction method means a method of searching for an optimal prediction block in a reconstructed area of a current picture using a block vector and copying it to generate a prediction block of the current block.
- the encoder and/or decoder may correct the block vector.
- FIG. 4 is a diagram for explaining an intra block copy prediction method according to one embodiment of the present invention.
- a matching block (440) can be derived within a predefined search range (R1, R2, R3, R4) of a reconstructed area (430) of a current picture (400) based on a block vector (420) of a current block (410). Then, a prediction block of the current block (410) can be generated based on the matching block (440).
- the predefined search ranges R1, R2, R3, and R4 in Fig. 4 can be defined as the current CTU (Coding Tree Unit) including the current block, the upper left CTU, the upper CTU, and the left CTU, respectively.
- reference templates may be searched based on a predefined search order in a predefined search range. For example, reference templates may be searched in a zigzag order of R1, R4, R3, R2.
- search range can be set to a value preset in the encoder/decoder.
- although a matching block is derived within a predefined search range in Fig. 4, a matching block can also be derived based on a block vector anywhere in the restored area within the current picture.
- although the block vector (420) in FIG. 4 is a block vector before correction, a matching block can also be derived based on a corrected block vector obtained by correcting the block vector of the current block, and a prediction block of the current block can be generated based on the derived matching block.
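The CTU-based search range described above (current CTU plus the upper-left, upper, and left CTUs) can be sketched as a simple validity check on the block vector. The 128x128 CTU size, the coordinate convention, and the check of only the top-left sample are simplifying assumptions for illustration.

```python
# Sketch: check that an IBC block vector points inside the allowed
# search range (current CTU R1 plus upper-left R2, upper R3, left R4).
# A 128x128 CTU is assumed; only the top-left sample of the matching
# block is checked, which is a simplification.

CTU = 128

def in_search_range(blk_x, blk_y, bv_x, bv_y):
    """True if the matching block's top-left sample lies inside the
    2x2 CTU window ending at the current CTU, within the picture."""
    ref_x, ref_y = blk_x + bv_x, blk_y + bv_y
    cur_ctu_x = (blk_x // CTU) * CTU
    cur_ctu_y = (blk_y // CTU) * CTU
    # Permitted range spans one CTU to the left and one CTU above.
    return (cur_ctu_x - CTU <= ref_x < cur_ctu_x + CTU and
            cur_ctu_y - CTU <= ref_y < cur_ctu_y + CTU and
            ref_x >= 0 and ref_y >= 0)
```

For a block at (256, 256), a vector of (-16, -16) stays inside the window, while (-200, 0) reaches past the left CTU and fails the check.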
- Intra block copy merge mode may mean a mode in which the block vector of the current block is derived from the block vector information of the surrounding blocks of the current block.
- a block vector merge candidate list can be determined using block vector information of surrounding blocks of the current block predicted by the intra block copy merge mode or the intra template matching prediction mode included in the restored area.
- the block vector merge candidate list can be a list in which block vector information of reference blocks of the current block is stored.
- FIG. 5 is a diagram for explaining a method for reordering block vector merge candidates of a block vector merge candidate list according to one embodiment of the present invention. Specifically, FIG. 5 is a diagram for explaining an adaptive reordering of merge candidates with template matching (ARMC-TM).
- the template matching-based adaptive reordering method means a method in which block vector merge candidates in a block vector merge candidate list are reordered based on the similarity between the template of a reference block represented by each block vector merge candidate and the template of the current block.
- block vector merge candidates can be reordered so that a block vector merge candidate corresponding to a template of a reference block that has a high similarity to the template of the current block has a higher priority in the block vector merge candidate list.
- a matching block M1 (550) can be derived from a block vector V1 (530) in a predefined search range (R1, R2, R3, R4) of a reconstructed area (510) of a current picture (500), and a matching block M2 (560) can be derived from a block vector V2 (540).
- the block vector merge candidate list can include information about block vectors V1 and V2.
- the neighboring region (i.e., the left, top, and upper-left regions) of the current block (520) can be defined as the template of the current block (current template, 570)
- the corresponding neighboring region of the matching block M1 can be defined as the template of the reference block M1 (template of M1, 580)
- the corresponding neighboring region of the matching block M2 can be defined as the template of the reference block M2 (template of M2, 590).
- the similarity between the template (570) of the current block and the template (580) of M1 and the similarity between the template (570) of the current block and the template (590) of M2 are determined, and the block vector merge candidate list can be rearranged based on the similarity.
- the above similarity can be determined by either the SAD (sum of absolute differences) method or the SSE (sum of squared errors) method.
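The template-matching-based reordering described above can be sketched as follows. The SAD cost and the flat-list template representation are simplifying assumptions; in practice templates are two-dimensional sample regions.

```python
# Sketch of ARMC-TM-style reordering: each block vector merge candidate
# is ranked by the SAD between the template of its reference block and
# the template of the current block. Templates are modeled as flat
# sample lists for simplicity.

def sad(a, b):
    """Sum of absolute differences between two equal-length templates."""
    return sum(abs(x - y) for x, y in zip(a, b))

def reorder_candidates(candidates, cur_template, ref_template_of):
    """Return the merge candidates sorted so the most similar (lowest
    SAD) candidate gets the lowest index. `ref_template_of` maps a
    candidate block vector to the template of its reference block."""
    return sorted(candidates,
                  key=lambda bv: sad(cur_template, ref_template_of(bv)))
```

A candidate whose reference template closely matches the current template is moved to the front of the list, so it is signalled with a cheaper (lower) index.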
- the predefined search range may be searched for reference templates based on a predefined search order.
- the reference templates may be searched in a zigzag order of R1, R4, R3, R2.
- information about the search range and the size and shape of the template can be determined by the encoder and transmitted to the decoder.
- the search range and the size and shape of the template can be set to values predetermined in the encoder/decoder.
- although a matching block is derived from a predefined search range in FIG. 5, a matching block can also be derived based on a block vector anywhere in the restored area within the current picture.
- when a block vector merge candidate has a high priority in the block vector merge candidate list, it is matched to a low-numbered index, but this is only one example, and it may be matched to a certain index according to the priority.
- the block vector merge candidate corresponding to the template of the reference block having a high similarity to the template of the current block is reordered to have a high priority in the block vector merge candidate list, but this is only one example, and the reordering can assign an arbitrary priority based on the similarity.
- the block vector of the current block can be derived based on the block vector merge candidate list. Specifically, if one block vector merge candidate among the block vector merge candidates included in the block vector merge candidate list is determined, the block vector of the current block can be derived from the determined block vector merge candidate.
- an optimal block vector merge candidate can be determined from the block vector merge candidate list, and the block vector of the current block can be derived from the optimal block vector merge candidate.
- the optimal block vector merge candidate may mean a block vector merge candidate with the smallest cost value among the block vector merge candidates included in the block vector merge candidate list.
- the block vector of the current block can be derived based on the determined block vector merge candidate list. Specifically, when one block vector merge candidate is determined from the rearranged block vector merge candidates, the block vector of the current block can be derived from the determined block vector merge candidate.
- the block vector of the current block can be derived from one of the n block vector merge candidates having a high priority in the block vector merge candidate list.
- one block vector merge candidate among the n block vector merge candidates having a high priority in the block vector candidate list can be determined, and the block vector of the current block can be derived from the determined block vector merge candidate.
- n is an arbitrary positive integer that is equal to or smaller than the number of total block vector merge candidates included in the block vector merge candidate list.
- the priority can be determined by the cost value.
- a block vector merge candidate in the block vector merge candidate list can have a higher priority as the cost value is smaller.
- a high priority could mean that it matches a lower numbered index in the block vector merge candidate list.
- the cost value can be calculated by a predefined cost function.
- the method for calculating the cost value can be either the SAD method or the SSE method.
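Deriving the current block's vector from the n highest-priority candidates, as described above, can be sketched as below. Representing costs as precomputed template-matching values attached to each candidate is an assumption for illustration.

```python
# Sketch: derive the block vector of the current block from the n
# highest-priority (lowest-cost) merge candidates. Costs are assumed
# to be precomputed template-matching costs (e.g. SAD values).

def derive_block_vector(candidates_with_cost, n, merge_index):
    """Keep the n candidates with the smallest cost, then pick the one
    signalled by `merge_index` (index 0 = highest priority)."""
    top_n = sorted(candidates_with_cost, key=lambda c: c[1])[:n]
    bv, _cost = top_n[merge_index]
    return bv
```

With n = 2, only the two cheapest candidates remain selectable, so the signalled merge index addresses a shorter, better-ordered list.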
- a block vector of a current block derived in an intra block copy prediction merge mode can be corrected, and a prediction block of the current block can be generated based on the corrected block vector.
- the block vector of the derived current block can be corrected using differential block vector information, a matching block can be derived based on the corrected block vector, and a prediction block of the current block can be generated based on the derived matching block.
- the corrected block vector of the current block can be calculated as in mathematical expression 1.
- in mathematical expression 1, the initial block vector may mean the block vector of the current block derived in the intra block copy prediction merge mode, and the final block vector may mean the corrected block vector.
- the differential block vector can be derived using differential block vector information.
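Mathematical expression 1 is referenced above but not reproduced in this text; based only on the surrounding definitions of the initial, final, and differential block vectors, it can plausibly be reconstructed as the component-wise sum below. This reconstruction is an assumption, not a quotation of the original expression.

```latex
% Assumed reconstruction of mathematical expression 1:
% the final (corrected) block vector is the initial block vector
% plus the differential block vector, applied per component.
BV_{\mathrm{final}} = BV_{\mathrm{initial}} + \Delta BV
\quad\Longleftrightarrow\quad
\begin{cases}
BV_{\mathrm{final},x} = BV_{\mathrm{initial},x} + \Delta BV_{x} \\
BV_{\mathrm{final},y} = BV_{\mathrm{initial},y} + \Delta BV_{y}
\end{cases}
```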
- when it is determined whether to correct a derived block vector, the corresponding block vector can be corrected based on differential block vector information. Specifically, whether to correct the derived block vector is determined; when it is determined that the derived block vector is to be corrected, differential block vector information is acquired, and the derived block vector is corrected based on the differential block vector information.
- when it is determined that the derived block vector is not to be corrected, the prediction block of the current block can be generated based on the derived block vector without obtaining differential block vector information.
- the differential block vector information may include direction information and distance information.
- the direction information may mean information about the direction of the differential block vector
- the distance information may mean information about the size of the differential block vector.
- Figures 6 to 8 are drawings for explaining direction information included in differential block vector information according to one embodiment of the present invention.
- a circle shape indicates an initial position of a differential block vector
- a square shape indicates a direction of a differential block vector.
- FIG. 6 is a diagram for explaining four-directional information included in differential block vector information according to one embodiment of the present invention. Specifically, FIG. 6 shows that the direction of the differential block vector corresponds to one of four directions (right horizontal direction, left horizontal direction, upward vertical direction, and downward vertical direction).
- direction information indicating the direction of the differential block vector among the four directions can be transmitted/parsed by matching it to an index.
- the direction information can include horizontal direction information and vertical direction information.
- for example, direction information including horizontal direction (+1) information and vertical direction (0) information can be matched to index 0; in this case, the direction of the differential block vector can be the right horizontal direction (600).
- direction information including horizontal direction (0) information and vertical direction (+1) information can be matched to index 2; in this case, the direction of the differential block vector can be the upward vertical direction (610).
- Table 1 shows an example in which the direction information of the differential block vector can be transmitted/parsed by matching it to an arbitrary index.
- the arbitrary index can be a preset index.
- FIG. 7 is a diagram for explaining eight direction information included in differential block vector information according to one embodiment of the present invention. Specifically, FIG. 7 shows that the direction of the differential block vector is one of eight directions (right horizontal direction, left horizontal direction, upper vertical direction, lower vertical direction, upper right diagonal direction, upper left diagonal direction, lower right diagonal direction, and lower left diagonal direction).
- FIG. 8 is a diagram for explaining 16 direction information included in differential block vector information according to one embodiment of the present invention. Specifically, FIG. 8 shows that the direction of the differential block vector is one of 16 directions in which 8 directions are added in addition to the 8 directions described above.
- the 8-directional information and 16-directional information described in Figures 7 and 8 can also be transmitted/parsed by matching the direction information of the differential block vector to an arbitrary index in the same manner as Table 1 described above.
- Figures 6 to 8 represent 4-direction information, 8-direction information, and 16-direction information, respectively, this is only an example, and the direction of the differential block vector can be any one of N directions. In this case, N is any positive integer.
- Distance information included in the differential block vector information can be transmitted/parsed by matching a distance in the range of 1/4 pixel to 32 pixels to an arbitrary index.
- distances from 1/4 pixel to 32 pixels can be matched to 8 indices, and distance information can be transmitted/parsed with an index matching the size of the differential block vector. For example, if the size of the differential block vector corresponds to a distance of 1 pixel, distance information can be transmitted/parsed with the corresponding index 2, and if the size of the differential block vector corresponds to a distance of 8 pixels, distance information can be transmitted/parsed with index 5.
- the distance information may include vertical distance information and horizontal distance information.
- the vertical distance information represents the absolute value of the vertical component of the differential block vector.
- the horizontal distance information represents the absolute value of the horizontal component of the differential block vector.
- the differential block vector can be derived using the differential block vector information.
- the direction of the differential block vector can be determined by the transmitted/parsed direction information, and the size can be determined by the transmitted/parsed distance information.
- for example, when differential block vector information including the index for the right horizontal direction as direction information and the index for the distance of 1 pixel as distance information is transmitted/parsed, the block vector of the current block can be corrected using the differential block vector having the corresponding size and direction.
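Under the assumption that the index tables follow the MMVD-style layout suggested by the examples above (direction index 0 for (+1, 0), index 2 for (0, +1); eight distance indices covering 1/4 pixel to 32 pixels, with 1 pixel at index 2 and 8 pixels at index 5), the differential block vector could be decoded as follows. The exact table contents are assumptions modeled on the text, not a reproduction of Table 1.

```python
# Sketch: map parsed direction/distance indices to a differential block
# vector in 1/4-pel units. Both tables below are assumptions modeled on
# the examples in the text (four directions; distances 1/4..32 pixels).

DIRECTIONS = [(+1, 0), (-1, 0), (0, +1), (0, -1)]   # assumed index layout
DISTANCES_QPEL = [1, 2, 4, 8, 16, 32, 64, 128]      # 1/4..32 px in 1/4-pel

def decode_differential_bv(dir_idx, dist_idx):
    """Return (dx, dy) in 1/4-pel units for the parsed indices."""
    dx, dy = DIRECTIONS[dir_idx]
    d = DISTANCES_QPEL[dist_idx]
    return dx * d, dy * d

def correct_block_vector(initial_bv, dir_idx, dist_idx):
    """Final block vector = initial block vector + differential block vector."""
    dbx, dby = decode_differential_bv(dir_idx, dist_idx)
    return initial_bv[0] + dbx, initial_bv[1] + dby
```

Matching the worked example in the text, direction index 0 (right horizontal) with distance index 2 (1 pixel) yields a differential vector of (4, 0) in 1/4-pel units, i.e. one pixel to the right.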
- Fig. 9 is a flowchart for explaining a block vector correction method according to one embodiment of the present invention.
- the block vector correction method of Fig. 9 can be performed by an image decoding device.
- the image decoding device can determine a block vector merge candidate list of the current block (S910).
- the block vector merge candidate list can be determined using block vector information of blocks surrounding the current block.
- a block vector merge candidate corresponding to a template of a reference block having a high similarity to the template of the current block may be reordered to have a high priority in the block vector merge candidate list.
- the similarity can be determined by either the SAD method or the SSE method.
- the image decoding device can derive the block vector of the current block based on the block vector merge candidate list (S920).
- the derived block vector can be derived from one of a predetermined number of block vector merge candidates corresponding to a high priority in the above-described block vector merge candidate list.
- the image decoding device can correct the derived block vector using differential block vector information (S930).
- the differential block vector information may include direction information and distance information.
- the direction information may include horizontal direction information and vertical direction information.
- the distance information may include vertical distance information and horizontal distance information.
- the image decoding device can generate a prediction block of the current block based on the corrected block vector (S940).
- the image decoding device can determine whether to correct the derived block vector, and if it is determined that the derived block vector is corrected, it can obtain the differential block vector information.
- the derived block vector can be corrected based on the differential block vector information according to the determination that the derived block vector is corrected.
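The decoder-side steps of Fig. 9 can be strung together as the following high-level sketch. Every operation here is a hypothetical stand-in for the corresponding step described above, reduced to plain Python data handling.

```python
# High-level sketch of the Fig. 9 flow (S910-S940). All logic here is a
# hypothetical placeholder for the operations described in the text.

def decode_ibc_merge_block(neighbor_bvs, merge_index, correct_flag, diff_bv):
    """S910-S940: build list, derive BV, optionally correct, predict."""
    # S910: candidate list from neighboring block vectors
    # (deduplicated, insertion order kept).
    cand_list = []
    for bv in neighbor_bvs:
        if bv not in cand_list:
            cand_list.append(bv)
    # S920: derive the current block's vector from the signalled candidate.
    bv = cand_list[merge_index]
    # S930: correct the vector only when correction is signalled.
    if correct_flag:
        dx, dy = diff_bv
        bv = (bv[0] + dx, bv[1] + dy)
    # S940: the (corrected) vector addresses the matching block used to
    # generate the prediction block; the vector itself is returned here
    # as a stand-in for block generation.
    return bv
```

When the correction flag is not set, the differential block vector information is never consulted, mirroring the behavior described above.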
- a bitstream can be generated by an image encoding method including the steps described in Fig. 9.
- the bitstream can be stored in a non-transitory computer-readable recording medium, and can also be transmitted (or streamed).
- FIG. 10 is a drawing exemplarily showing a content streaming system to which an embodiment according to the present invention can be applied.
- the encoding server compresses content input from multimedia input devices such as smartphones, cameras, CCTVs, etc. into digital data to generate a bitstream and transmits it to the streaming server.
- multimedia input devices such as smartphones, cameras, CCTVs, etc. directly generate a bitstream
- the encoding server may be omitted.
- the above bitstream can be generated by an image encoding method and/or an image encoding device to which an embodiment of the present invention is applied, and the streaming server can temporarily store the bitstream during the process of transmitting or receiving the bitstream.
- the above streaming server transmits multimedia data to a user device based on a user request via a web server, and the web server can act as an intermediary that informs the user of any available services.
- the web server transmits it to the streaming server, and the streaming server can transmit multimedia data to the user.
- the content streaming system may include a separate control server, and in this case, the control server may perform a role of controlling commands/responses between each device within the content streaming system.
- the above streaming server can receive content from a media storage and/or an encoding server. For example, when receiving content from the encoding server, the content can be received in real time. In this case, in order to provide a smooth streaming service, the streaming server can store the bitstream for a certain period of time.
- Examples of the user devices may include mobile phones, smart phones, laptop computers, digital broadcasting terminals, personal digital assistants (PDAs), portable multimedia players (PMPs), navigation devices, slate PCs, tablet PCs, ultrabooks, wearable devices (e.g., smartwatches, smart glasses, HMDs (head mounted displays)), digital TVs, desktop computers, digital signage, etc.
- an image can be encoded/decoded using at least one or a combination of at least one of the above embodiments.
- the order in which the above embodiments are applied may be different in the encoding device and the decoding device. Alternatively, the order in which the above embodiments are applied may be the same in the encoding device and the decoding device.
- the above embodiments can be performed for each of the luminance and chrominance signals, or the above embodiments can be performed identically for the luminance and chrominance signals.
- the methods are described based on the flowchart as a series of steps or units, but the present invention is not limited to the order of the steps, and some steps may occur in a different order or simultaneously with other steps described above.
- the steps shown in the flowchart are not exclusive, and other steps may be included, or one or more steps in the flowchart may be deleted without affecting the scope of the present invention.
- a bitstream generated by an encoding method according to the above embodiment can be stored in a non-transitory computer-readable recording medium.
- the bitstream stored in the non-transitory computer-readable recording medium can be decoded by a decoding method according to the above embodiment.
- examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tapes, optical recording media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices specifically configured to store and execute program instructions such as ROMs, RAMs, and flash memories.
- Examples of program instructions include not only machine language codes generated by a compiler, but also high-level language codes that can be executed by a computer using an interpreter, etc.
- the hardware devices may be configured to operate as one or more software modules to perform the processing according to the present invention, and vice versa.
- the present invention can be used in a device for encoding/decoding an image and a recording medium storing a bitstream.
Abstract
The present disclosure relates to an image encoding/decoding method and device, a recording medium storing a bitstream, and a transmission method. The image decoding method may comprise the steps of: determining a prediction mode of a current block as an intra block copy merge mode; determining a block vector merge candidate list of the current block; deriving a block vector of the current block based on the block vector merge candidate list; correcting the block vector using differential block vector information; and generating a prediction block of the current block based on the corrected block vector.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202480026782.8A CN121040068A (zh) | 2023-06-12 | 2024-06-05 | 图像编码/解码方法和装置以及存储比特流的记录介质 |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2023-0074967 | 2023-06-12 | ||
| KR20230074967 | 2023-06-12 | ||
| KR1020240073754A KR20240175310A (ko) | 2023-06-12 | 2024-06-05 | 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체 |
| KR10-2024-0073754 | 2024-06-05 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024258110A1 true WO2024258110A1 (fr) | 2024-12-19 |
Family
ID=93852400
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2024/007749 Pending WO2024258110A1 (fr) | 2023-06-12 | 2024-06-05 | Procédé et dispositif de codage/décodage d'image et support d'enregistrement stockant un flux binaire |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2024258110A1 (fr) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20180063094A (ko) * | 2015-10-02 | 2018-06-11 | 퀄컴 인코포레이티드 | 인트라 블록 카피 병합 모드 및 이용가능하지 않는 ibc 참조 영역의 패딩 |
| KR20200078378A (ko) * | 2018-12-21 | 2020-07-01 | 한국전자통신연구원 | 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체 |
| KR20200104251A (ko) * | 2019-02-26 | 2020-09-03 | 주식회사 엑스리스 | 영상 신호 부호화/복호화 방법 및 이를 위한 장치 |
| WO2023040968A1 (fr) * | 2021-09-15 | 2023-03-23 | Beijing Bytedance Network Technology Co., Ltd. | Procédé, appareil et support de traitement vidéo |
| KR20230075499A (ko) * | 2021-09-01 | 2023-05-31 | 텐센트 아메리카 엘엘씨 | Ibc 병합 후보들에 대한 템플릿 매칭 |
- 2024-06-05: WO PCT/KR2024/007749 patent/WO2024258110A1 (active, pending)
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24823630; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |