WO2023158376A2 - Data processing method, apparatus, electronic device, and storage medium - Google Patents
Data processing method, apparatus, electronic device, and storage medium
- Publication number
- WO2023158376A2 (application PCT/SG2023/050076)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- grid
- processed
- target
- sub
- display information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three-dimensional [3D] modelling for computer graphics
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—Three-dimensional [3D] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three-dimensional [3D] modelling for computer graphics
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating three-dimensional [3D] models or images for computer graphics
- G06T19/20—Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
Definitions
- the present disclosure provides a data processing method, apparatus, electronic device, and storage medium to achieve the technical effect of improving the comprehensiveness and efficiency of data collection.
- the present disclosure provides a data processing method, the method comprising: starting an acquisition device when an instruction for determining a three-dimensional model corresponding to a target subject is received; generating and displaying, on the display interface of the device to which the acquisition device belongs, a three-dimensional grid to be processed that covers the target subject when a preset acquisition condition is met; and adjusting target display information of a corresponding sub-grid in the three-dimensional grid to be processed based on the relative acquisition angle between the acquisition device and the three-dimensional grid to be processed.
- the present disclosure also provides a data processing device, which includes: a collection device startup module, configured to start the collection device when an instruction to determine the 3D model corresponding to the target subject is received; a to-be-processed 3D grid display module, configured to generate and display, on the display interface of the device to which the acquisition device belongs, a three-dimensional grid to be processed that covers the target subject when the preset acquisition conditions are met; and a target display information adjustment module, configured to adjust the target display information of the corresponding sub-grid in the three-dimensional grid to be processed based on the relative acquisition angle between the acquisition device and the three-dimensional grid to be processed.
- a collection device startup module, configured to start the collection device when an instruction to determine the 3D model corresponding to the target subject is received
- a to-be-processed 3D grid display module, configured to generate and display, on the display interface of the device to which the acquisition device belongs, a three-dimensional grid to be processed that covers the target subject when the preset acquisition conditions are met
- the present disclosure also provides an electronic device, which includes: one or more processors; and a storage device configured to store one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the above data processing method.
- the present disclosure further provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the above-mentioned data processing method is implemented.
- the present disclosure further provides a computer program product, including a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the above data processing method.
- FIG. 1 is a schematic flowchart of a data processing method provided by Embodiment 1 of the present disclosure
- FIG. 2 is a schematic diagram of a three-dimensional grid to be processed provided by Embodiment 1 of the present disclosure
- FIG. 3 is a schematic diagram of a target subject provided by Embodiment 1 of the present disclosure
- FIG. 4 is a schematic diagram of displaying a three-dimensional grid to be processed on a target subject provided by Embodiment 1 of the present disclosure
- FIG. 5 is a schematic diagram of a target subject provided by Embodiment 1 of the present disclosure.
- FIG. 6 is a schematic flow chart of a data processing method provided by Embodiment 2 of the present disclosure
- FIG. 7 is a schematic flowchart of a data processing method provided by Embodiment 3 of the present disclosure
- FIG. 8 is a schematic flowchart of a data processing method provided in Embodiment 4 of the present disclosure
- FIG. 9 is a structural block diagram of a data processing device provided in Embodiment 5 of the present disclosure
- FIG. 10 is a schematic structural diagram of an electronic device provided by Embodiment 6 of the present disclosure.
- FIG. 1 is a schematic flow chart of a data processing method provided by Embodiment 1 of the present disclosure.
- This embodiment of the present disclosure is applicable to the situation of collecting three-dimensional data corresponding to a target subject.
- This method can be executed by a data processing device.
- the apparatus may be implemented in the form of software and/or hardware and run on an electronic device, which may be a mobile terminal, a personal computer (PC), a server, or the like.
- the method of this embodiment includes:
- the device for executing the data processing method may be integrated in application software supporting data processing functions, and the software may be installed in electronic equipment, and the electronic equipment may be a mobile terminal or a PC terminal.
- the application software may be any software that collects features of an object; the specific software is not enumerated here, as long as feature collection can be realized.
- a corresponding application program can be developed based on this technical solution, and the application program can realize the functions mentioned in this technical solution.
- the target subject refers to any object whose features are to be collected, such as a wearable item or a doll. The item currently to be collected is taken as the target subject.
- the number of subjects in the captured frame may be one or more; if there is only one subject, that subject may be used as the target subject.
- the target subject can be dynamically determined based on the distance between the camera and each subject. For example, the closest subject may be taken as the target subject; that is, a preset program can be deployed in the terminal in advance to determine, based on that program, the subject closest to the camera device as the target subject. Alternatively, all subjects may be used as target subjects, or the target subject may be determined based on the relative distances among the multiple subjects in the frame: for example, the subject closest to the camera is the target subject, and subjects within a surrounding circle also serve as target subjects.
- if the subject closest to the camera is taken as the target subject but is inseparable from other subjects, for example because they are stacked, the indivisible group of subjects may also be used together as the target subject.
- the collection device may be a camera device with a shooting function.
- the camera device may be the camera of the terminal on which the application software is deployed, or a camera device that communicates with that terminal. If the software is deployed on a mobile terminal, the collection device may be the built-in camera of the mobile terminal.
- when the user triggers a button on the display screen, it may be considered that the system has received an instruction to determine the 3D model corresponding to the target subject, and the acquisition device may then be started to collect the target subject.
- when the preset acquisition condition is met, the three-dimensional mesh to be processed covering the target subject is generated and displayed on the display interface of the device to which the acquisition device belongs.
- the preset collection conditions include at least one of the following: a target subject is included in the field of view of the collection device; a collection control is triggered; and the display duration of the target subject in the field of view reaches a preset duration threshold.
- the field of view refers to the area that can be photographed by the acquisition device at one position, or the field of view that can be seen.
- the acquisition control may be a button displayed on the display interface of the application software; triggering the button requests acquisition of the target subject and generation of the three-dimensional grid covering it.
- the device to which the collection device belongs may be a terminal device bound to the collection device, or a terminal device that remotely communicates with the collection device.
- the 3D grid to be processed may be a 3D grid covering the target subject, used to guide the user to collect data of the target subject, and the 3D grid may be a hemispherical transparent cover grid, see FIG. 2 .
- the three-dimensional grid to be processed includes multiple sub-grids. For example, the sub-grids may be obtained after division according to latitude and longitude, or may be obtained after division according to a preset grid area.
- the advantage of determining the three-dimensional grid to be processed is that it can be determined whether the data collection of the target subject is completed according to the display information of each sub-grid in the three-dimensional grid to be processed, realizing the effect of visualization of the collection progress.
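The latitude/longitude division of the 3D grid mentioned above can be sketched as follows. This is a minimal illustration in Python; the band counts and the angular-extent representation are assumptions for illustration, not the disclosed implementation.

```python
def build_hemisphere_subgrids(lat_bands, lon_bands):
    """Divide a hemisphere into sub-grids by latitude/longitude bands.

    Returns a dict mapping each sub-grid number to its angular extent
    (lat_min, lat_max, lon_min, lon_max) in degrees. Latitude runs
    0..90 (equator to pole); longitude runs 0..360.
    """
    subgrids = {}
    grid_id = 0
    lat_step = 90.0 / lat_bands
    lon_step = 360.0 / lon_bands
    for i in range(lat_bands):
        for j in range(lon_bands):
            subgrids[grid_id] = (i * lat_step, (i + 1) * lat_step,
                                 j * lon_step, (j + 1) * lon_step)
            grid_id += 1
    return subgrids

# 3 latitude bands x 8 longitude bands -> 24 sub-grids, each of which
# would initially carry the original display information.
grids = build_hemisphere_subgrids(lat_bands=3, lon_bands=8)
```

Checking each sub-grid's display information against such a table is what makes the collection progress visible.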
- there may be various conditions that trigger the generation of the 3D mesh to be processed covering the target subject. For example, when it is detected that the target subject is within the field of view of the camera device, generation of the 3D mesh covering the target subject may be considered triggered; alternatively, generation of the three-dimensional grid may be considered triggered when the user triggers the acquisition control on the display screen.
- for a schematic diagram of the target subject, see FIG. 3; for the 3D mesh to be processed covering the target subject, see FIG. 2. The mesh can be displayed on the display interface of the device to which the acquisition device belongs.
- a schematic diagram of this display can be seen in FIG. 4.
- the original display information of the 3D grid to be processed can be set to translucent white, or it can be a special-effect map through which the target subject remains visible. If the original display information is a special-effect map, the light-wave band corresponding to the map should not be captured by the collection device.
- a three-dimensional mesh to be processed covering the target subject may be generated on a display interface of the device to which the acquisition device belongs.
- the position of the mobile terminal may change, and correspondingly, the relative collection angle between the camera device and the target subject may also change.
- the world coordinate system can be established with the center point of the target subject as the center. When the camera rotates, the coordinates of the camera device in the world coordinate system will change, and the relative acquisition angle to the target subject will also change.
- the target display information may be color information or pattern information, and the color information or pattern information is not limited here.
- the target display information is different from the original display information of the 3D mesh to be processed.
- for example, the original display information is translucent white and the target display information can be purple; or, according to actual needs, the image collected at a given angle can be used as the target display information, that is, the image captured at that angle can be pasted onto the corresponding sub-grid.
- for example, if the relative acquisition angle between the acquisition device and the three-dimensional grid to be processed is 30 degrees, and 30 degrees corresponds to the sub-grid numbered 5, the display information of sub-grid 5 can be adjusted from the original translucent white (view A in FIG. 5) to purple (view B in FIG. 5).
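The angle-to-sub-grid adjustment in this example can be sketched as follows. The angle table, color names, and function names are illustrative assumptions; only the "30 degrees corresponds to sub-grid 5" pairing comes from the text.

```python
# Hypothetical table: sub-grid number -> half-open range [lo, hi) of
# relative acquisition angles (degrees) that it covers.
SUBGRID_ANGLES = {4: (20, 30), 5: (30, 40), 6: (40, 50)}

ORIGINAL_DISPLAY = "translucent-white"
TARGET_DISPLAY = "purple"

def subgrid_for_angle(angle_deg):
    """Return the sub-grid number whose angular range contains the
    current relative acquisition angle, or None if no range matches."""
    for grid_id, (lo, hi) in SUBGRID_ANGLES.items():
        if lo <= angle_deg < hi:
            return grid_id
    return None

def mark_collected(angle_deg, display_info):
    """Adjust the display information of the matching sub-grid to the
    target display information, marking that angle as collected."""
    grid_id = subgrid_for_angle(angle_deg)
    if grid_id is not None:
        display_info[grid_id] = TARGET_DISPLAY
    return grid_id

display = {gid: ORIGINAL_DISPLAY for gid in SUBGRID_ANGLES}
mark_collected(30, display)   # 30 degrees falls in sub-grid 5
```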
- in this embodiment, when an instruction to determine the three-dimensional model corresponding to the target subject is received, the acquisition device is started; when the preset acquisition conditions are met, the 3D grid to be processed covering the target subject is generated and displayed on the display interface of the device to which the acquisition device belongs; then, based on the relative acquisition angle between the acquisition device and the 3D grid to be processed, the target display information of the corresponding sub-grid is adjusted. In this way, whether the target subject has already been collected at a given relative acquisition angle can be determined by checking whether the corresponding sub-grid has been adjusted to the target display information. This not only avoids repeated and missed data collection, but also improves the comprehensiveness, convenience, and accuracy of data collection.
- FIG. 6 is a schematic flowchart of a data processing method provided in Embodiment 2 of the present disclosure, which elaborates on S120 on the basis of the foregoing embodiments. Technical terms that are the same as or correspond to those in the foregoing embodiments are not repeated here. As shown in FIG. 6, the method includes the following steps:
- the three-dimensional grid to be processed can be a hemisphere, or a cube or a cuboid.
- the target subject is placed on the plane B, and the intersection point of the ray of the camera device and the plane B is the center point.
- the radius can be determined based on the distance between the center point, where the ray of the camera device intersects plane B, and an edge point of the target subject. Based on the center point and radius, the hemispherical mesh wrapping the target subject is determined.
- determining the center point corresponding to the three-dimensional grid to be processed includes: determining the center point according to the visual angle of the acquisition device and the plane to which the target subject belongs; or determining the center point of the three-dimensional grid to be processed according to a trigger operation on the display interface.
- the trigger operation may include but not limited to operations such as clicking, touching, sliding, zooming in with two fingers, and zooming out with two fingers.
- positioning technology can be used to detect the position where the collection ray emitted by the collection device intersects the plane to which the target subject belongs, and this intersection can be used as the center point. For example, simultaneous localization and mapping (SLAM) technology may be used for positioning.
- the center point of the three-dimensional grid to be processed can also be determined according to the user's trigger operation on the display interface. For example, when a trigger operation on the target subject is detected, the trigger position can be used as the center point.
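The ray-plane intersection used to locate the center point can be sketched with plain vector arithmetic. The function below is an illustrative assumption of how such an intersection might be computed, not the disclosed implementation.

```python
def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_plane_intersection(ray_origin, ray_dir, plane_point, plane_normal):
    """Point where the collection ray meets the plane the target subject
    rests on; that point can serve as the grid's center point. Returns
    None when the ray is parallel to the plane or points away from it."""
    denom = _dot(ray_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None                       # ray parallel to the plane
    t = _dot(_sub(plane_point, ray_origin), plane_normal) / denom
    if t < 0:
        return None                       # plane is behind the camera
    return tuple(o + t * d for o, d in zip(ray_origin, ray_dir))

# Camera 2 m above the ground plane y = 0, looking straight down.
center = ray_plane_intersection(ray_origin=(0.0, 2.0, 0.0),
                                ray_dir=(0.0, -1.0, 0.0),
                                plane_point=(0.0, 0.0, 0.0),
                                plane_normal=(0.0, 1.0, 0.0))
# center == (0.0, 0.0, 0.0)
```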
- the display size of the 3D grid to be processed can be adjusted. If the 3D grid to be processed is hemisphere-like, the display size can be characterized by the radius; if it is a cube, by the side length. In this embodiment, the display size may be determined based on the display ratio of the target subject in the display interface.
- for example, if the display ratio of the target subject relative to the display interface is a first ratio in the range 20%-50%, the radius can be a first length; if the ratio is in the range 50%-80%, the radius can be a second length, where the first length is different from the second length.
- determining the 3D grid to be processed according to the display size may be: determining the radius length of the 3D grid to be processed according to the display size of the target subject in the image display area, so as to determine and display the 3D grid to be processed based on the radius length and the center point.
- take the case where the 3D grid to be processed is a hemispherical grid as an example.
- when the target subject is displayed in the display interface, its share of the display area can be calculated as a percentage. For example, when the display ratio is detected to be 80%, the target subject is relatively large compared with the display interface, and the radius of the 3D grid to be processed is correspondingly large; if the display ratio is 20%, the target subject is relatively small relative to the display interface, and the radius of the 3D grid to be processed is also relatively small. The 3D grid to be processed can be drawn based on this method. In practical applications, the radius length can be preset or determined dynamically.
- if the 3D grid to be processed is a hemispherical grid, generating and displaying the 3D grid covering the target subject according to the center point includes: drawing a hemispherical grid corresponding to the target subject according to a preset radius length and the center point, as the grid to be used; and determining and displaying the 3D grid to be processed based on an adjustment operation on the grid to be used.
- the adjustment operation may be a two-finger dragging zoom-in operation, or a two-finger dragging zoom-out operation.
- One or more radius lengths can be preset.
- corresponding adapted radius lengths can be configured in advance for different display ratios of the target subject on the display interface, and a mapping between each display ratio and the corresponding radius length can be established. For example, if the display ratio is 80%, the matching radius length is 30 cm; if the display ratio is 20%, the matching radius length is 10 cm.
- the adapted radius length can then be retrieved from the mapping, and the corresponding hemispherical grid drawn based on it as the grid to be used. If only one radius length is preset, that radius length can be used directly when drawing the hemispherical grid.
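The ratio-to-radius mapping can be sketched as a simple lookup. Only the 80% to 30 cm and 20% to 10 cm pairs come from the example in the text; the 50% entry and the nearest-match selection rule are assumptions for illustration.

```python
# Hypothetical mapping from display ratio to an adapted radius length.
RATIO_TO_RADIUS_CM = [(0.20, 10.0), (0.50, 20.0), (0.80, 30.0)]

def adapted_radius(display_ratio):
    """Pick the radius whose configured display ratio is closest to
    the measured ratio of the target subject on the display interface."""
    ratio, radius = min(RATIO_TO_RADIUS_CM,
                        key=lambda entry: abs(entry[0] - display_ratio))
    return radius

adapted_radius(0.8)   # -> 30.0 cm
adapted_radius(0.2)   # -> 10.0 cm
```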
- the hemispherical grid drawn at this time may not fully cover the target subject. In that case, the size of the hemispherical grid can be adjusted by a two-finger zoom-in or zoom-out operation to obtain the final 3D grid to be processed.
- determine the identification information of each sub-grid in the three-dimensional grid to be processed, and determine the material parameters of each sub-grid.
- the identification information may be a grid number.
- the material parameter can be a translucent material.
- a corresponding grid number can be set for each sub-grid in the 3D grid to be processed, so that when a sub-grid is collected, its grid number can be sent to the client to notify it of the acquisition status of the target subject. The material parameter of each sub-grid can also be set to translucent.
- a texture can be set for each sub-grid in the 3D grid to be processed.
- the map can be set according to actual needs. Multiple map templates can be preset, which can be selected by the user according to actual needs.
- the target display information may be the display information after adjusting the color, transparency, and depth of the texture.
- the center point is determined based on the intersection of the collection ray of the collection device with the plane to which the target subject belongs, or according to a trigger operation on the display interface, so that the three-dimensional grid to be processed can be drawn to cover the target subject and displayed. When data collection is performed based on this grid, the comprehensiveness of data collection can be improved and missed data can be avoided.
- FIG. 7 is a schematic flowchart of a data processing method provided in Embodiment 3 of the present disclosure, which elaborates on S130 on the basis of the foregoing embodiments. Technical terms that are the same as or correspond to those in the foregoing embodiments are not repeated here. As shown in FIG. 7, the method includes:
- this technical solution can be deployed in a terminal device; correspondingly, the target subject is collected through the collection device in the terminal device.
- the location information of the collection device in space can be adjusted to collect the target subject based on the collection device.
- based on the collection position and the display information of each sub-grid in the three-dimensional grid to be processed, the target sub-grid to be processed is determined. In practical applications, suppose the acquisition device collects subject data at a certain viewing angle; this viewing angle corresponds to a sub-grid in the 3D grid to be processed, and that sub-grid can be used as the target sub-grid to be processed.
- if the display information of the sub-grid is still the original display information, the collection angle corresponding to this sub-grid has not yet been collected; to obtain comprehensive data, data needs to be collected at this angle.
- the target sub-grid to be processed is determined.
- the sub-grid nearest to the acquisition device can be determined based on the distance between the acquisition device and each sub-grid, and its display information can then be checked. For example, if the display information of the nearest sub-grid has already been adjusted, the data at this collection angle has been collected, and corresponding prompt information can be fed back; if it has not been adjusted, the data at this angle has not yet been collected and can now be collected, improving collection efficiency.
- determining the target sub-grid to be processed based on the acquisition position and the display information of each sub-grid in the three-dimensional grid to be processed includes: determining the to-be-determined sub-grid with the smallest distance from the acquisition position in the three-dimensional grid to be processed; and, if the display information of the to-be-determined sub-grid is not the target display information, taking it as the target sub-grid to be processed. If its display information is already the target display information, the data at this collection angle has been collected, and this can be fed back to the terminal device to remind the user.
- the target subject can be collected at this angle.
- the to-be-determined sub-grid can be used as the target sub-grid to be processed. The above method avoids the problem of repeated data collection.
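The nearest-sub-grid selection described above can be sketched as follows. The sub-grid positions, color names, and function names are hypothetical illustrations, not the disclosed implementation.

```python
import math

ORIGINAL_DISPLAY = "translucent-white"
TARGET_DISPLAY = "purple"

# Hypothetical state: sub-grid number -> (grid center position, display info).
subgrids = {
    1: ((0.0, 1.0, 0.0), TARGET_DISPLAY),    # already collected
    2: ((1.0, 0.5, 0.0), ORIGINAL_DISPLAY),
    3: ((-1.0, 0.5, 0.0), ORIGINAL_DISPLAY),
}

def nearest_pending_subgrid(camera_pos, subgrids):
    """Sub-grid closest to the acquisition position. Returns None when
    that sub-grid already shows the target display information, i.e.
    the angle was already collected and the user should be prompted."""
    grid_id = min(subgrids,
                  key=lambda g: math.dist(subgrids[g][0], camera_pos))
    _center, display = subgrids[grid_id]
    return None if display == TARGET_DISPLAY else grid_id

nearest_pending_subgrid((1.2, 0.5, 0.0), subgrids)   # sub-grid 2 is pending
```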
- S350: determine the projection center point of the acquisition device on the plane to which the target sub-grid to be processed belongs, and when the projection center point is detected to be within a preset projection threshold range, adjust the display information of the target sub-grid to be processed to the target display information.
- the projection threshold range is determined based on the grid center point of the corresponding sub-grid to be processed and the distance information between the acquisition device and the grid center point.
- when the collection device emits a collection ray to collect the target subject, the projection center point of the ray on the plane to which the target sub-grid to be processed belongs can be calculated, and it can then be detected whether this projection center point is within the preset projection threshold range; if it is, the display information of the target sub-grid to be processed is adjusted to the target display information. For example, in practical applications, the intersection of the projection-direction ray of the collection device with the plane of the sub-grid nearest the collection device can be calculated, and if the intersection falls within the projection threshold range of the grid center point, the color of that grid is adjusted.
- the number of the sub-grid can also be transmitted to the client through a message. If the intersection does not fall within the projection threshold range of the grid center point, the relative acquisition angle is adjusted until the target sub-grid to be processed is found, and the grid color is then adjusted. If the projection center point is within the preset projection threshold range, data collection can be performed on the target subject, and the display information of the target sub-grid to be processed can also be changed to the target display information.
- adjusting the display information of the target sub-grid to be processed to the target display information includes: if the projection center point is within the preset projection threshold range, determining that the target subject has been collected at the relative collection angle, and adjusting the display information of the target sub-grid to be processed to the target display information.
- the projection center point corresponding to the collection light is within the preset projection threshold range, it means that data collection can be performed on the target subject.
- the display information of the target sub-grid to be processed may be adjusted to the target display information.
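The projection-threshold test described above can be sketched as follows. The 2-D projection coordinates, threshold value, and function names are assumptions for illustration.

```python
import math

TARGET_DISPLAY = "purple"

def within_projection_threshold(projection_point, grid_center, threshold):
    """True when the projection center point of the collection ray lies
    within the threshold radius of the sub-grid's center point."""
    return math.dist(projection_point, grid_center) <= threshold

def try_collect(grid_id, projection_point, grid_center, threshold, display):
    """Adjust the target sub-grid's display information only when the
    projection test passes; otherwise the relative acquisition angle
    still needs adjusting."""
    if within_projection_threshold(projection_point, grid_center, threshold):
        display[grid_id] = TARGET_DISPLAY
        return True
    return False

# Hypothetical projection coordinates and threshold.
display = {5: "translucent-white"}
try_collect(5, (0.05, 0.0), (0.0, 0.0), threshold=0.1, display=display)
# display[5] is now "purple"
```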
- FIG. 8 is a schematic flowchart of a data processing method provided in Embodiment 4 of the present disclosure.
- the method includes the following steps:
- the target device may be the device used to capture the target subject.
- the number of the sub-grid updated to the target display information and the subject data corresponding to that sub-grid can be sent to the target device as a data pair, so that the target device can determine which collection angle and which view of the target subject the current data belongs to.
- the target device may also determine the data collection situation based on the number of received sub-grid numbers, so as to dynamically adjust the data collection progress information.
- the collection progress information of the target subject on the display interface is adjusted.
- a collection progress bar corresponding to the target subject may be preset, or may be a collection ring, and displayed on the display screen.
- the collection progress bar and the collection ring are matched with the collection control.
- the user can determine the collection progress from the percentage shown in the collection progress information, where the percentage is determined from the total number of grids and the number of grids adjusted to the target display information; for example, the collection progress percentage may be 10%.
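The percentage computation can be sketched as follows; the 2-of-20 example data is hypothetical, chosen only to produce a 10% reading.

```python
def collection_progress(display_info, target="purple"):
    """Percentage of sub-grids whose display information has been
    adjusted to the target display information."""
    total = len(display_info)
    done = sum(1 for value in display_info.values() if value == target)
    return round(100.0 * done / total) if total else 0

# Hypothetical state: 2 of 20 sub-grids collected -> 10%.
display = {i: ("purple" if i < 2 else "translucent-white")
           for i in range(20)}
collection_progress(display)   # -> 10
```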
- when the collection progress information reaches 100%, the collection of the target subject is complete; a corresponding prompt, such as a beep, another notification sound, or music, can also be given at this time.
- the collection of the target subject is completed, and the collected feature data may be used as the 3D data of the target subject.
- a 3D model corresponding to the target subject can be constructed based on the 3D data and displayed on the terminal device, so that images of the target subject at different visual angles can be determined based on a trigger operation on the 3D model.
- the three-dimensional data can be processed by model reconstruction technology to construct the three-dimensional model corresponding to the target subject. Various computer software can also be used: for example, first create an engineering project in computer-aided design (CAD) or Revit software, import the determined three-dimensional data, and set the corresponding configuration items based on this information; after the software parameters are set, a rendering simulation of the target subject can be performed to construct the 3D model, which can then be shown on the display interface.
- the target display interface is the display interface of the corresponding terminal device when the three-dimensional model is uploaded.
- Images of the target subject at different visual angles can be obtained by triggering operations on the 3D model. For example, when the user clicks on a position of the model, an image of the target subject at the visual angle of the position can be obtained.
- when the display information of each sub-grid is the target display information, the three-dimensional data of the target subject is obtained, and the three-dimensional model corresponding to the target subject is constructed and displayed, so as to determine the images of the target subject under different visual angles; this improves collection efficiency and accuracy while ensuring the output of the three-dimensional model.
- FIG. 9 is a structural block diagram of a data processing device provided in Embodiment 5 of the present disclosure, which can execute the data processing method provided in any embodiment of the present disclosure, and has corresponding functional modules and effects for executing the method.
- the device includes: a collection device startup module 510, a to-be-processed three-dimensional grid display module 520, and a target display information adjustment module 530.
- the collection device activation module 510 is configured to start the collection device when receiving an instruction to determine the 3D model corresponding to the target subject; the to-be-processed 3D grid display module 520 is configured to generate and display the to-be-processed 3D grid covering the target subject on the display interface of the device to which the collection device belongs when the preset collection conditions are met.
- the target display information adjustment module 530 is configured to adjust the target display information of the corresponding sub-grids in the three-dimensional grid to be processed based on the relative acquisition angle between the acquisition device and the three-dimensional grid to be processed.
- the preset collection conditions include at least one of the following: the target subject is included in the field of view of the collection device; the collection control is triggered; the display duration of the target subject in the field of view reaches the preset duration threshold.
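The "at least one of" logic of these preset collection conditions can be sketched as a simple predicate. All parameter names are assumptions for illustration; the disclosure does not prescribe an implementation:

```python
def preset_conditions_met(subject_in_view: bool,
                          control_triggered: bool,
                          view_duration: float,
                          duration_threshold: float) -> bool:
    """True if at least one preset collection condition holds:
    the target subject is in the field of view, the collection control
    was triggered, or the in-view display duration reached the threshold."""
    return (subject_in_view
            or control_triggered
            or view_duration >= duration_threshold)
```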
- the to-be-processed 3D grid display module 520 includes a center point determination unit and a to-be-processed 3D grid display unit.
- the center point determination unit is configured to determine a center point corresponding to the 3D grid to be processed based on the target subject; the to-be-processed 3D grid display unit is configured to generate and display, based on the center point, the to-be-processed 3D grid covering the target subject.
- the center point determination unit includes a center point determination subunit.
- the center point determination subunit is configured to determine the center point according to the visual angle of the acquisition device and the plane to which the target subject belongs; or, to determine the center point of the three-dimensional grid to be processed according to a trigger operation on the display interface.
- the three-dimensional grid to be processed is a hemispherical grid
- the to-be-processed three-dimensional grid display unit further includes a first radius length determination subunit.
- the first radius length determination subunit is configured to determine the radius length of the three-dimensional grid to be processed according to the display size of the target subject in the image display area, so as to determine and display the three-dimensional grid to be processed based on the radius length and the center point.
- the 3D grid to be processed is a hemispherical grid
- the to-be-processed 3D grid display unit further includes a second radius length determination subunit and a to-be-processed 3D grid display subunit.
- the second radius length determination subunit is configured to draw a hemispherical grid corresponding to the target subject according to the preset radius length and the center point, as the grid to be used; the to-be-processed three-dimensional grid display subunit is configured to determine and display the three-dimensional grid to be processed based on an adjustment operation on the grid to be used.
- the device further includes a sub-grid parameter determination subunit.
- the sub-grid parameter determination subunit is configured to determine the identification information of each sub-grid in the three-dimensional grid to be processed and to determine the material parameters of each sub-grid.
- the sub-grid parameter determination subunit includes a texture setting unit.
- the texture setting unit is configured to set a texture for each sub-grid in the three-dimensional grid to be processed.
- the target display information adjustment module 530 includes a collection position determination unit, a target sub-grid determination unit, and a target display information adjustment unit.
- the collection position determination unit is configured to adjust the relative collection angle between the collection device and the target subject, and determine the collection position of the collection device;
- the target sub-grid determination unit is configured to determine the target sub-grid to be processed based on the collection position and the display information of each sub-grid in the three-dimensional grid to be processed;
- the target display information adjustment unit is configured to determine the projection center point of the acquisition device on the plane to which the target sub-grid to be processed belongs, and, when it is detected that the projection center point is within the preset projection threshold range, to adjust the display information of the target sub-grid to be processed to the target display information; wherein the projection threshold range is determined based on the grid center point of the corresponding sub-grid to be processed.
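As a sketch of the projection-threshold check described above: the acquisition device position is projected onto the plane of the target sub-grid, and acquisition counts as complete when the projection falls within the threshold range around the grid center point. The coordinate representation and all names below are illustrative assumptions, not the disclosure's implementation:

```python
import math

def projection_within_threshold(camera_pos, grid_center, grid_normal, threshold):
    """Project the acquisition device position onto the plane of the
    target sub-grid (the plane through grid_center with normal
    grid_normal) and test whether that projection center point lies
    within the threshold range around the grid center point."""
    # unit normal of the sub-grid plane
    norm = math.sqrt(sum(n * n for n in grid_normal))
    n = tuple(c / norm for c in grid_normal)
    # signed distance from the camera to the plane, along the normal
    d = sum((c - g) * ni for c, g, ni in zip(camera_pos, grid_center, n))
    projection = tuple(c - d * ni for c, ni in zip(camera_pos, n))
    return math.dist(projection, grid_center) <= threshold
```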
- the target sub-grid determination unit includes a to-be-determined sub-grid determination subunit and a target to-be-processed sub-grid determination subunit.
- the to-be-determined sub-grid determination subunit is configured to determine the to-be-determined sub-grid with the smallest distance from the collection position in the three-dimensional grid to be processed; the target to-be-processed sub-grid determination subunit is configured to determine the to-be-determined sub-grid as the target sub-grid to be processed if its display information is not the target display information.
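The nearest-sub-grid selection can be illustrated as follows, assuming each sub-grid records its grid center point and whether its display information is already the target display information; the field names are assumptions for the sketch:

```python
import math

def target_pending_subgrid(collection_pos, subgrids):
    """Pick the sub-grid with the smallest distance to the collection
    position; it becomes the target sub-grid to be processed only if
    its display information is not yet the target display information."""
    nearest = min(subgrids, key=lambda g: math.dist(collection_pos, g["center"]))
    return None if nearest["is_target"] else nearest
```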
- the target display information adjustment unit includes a collection completion subunit and a target display information adjustment subunit.
- the acquisition completion subunit is configured to determine that the target subject has been acquired under the relative acquisition angle if the projection center point is within the preset projection threshold range; the target display information adjustment subunit is configured to adjust the display information of the target sub-grid to be processed to the target display information.
- the target display information is different from the original display information of the three-dimensional grid to be processed; the target display information includes color information or pattern information.
- the device further includes: a data sending module.
- the data sending module is configured to correspondingly send, to the target device, the number of the sub-grid updated to the target display information and the collected subject data.
- the device further includes: a collection progress information adjustment module.
- the acquisition progress information adjustment module is configured to adjust the acquisition progress information of the target subject on the display interface when it is detected that the display information of the corresponding sub-grid is adjusted to the target display information.
- the device further includes: a three-dimensional data acquisition module.
- the 3D data acquisition module is configured to determine that the acquisition of the target subject is completed and to obtain the three-dimensional data of the target subject when it is detected that the display information of each sub-grid in the 3D grid to be processed is the target display information.
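The completion test implied here reduces to checking the display state of every sub-grid. A minimal sketch, assuming each sub-grid carries an `is_target` flag (an illustrative name, not from the disclosure):

```python
def collection_finished(subgrids) -> bool:
    """True once every sub-grid's display information has been adjusted
    to the target display information, i.e. acquisition is complete."""
    return all(g["is_target"] for g in subgrids)
```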
- the 3D data acquisition module includes a 3D model building unit.
- the 3D model building unit is configured to construct and display a 3D model corresponding to the target subject based on the 3D data, so as to determine images of the target subject at different visual angles based on a trigger operation on the 3D model.
- the target display information of the corresponding sub-grid in the 3D grid to be processed is adjusted, so that when the target subject is collected, whether the target subject has been collected under the relative collection angle is determined by judging whether the corresponding sub-grid in the 3D grid to be processed has been adjusted to the target display information; this not only avoids repeated and missed data collection but also improves the accuracy of data collection.
- the data processing device provided in the embodiments of the present disclosure can execute the data processing method provided in any embodiment of the present disclosure, and has corresponding functional modules and effects for executing the method.
- the multiple units and modules contained in the above-mentioned device are only divided according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be realized; in addition, the names of the multiple functional units are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the embodiments of the present disclosure.
- Embodiment 6 FIG. 10 is a schematic structural diagram of an electronic device provided in Embodiment 6 of the present disclosure; reference is now made to FIG. 10.
- the terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDA), tablet computers (PAD), portable multimedia players (PMP), and vehicle-mounted terminals (e.g., vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (TV) and desktop computers.
- the electronic device 600 shown in FIG. 10 is only an example, and should not limit the functions and scope of use of the embodiments of the present disclosure.
- an electronic device 600 may include a processing device (such as a central processing unit, a graphics processing unit, etc.) 601, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the electronic device 600.
- the processing device 601, ROM 602 and RAM 603 are connected to each other through a bus 604.
- An input/output (Input/Output, I/O) interface 605 is also connected to the bus 604.
- the following devices can be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 608 including, for example, a magnetic tape and a hard disk; and a communication device 609.
- the communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data.
- although FIG. 10 shows the electronic device 600 having various means, it is not required to implement or have all of the means shown.
- the processes described above with reference to the flowcharts may be implemented as computer software programs.
- the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
- the computer program may be downloaded and installed from a network via communication means 609 , or from storage means 608 , or from ROM 602 .
- when the computer program is executed by the processing device 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
- Embodiment 7 provides a computer storage medium on which a computer program is stored, and when the program is executed by a processor, the data processing method provided in the above embodiment is implemented.
- the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
- a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof.
- Examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, RAM, ROM, an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium containing or storing a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. The propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above.
- the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and it may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
- the program code contained on the computer-readable medium can be transmitted by any appropriate medium, including but not limited to: an electric wire, an optical cable, radio frequency (RF), etc., or any suitable combination of the above.
- the client and the server can communicate using any currently known or future-developed network protocol such as HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
- Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist independently without being incorporated into the electronic device.
- the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device is caused to: start the acquisition device when receiving an instruction to determine the three-dimensional model corresponding to the target subject; when the preset acquisition conditions are met, generate and display, on the display interface of the device to which the acquisition device belongs, a three-dimensional grid to be processed covering the target subject; and, based on the relative collection angle between the acquisition device and the three-dimensional grid to be processed, adjust the target display information of the corresponding sub-grid in the three-dimensional grid to be processed.
- Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer via any kind of network, including a LAN or WAN, or may be connected to an external computer (eg, via the Internet using an Internet service provider).
- each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
- the units involved in the embodiments described in the present disclosure may be implemented by means of software or by means of hardware. Wherein, the name of the unit does not constitute a limitation on the unit itself in one case, for example, the first obtaining unit may also be described as "a unit that obtains at least two Internet Protocol addresses".
- the functions described herein above may be performed at least in part by one or more hardware logic components.
- exemplary types of hardware logic components include: field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), and so on.
- a machine-readable medium may be a tangible medium, which may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device.
- a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- Examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard drive, RAM, ROM, EPROM or flash memory, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above.
- Example 1 provides a data processing method, the method including: when receiving an instruction to determine a three-dimensional model corresponding to the target subject, starting the acquisition device; when the preset acquisition conditions are met, generating and displaying, on the display interface of the device to which the acquisition device belongs, the three-dimensional grid to be processed covering the target subject; and, based on the relative acquisition angle between the acquisition device and the three-dimensional grid to be processed, adjusting the target display information of the corresponding sub-grid in the three-dimensional grid to be processed.
- Example 2 provides a data processing method, further comprising: the preset acquisition condition includes at least one of the following: the field of view of the acquisition device includes the target subject; the acquisition control is triggered; the display duration of the target subject in the field of view reaches a preset duration threshold.
- Example 3 provides a data processing method, further comprising: determining a center point corresponding to the three-dimensional grid to be processed based on the target subject; and generating and displaying, according to the center point, the three-dimensional grid to be processed covering the target subject.
- Example 4 provides a data processing method, further comprising: determining the center point according to the visual angle of the collection device and the plane to which the target subject belongs; or , according to the trigger operation on the display interface, determine the center point of the three-dimensional grid to be processed.
- Example 5 provides a data processing method, further comprising: the three-dimensional grid to be processed is a hemispherical grid, and according to the target subject in the image display area display size, determining the radius length of the 3D grid to be processed, so as to determine and display the 3D grid to be processed based on the radius length and the center point.
- Example 6 provides a data processing method, further comprising: the three-dimensional grid to be processed is a hemispherical grid, and according to the preset radius length and the center point , drawing a hemispherical grid corresponding to the target subject as a grid to be used; based on an adjustment operation on the grid to be used, determining and displaying the three-dimensional grid to be processed.
- Example 7 provides a data processing method, further comprising: determining the identification information of each sub-grid in the three-dimensional grid to be processed, and determining the material parameters of each sub-grid.
- Example 8 provides a data processing method, further comprising: setting a texture for each sub-grid in the three-dimensional grid to be processed.
- Example 9 provides a data processing method, further comprising: adjusting the relative collection angle between the collection device and the target subject and determining the collection position of the collection device; determining the target sub-grid to be processed based on the collection position and the display information of each sub-grid in the three-dimensional grid to be processed; determining the projection center point of the collection device on the plane to which the target sub-grid to be processed belongs, and, when it is detected that the projection center point is within the preset projection threshold range, adjusting the display information of the target sub-grid to be processed to the target display information; wherein the projection threshold range is determined based on the grid center point of the corresponding sub-grid to be processed.
- Example 10 provides a data processing method, further comprising: determining the sub-grid to be determined with the smallest distance from the collection position in the 3D grid to be processed; if If the display information of the to-be-determined sub-grid is not the target display information, then determine the to-be-determined sub-grid as the target to-be-processed sub-grid.
- Example 11 provides a data processing method, further comprising: if the projection center point is within the preset projection threshold range, determining that the acquisition of the target subject has been completed under the relative acquisition angle; and adjusting the display information of the target sub-grid to be processed to the target display information.
- Example 12 provides a data processing method, further comprising: the target display information is different from the original display information of the three-dimensional grid to be processed; the target display The information includes color information or pattern information.
- Example 13 provides a data processing method, further comprising: correspondingly sending, to the target device, the number of the sub-grid updated to the target display information and the collected subject data.
- Example 14 provides a data processing method, further comprising: when it is detected that the display information of the corresponding sub-grid is adjusted to the target display information, adjusting the collection progress information of the target subject on the display interface.
- Example 15 provides a data processing method, further comprising: when it is detected that the display information of each sub-grid in the three-dimensional grid to be processed is the target display information, determining that the acquisition of the target subject is completed and obtaining the three-dimensional data of the target subject.
- Example 16 provides a data processing method, further comprising: constructing and displaying a three-dimensional model corresponding to the target subject based on the three-dimensional data, so as to determine the images of the target subject under different viewing angles based on a trigger operation on the three-dimensional model.
- Example 17 provides a data processing device, including: an acquisition device activation module configured to start the acquisition device when receiving an instruction to determine a three-dimensional model corresponding to a target subject; a to-be-processed three-dimensional grid display module configured to generate and display the three-dimensional grid to be processed covering the target subject on the display interface of the device to which the acquisition device belongs when the preset acquisition condition is satisfied; and a target display information adjustment module configured to adjust the target display information of the corresponding sub-grid in the three-dimensional grid to be processed based on the relative acquisition angle between the acquisition device and the three-dimensional grid to be processed.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP23756730.0A EP4468249A4 (en) | 2022-02-21 | 2023-02-13 | DATA PROCESSING METHOD, APPARATUS, ELECTRONIC DEVICE AND RECORDING MEDIUM |
| US18/840,057 US20250157149A1 (en) | 2022-02-21 | 2023-02-13 | Data processing method, apparatus, electronic device, and storage medium |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210158271.4 | 2022-02-21 | ||
| CN202210158271.4A CN114549781A (zh) | 2022-02-21 | Data processing method, apparatus, electronic device and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2023158376A2 true WO2023158376A2 (zh) | 2023-08-24 |
| WO2023158376A3 WO2023158376A3 (zh) | 2023-10-12 |
Family
ID=81675363
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/SG2023/050076 Ceased WO2023158376A2 (zh) | 2022-02-21 | 2023-02-13 | 数据处理方法、装置、电子设备和存储介质 |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US20250157149A1 (zh) |
| EP (1) | EP4468249A4 (zh) |
| CN (1) | CN114549781A (zh) |
| WO (1) | WO2023158376A2 (zh) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115311429B (zh) * | 2022-08-09 | 2023-05-02 | 北京飞渡科技股份有限公司 | A Revit-based data export method and system |
Family Cites Families (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5106375B2 (ja) * | 2008-12-24 | 2012-12-26 | Japan Broadcasting Corporation (NHK) | Three-dimensional shape restoration device and program therefor |
| WO2019049331A1 (ja) * | 2017-09-08 | 2019-03-14 | Sony Interactive Entertainment Inc. | Calibration device, calibration system, and calibration method |
| CN109741404B (zh) * | 2019-01-10 | 2020-11-17 | 奥本未来(北京)科技有限责任公司 | A light field acquisition method based on a mobile device |
| US10950032B2 (en) * | 2019-05-03 | 2021-03-16 | Fyusion, Inc. | Object capture coverage evaluation |
| US11783443B2 (en) * | 2019-01-22 | 2023-10-10 | Fyusion, Inc. | Extraction of standardized images from a single view or multi-view capture |
| CN111192362B (zh) * | 2019-12-17 | 2023-04-11 | Wuhan University of Technology | Working method of a virtual compound-eye system for real-time acquisition of dynamic three-dimensional geographic scenes |
| US11562474B2 (en) * | 2020-01-16 | 2023-01-24 | Fyusion, Inc. | Mobile multi-camera multi-view capture |
| CN111507973B (zh) * | 2020-04-20 | 2024-04-12 | 上海商汤临港智能科技有限公司 | Target detection method and device, electronic device and storage medium |
| EP4139887A4 (en) * | 2020-04-23 | 2024-05-15 | Wexenergy Innovations Llc | System and method of measuring distances related to an object utilizing ancillary objects |
| EP3944192B1 (en) * | 2020-07-22 | 2025-10-22 | Dassault Systèmes | Method for 3d scanning of a real object |
| CN112348958B (zh) * | 2020-11-18 | 2025-02-21 | 北京沃东天骏信息技术有限公司 | Key frame image acquisition method, device and system, and three-dimensional reconstruction method |
| CN112486318B (zh) * | 2020-11-26 | 2024-07-26 | 北京字跳网络技术有限公司 | Image display method and device, readable medium, and electronic device |
| CN113850746B (zh) * | 2021-09-29 | 2024-11-22 | 北京字跳网络技术有限公司 | Image processing method and device, electronic device, and storage medium |
- 2022-02-21: CN application CN202210158271.4A filed (publication CN114549781A, pending)
- 2023-02-13: EP application EP23756730.0A filed (pending)
- 2023-02-13: WO application PCT/SG2023/050076 filed (ceased)
- 2023-02-13: US application US 18/840,057 filed (publication US 2025/0157149 A1, pending)
Also Published As
| Publication number | Publication date |
|---|---|
| EP4468249A2 (en) | 2024-11-27 |
| US20250157149A1 (en) | 2025-05-15 |
| CN114549781A (zh) | 2022-05-27 |
| WO2023158376A3 (zh) | 2023-10-12 |
| EP4468249A4 (en) | 2025-05-21 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | WWE | WIPO information: entry into national phase | Ref document number: 18840057; Country of ref document: US |
| 2024-08-21 | ENP | Entry into the national phase | Ref document number: 2023756730; Country of ref document: EP; Effective date: 20240821 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 23756730; Country of ref document: EP; Kind code of ref document: A2 |
| | WWP | WIPO information: published in national office | Ref document number: 18840057; Country of ref document: US |