WO2021237952A1 - Augmented reality display system and method - Google Patents
Augmented reality display system and method
- Publication number
- WO2021237952A1 (PCT/CN2020/109366)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- virtual
- processing module
- image
- user
- angle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
Definitions
- The invention relates to the field of augmented reality, and in particular to an augmented reality display system and method.
- Augmented reality technology identifies and locates scenes and objects in the real world and places virtual three-dimensional objects into the real scene in real time.
- The goal of this technology is to integrate the virtual world with the real world and allow the two to interact.
- Augmented reality mainly relies on two key technologies: one is the real-time rendering and display of three-dimensional models, and the other is the perception of the shape and position of real objects.
- In one prior approach, the mobile phone's three-degree-of-freedom gyroscope is used for positioning, but the user cannot move closer to or farther from the virtual three-dimensional object.
- In another, positioning is performed by calculating the relative position between the front camera of the mobile phone and a preset picture;
- the user needs the picture to realize six-degree-of-freedom positioning.
- The stability of this picture-based augmented reality is not high: the front camera cannot see the picture directly, and instead sees a deformed picture through the front lens.
- Moreover, the picture must remain within the visible range of the front camera for the 3D model to be displayed, which makes tracking and positioning very unstable, and the user's range of movement is limited by the location of the picture.
- To address this, the present invention provides an augmented reality display system, characterized in that it includes:
- a head-mounted display frame; the head-mounted display frame has a circular ring shape and is worn by the user;
- a groove; the opening of the groove is inclined upward, and one side of the groove is connected with the head-mounted display frame;
- a lens; the lens is arranged below the groove and connected to the other side of the groove, and the lens is made of a semi-reflective, semi-transparent material;
- a portable terminal; the portable terminal has a first side provided with a display unit and a second side provided with an image acquisition unit, the first side and the second side being arranged opposite to each other; the portable terminal also comprises a processing unit for processing the real-time images collected by the image acquisition unit and displaying them on the display unit;
- the size of the portable terminal is adapted to the size of the groove, and when the portable terminal is placed into the groove, the first side of the portable terminal faces the lens;
- the processing unit specifically includes:
- an inertial measurement module, used to collect and output real-time motion data;
- a pose processing module, connected to the inertial measurement module and configured to determine the current pose of the portable terminal according to the real-time image collected by the image acquisition unit and the real-time motion data at the corresponding time;
- an image processing module, connected to the pose processing module, for generating a virtual visual range and a virtual screen according to the current pose of the portable terminal, and reflecting the virtual screen via the display unit and the lens to the user for viewing.
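For illustration only (not part of the patent claims), the three modules above suggest a simple software decomposition; the following minimal Python sketch models the modules and the data passed between them, with all class and method names hypothetical:

```python
# Illustrative only -- not part of the patent. A minimal Python model of the
# processing unit's three modules and the data passed between them; all names
# are hypothetical.
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose:
    position: np.ndarray      # (x, y, z) of the portable terminal
    euler_angles: np.ndarray  # (pitch, yaw, roll) of the image acquisition unit

class InertialMeasurementModule:
    def read(self) -> np.ndarray:
        """Collect and output real-time motion data (accel + gyro samples)."""
        raise NotImplementedError

class PoseProcessingModule:
    def current_pose(self, frame: np.ndarray, motion: np.ndarray) -> Pose:
        """Determine the current pose from the real-time image and the
        real-time motion data captured at the corresponding time."""
        raise NotImplementedError

class ImageProcessingModule:
    def render(self, pose: Pose) -> np.ndarray:
        """Generate the virtual visual range and virtual screen for the pose;
        the result is shown on the display unit and reflected by the lens."""
        raise NotImplementedError
```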
- The image acquisition unit obtains the real-time image by collecting a feature point area that includes a plurality of feature points;
- the processing unit also includes:
- a feature point processing module, respectively connected to the image acquisition unit and the pose processing module, used to obtain the feature points in the real-time image, analyze the location of the feature point area, and output the feature points to the pose processing module according to the analysis result;
- the pose processing module uses the position of the feature point area as a reference and determines the current pose of the portable terminal according to the real-time motion data at the corresponding time.
- The virtual visual range includes virtual angle information, and the image processing module includes:
- a first processing component, connected to the pose processing module and configured to construct a spatial rectangular coordinate system with the image acquisition unit as the origin according to the current pose, determine the spatial rotation angle of the image acquisition unit and the included angle between the image acquisition unit and the user, generate the virtual angle information according to the spatial rotation angle and the included angle, and include it in the virtual visual range for output.
- The virtual visual range includes virtual location information, and the image processing module includes:
- a second processing component, connected to the pose processing module and configured to construct a spatial rectangular coordinate system with the image acquisition unit as the origin according to the current pose, select the center of the user's brow as a preset reference point, generate the virtual position information according to the offset between the portable terminal and the preset reference point and the interpupillary distance of the user, and include the virtual position information in the virtual visual range for output.
- The virtual visual range includes virtual field-of-view angle information, and the image processing module includes:
- a third processing component, connected to the pose processing module, for generating a preset virtual screen, generating the virtual field-of-view angle information according to the position difference between the curved edge of the preset virtual screen and the user, and including it in the virtual visual range for output.
- An augmented reality display method, applied to the display system according to any one of the above, characterized in that a head-mounted display frame, a groove, a lens and a portable terminal are provided in the display device;
- the display method includes:
- Step S1: the image acquisition unit acquires a real-time image of the plane directly above the portable terminal, and the inertial measurement module collects real-time motion data;
- Step S2: the pose processing module processes the current pose of the portable terminal according to the real-time image and the real-time motion data;
- Step S3: the image processing module generates a virtual visual range and a virtual screen according to the current pose, and sends the virtual screen to the display unit for display;
- Step S4: the virtual screen displayed on the display unit is reflected by the lens to the user for viewing.
- Step S1 includes:
- Step S11: the image acquisition unit obtains the real-time image by collecting a feature point region that includes a plurality of feature points;
- Step S12: the image acquisition unit acquires the feature points in the real-time image and analyzes whether the location of the feature point area meets the viewing angle requirement:
- if yes, go to Step S2;
- Step S13: if not, the image acquisition unit generates a prompt instruction indicating that the feature points are too few, feeds it back to the user, and then returns to Step S11.
- The image processing module determines the current pose through a vSLAM (visual simultaneous localization and mapping) algorithm.
- The virtual visual range includes virtual angle information, and Step S3 includes a first process of generating the virtual angle information;
- the first process includes:
- Step S31A: the image processing module constructs a spatial rectangular coordinate system with the image acquisition unit as the origin according to the current pose, and determines the spatial rotation angle of the image acquisition unit in the spatial rectangular coordinate system;
- Step S32A: acquire the included angle between the image acquisition unit and the user;
- Step S33A: generate the virtual angle information according to the spatial rotation angle and the included angle.
- The virtual angle information is expressed in terms of the following quantities:
- Θ is used to express the virtual angle information;
- θ_X is used to represent the pitch angle in the spatial rotation angle;
- θ_Y is used to represent the yaw angle in the spatial rotation angle;
- α is used to indicate the included angle;
- θ_Z is used to represent the roll angle in the spatial rotation angle.
- The virtual visual range includes virtual location information, and Step S3 includes a second process of generating the virtual location information;
- the second process includes:
- Step S31B: the image processing module constructs a spatial rectangular coordinate system with the image acquisition unit as the origin according to the current pose, selects the center of the user's brow as a preset reference point, and generates first position information according to the offset between the image acquisition unit and the preset reference point;
- Step S32B: the image processing module adjusts the first position information according to the interpupillary distance of the user, generates second position information, and outputs the second position information as the virtual position information.
- The first location information is expressed by the following formula:
- Θ′ = (−B_X, −B_Y, −B_Z)
- where Θ′ is used to express the first position information;
- B_X is used to indicate the projection of the offset on the X axis;
- B_Y is used to represent the projection of the offset on the Y axis;
- B_Z is used to indicate the projection of the offset on the Z axis.
- The second position information includes left-eye position information and right-eye position information, and is expressed by the following formulas:
- Θ″₁ = (−B_X, −B_Y − I/2, −B_Z)
- Θ″₂ = (−B_X, −B_Y + I/2, −B_Z)
- where Θ″₁ is used to express the left-eye position information;
- Θ″₂ is used to express the right-eye position information;
- B_X, B_Y and B_Z are used to indicate the projections of the offset on the X, Y and Z axes;
- I is used to represent the interpupillary distance of the user.
- The virtual visual range includes virtual field-of-view angle information, and Step S3 includes a third process of generating the virtual field-of-view information;
- the third process includes:
- Step S31C: the image processing module generates a preset virtual screen and displays it on the lens;
- Step S32C: the image processing module calculates the position difference between the curved edge of the preset virtual screen and the user;
- Step S33C: the image processing module determines the virtual field-of-view information according to the position difference.
- FIG. 1 is a schematic diagram of the structure in a preferred embodiment of the present invention.
- FIG. 2 is a schematic diagram of the portable terminal in a preferred embodiment of the present invention before it is placed in the groove;
- FIG. 3 is a schematic diagram of the portable terminal in a preferred embodiment of the present invention after it is placed in the groove;
- Figure 4 is a schematic diagram of the overall flow in a preferred embodiment of the present invention.
- FIG. 5 is a schematic flowchart of step S1 in a preferred embodiment of the present invention.
- FIG. 6 is a schematic flowchart of the first process in a preferred embodiment of the present invention.
- FIG. 7 is a schematic flowchart of the second process in a preferred embodiment of the present invention.
- FIG. 8 is a schematic flowchart of the third process in a preferred embodiment of the present invention.
- FIG. 9 is a schematic structural diagram of the third process in a preferred embodiment of the present invention.
- An augmented reality display system as shown in Figure 1 to Figure 3 includes:
- a head-mounted display frame 1, which has a circular ring shape and is worn by the user;
- a groove 2; the opening of the groove 2 is inclined upward, and one side of the groove 2 is connected with the head-mounted display frame 1;
- a lens 3; the lens 3 is arranged under the groove 2 and connected to the other side of the groove 2, and the lens 3 is made of a semi-reflective, semi-transparent material;
- a portable terminal 4; the portable terminal 4 has a first side provided with a display unit and a second side provided with an image acquisition unit 41, the first side and the second side being arranged opposite to each other; the portable terminal 4 further includes a processing unit used to process the real-time images collected by the image acquisition unit 41 and display them on the display unit;
- the size of the portable terminal 4 is adapted to the size of the groove 2, and when the portable terminal 4 is placed into the groove 2, the first side of the portable terminal 4 faces the lens 3;
- the processing unit specifically includes:
- an inertial measurement module, used to collect and output real-time motion data;
- a pose processing module, connected to the inertial measurement module, used to determine the current pose of the portable terminal 4 according to the real-time image collected by the image acquisition unit 41 and the real-time motion data at the corresponding time;
- an image processing module, connected to the pose processing module, used to generate a virtual visual range and a virtual screen according to the current pose of the portable terminal 4, and to reflect the virtual screen to the user through the display unit and the lens 3 for viewing.
- Display devices in the prior art often build the processing unit into the display device for data interaction, and use the mobile phone's three-degree-of-freedom gyroscope or preset pictures for positioning. As a result, the user cannot move closer to or farther from the virtual three-dimensional object, the tracking and positioning of three-dimensional objects are very unstable, and the user's range of movement is limited by the location of the picture.
- This technical solution provides an augmented reality display system in which the portable terminal 4 collects real-time images through the image acquisition unit 41 and real-time motion data through the inertial measurement module. The current pose is then determined by the pose processing module, and the image processing module generates the virtual visual range and the virtual screen according to the current pose.
- The virtual screen is reflected to the user through the display unit and the lens 3; the user observes the virtual screen and the real environment through the lens 3, so that the virtual screen and the real environment are superimposed and fused, achieving the purpose of augmented reality.
- A mobile phone can be selected as the portable terminal 4, so as to realize the rapid acquisition of real-time images and real-time motion data and the generation of the virtual visual range and virtual screen.
- The image processing unit needs to obtain the current pose of the portable terminal 4, construct a spatial rectangular coordinate system, and determine the virtual visual range based on the user's real visual range, so as to generate an appropriate virtual screen to be displayed on the lens 3.
- The first side of the portable terminal 4 faces the lens 3, and the area of the groove 2 that fits against the first side of the portable terminal 4 can either be a hollow (cut-out) design or be made of light-transmitting material, so that the virtual screen can be shown on the lens 3 for easy viewing by the user.
- A fixing device matching a hook 21, such as a fixing rope, can be provided on one side of the groove to help secure the portable terminal 4, so as to avoid changes in the position and angle of the portable terminal 4 that might otherwise be caused by changes in the user's posture during use.
- The image acquisition unit 41 obtains a real-time image by collecting a feature point area that includes a plurality of feature points;
- the processing unit also includes:
- a feature point processing module, connected to the image acquisition unit 41 and the pose processing module respectively, used to obtain the feature points in the real-time image, analyze the location of the feature point area, and output the feature points to the pose processing module according to the analysis result;
- the pose processing module uses the position of the feature point area as a reference and determines the current pose of the portable terminal 4 according to the real-time motion data at the corresponding time.
- The image acquisition unit 41 acquires real-time images and outputs them to the image processing unit. The image processing unit extracts the feature points in the image, analyzes the regions corresponding to the feature points, and generates an instruction according to the analysis result so that the image acquisition unit 41 collects more feature points of spatial-recognition value, so as to finally determine the area of the real scene to which the virtual screen corresponds.
- The virtual visual range includes virtual angle information, and the image processing module includes:
- a first processing component, connected to the pose processing module, used to construct a spatial rectangular coordinate system with the image acquisition unit 41 as the origin according to the current pose, determine the spatial rotation angle of the image acquisition unit 41 and the included angle between the image acquisition unit 41 and the user, generate the virtual angle information according to the spatial rotation angle and the included angle, and include it in the virtual visual range for output.
- The virtual visual range includes virtual location information, and the image processing module includes:
- a second processing component, connected to the pose processing module, used to construct a spatial rectangular coordinate system with the image acquisition unit 41 as the origin according to the current pose, select the center of the user's brow as the preset reference point, generate virtual position information according to the offset between the portable terminal 4 and the preset reference point and the interpupillary distance of the user, and include it in the virtual visual range for output.
- The virtual visual range includes virtual field-of-view information, and the image processing module includes:
- a third processing component, connected to the pose processing module, used to generate a preset virtual screen, generate virtual field-of-view information according to the position difference between the curved edge of the preset virtual screen and the user, and include it in the virtual visual range for output.
- An augmented reality display method, applied to any one of the above-mentioned display systems, is shown in FIG. 4; a head-mounted display frame 1, a groove 2, a lens 3 and a portable terminal 4 are arranged in the display device;
- the display method includes:
- Step S1: the image acquisition unit 41 acquires a real-time image of the plane directly above the portable terminal 4, and the inertial measurement module collects real-time motion data;
- Step S2: the pose processing module processes the current pose of the portable terminal 4 according to the real-time image and the real-time motion data;
- Step S3: the image processing module generates a virtual visual range and a virtual screen according to the current pose, and sends the virtual screen to the display unit for display;
- Step S4: the virtual screen displayed on the display unit is reflected by the lens 3 to the user for viewing.
- In this embodiment, an augmented reality display method is provided.
- The user wears the head-mounted display frame 1 with the eyes facing the lens 3 of the display device; the image acquisition unit 41 of the portable terminal 4 faces upward and collects the real-time image above the portable terminal 4.
- The inertial measurement module measures the real-time motion data of the portable terminal 4, and the current pose of the portable terminal 4 is finally determined.
- The actual visual range and the virtual visual range of the user are determined according to the current pose of the portable terminal 4.
- The real visual range here refers to the range over which the user's eyes observe the real environment ahead through the lens 3, while the virtual visual range refers to the sight range of the virtual human eye that the image processing module simulates in virtual space while generating the virtual screen; the virtual visual range is used to generate a suitable virtual screen.
- The virtual visual range is determined according to the actual visual range, and here includes virtual position information, virtual angle information, and virtual field-of-view angle information.
- Because the virtual position information, virtual angle information, and virtual field-of-view angle information in the virtual visual range correspond to the real position information, real angle information, and real field-of-view angle information in the real visual range, it can be ensured that when the virtual screen is reflected on the lens 3, it reaches the user's eyes together with the real environment, so that the virtual picture the user observes better matches the ideal effect.
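Read as a pipeline, steps S1–S4 amount to a capture–track–render–display loop. The following hypothetical Python sketch (not from the patent; `camera`, `imu`, `pose_module`, `image_module` and `display` are placeholder objects standing in for the units named above) shows that loop:

```python
# Hypothetical sketch of the S1-S4 loop described above. The five parameters
# are placeholders for the image acquisition unit, inertial measurement module,
# the two processing modules, and the display unit; none of these names appear
# in the patent.
def display_loop(camera, imu, pose_module, image_module, display):
    while True:
        frame = camera.capture()                        # S1: image of the plane above
        motion = imu.read()                             # S1: real-time motion data
        pose = pose_module.current_pose(frame, motion)  # S2: current pose
        screen = image_module.render(pose)              # S3: virtual visual range + screen
        display.show(screen)                            # S4: reflected by the lens to the user
```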
- Step S1 includes:
- Step S11: the image acquisition unit 41 obtains a real-time image by collecting a feature point region that includes a plurality of feature points;
- Step S12: the image acquisition unit 41 acquires the feature points in the real-time image and analyzes whether the location of the feature point area meets the viewing angle requirement:
- if yes, go to Step S2;
- Step S13: if not, the image acquisition unit 41 generates a prompt instruction indicating that there are too few feature points, feeds it back to the user, and then returns to Step S11.
- In practice, a preset program in the portable terminal 4 guides the user to walk around in the space; the image acquisition unit 41 starts, collects the image above it, and extracts the feature points in the real-time image, and the system analyzes whether parameters such as the area covered by the feature points and the robustness of the feature points meet the viewing angle requirements. If they do, the method proceeds to Step S2; if not, a guidance instruction is generated to guide the user to change the current position, travel direction and line-of-sight direction so as to fill in the areas of the collected image with fewer feature points, and the specific position of the virtual picture in the real scene is finally determined.
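As a rough illustration of this coverage check (the patent names neither a feature detector nor thresholds), the sketch below uses ORB features from OpenCV and an assumed grid-coverage criterion:

```python
# A rough illustration of the S11-S13 coverage check. The patent does not name
# a feature detector or thresholds; ORB from OpenCV and the grid-coverage
# criterion below are assumptions made for this sketch.
import cv2

def coverage_ok(frame, min_points=50, grid=(4, 4), min_cells=10):
    orb = cv2.ORB_create()
    keypoints = orb.detect(frame, None)
    if len(keypoints) < min_points:
        return False                # too few feature points -> prompt the user (S13)
    h, w = frame.shape[:2]
    cells = set()
    for kp in keypoints:
        x, y = kp.pt
        cells.add((min(int(x * grid[0] / w), grid[0] - 1),
                   min(int(y * grid[1] / h), grid[1] - 1)))
    return len(cells) >= min_cells  # points must spread over enough of the view
```

When the check fails, the terminal would issue the prompt of step S13 and keep guiding the user until the view is adequately covered.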
- Feature points can also be preset through an external feature point device.
- The feature point device includes an emission unit and a diffusion unit, with the diffusion unit arranged above the emission unit.
- A transparent disc can be selected as the diffusion unit, and multiple feature points can be added to the transparent disc.
- The emission unit can be a laser emitter, which projects the feature points of the diffusion unit into the space above the display device; the image processing module then performs feature point analysis according to the upper image collected by the image acquisition unit 41.
- Step S2: the image processing module determines the current pose through the vSLAM algorithm.
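The patent does not detail its vSLAM implementation. As a hedged illustration of one building block of vSLAM-style tracking, the sketch below recovers a six-degree-of-freedom camera pose from already-matched 3D feature points and their 2D image projections using OpenCV's solvePnP; the function name and inputs are placeholders:

```python
# One building block of vSLAM-style tracking: recover a 6-DoF pose from matched
# 3D feature points and their 2D projections. Not the patent's algorithm; only
# an assumed illustration.
import cv2
import numpy as np

def pose_from_features(points_3d, points_2d, camera_matrix):
    """points_3d: Nx3 mapped feature positions; points_2d: Nx2 detections (N >= 4)."""
    ok, rvec, tvec = cv2.solvePnP(
        points_3d.astype(np.float32),
        points_2d.astype(np.float32),
        camera_matrix,
        None,                            # assume no lens distortion in this sketch
    )
    if not ok:
        raise RuntimeError("pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)    # 3x3 rotation of the acquisition unit
    return rotation, tvec                # rotation + translation = 6-DoF pose
```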
- The virtual visual range includes virtual angle information, and Step S3 includes the first process of generating the virtual angle information;
- the first process includes:
- Step S31A: the image processing module constructs a spatial rectangular coordinate system with the image acquisition unit 41 as the origin according to the current pose, and determines the spatial rotation angle of the image acquisition unit 41 in the spatial rectangular coordinate system;
- Step S32A: acquire the included angle between the image acquisition unit 41 and the user;
- Step S33A: generate the virtual angle information according to the spatial rotation angle and the included angle.
- The virtual angle information is expressed in terms of the following quantities:
- Θ is used to express the virtual angle information;
- θ_X is used to represent the pitch angle in the spatial rotation angle;
- θ_Y is used to represent the yaw angle in the spatial rotation angle;
- α is used to indicate the included angle;
- θ_Z is used to represent the roll angle in the spatial rotation angle.
- The virtual visual range includes virtual location information, and Step S3 includes a second process of generating the virtual location information;
- the second process includes:
- Step S31B: the image processing module constructs a spatial rectangular coordinate system with the image acquisition unit 41 as the origin according to the current pose, selects the center of the user's brow as a preset reference point, and generates the first position information according to the offset between the image acquisition unit 41 and the preset reference point;
- Step S32B: the image processing module adjusts the first position information according to the user's interpupillary distance, generates second position information, and outputs the second position information as the virtual position information.
- When the user rotates the head up and down, the roll angle of the image acquisition unit 41 changes accordingly, and the angle of the up-and-down rotation corresponds to the roll angle; when the user tilts the head back and forth, the pitch angle of the image acquisition unit 41 changes accordingly, and the angle of the back-and-forth rotation corresponds to the pitch angle; when the user rotates the head left and right, the image acquisition unit 41 performs a circular rotation at its current position, and there is a deviation between the angle of the left-and-right rotation and the yaw angle.
- A spatial coordinate system is constructed with the image acquisition unit 41 at this moment as the origin, and the spatial rotation angle of the image acquisition unit 41 in this coordinate system is determined, expressed as Euler angles (θ_X, θ_Y, θ_Z).
- The included angle α between the image acquisition unit 41 and the user's line of sight on the y-axis is then calculated by formula (1), thereby determining the virtual angle information Θ in the virtual visual range. According to the virtual angle information, the angle information of the virtual human eye in virtual space coincides with the angle information of the user's eyes in the real environment, and the virtual visual range corresponds to the user's real visual range, so as to realize the superposition and fusion of the virtual screen and the real scene.
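Formula (1) itself is not reproduced in this text, so the following sketch only illustrates, under that caveat, how an included angle α between the camera axis and the user's line of sight could be computed from direction vectors:

```python
# Assumed illustration only: formula (1) is not reproduced in this text.
# Computes the included angle between the upward-facing camera axis and the
# user's line of sight from two direction vectors.
import numpy as np

def included_angle(camera_axis, line_of_sight):
    """Angle (radians) between two direction vectors."""
    cos_a = np.dot(camera_axis, line_of_sight) / (
        np.linalg.norm(camera_axis) * np.linalg.norm(line_of_sight))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

# Camera pointing straight up, user looking straight ahead -> 90 degrees.
alpha = included_angle(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))
print(np.degrees(alpha))  # 90.0
```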
- The first location information is expressed by the following formula:
- Θ′ = (−B_X, −B_Y, −B_Z)
- where Θ′ is used to express the first position information, and B_X, B_Y and B_Z are used to indicate the projections of the offset on the X, Y and Z axes.
- The second position information includes left-eye position information and right-eye position information, and is expressed by the following formulas:
- Θ″₁ = (−B_X, −B_Y − I/2, −B_Z)
- Θ″₂ = (−B_X, −B_Y + I/2, −B_Z)
- where Θ″₁ is used to express the position information of the left eye, Θ″₂ the position information of the right eye, B_X, B_Y and B_Z the projections of the offset on the X, Y and Z axes, and I the interpupillary distance of the user.
- The vSLAM algorithm is often used to fuse the image collected by the image acquisition unit 41 with the data collected by the sensors to calculate the six-degrees-of-freedom information of the device, so the position determined for the virtual visual range is the current position of the image acquisition unit 41. Because there is a large deviation between the position of the user's eyes in the real environment and the position of the image acquisition unit 41, obvious screen misalignment may occur when the user wears the display device and observes the virtual screen.
- Therefore, a second process of determining the virtual position information is set in Step S3. Considering that the distance between the lens 3 and the center of the eyebrows, and the distances between the center of the eyebrows and the left and right eyes, are basically fixed, the center of the brow is selected as the reference point.
- In Step S31B, the position of the image acquisition unit 41 in the spatial rectangular coordinate system is first taken as (0, 0, 0), and the relative position between the image acquisition unit 41 and the center of the brow is acquired, thereby determining the first position information, that is, the position (−B_X, −B_Y, −B_Z) of the center of the eyebrows in the spatial rectangular coordinate system; this can be regarded as moving the generated virtual human-eye position from the image acquisition unit 41 to the center of the eyebrows. In Step S32B, the interpupillary distance I of the real user's eyes is obtained, and the first position information is adjusted to generate the second position information, that is, the positions of the left and right eyes in the spatial rectangular coordinate system, (−B_X, −B_Y − I/2, −B_Z) and (−B_X, −B_Y + I/2, −B_Z). This ensures that the position of the virtual human eye in the virtual space coincides with the position of the user's eyes in the real environment and that the virtual visual range corresponds to the user's real visual range, so as to realize the superposition and fusion of the virtual screen and the real scene.
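A direct transcription of this S31B/S32B adjustment into Python, using the formulas quoted above (only the function name and signature are invented here):

```python
# Transcription of the S31B/S32B position adjustment using the formulas quoted
# above: shift the virtual eye from the camera origin to the brow center, then
# split by half the interpupillary distance I along the Y axis.
import numpy as np

def eye_positions(offset, ipd):
    """offset = (B_X, B_Y, B_Z): camera-to-brow-center offset; ipd = I."""
    bx, by, bz = offset
    brow_center = np.array([-bx, -by, -bz])            # first position information
    left_eye = brow_center + np.array([0.0, -ipd / 2.0, 0.0])
    right_eye = brow_center + np.array([0.0, +ipd / 2.0, 0.0])
    return left_eye, right_eye                         # second position information
```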
- The virtual visual range includes virtual field-of-view information, and Step S3 includes a third process of generating the virtual field-of-view information;
- the third process includes:
- Step S31C: the image processing module generates a preset virtual screen and displays it on the lens 3;
- Step S32C: the image processing module calculates the position difference between the curved edge of the preset virtual screen and the user;
- Step S33C: the image processing module determines the virtual field-of-view information according to the position difference.
- In the process of determining the virtual field-of-view information, the portable terminal 4 forms a preset virtual screen on the lens 3 through the display unit and calculates the distance from each curved edge of the preset virtual screen to the corresponding eye of the user, finally determining the virtual field-of-view information.
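The patent does not give the formula relating edge positions to the field-of-view angle; one natural reading, sketched below as an assumption, is the angle subtended at the eye by two opposite edges of the preset virtual screen:

```python
# Assumed illustration of the third process: treat the virtual field of view as
# the angle subtended at the eye by two opposite edges of the preset screen.
import numpy as np

def virtual_fov(edge_a, edge_b, eye):
    """Field-of-view angle (radians) subtended at `eye` by two screen edges."""
    v1 = np.asarray(edge_a, dtype=float) - np.asarray(eye, dtype=float)
    v2 = np.asarray(edge_b, dtype=float) - np.asarray(eye, dtype=float)
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos_t, -1.0, 1.0))
```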
Abstract
The present invention relates to the field of augmented reality, and provides an augmented reality display system and method. The system comprises: a head-mounted display frame, a groove, a lens, and a portable terminal. When the portable terminal is placed in the groove, a first side of the portable terminal faces the lens. The processing unit specifically comprises: an inertial measurement module; a pose processing module; and an image processing module, connected to the pose processing module and used to generate a virtual visual range and a virtual screen according to the current pose of the portable terminal and to reflect the virtual screen, by means of a display unit and the lens, to the user for viewing. The technical solution has the following beneficial effects: positioning can be implemented quickly, the virtual screen is generated, and spatial adjustment is implemented.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010477543.8 | 2020-05-29 | ||
| CN202010477543.8A CN111491159B (zh) | 2020-05-29 | 2020-05-29 | 一种增强现实的显示系统及方法 |
| CN202020950665.XU CN212012916U (zh) | 2020-05-29 | 2020-05-29 | 一种增强现实的显示设备 |
| CN202020950665.X | 2020-05-29 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2021237952A1 true WO2021237952A1 (fr) | 2021-12-02 |
Family
ID=78722957
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2020/109366 Ceased WO2021237952A1 (fr) | 2020-05-29 | 2020-08-14 | Système et procédé d'affichage de réalité augmentée |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2021237952A1 (fr) |
- 2020-08-14: PCT/CN2020/109366 filed, published as WO2021237952A1 (status: not active, Ceased)
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140287806A1 (en) * | 2012-10-31 | 2014-09-25 | Dhanushan Balachandreswaran | Dynamic environment and location based augmented reality (ar) systems |
| CN107037587A (zh) * | 2016-02-02 | 2017-08-11 | 迪士尼企业公司 | 紧凑型增强现实/虚拟现实显示器 |
| CN108022302A (zh) * | 2017-12-01 | 2018-05-11 | 深圳市天界幻境科技有限公司 | 一种Inside-Out空间定位的AR立体显示装置 |
| CN111491159A (zh) * | 2020-05-29 | 2020-08-04 | 上海鸿臣互动传媒有限公司 | 一种增强现实的显示系统及方法 |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114743419A (zh) * | 2022-03-04 | 2022-07-12 | 广州容溢教育科技有限公司 | 一种基于vr的多人虚拟实验教学系统 |
| CN114743419B (zh) * | 2022-03-04 | 2024-03-29 | 国育产教融合教育科技(海南)有限公司 | 一种基于vr的多人虚拟实验教学系统 |
| CN114967156A (zh) * | 2022-06-29 | 2022-08-30 | 苏州理湃科技有限公司 | 一种带图像位置调节的双目ar眼镜 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20937651; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 20937651; Country of ref document: EP; Kind code of ref document: A1 |
| | 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/06/2023) |