WO2017209829A2 - Hybrid photonic VR/AR systems - Google Patents
Hybrid photonic VR/AR systems
- Publication number
- WO2017209829A2 (PCT/US2017/022459)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- channelized
- real
- image constituent
- world
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/211—Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/212—Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/24—Constructional details thereof, e.g. game controllers with detachable joystick handles
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/25—Output arrangements for video game devices
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
- A63F13/525—Changing parameters of virtual cameras
- A63F13/5255—Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
- G02B27/0172—Head mounted characterised by optical features
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0179—Display position adjusting means not related to the information to be displayed
- G02B2027/0187—Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B26/00—Optical devices or arrangements for the control of light using movable or deformable optical elements
- G02B26/08—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
- G02B26/0816—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements
- G02B26/0833—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements the reflecting element being a micromechanical device, e.g. a MEMS mirror, DMD
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/10—Beam splitting or combining systems
- G02B27/12—Beam splitting or combining systems operating by refraction only
- G02B27/126—The splitting element being a prism or prismatic array, including systems based on total internal reflection
Definitions
- the present invention relates generally to video and digital image and data processing devices and networks which generate, transmit, switch, allocate, store, and display such data, as well as non-video and non-pixel data processing in arrays, such as sensing arrays and spatial light modulators, and the application and use of data for same, and more specifically, but not exclusively, to digital video image displays, whether flat screen, flexible screen, 2D or 3D, or projected images, and non-display data processing by device arrays, and to the spatial forms of organization and locating these processes, including compact devices such as flat screen televisions and consumer mobile devices, as well as the data networks which provide image capture, transmission, allocation, division, organization, storage, delivery, display and projection of pixel signals or data signals or aggregations or collections of same.
- the field of the present invention is not single, but rather combines two related fields, augmented reality and virtual reality, but addressing and providing an integrated mobile device solution that solves critical problems and limitations of the prior art in both fields.
- a brief review of the background of these related fields will make evident the problems and limitations to be solved, and set the stage for the proposed solutions of the present disclosure.
- VIRTUAL REALITY: "A realistic simulation of an environment, including three-dimensional graphics, by a computer system using interactive software and hardware."
- VR: a real-time simulation of an environment, including three-dimensional graphics, by a computer system using interactive software and hardware.
- AUGMENTED REALITY: "An enhanced image or environment as viewed on a screen or other display, produced by overlaying computer-generated images, sounds, or other data on a real-world environment." AND: "A system or technology used to produce such an enhanced environment. Abbreviation: AR"
- Virtual reality, sometimes referred to as immersive multimedia, is a computer-simulated environment that can simulate physical presence in places in the real world or imagined worlds. Virtual reality can recreate sensory experiences, including virtual taste, sight, smell, sound, touch, etc.
- Augmented reality is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data.
- simulators provide mobile navigation of simulated worlds (virtual reality).
- a sub-category of simulators then would be "personal simulators," or at most, “partial virtual reality,” in which a stationary user is equipped with an immersive HMD (head mounted display) and haptic interface (e.g., motion-tracked gloves), which enable a partial "virtual-reality-like" navigation of a simulated world.
- a CAVE system would, on the other hand, qualify schematically as a limited virtual reality system, as navigation past the dimensions of the CAVE would only be possible by means of a moveable floor, and once the limits of the CAVE itself were reached, what would follow would be another form of "partial virtual reality.”
- an essential, defining characteristic is that there is a mapping of the simulation, whether entirely synthetic or hybrid, to a real space.
- a real space may be as basic as a room inside a laboratory or soundstage, and simply a grid that maps and calibrates, in some ratio, to the simulated world.
- the next step is to identify the implicit question of by what means is a "mobile point of view” realized.
- the answer is that providing a mobile view of the simulation requires two components, themselves realized by a combination of hardware and software: a moving-image display means, by which the simulation can be viewed, and a motion-tracking means, which can track the movement of the device which includes the display in 3 axes of motion; that is, a means to measure the position over time of a 3-dimensional viewing device from a minimum of three tracking points (two, if the device's measurements are mapped so that the third position on a third axis can be inferred), and in relation to a 3-axis frame of reference, which can be any arbitrary 3D coordinate system mapped to a real space, although for practical purposes of mechanically navigating the space, 2 of the axes will form a plane that is a ground plane.
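The tracking scheme described above (a minimum of three tracked points on the viewing device, measured against a 3-axis frame whose first two axes span the ground plane) can be sketched as follows. This is an illustrative reconstruction, not taken from the specification; the marker layout and function names are assumptions.

```python
import numpy as np

def pose_from_markers(p1, p2, p3):
    """Estimate device position and orientation from three tracked
    points (hypothetical rigid marker layout), relative to the mapped
    3-axis real-space frame whose first two axes span the ground plane."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    position = (p1 + p2 + p3) / 3.0          # centroid as device position
    x = p2 - p1
    x /= np.linalg.norm(x)                   # first device axis
    n = np.cross(x, p3 - p1)
    n /= np.linalg.norm(n)                   # normal to the marker plane
    y = np.cross(n, x)                       # completes a right-handed frame
    rotation = np.column_stack((x, y, n))    # 3x3 orientation matrix
    return position, rotation

# Axis-aligned markers yield the identity orientation.
pos, rot = pose_from_markers([0, 0, 0], [1, 0, 0], [0, 1, 0])
```

Sampling such poses over time gives exactly the "position over time" measurement the passage calls for; with only two markers, the third axis would have to be inferred as noted.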
- VR applications - where the user is immersed in a synthetic environment - are more concerned with relative tracking than in absolute accuracy. Since the user's world is completely synthetic and self-consistent the fact that his/her head just turned 0.1 degrees is much more important than knowing within even 10 degrees that it is now pointing due North.
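The relative-versus-absolute distinction can be made concrete with a toy dead-reckoning sketch: integrating per-frame turn rates keeps frame-to-frame motion smooth (what VR needs), while a small sensor bias accumulates into a large absolute heading error (what AR cannot tolerate). The bias figure is illustrative only.

```python
def integrate_heading(turn_rates_deg, bias_deg=0.05):
    """Relative (dead-reckoning) tracking: integrate per-frame turn
    rates with a small, hypothetical per-frame sensor bias. Relative
    motion stays smooth, but absolute heading drifts without bound."""
    heading, drift = 0.0, 0.0
    for rate in turn_rates_deg:
        heading += rate + bias_deg   # each step adds the bias error
        drift += bias_deg
    return heading, drift

# 1000 frames of 0.1-degree turns: every 0.1-degree step is preserved
# (good for VR), but roughly 50 degrees of absolute error accumulate
# (unacceptable for AR overlay registration).
heading, drift = integrate_heading([0.1] * 1000)
```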
- AR systems such as AUGSIM, do not have this luxury.
- AR tracking must have good resolution so that virtual elements appear to move smoothly in the real world as the user's head turns or vehicle moves, and it must have good accuracy so that virtual elements correctly overlay and are obscured by objects in the real world.
- These "synthetic elements,” superimposed by the flat panel displays in the field of view, can include altered portions of the landscape (or the entire landscape, altered digitally). In effect, those portions of "synthetic" landscape that replace what is really there are generated based on original 3D photographic "captures” of every part of the resort. (See #7 below).
- As accurate, photo-based geometric "virtual spaces" in the computer, these can be digitally altered in any way while maintaining the photo-real quality and geometric/spatial accuracy of the original capture. This makes for accurate combination of live digital photography of the same space and altered digital portions.
- "Haptic" interfaces that provide motion-sensor and other data, as well as vibrational and resistance feedback, allow real-time interaction of real people with virtual people, creatures, and magic.
- a haptic device in the form of a "prop" sword haft provides data while the guest/player is swinging it, and physical feedback when the guest/player appears to "strike” the virtual ogre, to achieve the illusion of combat. All of this is combined in realtime and displayed through the binocular flat panel displays.
- LAAS Local Area Augmentation System
- AI (artificial intelligence) software, such as the Massive software used to animate the armies in The Lord of the Rings movies, can generate realistic water, clouds, fire, etc., and otherwise integrate and combine all elements, just as computer games and military simulation software do.
- the "base” virtual locations are indistinguishable from the real world, as they are derived from photographs and the real lighting of the location when "captured.”
- a small set of high-quality digital images, combined with data from light probes and laser-range-finding data, and the appropriate "image-based" graphics software are all that are needed to recreate a photo-real virtual 3D space in the computer that matches the original exactly.
- the computer can "line up" its virtual resort with what the guest/player or employee/actor sees before they put on the VR goggles. Therefore, through a semi-transparent version of the binocular flat panel displays, if the virtual version were superimposed over the real resort, the one would match up with the other very precisely.
- in VR HMD displays, the user views either a single panel or two separate displays.
- the typical shape of such HMD's is that of a goggle or face-mask, although many VR HMD's have the appearance of a welder's helmet with a bulky enclosed visor. To ensure optimal video quality, immersion and lack of distraction, such systems are fully enclosed, with the periphery around the displays made of a light-absorbent material.
- the distinction between "video see-through" and "optical see-through" is the distinction between the user looking directly through a transparent or semi-transparent pixel array and display, which is disposed directly in front of the viewer as part of the glasses optic itself, and looking through a semi-transparent projected image on an optic element also disposed directly in front of the viewer, generated from a (typically directly adjacent) micro-display and conveyed through forms of optical relay to the facing optic piece.
- the main, and possibly the only partly practical, type of direct view-through display (a transparent or semi-transparent display system) has historically been an LCD configured without an illumination backplane; specifically, the AR video view-through glasses hold a viewing optic(s) which includes a transparent optical substrate onto which an LCD light-modulator pixel array has been fabricated.
- a key aspect of perspective, from any viewing point, in addition to relative size, is realistic lighting/shading, including drop shadows, depending on lighting direction.
- occlusion of objects from any given viewing positioning is a key optical characteristic of perceived perspective and relative distance and positioning.
- Stationary VR gear has generally been employed for night-vision systems in vehicles, including aircraft; mobile night-vision goggles, however, can be considered a form of mediated viewing similar to mobile VR, because essentially what the wearer is viewing is a real scene (IR- imaged) in real-time, but through a video screen(s), and not in a form of "view-through.”
- This sub-type is similar to what Barrilleaux defined, in the same referenced 1999 retrospective, as an "indirect view display.” He offered his definition with respect to a proposed AR HMD in which there is no actual "view-through,” but rather what is viewed is exclusively a merged/processed real/virtual image on a display, presumably as contained as any VR-type or night- vision system.
- a night vision system is not a fusion or amalgam of virtual- synthetic landscape and real, but rather a direct-transmitted video image of IR sensor data as interpreted, through video signal processing, as a monochrome image of varying intensity, depending on the strength of the IR signature.
- as a video image, it does lend itself to real-time text/graphics overlay, in the same simple form in which the Eyetap was originally conceived, and which Google has stated is the intended primary purpose for its Glass product.
- a mixed reality space image generation apparatus for generating a mixed reality space image, formed by superimposing virtual space images onto a real space image obtained by capturing a real space, includes an image composition unit (109) which superimposes a virtual space image, which is to be displayed in consideration of occlusion by an object in the real space, onto the real space image, and an annotation generation unit (108) which further superimposes an image to be displayed without considering any occlusion by the virtual space images.
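The two compositing passes described above (depth-tested virtual imagery, then occlusion-ignoring annotations) can be sketched per-pixel as follows. This is a minimal illustrative model, not the apparatus's actual implementation; array layout and names are assumptions.

```python
import numpy as np

def compose_mixed_reality(real_rgb, real_depth, virt_rgb, virt_depth,
                          annot_rgb, annot_mask):
    """Sketch of the two passes:
    1) virtual imagery is depth-tested against the captured real
       scene, so real objects correctly occlude virtual ones;
    2) annotations are overlaid unconditionally (occlusion ignored).
    real_rgb/virt_rgb/annot_rgb: HxWx3; real_depth/virt_depth: HxW;
    annot_mask: HxW boolean."""
    out = real_rgb.copy()
    visible = virt_depth < real_depth          # virtual object in front
    out[visible] = virt_rgb[visible]
    out[annot_mask] = annot_rgb[annot_mask]    # annotations always on top
    return out
```

A virtual pixel behind a real surface is simply left showing the real scene, while an annotation pixel overwrites whatever is beneath it.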
- Gao begins his survey of the field of view-through HMDS for AR with the following observations:
- There are two types of ST-HMDs: optical and video (J. Rolland and H. Fuchs, "Optical versus video see-through head-mounted displays," in Fundamentals of Wearable Computers and Augmented Reality).
- the major drawbacks of the video see-through approach include: degradation of the image quality of the see-through view; image lag due to processing of the incoming video stream; and potential loss of the see-through view due to hardware/software malfunction.
- the optical see-through HMD (OST-HMD) provides a direct view of the real world through a beamsplitter and thus has minimal effect on the view of the real world. It is highly preferred in demanding applications where a user's awareness of the live environment is paramount.
- the Gao proposal is to employ two display-type devices, since the specification of the spatial light modulator which will selectively reflect or transmit the live image is, operatively, essentially the specification of an SLM serving the same purposes as in any display application.
- Gao specifies a duplication of what he refers to as "folded optics," but this is essentially nothing other than a dual version of the Mann Eyetap scheme, requiring in total two "folding optics" elements (e.g., planar grating/HOE or other compact prism or "flat" optics), one for each source, plus two objective lenses (one for the wave-front from the real view, one at the other end for focus of the conjoined image), and a beam-splitter combiner.
- multiple optical elements are required to: 1) collect light of the real scene via a first reflective/folding optic (planar-type grating/mirror, HOE, TIR prism, or other "flat" optics) and from there pass it to the objective lens; 2) pass it to the next planar-type grating/mirror, HOE, TIR prism, or other "flat" optics to "fold" the light path again, all of which is to ensure that the overall optical system is relatively compact and contained in a schematic set of two rectangular optical relay zones; and 3) from the folding optics, pass the beam through the beam-splitter/combiner to the SLM, which then reflects or transmits on a pixelated (sampled) basis, and thus passes the variably modulated (varying the real image's contrast and intensity to modify grey scale, etc.), now-pixelated real image back to the beam-splitter/combiner.
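The SLM's role in this chain, sampling the relayed real-scene wave-front on a per-pixel basis, attenuating it, and handing the result to the beam-splitter/combiner to be summed with the synthetic image, can be modeled numerically. This is an illustrative intensity-only sketch under assumed array conventions, not an optical design.

```python
import numpy as np

def slm_modulate_and_combine(real_wavefront, slm_mask, synthetic):
    """Toy model of the per-pixel step: the SLM samples the relayed
    real-scene intensity and attenuates each pixel by a 0..1
    transmittance (a fully occluded pixel gets 0), then the
    beam-splitter/combiner adds the synchronized synthetic image
    pixel-for-pixel. All arrays are HxW normalized intensities."""
    pixelated_real = real_wavefront * slm_mask     # sampled + modulated
    return np.clip(pixelated_real + synthetic, 0.0, 1.0)
```

Setting a mask pixel to 0 while the synthetic image supplies that pixel is exactly the occlusion behavior discussed below for the Gao scheme.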
- the display generates, in sync, the virtual or synthetic/CG image, presumably also calibrated to ensure ease of integration with the modified, pixelated/sampled real wave-front; this is passed through the beam-splitter to integrate, pixel-for-pixel, with the multi-step-modified and pixelated sample of the real scene, thence through an eyepiece objective lens, and then back to another "folding optics" element to be reflected out of the optical system to the viewer's eye.
- Digital projection free-space optical beam-combining systems, which combine the outputs of high-resolution (2k or 4k) red, green and blue image engines (typically images generated by DMD or LCoS SLM's), are expensive, and achieving and maintaining these alignments is non-trivial; and those designs are simpler than the 7-element set of the Gao scheme.
- the occluded pixel would simply be left "off," although this is not specified by Gao, nor are the details of how the SLM will accomplish its image-altering function related.
- the position of the reflective optical element that passes the real-scene wave-front portion to the objective lens has a real perspective position in relation to the scene which is, first, not identical to the perspective position of the viewer in the scene, as it is neither flat nor positioned at dead center, and it captures only a wave-front sample, not what the viewer's position would capture. Furthermore, when the system is mobile, this element is also moving, and its position is not known to the synthetic image processing unit in advance. The number of variables in this system is extremely large by virtue of these facts alone.
- the system design becomes only slightly simpler with the use of view-through, rather than reflective, SLM's; but even with the faster FeLCoS micro-displays, the frame rate and image speed are still substantially less than those of a MEMS device such as TI's DLP (DMD).
- recourse to a high-resolution DMD such as TI's 2k or 4k device means recourse to a very expensive solution, as DMD's with that feature size and count are known to have low yields, defect rates higher than can typically be tolerated for mass-consumer or business production and costs, and a very high price point in the systems in which they are employed now, such as digital cinema projectors marketed commercially by TI OEM's Barco, Christie, and NEC.
- tags must reflect not just the relative position of the tagged elements in a perspective view of the real space; in addition, a degree of both automated (pre-determined or software-calculated) priority and real-time, user-assigned priority must be reflected, and the size of tags and their degree of transparency, to name but two major visual cues employed by graphical systems to convey informational hierarchy, must be managed and implemented as well.
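A minimal data-structure sketch of such tag management, combining automated and user-assigned priority and mapping the result onto the two visual cues named (size and transparency), might look like the following. The record fields and the priority-to-style weighting are illustrative choices, not from the source.

```python
from dataclasses import dataclass

@dataclass
class ARTag:
    """Hypothetical tag record for managing overlay hierarchy."""
    label: str
    auto_priority: float    # pre-determined / software-calculated, 0..1
    user_priority: float    # real-time, user-assigned, 0..1
    depth_m: float          # distance in the perspective view

def render_order(tags, alpha_floor=0.2):
    """Rank tags by combined priority, then derive a size scale and
    an opacity for each: higher-priority tags are drawn larger and
    more opaque, encoding informational hierarchy visually."""
    ranked = sorted(tags, key=lambda t: t.auto_priority + t.user_priority,
                    reverse=True)
    styled = []
    for t in ranked:
        w = (t.auto_priority + t.user_priority) / 2.0
        styled.append((t.label,
                       round(0.5 + 0.5 * w, 3),          # size scale
                       round(max(alpha_floor, w), 3)))   # opacity
    return styled
```

Perspective placement (using `depth_m` against the real-world view) would sit on top of this ordering step.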
- Passive optical pass-through HMD's must then be considered an incomplete model for implementing a mobile AR HMD, and will come to be seen, in retrospect, as only a transitional stepping stone to an active system.
- Oculus Rift VR (Facebook) HMD: Somewhat paralleling the impact of the Google Glass product-marketing campaign, but with the difference that Oculus had actually led the field in solving, or beginning substantially to solve, some of the significant threshold barriers to a practical VR HMD (rather than following Lumus and BAE, as in the case of Google), the Oculus Rift VR HMD at the time of this writing is the leading pre-mass-release VR HMD product entering and creating the market for widely-accepted consumer and business/industrial VR.
- Low Persistence, which is a form of buffering that helps keep the video stream smooth, works in combination with the higher-switching-speed OLED display.
- All of the systems are implemented for essentially in-place or highly-constrained mobility.
- the systems with the greatest difference from the Oculus VR scheme are Avegant's Glyph and the Vrvana Totem.
- the Glyph actually implements a display solution which follows the previously established optical view-through HMD solution and structure, employing a Texas Instruments DLP DMD to generate a projected micro-image onto a reflective planar optic element, in configuration and operation the same as the planar optical elements of existing optical view-through HMDs, with the difference that a high-contrast, light-absorbent backplane structure is employed to realize a reflective/indirect micro-projector display type, with a video image belonging to the general category of opaque, non-transparent display images.
- Eli Peli, an official consultant to Google, followed up an earlier warning to Google Glass users, given in an interview with the online site BetaBeat (May 19, 2014), to anticipate some eye strain and discomfort, with a revised warning (May 29, 2014) that sought to limit the cases and scope of potential usage.
- the concern centered on eye muscles being used in ways they are not designed for, or accustomed to, for prolonged periods of time, and the proximate cause cited in the revised statement was the location of the small display image, forcing the user to look up.
- In the Vrvana Totem, the departure from the Oculus VR Rift is in adopting the scheme of Jon Barrilleaux's "indirect view display," by adding binocular, conventional video cameras to allow toggling between a video-captured forward image and the generated simulation on the same optically-shrouded OLED display panel.
- Vrvana have indicated in marketing materials that they may implement this very basic "indirect view display," exactly following the Barrilleaux-identified schematic and pattern, for AR. It is evident that virtually any of the other VR HMD's of the present Oculus VR generation could be mounted with such conventional cameras, albeit with impacts on weight and balance of the HMD, at a minimum.
- Oculus VR has implemented a "low persistence" buffering system in part to compensate for the still insufficiently high pixel-switching/frame rate of the OLED displays which are employed at the time of this writing.
- a further impact on the performance of existing VR HMD's is due to the resolution limitations of existing OLED and LCD panel displays, which in part drives the requirement of using 5-7" diagonal displays mounted at a distance from the viewing optics (and the viewer's eyes) to achieve a sufficient effective resolution; this contributes to the bulk, size and balance of existing and planned offerings, which are significantly larger, bulkier, and heavier than most other optical headwear products.
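The "effective resolution" trade-off above is simple arithmetic: a panel's pixels are spread across the optical field of view, so a wide-FOV design dilutes angular resolution unless the panel grows (and the housing with it). The figures below are illustrative, not measurements of any specific product.

```python
def pixels_per_degree(horizontal_pixels, fov_degrees):
    """Effective angular resolution of an HMD: total horizontal
    pixels divided by the horizontal optical field of view."""
    return horizontal_pixels / fov_degrees

# A 1080-pixel-wide panel per eye spread over a 100-degree FOV yields
# only ~10.8 pixels/degree, far below the ~60 pixels/degree commonly
# associated with normal visual acuity; hence the pressure toward
# larger panels mounted farther from the eye.
ppd = pixels_per_degree(1080, 100.0)
```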
- Video HMD's are employed for viewing video content, but not interactively or with any motion-sensing capability, and thus without the capability for navigating a virtual or hybrid (mixed reality/AR) world.
- Such video HMD's have steadily improved over the past fifteen years, increasing in effective FOV, resolution, and viewing comfort/ergonomics, and providing a development path and advances that current VR HMD's have been able to leverage and build upon. But these, too, have been limited by the core performance of the display technologies employed, in a pattern following the limitations observed for OLED, LCD and DMD-based reflective/deflective optical systems.
- ⁇ "High-acuity" VR has improved in substantially in many respects, from FOV, latency, head/motion tracking, lighter-weight, size and bulk.
- VR based on an enclosed version of the optical view-through system, but configured as a lateral projection-deflection system in which an SLM projects an image into the eye via a series of three optical elements, is limited in performance by the size of the reflected image, which is expanded but not much bigger than the output of the SLM (DLP DMD, other MEMS, or FeLCoS/LCoS), as compared to the total area of a standard eyeglass lens. Eye-strain risks from extended viewing of what is an extremely intense version of "close-up work," and the demands this will make on the eye muscles, are a further limitation on practical acceptance. SLM-type displays and their sizes also limit a practical path to improved resolution and overall performance, given the scaling costs of higher-resolution SLM's of the technologies referenced.
- Optical view-through systems generally suffer from the same potential for eye-strain, by confining eye-muscle usage to a relatively small area and requiring relatively small and frequent eye-tracking adjustments within those constraints, for more than brief periods of usage.
- Google Glass was designed to reflect expectations of limited-duration usage by positioning the optical element up, and out of the direct rest position of the eyes looking straight ahead. But users have reported eye-strain nonetheless, as has been widely documented in the press by means of text and interviews with Google Glass Explorers.
- Optical view-through systems are limited in overlaid, semi-transparent information density due to the need to organize tags with real-world objects in a perspective view.
- the demands of mobility and information density make passive optical-view through limited even for graphical information-display applications.
- the number of optical elements intervening in the optical routing of the initial wave-front portion (which is, a point to be added here, much smaller than the optical area of a conventional lens in a conventional pair of glasses), being seven or close to that number, both introduces opportunities for image aberration, artifacts, and losses, and requires a complex system of optical alignments in a field in which such complex free-space alignments of many elements are not common and, when they are required, are expensive, hard to maintain, and not robust.
- the method by which the SLM is expected to manage the alteration of the wave-front of the real scene is also not specified nor validated for the specific requirement.
- a 100% "indirect-view display" will have similar demands in key respects to the Gao proposal, with the exception of the number of display-type units and the particulars of alignment, the optical system, pixel-system matching, and perspective problems; this throws into question the degree to which all key parameters of such a system should require "brute force" calculations of the stored synthetic CG 3D-mapped space in coordination with the real-time, individual-perspective view-through image.
- a wide FOV, ideally including peripheral view, of 120-150 degrees.
- High frame rate, ideally 60 fps/eye, to minimize latency and other artifacts that are typically due to the display.
- a display-optics system which enables a fast compositing process, within the context of the human visual system, between the real scene wave-front and any synthetic elements.
- as many passive means as possible should be employed to minimize the burden on either on-board (to the HMD and wearer) or external processing systems.
- a display-optics system that is relatively simple and rugged, with few optical elements, few active device elements, and simple active device designs which are both of minimal weight and thickness, and robust under mechanical and thermal stress.
- A system which can toggle, variably, between a VR experience, while retaining full mobility, and a variable-occlusion, perspective-integrated hybrid AR viewing system.
- A system which can both manage incoming wavelengths for the HVS and obtain effective information from wavelengths of interest via sensors, and hybrids of these. IR, visible and UV are typical wavelengths of interest.
- Embodiments of this invention may involve decomposing the components of an integrated pixel-signal "modulator" into discrete signal-processing stages and thus into a telecom-type network, which may be compact or spatially remote.
- the operatively most basic version proposes a three-stage "pixel-signal processing" sequence comprising: pixel logic "state" encoding (typically accomplished in an integrated pixel modulator), which is separated from the color modulation stage, which is in turn separated from the intensity modulation stage.
- a more detailed pixel-signal processing system is further elaborated, including sub-stages and options, and specifically tailored to the efficient implementation of magneto-photonic systems. It consists of: 1) an efficient illumination source stage, in which bulk light, preferably non-visible near-IR, is converted to appropriate mode(s) and launched into a channelized array, which supplies stage 2), pixel-logic processing and encoding; followed by 3) an optional non-visible energy filter and recovery stage; 4) an optional signal-modification stage to improve/modify attributes such as signal splitting and mode modification; 5) frequency/wavelength modulation/shifting and additional bandwidth and peak intensity management; 6) optional signal amplification/gain; 7) an optional analyzer for completing certain MO-type light-valve switching; 8) optional configurations for certain wireless stages of pixel-signal processing and distribution.
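The staged decomposition above can be sketched as a chain of small, composable signal transformations. The stage functions, field names, and numeric values below are illustrative assumptions only, not part of the disclosure:

```python
# Hypothetical sketch of the staged pixel-signal processing pipeline.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Signal:
    wavelength_nm: float   # carrier wavelength
    amplitude: float       # relative intensity
    on: bool               # pixel logic state

def illuminate() -> Signal:
    # Stage 1: bulk near-IR light launched into a channel (850 nm assumed).
    return Signal(wavelength_nm=850.0, amplitude=1.0, on=False)

def encode(sig: Signal, state: bool) -> Signal:
    # Stage 2: pixel-logic encoding (e.g., an MPC or Mach-Zehnder valve).
    return replace(sig, on=state)

def shift_wavelength(sig: Signal, target_nm: float) -> Signal:
    # Stage 5: frequency/wavelength shifting, e.g., near-IR to visible.
    return replace(sig, wavelength_nm=target_nm)

def amplify(sig: Signal, gain: float) -> Signal:
    # Stage 6: optional signal amplification/gain.
    return replace(sig, amplitude=sig.amplitude * gain)

def process_pixel(state: bool, target_nm: float, gain: float = 1.0) -> Signal:
    # Compose the mandatory stages; optional stages 3, 4, 7, 8 omitted.
    return amplify(shift_wavelength(encode(illuminate(), state), target_nm), gain)

sig = process_pixel(state=True, target_nm=532.0, gain=1.5)
print(sig)  # Signal(wavelength_nm=532.0, amplitude=1.5, on=True)
```

Each stage touches only the attribute it owns, mirroring the disclosure's point that each operative stage can be optimized independently.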
- also included, in combination with other pixel-signal processing stages/devices (especially frequency/wavelength modulation/shifting stages and devices, which may be realized in a robust range of embodiments), are improved and novel hybrid magneto-optic/photonic components. These are not restricted to classic or non-linear Faraday-effect MO devices but more broadly encompass non-reciprocal MO effects and phenomena and combinations thereof, including hybrid Faraday/slow-light effects, Kerr-effect-based devices, and hybrids of Faraday and MO Kerr-effect-based devices and other MO effects; improved "light-baffle" structures in which the path of the modulated signal is folded in-plane with the surface of the device to reduce overall device feature size; quasi-2D and 3D photonic crystal structures and hybrids of multi-layer film PC and surface grating/poled PC; and hybrids of MO and Mach-Zehnder interferometer devices.
- the present disclosure proposes a telecom-type or telecom-structured pixel-signal processing system with the following process flow of pixel-signal processing (or, equally, PIC, sensor, or telecom signal processing) stages, and thus architectures (and variants thereof) characterizing the system of the present disclosure:
- any of the embodiments described herein may be used alone or together with one another in any combination.
- Inventions encompassed within this specification may also include embodiments that are only partially mentioned or alluded to or are not mentioned or alluded to at all in this brief summary or in the abstract.
- the embodiments of the invention do not necessarily address any of these deficiencies.
- different embodiments of the invention may address different deficiencies that may be discussed in the specification. Some embodiments may only partially address some deficiencies or just one deficiency that may be discussed in the specification, and some embodiments may not address any of these deficiencies.
- FIG. 1 illustrates an imaging architecture that may be used to implement embodiments of the present invention;
- FIG. 2 illustrates an embodiment of a photonic converter implementing a version of the imaging architecture of FIG. 1 using a photonic converter as a signal processor;
- FIG. 3 illustrates a general structure for a photonic converter of FIG. 2;
- FIG. 4 illustrates a particular embodiment for a photonic converter
- FIG. 5 illustrates a generalized architecture for a hybrid photonic VR/AR system
- FIG. 6 illustrates an embodiment architecture for a hybrid photonic VR/AR system.
- Embodiments of the present invention provide a system and method for re-conceiving the process of capture, distribution, organization, transmission, storage, and presentation to the human visual system (or to non-display data array output functionality) in a way that liberates device and system design from the compromised functionality of non-optimized operative stages of those processes. Instead, the pixel-signal processing and array-signal processing stages are decomposed into operative stages that permit the optimized function of the devices best suited for each stage, which in practice means designing and operating devices at the frequencies for which those devices and processes work most efficiently and then undertaking efficient frequency/wavelength conversion between stages.
- the term “or” includes “and/or” and the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list.
- a set refers to a collection of one or more objects.
- a set of objects can include a single object or multiple objects.
- Objects of a set also can be referred to as members of the set.
- Objects of a set can be the same or different.
- objects of a set can share one or more common properties.
- adjacent refers to being near or adjoining. Adjacent objects can be spaced apart from one another or can be in actual or direct contact with one another. In some instances, adjacent objects can be coupled to one another or can be formed integrally with one another.
- connect refers to a direct attachment or link. Connected objects have no or no substantial intermediary object or set of objects, as the context indicates.
- Coupled objects can be directly connected to one another or can be indirectly connected to one another, such as via an intermediary set of objects.
- the terms “substantially” and “substantial” refer to a considerable degree or extent. When used in conjunction with an event or circumstance, the terms can refer to instances in which the event or circumstance occurs precisely as well as instances in which the event or circumstance occurs to a close approximation, such as accounting for typical tolerance levels or variability of the embodiments described herein.
- the term "functional device” means broadly an energy dissipating structure that receives energy from an energy providing structure.
- the term functional device encompasses one-way and two-way structures.
- a functional device may be a component or element of a display.
- display means, broadly, a structure or method for producing display constituents.
- the display constituents are a collection of display image primitives.
- a display image primitive precursor emits an image constituent signal which may be received by an input of a signal channel.
- a signal in this context means an output of a signal generator that is, or is equivalent to, a display image primitive precursor.
- these signals are preserved as signals within various signal-preserving propagating channels, without transmission into free space, where a signal would create an expanding wavefront that combines with other expanding wavefronts from other sources also propagating in free space.
- a signal has no handedness and does not have a mirror image (that is, there is no reversed, upside-down, or flipped signal, while images and image portions do have distinct mirror images). Additionally, image portions are not directly additive (the result of overlapping one image portion on another is difficult, if not impossible, to predict), and it can be very difficult to process image portions.
- the term "signal" refers to an output from a signal generator, such as a display image primitive precursor, that conveys information about the status of the signal generator at the time that the signal was generated.
- a signal is a part of the display image primitive that, when perceived by a human visual system under intended conditions, produces an image or image portion.
- a signal is a codified message, that is, the sequence of states of the display image primitive precursor in a communication channel that encodes a message.
- a collection of synchronized signals from a set of display image primitive precursors may define a frame (or a portion of a frame) of an image.
- Each signal may have a characteristic (color, frequency, amplitude, timing, but not handedness) that may be combined with one or more characteristics from one or more other signals.
- HVS human visual system
- the term "human visual system” (HVS) refers to biological and psychological processes attendant with perception and visualization of an image from a plurality of discrete display image primitives, either direct view or projected.
- the HVS implicates the human eye, optic nerve, and human brain in receiving a composite of propagating display image primitives and formulating a concept of an image based on those primitives that are received and processed.
- the HVS is not precisely the same for everyone, but there are general similarities for significant percentages of the population.
- FIG. 1 illustrates an imaging architecture 100 that may be used to implement embodiments of the present invention.
- Some embodiments of the present invention contemplate that formation of a human-perceptible image, using a human visual system (HVS), from a large set of signal-generating structures includes architecture 100.
- DIPPs display image primitive precursors
- DIPs display image primitives
- An aggregation/collection of DIPs 120j (such as one or more image constituent signals 115i occupying the same space and cross-sectional area) will form a display image 125 (or a series of display images, for animation/motion effects, for example) when perceived by the HVS.
- the HVS reconstructs display image 125 from DIPs 120 j when presented in a suitable format, such as in an array on a display or a projected image on a screen, wall, or other surface.
- a display image primitive precursor 110i will thus correspond to a structure commonly referred to as a pixel when referencing a device producing an image constituent signal from a non-composite color system, and to a structure commonly referred to as a sub-pixel when referencing a device producing an image constituent signal from a composite color system.
- Many familiar systems employ composite color systems such as RGB image constituent signals, one image constituent signal from each RGB element (e.g., an LCD cell or the like).
- pixel and sub-pixel are used in an imaging system to refer to many different concepts - such as a hardware LCD cell (a sub-pixel), the light emitted from the cell (a sub-pixel), and the signal as it is perceived by the HVS (typically such sub-pixels have been blended together and are configured to be imperceptible to the user under a set of conditions intended for viewing).
- Architecture 100 distinguishes between these various "pixels or sub-pixels" and therefore a different terminology is adopted to refer to these different constituent elements.
- Architecture 100 may include a hybrid structure in which image engine 105 includes different technologies for one or more subsets of DIPPs 110. That is, a first subset of DIPPs may use a first color technology (e.g., a composite color technology) to produce a first subset of image constituent signals, and a second subset of DIPPs may use a second color technology, different from the first color technology (e.g., a different composite color technology or a non-composite color technology), to produce a second subset of image constituent signals.
- Architecture 100 further includes a signal processing matrix 130 that accepts image constituent signals 115i as an input and produces display image primitives 120 j at an output.
- matrix 130 includes a plurality of signal channels, for example channel 135-channel 160.
- Each channel is sufficiently isolated from other channels, such as optical isolation that arises from discrete fiber optic channels, so signals in one channel do not interfere with other signals beyond a crosstalk threshold for the implementation/embodiment.
- Each channel includes one or more inputs and one or more outputs. Each input receives an image constituent signal 115 from DIPP 110.
- Each output produces a display image primitive 120.
- each channel directs pure signal information; at any point in a channel, that pure signal information may include an original image constituent signal 115, a disaggregation of a set of one or more processed original image constituent signals, and/or an aggregation of a set of one or more processed original image constituent signals, where each "processing" may itself have included one or more aggregations or disaggregations.
- aggregation refers to combining signals from an SA number, SA > 1, of channels (these aggregated signals themselves may be original image constituent signals, processed signals, or a combination) into a TA number (1 ≤ TA < SA) of channels, and disaggregation refers to a division of signals from an SD number, SD ≥ 1, of channels (which themselves may be original image constituent signals, processed signals, or a combination) into a TD number (TD > SD) of channels.
- SA may exceed N, such as due to an earlier disaggregation without any aggregation, and SD may exceed M due to a subsequent aggregation.
- architecture 100 allows many signals to be aggregated which can produce a sufficiently strong signal that it may be disaggregated into many channels, each of sufficient strength for use in the implementation.
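Under the definitions above, the amplitude bookkeeping of aggregation followed by disaggregation can be sketched as follows; the equal-split rule and the conservation check are simplifying assumptions (the text notes some structures allow variable splits):

```python
# Illustrative amplitude bookkeeping for aggregation and disaggregation
# of channelized signals.

def aggregate(amplitudes):
    """Combine signals from S_A channels into one channel."""
    return sum(amplitudes)

def disaggregate(amplitude, t_d, weights=None):
    """Divide one channel's signal into T_D output channels."""
    if weights is None:
        weights = [1.0 / t_d] * t_d          # equal split (assumption)
    assert abs(sum(weights) - 1.0) < 1e-9    # splits conserve the signal
    return [amplitude * w for w in weights]

# Many weak signals aggregated, then split into channels that are each
# still stronger than any single original signal:
combined = aggregate([0.25] * 8)             # 8 channels -> 1 channel
outputs = disaggregate(combined, 4)          # 1 channel -> 4 channels
print(combined, outputs)  # 2.0 [0.5, 0.5, 0.5, 0.5]
```

Each of the four outputs carries twice the amplitude of any one of the eight inputs, illustrating why aggregation before disaggregation yields channels "each of sufficient strength."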
- Aggregation of signals follows from aggregation (e.g., joining, merging, combining, or the like) of channels or other arrangement of adjacent channels to permit joining, merging, combining or the like of signals propagated by those adjacent channels and disaggregation of signals follows from disaggregation (e.g., splitting, separating, dividing, or the like) of a channel or other channel arrangement to permit splitting, separating, dividing or the like of signals propagated by that channel.
- Channel 135 illustrates a channel having a single input and a single output.
- Channel 135 receives a single original image constituent signal 115k and produces a single display image primitive 120k.
- the processing may include a transformation of physical characteristics.
- the physical size dimensions of the input of channel 135 are designed to match/complement an active area of its corresponding/associated DIPP 110 that produces image constituent signal 115k.
- the physical size of the output is not required to match the physical size dimensions of the input - that is, the output may be relatively tapered or expanded, or a circular perimeter input may become a rectilinear perimeter output.
- transformations include repositioning of the signal: while image constituent signal 115 1 may start in the vicinity of image constituent signal 115 2, display image primitive 120 1 produced by channel 135 may be positioned next to a display image primitive 120 x produced from a previously "remote" image constituent signal 115 x. This allows great flexibility in interleaving signals/primitives separated from their original locations.
- Channel 140 illustrates a channel having a pair of inputs and a single output
- Channel 140 receives two original image constituent signals, signal 115 3 and signal 115 4 for example, and produces a single display image primitive 120 2 , for example.
- Channel 140 allows two amplitudes to be added so that primitive 120 2 has a greater amplitude than either constituent signal.
- Channel 140 also allows for improved timing by temporal interleaving: each constituent signal may operate at 30 Hz, but the resulting primitive may operate at 60 Hz, for example.
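The timing improvement described for channel 140 can be illustrated as temporal interleaving of two lower-rate streams; the stream representation below is a hypothetical simplification:

```python
# Minimal sketch of temporal aggregation: two constituent signal streams,
# each refreshing at 30 Hz, interleaved into a single 60 Hz primitive
# stream by alternating samples.

def interleave(stream_a, stream_b):
    """Alternate samples of two equal-length, equal-rate streams."""
    out = []
    for a, b in zip(stream_a, stream_b):
        out.extend([a, b])
    return out

# Two 30 Hz streams over one second -> one 60 Hz stream.
a = [f"A{i}" for i in range(30)]
b = [f"B{i}" for i in range(30)]
merged = interleave(a, b)
print(len(merged), merged[:4])  # 60 ['A0', 'B0', 'A1', 'B1']
```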
- Channel 145 illustrates a channel having a single input and a pair of outputs
- Channel 145 receives a single original image constituent signal, signal 115 5, for example, and produces a pair of display image primitives: primitive 120 3 and primitive 120 4.
- Channel 145 allows a single signal to be reproduced, such as split into two parallel channels having many of the characteristics of the disaggregated signal, except perhaps amplitude. When amplitude is not as desired, as noted above, amplitude may be increased by aggregation and then the disaggregation can result in sufficiently strong signals as demonstrated in others of the representative channels depicted in FIG. 1.
- Channel 150 illustrates a channel having three inputs and a single output. Channel 150 is included to emphasize that virtually any number of independent inputs may be aggregated into a processed signal in a single channel for production of a single primitive 120s, for example.
- Channel 155 illustrates a channel having a single input and three outputs.
- Channel 155 is included to emphasize that a single channel (and the signal therein) may be disaggregated into virtually any number of independent, but related, outputs and primitives, respectively.
- Channel 155 is different from channel 145 in another respect - namely the amplitude of primitives 120 produced from the outputs.
- the amplitude may be split into equal amplitudes at each node (though some disaggregating structures may allow for a variable amplitude split).
- the amplitude of primitive 120 6 may not equal the amplitudes of primitives 120 7 and 120 8 (for example, primitive 120 6 may have an amplitude about twice that of each of primitive 120 7 and primitive 120 8, because all signals are not required to be disaggregated at the same node).
- the first division may result in one-half of the signal producing primitive 120 6, with the remaining one-half signal further divided in half for each of primitive 120 7 and primitive 120 8.
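That cascaded division can be checked with a short sketch; the 50/50 split fraction at each node is the example's assumption:

```python
# Cascaded two-way splits, as in the unequal division described above:
# the first node takes half the signal for primitive "120 6", and the
# remaining half is split again for primitives "120 7" and "120 8".

def split(amplitude, fraction=0.5):
    """Divide a signal at one node into (tapped, remaining) parts."""
    return amplitude * fraction, amplitude * (1.0 - fraction)

p6, rest = split(1.0)   # first node: half to primitive "120 6"
p7, p8 = split(rest)    # second node: a quarter each to "120 7", "120 8"
print(p6, p7, p8)  # 0.5 0.25 0.25
```

The outputs sum to the original amplitude, and primitive "120 6" carries twice the amplitude of each of the other two, matching the example in the text.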
- Channel 160 illustrates a channel that includes both aggregation of a trio of inputs and disaggregation into a pair of outputs.
- Channel 160 is included to emphasize that a single channel may include both aggregation of signals and disaggregation of signals.
- a channel may thus have multiple regions of aggregations and multiple regions of disaggregation as necessary or desirable.
- Matrix 130 is thus a signal processor by virtue of the physical and signal characteristic manipulations of processing stage 170 including aggregations and disaggregations.
- matrix 130 may be produced by a precise weaving process of physical structures defining the channels, such as a Jacquard weaving process for a set of optical fibers that collectively define many thousands to millions of channels.
- embodiments of the present invention may include an image generation stage (for example, image engine 105) coupled to a primitive generating system (for example, matrix 130).
- the image generation stage includes a number N of display image primitive precursors 110.
- Each of the display image primitive precursors 110i generate a corresponding image constituent signal 115i.
- These image constituent signals 115i are input into the primitive generating system.
- the primitive generating system includes an input stage 165 having M number of input channels (M may equal N but is not required to match - in FIG. 1 for example some signals are not input into matrix 130).
- An input of an input channel receives an image constituent signal 115 x from a single display image primitive precursor 110 x .
- each input channel has an input and an output, each input channel directing its single original image constituent signal from its input to its output, there being M number of inputs and M number of outputs of input stage 165.
- the primitive generating system also includes a distribution stage 170 having P number of distribution channels, each distribution channel including an input and an output.
- each input of a distribution channel is coupled to a unique pair of outputs from the input channels.
- each output of an input channel is coupled to a unique pair of inputs of the distribution channels.
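One hypothetical way to enumerate such a wiring, where each distribution-channel input couples to a unique pair of input-channel outputs, is sketched below; the selection of the first P pairs is an illustrative assumption:

```python
# Hypothetical wiring sketch of the input and distribution stages: each
# distribution-channel input couples to a unique pair of input-channel
# outputs, so a distribution stage of P channels needs P distinct pairs
# drawn from the M input-channel outputs (P <= M*(M-1)/2).
from itertools import combinations

def wire_distribution(m, p):
    pairs = list(combinations(range(m), 2))  # all unique output pairs
    if p > len(pairs):
        raise ValueError("not enough unique pairs for P distribution channels")
    return pairs[:p]

wiring = wire_distribution(m=4, p=3)
print(wiring)  # [(0, 1), (0, 2), (0, 3)]
```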
- FIG. 2 illustrates an embodiment of an imaging system 200 implementing a version of the imaging architecture of FIG. 1.
- System 200 includes a set 205 of encoded signals, such as a plurality of image constituent signals (at IR/near-IR frequencies), that are provided to a photonic signal converter 215 that produces a set 220 of display image primitives 225, preferably at visible frequencies and more particularly at real-world visible imaging frequencies.
- FIG. 3 illustrates a general structure for photonic signal converter 215 of FIG. 2.
- Converter 215 receives one or more input photonic signals and produces one or more output photonic signals.
- Converter 215 adjusts various characteristics of the input photonic signal(s), such as signal logic state (e.g., ON/OFF), signal color state (IR to visible), and/or signal intensity state.
- FIG. 4 illustrates a particular embodiment for a photonic converter 400.
- Converter 400 includes an efficient light source 405.
- Source 405 may, for example, include an IR and/or near- IR source for optimal modulator performance in subsequent stages (e.g., LED array emitting in IR and/or near-IR).
- Converter 400 includes an optional bulk optical energy source homogenizer 410.
- Homogenizer 410 provides a structure to homogenize polarization of light from source 405 when necessary or desirable.
- Homogenizer 410 may be arranged for active and/or passive homogenization.
- Encoder 415 provides logic encoding of light from source 405, that may have been homogenized, to produce encoded signals.
- Encoder 415 may include hybrid magneto-photonic crystals (MPC), Mach-Zehnder structures, transmissive valves, and the like.
- Encoder 415 may include an array or matrix of modulators to set the state of a set of image constituent signals.
- the individual encoder structures may operate equivalently to display image primitive precursors (e.g., pixels and/or sub-pixels, and/or other display optical-energy signal generators).
- Converter 400 includes an optional filter 420 such as a polarization filter/analyzer (e.g., photonic crystal dielectric mirror) combined with planar deflection mechanism (e.g., prism array/grating structure(s)).
- Converter 400 includes an optional energy recapturer 425 that recaptures energy from source 405 (e.g., IR - near-IR deflected energy) that is deflected by elements of filter 420.
- Converter 400 includes an adjuster 430 that modulates/shifts wavelength or frequency of encoded signals produced from encoder 415 (that may have been filtered by filter 420).
- Adjuster 430 may include phosphors, periodically-poled materials, shocked crystals, and the like. Adjuster 430 takes IR/near-IR frequencies that are generated/switched and converts them to one or more desired frequencies (e.g., visible frequencies). Adjuster 430 is not required to shift/modulate all input frequencies to the same frequency and may shift/modulate different input frequencies in the IR/near-IR to the same output frequency. Other adjustments are possible.
- Converter 400 optionally includes a second filter 435, for example for IR/near-IR energy, and may then optionally include a second energy recapturer 440.
- Filter 435 may include a photonic crystal dielectric mirror combined with a planar deflection structure (e.g., a prism array/grating structure).
- Converter 400 may also include an optional amplifier/gain adjustment 445 for adjusting one or more parameters (e.g., increasing the signal amplitude of an encoded, optionally filtered, and frequency-shifted signal). Other, or additional, signal parameters may be adjusted by adjustment 445.
- FIG. 5 illustrates a generalized architecture 500 for a hybrid photonic VR/AR system 505.
- Architecture 500 exposes system 505 to ambient real world composite electromagnetic wave fronts and produces a set of display image primitives 510 for a human visual system (HVS).
- Set of display image primitives 510 may include or use information from the real world (an AR mode) or the set of display image primitives may include information wholly produced by a synthetic world (a VR mode).
- System 505 may be configured to be selectively operable in either or both modes.
- system 505 may be configured such that the quantity of real-world information used in the AR mode may be selectively varied.
- System 505 is robust and versatile.
- System 505 may be implemented in many different ways.
- One embodiment produces image constituent signals from the synthetic world and interleaves these synthetic signals, in an AR mode, with image constituent signals produced from the real world ("real-world signals"). These signals may be channelized, processed, and distributed, as described in incorporated patent application 12/371,461, using a signal processing matrix of isolated optic channels.
- System 505 includes a signal processing matrix that may incorporate various passive and active signal manipulation structures in addition to any distribution, aggregation, disaggregation, and/or physical characteristic shaping.
- These signal manipulation structures may also vary based upon a particular arrangement and design goal of system 505.
- these manipulation structures may include a real world interface 515, an augmenter 520, a visualizer 525, and/or an output constructor 530.
- Interface 515 includes a function similar to that of a display image primitive precursor in converting the complex composite electromagnetic wave fronts of the real world into a set of real world image constituent signals 535 that are channelized and distributed and presented to augmenter 520.
- system 505 is quite versatile, and there are many different embodiments. Characteristics and functions of the manipulation structures may be influenced by a wide range of considerations and design goals. All of these cannot be explicitly detailed herein, but some representative embodiments are set forth. As described in the incorporated patent applications and herein, architecture 500 is enabled to employ a combination of technologies (e.g., hybrid), each of which may be particularly advantageous for one part of the production of the set of DIPs 510, to produce an overall result that is superior to relying on a single technology for all parts of the production.
- the complex composite electromagnetic wave fronts of the real world include both visible and invisible wavelengths. Since the set of DIPs 510 also includes visible wavelengths, it may be thought that signals 535 must be visible as well. As explained herein, not all embodiments achieve superior results when signals 535 are in the visible spectrum.
- System 505 may be configured for use with visible signals 535. There are advantages, for some embodiments, to providing signals 535 using wavelengths that are not visible to the HVS. As used herein, the following ranges of the electromagnetic spectrum are relevant: a) Visible radiation (light) is electromagnetic radiation with a wavelength between 380 nm and 760 nm (400 THz - 790 THz).
- Infrared (IR) radiation is invisible (to the HVS) electromagnetic radiation with a wavelength between 1 mm and 760 nm (300 GHz - 400 THz) and includes far-infrared (1 mm - 10 μm), mid-infrared (10 μm - 2.5 μm), and near-infrared (2.5 μm - 750 nm).
- UV radiation is invisible (to the HVS) electromagnetic radiation with a wavelength between 380 nm and 10 nm (790 THz - 30 PHz).
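The three ranges above can be expressed as a simple band classifier; the exact boundary handling (which band owns a shared endpoint) is an assumption for illustration:

```python
# Band classification using the wavelength ranges listed above
# (boundaries in nm, taken from the figures given in the text).

def classify_nm(wavelength_nm):
    if 10 <= wavelength_nm < 380:
        return "UV"
    if 380 <= wavelength_nm <= 760:
        return "visible"
    if 760 < wavelength_nm <= 1_000_000:   # up to 1 mm
        return "IR"
    return "out of range"

print(classify_nm(532))   # visible
print(classify_nm(850))   # IR
print(classify_nm(254))   # UV
```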
- Interface 515 of a non-visible real-world signal embodiment produces signals 535 in the infrared/near-infrared spectrum.
- the non-visible signals 535 are produced using a spectrum map that maps particular wavelengths, or bands of wavelengths, of the visible spectrum to predetermined wavelengths, or bands of wavelengths, in the infrared spectrum. This allows signals 535 to be processed efficiently within system 505 as infrared wavelengths and allows system 505 to restore signals 535 to real-world colors.
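A minimal sketch of such a spectrum map follows; the specific visible-band-to-IR-carrier pairings are hypothetical illustrations, not values from the disclosure:

```python
# Illustrative spectrum map: visible bands are carried on predetermined
# near-IR wavelengths inside the system and restored to real-world color
# at the output.

VISIBLE_TO_IR = {
    "blue":  (470, 1310),   # (visible nm, assumed IR carrier nm)
    "green": (530, 1490),
    "red":   (630, 1550),
}
# Invert the map so the output stage can restore real-world colors.
IR_TO_VISIBLE = {ir: vis for vis, ir in VISIBLE_TO_IR.values()}

def encode_band(band):
    """Map a visible band to its in-system IR carrier wavelength."""
    return VISIBLE_TO_IR[band][1]

def restore_color(ir_nm):
    """Restore an IR carrier back to its real-world visible wavelength."""
    return IR_TO_VISIBLE[ir_nm]

print(encode_band("green"), restore_color(1490))  # 1490 530
```

Because the map is bijective, round-tripping a band through the IR carrier recovers the original visible wavelength, which is the property the text relies on for restoring real-world colors.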
- Interface 515 may include other functional and/or structural elements, such as a filter to remove IR and/or UV components from the received real-world radiation. In some applications, such as a night-vision mode using IR radiation, interface 515 will exclude an IR filter or will have an IR filter that allows some IR radiation of the received real-world radiation to be sampled and processed.
- Interface 515 will also include real-world sampling structures to convert the filtered received real-world radiation into a matrix of processed real-world image constituent signals (similar to a matrix of display image primitive precursors), with these processed real-world image constituent signals channelized into a signal distribution and processing matrix.
- the signal distribution and processing matrix may also include frequency/wavelength conversion structures to provide the processed real world image constituent signals in the IR spectrum (when desired).
- interface 515 may also preprocess selected characteristics of the filtered real world image constituent signals, such as including a polarization filtering function (e.g., polarization-filter the IR/UV filtered real world image constituent signals or polarization-filter, sort, and polarization homogenize, and the like).
- interface 515 may prepare signals 535 appropriately.
- it may be desirable to have a default signal amplitude at a maximum value (e.g., default "ON").
- it may be desirable to have a default signal amplitude at a minimum value (e.g., default "OFF").
- other embodiments may have some channels that provide defaults under different conditions, not all in a default ON or a default OFF state.
- Setting polarization states of signals 535, whether visible or not, is one role of interface 515.
- augmenter 520 is a special structure in system 505 for further signal processing. This signal processing may be multifunctional, operating on signals 535; some or all of these may be considered "pass-through" signals based upon how augmenter 520 operates upon them.
- These multiple functions may include: a) manipulating signals 535, such as, for example, independent amplitude control of each individual real world image constituent signal, setting/modifying frequency/wavelength, and/or logic state, and the like, b) producing a set of independent synthetic world image constituent signals with desired characteristics, and c) interleaving, at a desired ratio, some or all of the "passed through" real world image constituent signals with the produced set of synthetic world image constituent signals to produce a set of interleaved image constituent signals 540.
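The interleaving step described above (merging pass-through real-world signals with synthetic-world signals at a desired ratio) can be sketched as follows; the channel ordering and ratio handling are simplifying assumptions, and a real-world fraction of zero corresponds to the pure VR mode:

```python
# Sketch of the augmenter's interleaving step: pass-through real-world
# signals and generated synthetic signals merged at a chosen ratio.

def interleave_signals(real, synthetic, real_fraction):
    """Take the leading real_fraction of real channels, then fill the
    remainder of the output with synthetic channels."""
    n_real = round(len(real) * real_fraction)
    return real[:n_real] + synthetic[: len(real) - n_real]

real = [f"R{i}" for i in range(4)]    # real-world image constituent signals
synth = [f"S{i}" for i in range(4)]   # synthetic-world signals

print(interleave_signals(real, synth, 0.5))  # ['R0', 'R1', 'S0', 'S1']
print(interleave_signals(real, synth, 0.0))  # pure VR: ['S0', 'S1', 'S2', 'S3']
```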
- Augmenter 520 is a producer of the set of synthetic world image constituent signals in addition to a processor of received image constituent signals (e.g., real world).
- System 505 is configured such that all signals may be processed by augmenter 520.
- augmenter 520 is a multi-layer optical device composite defining a plurality of radiation valving gates (each gate related to one signal). Some gates, configured for possible pass-through, individually receive some of the real-world signals for controllable pass-through; other gates, configured for production of the synthetic-world signals, receive a background radiation, isolated from the pass-through signals, for production of the synthetic-world image constituent signals.
- the gates for the production of the synthetic world in such an implementation thus create the synthetic world signals from the background radiation.
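A minimal, hypothetical model of the two gate types described above (all names and values are invented for illustration and are not taken from the disclosure): pass-through gates controllably attenuate a received real-world signal, while production gates modulate an isolated background radiation level to create a synthetic signal.

```python
# Hypothetical sketch of the per-channel valving-gate model.
# Each gate handles exactly one optically isolated channel.

class Gate:
    def __init__(self, kind):
        # Two gate kinds, per the disclosure: controllable pass-through of a
        # real-world signal, or production of a synthetic signal from an
        # isolated background radiation supply.
        assert kind in ("pass_through", "production")
        self.kind = kind

    def output(self, real_signal=0.0, background=0.0, valve=1.0):
        """valve in [0, 1]: 0 = fully closed (dark), 1 = fully open."""
        source = real_signal if self.kind == "pass_through" else background
        return source * valve

real_gate = Gate("pass_through")
synth_gate = Gate("production")
assert real_gate.output(real_signal=0.8, valve=0.5) == 0.4
assert synth_gate.output(background=1.0, valve=0.25) == 0.25
```

In this toy model, a pure-VR mode corresponds to closing all pass-through valves (valve=0) while driving the production gates.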
- architecture 500 includes multiple, e.g., two, independent sets of display image primitive precursors that are selectively and controllably processed and merged.
- Interface 515 functions as one set of display image primitive precursors and augmenter 520 functions as a second set of display image primitive precursors.
- the first set produces image constituent signals from the real world and the second set produces image constituent signals from the synthetic world.
- architecture 500 permits additional sets of display image primitive precursors (1 or more making a total of three or more display image primitive precursors) to be available in system 505 that can make additional channelized set(s) of image constituent signals available to augmenter 520 for processing.
- augmenter 520 defines a master set of display image primitive precursors that produces the interleaved signals 540, wherein some of the interleaved signals were initially produced by one or more preliminary sets of display image primitive precursors (e.g., interface 515 producing real world image constituent signals) and some are produced directly by augmenter 520.
- Architecture 500 does not require that all display image primitive precursors employ the same or complementary technologies.
- architecture 500 may provide a powerful, robust, and versatile solution to one or more of the range of drawbacks, limitations, and disadvantages to current AR/VR systems.
- the channelized signal processing and distribution arrangement may aggregate, disaggregate, and/or otherwise process individual image constituent signals as the signals propagate through system 505. A consequence of this is that the number of signal channels in signals 540 may be different from a sum of the number of pass through signals and the number of generated signals.
- Augmenter 520 interleaves a first quantity of real world pass through signals with a second quantity of synthetic signals (for the pure VR mode of system 505, the first quantity is zero).
- Interleaved in this context means, broadly, that both types of signals are present; it does not require that each real world pass-through signal occupy a channel physically adjacent to a channel carrying a synthetic world signal. Routing is independently controllable via the channel distribution properties of system 505.
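As a non-authoritative sketch of such interleaving, assuming channels are represented as list entries and the real:synthetic ratio is configurable (the function name and channel labels are invented):

```python
# Hypothetical sketch: interleave real-world pass-through channels with
# synthetic channels at a desired ratio, keeping each signal in its own
# slot (standing in for an optically isolated channel).

def interleave_channels(real, synthetic, ratio=(1, 1)):
    """Merge two channel lists, taking ratio[0] real channels for every
    ratio[1] synthetic channels. For a pure-VR mode, pass real=[]."""
    r_take, s_take = ratio
    out, ri, si = [], 0, 0
    while ri < len(real) or si < len(synthetic):
        out.extend(real[ri:ri + r_take]); ri += r_take
        out.extend(synthetic[si:si + s_take]); si += s_take
    return out

channels = interleave_channels(["R0", "R1"], ["S0", "S1", "S2"], ratio=(1, 1))
# channels -> ['R0', 'S0', 'R1', 'S1', 'S2']
```

Note that physical adjacency in the output list is incidental; per the disclosure, routing need not place real and synthetic channels next to each other.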
- Visualizer 525 receives interleaved signals 540 and outputs a set of visible signals 545.
- synthetic world image constituent signals of signals 540 were produced in a non-visible range of the electromagnetic spectrum (e.g., IR or near IR).
- some or all of the real world signals 535 passed through by augmenter 520 had been converted to a non-visible range of the electromagnetic spectrum (which may also overlap, or be wholly or partially included in, the range for the synthetic world signals).
- Visualizer 525 performs frequency/wavelength modulation and/or conversion of non-visible signals.
- The signals, synthetic and real-world, are defined and produced using a false color map of the non-visible; appropriate colors are restored to the frequency-modified real world signals, and the synthetic world may be visualized in terms of real world colors.
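The false-color workflow described above can be sketched as a simple invertible mapping; all wavelength values and names here are invented for illustration and are not specified by the disclosure:

```python
# Hypothetical sketch: pair each visible primary with a non-visible
# (e.g., near-IR) carrier wavelength in nm. Signals are produced or
# converted onto the IR carriers, then the inverse map restores visible
# colors at the visualizer stage.

FALSE_COLOR_MAP = {"red": 850, "green": 905, "blue": 940}  # visible -> IR carrier (nm)
RESTORE_MAP = {ir: color for color, ir in FALSE_COLOR_MAP.items()}

def to_carrier(color):
    """Map a visible color name to its assumed non-visible carrier."""
    return FALSE_COLOR_MAP[color]

def restore(carrier_nm):
    """Invert the false-color map at the visualizer stage."""
    return RESTORE_MAP[carrier_nm]

assert restore(to_carrier("green")) == "green"
```

The key property the sketch illustrates is round-trip fidelity: as long as the map is one-to-one, the real-world colors can be restored exactly.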
- Output constructor 530 produces the set of display image primitives 510 from visible signals 545 for perception by the HVS, whether for example by direct view or projection.
- Output constructor 530 may include consolidation, aggregation, disaggregation, and channel routing/grouping of visible signals 545.
- Constructor 530 may also include amplification of some or all of visible signals 545, bandwidth modification (e.g., aggregation and time multiplexing of multiple channels having signals with a preconfigured timing relationship - that is they may be produced out of phase and combined as signals to produce a stream of signals at a multiple of the frequency of any of the streams), and other image constituent signal manipulations.
- Two streams in a 180-degree phase relationship may double the frequency relative to either stream.
- merged streams that are in phase with each other may increase the signal amplitude (e.g., two in-phase streams may double the signal amplitude, and the like).
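The two phase-relationship claims above can be checked with a toy numeric model; the binary pulse streams and the `merge_streams` helper are invented for illustration:

```python
# Hypothetical sketch: combining two equally sampled binary pulse streams.
# Out-of-phase merging fills every slot (doubling the pulse rate);
# in-phase merging makes pulses coincide, so amplitudes add instead.

def merge_streams(a, b):
    """Sample-wise sum of two equally sampled streams."""
    return [x + y for x, y in zip(a, b)]

in_phase     = [1, 0, 1, 0, 1, 0]
out_of_phase = [0, 1, 0, 1, 0, 1]  # same stream, shifted 180 degrees

doubled_rate = merge_streams(in_phase, out_of_phase)  # -> [1, 1, 1, 1, 1, 1]
doubled_amp  = merge_streams(in_phase, in_phase)      # -> [2, 0, 2, 0, 2, 0]
```

The first result has a pulse in every sample slot (twice the pulse frequency of either input); the second keeps the original rate but doubles amplitude, matching the two bullet points above.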
- FIG. 6 illustrates a hybrid photonic VR/AR system 600 implementing an embodiment of system 500.
- System 600 includes dashed boxes mapping corresponding structures between system 600 and system 505 of FIG. 5.
- System 600 includes an optional filter 605, a "signalizer" 610, a real world signal processor 615, a radiation diffuser 620 powered by a radiation source 625 (e.g., IR radiation), a magneto photonic encoder 630, a frequency/wavelength converter 635, a signal processor 640, a signal consolidator 645, and output shaper optics 650.
- Filter 605 removes unwanted wavelengths from ambient real world illumination incident on interface 515. What is unwanted depends on the application and design goals (e.g., night vision goggles may want some or all IR radiation, while other AR systems may desire to remove UV/IR radiation).
- Signalizer 610 functions as a display image primitive precursor to convert the filtered incident real world radiation into real world image constituent signals and to insert individual signals into optically isolated channels of a signal distributor stage. These signals may be based upon a composite or non-composite imaging model.
- Processor 615 may include a polarization structure to filter, sort, and homogenize polarization, and a wavelength/frequency converter when some or all of the real world pass-through image constituent signals are to be converted to a different frequency (e.g., IR).
- Diffuser 620 takes radiation from radiation source 625 and sets up a background radiation environment for encoder 630 to generate synthetic world image constituent signals. Diffuser 620 keeps the background radiation isolated from the real world pass-through channels.
- Encoder 630 concurrently receives and processes the real world pass through signals (e.g., it is capable of modulating these signals among other things) and produces the synthetic world signals.
- Encoder 630 interleaves/alternates signals from the real world and from the synthetic world and maintains them in optically isolated channels.
- the real world signals are depicted as filled-in arrows and the synthetic world signals are depicted as unfilled arrows to illustrate the interleaving/alternating.
- FIG. 6 is not meant to imply that encoder 630 is required to reject a significant portion of the real world signals.
- Encoder 630 may include a matrix of many display image primitive precursor-type structures to process all the real world signals and all the synthetic world signals.
- Converter 635, when present, converts the non-visible signals to visible signals.
- Converter 635 may thus process synthetic world signals, real world signals, or both. In other words, this conversion may be enabled on individual ones of the signal distribution channels.
- Signal processor 640, when present, may perform amplitude/gain modification, bandwidth modification, or other signal modification/modulation.
- Signal consolidator 645, when present, may organize (e.g., aggregate, disaggregate, route, group, cluster, duplicate, and the like) signals from visualizer 525.
- Output shaper optics 650, when present, perform any necessary or desirable signal shaping or other signal manipulation to produce the desired display image primitives to be perceived by the HVS. This may include direct view, projection, reflection, a combination, and the like. The routing/grouping may enable 3D imaging or other visual effects.
- System 600 may be implemented as a stack, sometimes integrated, of functional photonic assemblies that receive, process, and transmit signals in discrete optically isolated channels from the time the signals are produced until, if ever, they are included in a display image primitive for propagation to the HVS alongside other signals in other display image primitives.
- the field of the present invention is not single; rather, it combines two related fields, augmented reality and virtual reality, addressing and providing an integrated mobile device solution that solves critical problems and limitations of the prior art in both fields.
- a brief review of the background of these related fields will make evident the problems and limitations to be solved, and set the stage for the proposed solutions of the present disclosure.
- VIRTUAL REALITY: "A realistic simulation of an environment, including three-dimensional graphics, by a computer system using interactive software and hardware."
- Abbreviation: VR
- AUGMENTED REALITY: "An enhanced image or environment as viewed on a screen or other display, produced by overlaying computer-generated images, sounds, or other data on a real-world environment." AND: "A system or technology used to produce such an enhanced environment." Abbreviation: AR
- Virtual reality, sometimes referred to as immersive multimedia, is a computer-simulated environment that can simulate physical presence in places in the real world or imagined worlds. Virtual reality can recreate sensory experiences, including virtual taste, sight, smell, sound, touch, etc.
- Augmented reality is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data.
- Simulators relate to mobile navigation of simulated worlds (virtual reality).
- a sub-category of simulators then would be "personal simulators," or at most, “partial virtual reality,” in which a stationary user is equipped with an immersive HMD (head mounted display) and haptic interface (e.g., motion-tracked gloves), which enable a partial "virtual-reality-like" navigation of a simulated world.
- a CAVE system would, on the other hand, qualify schematically as a limited virtual reality system, as navigation past the dimensions of the CAVE would only be possible by means of a moveable floor, and once the limits of the CAVE itself were reached, what would follow would be another form of "partial virtual reality.”
- an essential, defining characteristic is that there is a mapping of the simulation, whether entirely synthetic or hybrid, to a real space.
- a real space may be as basic as a room inside a laboratory or soundstage, and simply a grid that maps and calibrates, in some ratio, to the simulated world.
- This differentiation is not evaluative, as a partial VR which provides real-time natural interface (head-tracking, haptic, auditory, etc.) without being mobile or mapping to an actual, real topography, whether natural, man-made, or hybrid, is not fundamentally less valuable than a partial VR system which simulates physical interaction and provides sensory immersion.
- Absent a podiatric feedback system or, more universally, a full-body, range-of-motion feedback system, and/or a dynamically-deformable mechanical interface-interaction surface which supports the user's simulated but (to their senses) full-body movement over any terrain, any stationary VR system, whether standing, sitting, or reclining, is by definition "partial."
- the next step is to identify the implicit question of by what means is a "mobile point of view” realized.
- the answer is that providing a mobile view of the simulation requires two components, themselves realized by a combination of hardware and software: a moving image display means, by which the simulation can be viewed, and a motion-tracking means, which can track the movement of the device that includes the display in 3 axes of motion. This means measuring the position over time of a 3-dimensional viewing device from a minimum of three tracking points (two, if the measurements are mapped so that the position on a third axis can be inferred), in relation to a 3-axis frame of reference, which can be any arbitrary 3D coordinate system mapped to a real space, although for practical purposes of mechanically navigating the space, two of the axes will form a plane that is a ground plane.
- VR applications - where the user is immersed in a synthetic environment - are more concerned with relative tracking than with absolute accuracy. Since the user's world is completely synthetic and self-consistent, the fact that his/her head just turned 0.1 degrees is much more important than knowing within even 10 degrees that it is now pointing due North.
- AR systems such as AUGSIM, do not have this luxury.
- AR tracking must have good resolution so that virtual elements appear to move smoothly in the real world as the user's head turns or vehicle moves, and it must have good accuracy so that virtual elements correctly overlay and are obscured by objects in the real world.
- LAAS Local Area Augmentation System
- AI (artificial intelligence) software, such as the Massive software used to animate the armies in The Lord of the Rings movies, can generate realistic water, clouds, fire, etc., and otherwise integrate and combine all elements, just as computer games and military simulation software do.
- the computer can "line up" its virtual resort with what the guest/player or employee/actor sees before they put on the VR goggles. Therefore, through a semi-transparent version of the binocular flat panel displays, if the virtual version were superimposed over the real resort, the one would match up with the other very precisely.
- In VR HMD displays, the user views a single panel or two separate displays.
- the typical shape of such HMDs is that of a goggle or face-mask, although many VR HMDs have the appearance of a welder's helmet with a bulky enclosed visor. To ensure optimal video quality, immersion, and lack of distraction, such systems are fully enclosed, with the periphery around the displays made of a light-absorbent material.
- the distinction between “video see-through” and “optical see-through” is the distinction between the user looking directly through a transparent or semi-transparent pixel array and display, which is disposed directly in front of the viewer, as part of the glasses optic itself, and looking through a semi-transparent projected image on an optic element also disposed directly in front of the viewer, generated from a (typically directly adjacent) micro- display and conveyed through forms of optical relay to the facing optic piece.
- the main, and possibly only partly-practical, type of direct view-through display (a transparent or semi-transparent display system) has historically been an LCD configured without an illumination backplane. Specifically, therefore, the AR video view-through glasses hold viewing optic(s) which include a transparent optical substrate onto which an LCD light modulator pixel array has been fabricated.
- a key aspect of perspective, from any viewing point, in addition to relative size, is realistic lighting/shading, including drop shadows, depending on lighting direction.
- occlusion of objects from any given viewing positioning is a key optical characteristic of perceived perspective and relative distance and positioning.
- Stationary VR gear has generally been employed for night-vision systems in vehicles, including aircraft; mobile night-vision goggles, however, can be considered a form of mediated viewing similar to mobile VR, because essentially what the wearer is viewing is a real scene (IR-imaged) in real-time, but through a video screen(s), and not in a form of "view-through."
- a night vision system is not a fusion or amalgam of virtual-synthetic landscape and real, but rather a direct-transmitted video image of IR sensor data as interpreted, through video signal processing, as a monochrome image of varying intensity, depending on the strength of the IR signature.
- As a video image, it does lend itself to real-time text/graphics overlay, in the same simple form in which the Eyetap was originally conceived, and as Google has stated is the intended primary purpose for its Glass product.
- "A mixed reality space image generation apparatus for generating a mixed reality space image formed by superimposing virtual space images onto a real space image obtained by capturing a real space, includes an image composition unit (109) which superimposes a virtual space image, which is to be displayed in consideration of occlusion by an object on the real space of the virtual space images, onto the real space image, and an annotation generation unit (108) which further imposes an image to be displayed without considering any occlusion of the virtual space images. In this way, a mixed reality space image which can achieve both natural display and convenient display can be generated."
- This system was designed to enable a fully-rendered industrial product, such as a camera, to be superimposed on a mockup (stand-in prop); both a pair of optical view-through HMD glasses and the mockup are equipped with positional sensors.
- a real-time, pixel-by-pixel look-up comparison process is employed to matte out the pixels from the mockup so that the CG-generated virtual model can be superimposed on a composited video feed (buffer-delayed, to enable the layering with a slight lag).
- Annotation graphics are also added by the system.
- The essential sources of data to determine matting, and thus ensure correct rather than erroneous occlusion in the composite, are the motion sensor on the mockup and the pre-determined lookup table that compares pixels to pull a hand matte and a mockup matte.
- Gao begins his survey of the field of view-through HMDs for AR with the following observations: There are two types of ST-HMDs: optical and video (J. Rolland and H. Fuchs, "Optical versus video see-through head-mounted displays," in Fundamentals of Wearable
- the major drawbacks of the video see-through approach include: degradation of the image quality of the see-through view; image lag due to processing of the incoming video stream; and potential loss of the see-through view due to hardware/software malfunction.
- the optical see-through HMD (OST-HMD) provides a direct view of the real world through a beamsplitter and thus has minimal effect on the view of the real world. It is highly preferred in demanding applications where a user's awareness of the live environment is paramount.
- the Gao proposal is to employ two display-type devices, as the specification of the spatial light modulator which will selectively reflect or transmit the live image is, operatively, essentially the specification of an SLM for the same purposes as in any display application.
- Gao specifies a duplication of what he refers to as "folded optics," but this is essentially nothing other than a dual version of the Mann Eyetap scheme, requiring in total two "folding optics" elements (e.g., planar grating/HOE or other compact prism or "flat" optics, one for each source), plus two objective lenses (one for the wave-front from the real view, one at the other end for focus of the conjoined image), and a beam-splitter combiner.
- multiple optical elements are required to: 1) collect light of the real scene via a first reflective/folding optic (planar-type grating/mirror, HOE, TIR prism, or other "flat" optics) and from there pass it to the objective lens; 2) pass it to the next planar-type grating/mirror, HOE, TIR prism, or other "flat" optics to "fold" the light path again, all of which ensures that the overall optical system is relatively compact and contained in a schematic set of two rectangular optical relay zones; and 3) from the folding optics, pass the beam through the beam-splitter/combiner to the SLM, which then reflects or transmits on a pixelated (sampled) basis, and thus passes the variably modulated (varying the real image contrast and intensity to modify grey scale, etc.), now pixelated real image back to the beam splitter/combiner.
- the display generates, in sync, the virtual or synthetic/CG image, presumably also calibrated to ensure ease of integration with the modified, pixelated/sampled real wave-front. This image is passed through the beam-splitter to integrate, pixel-for-pixel, with the multi-step, modified and pixelated sample of the real scene, from thence through an eyepiece objective lens, and then back to another "folding optics" element to be reflected out of the optical system to the viewer's eye.
- Digital projection free-space optical beam-combining systems, which combine the outputs of high-resolution (2k or 4k) red, green, and blue image engines (typically, images generated by DMD or LCoS SLMs), are expensive; achieving and maintaining these alignments is non-trivial, and those designs are simpler than the 7-element set of the Gao scheme.
- the occluded pixel would simply be left "off," although this is not specified by Gao, nor does he relate the details of how the SLM will accomplish its image-altering function.
- the position of the reflective optical element that passes the real-scene wave-front portion to the objective lens has a real perspective position in relation to the scene which is, first, not identical to the perspective position of the viewer in the scene, as it is not flat nor positioned at dead center, and it captures only a wave-front sample, not the viewer's position. Furthermore, when mobile, it is also moving, and also not known to the synthetic image processing unit in advance. The number of variables in this system is extremely large by virtue of these facts alone.
- the system design becomes slightly simpler only with use of view-through, rather than reflective, SLMs; but even with the faster FeLCoS micro-displays, the frame rate and image speed are still substantially less than those of a MEMS device such as TI's DLP (DMD).
- recourse to a high-resolution DMD such as TI's 2k or 4k device means recourse to a very expensive solution, as DMDs with that feature size and count are known to have low yields and higher defect rates than can typically be tolerated for mass-consumer or business production and costs, and a very high price point for the systems in which they are employed now, such as the digital cinema projectors marketed commercially by TI OEMs Barco, Christie, and NEC.
- tags must in addition reflect not just the relative position of the tagged elements in a perspective view of the real space, but also a degree of both automated (pre-determined or software-calculated) priority and real-time, user-assigned priority; size of tags and degree of transparency, to name but two major visual cues employed by graphical systems to reflect informational hierarchy, must be managed and implemented as well.
- Passive optical pass-through HMDs must then be considered an incomplete model for implementing a mobile AR HMD and will, in retrospect, be seen as only a transitional stepping stone to an active system.
- Oculus Rift VR (Facebook) HMD: Somewhat paralleling the impact of the Google Glass product-marketing campaign, but with the difference that Oculus had actually led the field in solving, or beginning to substantially solve, some of the significant threshold barriers to a practical VR HMD (rather than following Lumus and BAE, as in the case of Google), the Oculus Rift VR HMD at the time of this writing is the leading pre-mass-release VR HMD product entering and creating the market for widely-accepted consumer and business/industrial VR.
- the Glyph actually implements a display solution which follows the previously established optical view-through HMD solution and structure, employing a Texas Instruments DLP DMD to generate a projected micro-image onto a reflective planar optic element, in configuration and operation the same as the planar optical elements of existing optical view-through HMDs, with the difference that a high-contrast, light-absorbent backplane structure is employed to realize a reflective/indirect micro-projector display type, with a video image belonging in the general category of opaque, non-transparent display images.
- Eli Peli, an official consultant to Google, followed up an earlier warning to Google Glass users, given in an interview with online site BetaBeat (May 19, 2014), to anticipate some eye strain and discomfort, with a revised warning (May 29, 2014) that sought to limit the cases and scope of potential usage.
- the concern centered on eye muscles being used in ways they are not designed for, or used to, for prolonged periods of time; the proximate cause identified in the revised statement was the location of the small display image, forcing the user to look up.
- Vrvana Totem: the departure from the Oculus VR Rift is in adopting the scheme of Jon Barrilleaux's "indirect view display," by adding binocular, conventional video cameras to allow toggling between a video-captured forward image and the generated simulation on the same optically-shrouded OLED display panel.
- Vrvana have indicated in marketing materials that they may implement this very basic "indirect view display," exactly following the Barrilleaux-identified schematic and pattern, for AR. It is evident that virtually any of the other VR HMD's of the present Oculus VR generation could be mounted with such conventional cameras, albeit with impacts on weight and balance of the HMD, at a minimum.
- Oculus VR has implemented a "low persistence" buffering system, in part to compensate for the still insufficiently-high pixel switching/frame rate of the OLED displays employed at the time of this writing.
- a further impact on the performance of existing VR HMDs is due to the resolution limitations of existing OLED and LCD panel displays. This in part contributes to the requirement of using 5-7" diagonal displays and mounting them at a distance from the viewing optics (and the viewer's eyes) to achieve a sufficient effective resolution, and contributes to the bulk, size, and balance of existing and planned offerings, which are significantly larger, bulkier, and heavier than most other optical headwear products.
- Video HMDs are employed for viewing video content, but not interactively or with any motion sensing capability, and thus without the capability for navigating a virtual or hybrid (mixed reality/AR) world.
- Such video HMDs have steadily improved over the past fifteen years, increasing in effective FOV, resolution, and viewing comfort/ergonomics, and providing a development path and advances that current VR HMDs have been able to leverage and build upon. But these, too, have been limited by the core performance of the display technologies employed, following the pattern of limitations observed for OLED, LCD, and DMD-based reflective/deflective optical systems.
- "High-acuity" VR has improved substantially in many respects: FOV, latency, head/motion tracking, lighter weight, size, and bulk.
- VR based on an enclosed version of the optical view-through system, but configured as a lateral projection-deflection system in which an SLM projects an image into the eye via a series of three optical elements, is limited in performance to the size of the reflected image, which is expanded but not much bigger than the output of the SLM (DLP DMD, other MEMS, or FeLCoS/LCoS), as compared to the total area of a standard eyeglass lens. Eye-strain risks from extended viewing of what is an extremely intense version of "close-up work," and the demands this will make on the eye muscles, are a further limitation on practical acceptance. SLM-type and -size displays also limit a practical path to improved resolution and overall performance, through the scaling costs of higher-resolution SLMs of the technologies referenced.
- Optical view-through systems generally suffer from the same potential for eye-strain by confining eye-muscle usage to a relatively small area, requiring relatively small and frequent eye-tracking adjustments within those constraints, and for more than brief periods of usage.
- Google Glass was designed to reflect expectations of limited-duration usage by positioning the optical element up and out of the direct rest position of the eyes looking straight ahead. But users have reported eye-strain nonetheless, as has been widely documented in the press through articles and interviews with Google Glass Explorers.
- Optical view-through systems are limited in overlaid, semi-transparent information density due to the need to organize tags with real-world objects in a perspective view.
- the demands of mobility and information density make passive optical view-through limited even for graphical information-display applications.
- a display-optics system which enables a fast compositing process, within the context of the human visual system, between the real scene wave-front and any synthetic elements.
- as many passive means as possible should be employed to minimize the burden on either on-board (to the HMD and wearer) and/or external processing systems.
- a display-optics system that is relatively simple and rugged, with few optical elements, few active device elements, and simple active device designs which are both of minimal weight and thickness, and robust under mechanical and thermal stress.
- A system which can both manage incoming wavelengths for the HVS and obtain effective information from those wavelengths of interest, via sensors and hybrids of these. IR, visible, and UV are typical wavelengths of interest.
- STRUCTURAL/ARCHITECTURES FOR TREATING AND PROCESSING PASS-THROUGH (REAL WORLD) ILLUMINATION: While other permutations and versions are enabled by the general features of the present disclosure, the two preferred embodiments essentially differ in the processing of the incoming natural light and in the channel(s) in the structured optics which convey that light, through subsequent processing stages, to the output surface of the inward/viewer-facing composite optics surfaces. In one case, all real-world pass-through illumination is down-converted to IR and/or near-IR "false colors" for efficient processing; in the other case, the real-world pass-through visible-frequency illumination is processed/controlled directly, without frequency/wavelength shifting.
- ARRAYS: this is preferably a hybrid magneto-photonic, pixel-signal processing and photonic encoding system.
- the same overall method, sequence and process is applied to the pass-through light channels in the version and case in which all the pass-through light is down-converted to IR and/or near-IR.
- a wearable HMD "glasses" or "visor" has a first optical element, which in preferred form is a binocular element, either left and right separate elements or one visor-like connected element, which intercepts the view-through, real-world wave-front(s) of optical rays emanating from the external world relatively forward of the viewer/wearer.
- This first element is a composite or structured element (e.g., either a substrate/structural optic on which layers of materials/films are deposited, or which is itself a periodic or non-periodic but complex 2D- or 3D-structured material, or a hybrid of composite and directly-structured), which implements IR and/or near-IR filtering and UV filtering.
- The filters may be gratings/structures (photonic crystal structures) and/or bulk films whose chemical composition implements reflection and/or absorption of the unwanted frequencies. These options for materials structuring are well-known to the relevant arts, with many options commercially available.
- IR filtering is eliminated and some elements of the sequence of functional stages are altered in order, eliminated or modified, following the pattern and structure of the present disclosure. Details of this category and version of embodiment are treated later in the following.
- MAXIMUM INPUT or PASS-THROUGH ILLUMINATION STAGE A similar filter, which optimally follows the first filters in the optical line-up sequence, is the next element to the relative right in the FIG.: either a polarization filter OR a polarization sorting stage. This may again be a bulk polarizer film or deposited material, and/or a polarization grating structure or any other polarization filtering structure and/or material which offers the best combination of practical features and benefits for any given embodiment, i.e., in terms of efficiency, cost of manufacture, weight, durability and other parameters for which optimization trade-offs may be required.
- Polarization filtering option results: After this sequence of optical elements disposed across the entire extent of the optical/optical-structural elements, the incident wave-front has been frequency-bracketed, and it has been polarization-mode bracketed and sorted/separated by mode. For visible light frequencies, the net brightness per mode channel has been reduced by the magnitude of the polarization filtering means which, reflecting the current efficiency of periodic gratings-structured materials, is practically close to 100% filtering efficiency, meaning that roughly 50% of the (unpolarized) incident light is eliminated per channel.
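The "roughly 50%" figure follows from averaging Malus' law over the uniformly distributed polarization angles present in unpolarized light. A quick numerical check (an illustration only, not part of the disclosure):

```python
import math

def malus_transmission(theta):
    # Malus' law: fraction of plane-polarized intensity passed by an
    # ideal polarizer at relative angle theta (radians).
    return math.cos(theta) ** 2

# Unpolarized light is a uniform mix of polarization angles; average
# the transmitted fraction over 0..pi.
N = 100_000
avg = sum(malus_transmission(math.pi * k / N) for k in range(N)) / N
print(round(avg, 3))  # -> 0.5: an ideal polarizer passes ~50%
```

An ideal polarizer thus halves unpolarized intensity per channel; efficient gratings-structured polarizers approach this limit, which is what the text's 50% estimate reflects.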
- PIXELIZATION or SUB-PIXELIZATION OF THE REAL-WORLD PASS-THROUGH ILLUMINATION, AND CHANNELS IMPLEMENTING THESE A sub-pixel subdivision of the incoming wave-front: an optical passive or active structure or operative stage implemented along with the preceding stages, and preferably following them, as that ordering will tend to reduce fabrication expense.
- This subdivision may be implemented by a wide variety of methods known to the art, as well as others yet to be devised, including: deposition of differential-index bulk materials employing photochemical resist-mask-etch processes, or materials fabrication from nano-particles in colloidal solution via electrostatic/van der Waals force-based methods and other self-assembly methods; focused ion beam etching, or embossing; and, via etching, cutting and embossing methods in particular, fabrication of capillary micro-hole arrays implementing wave-guiding by modified total index of refraction, or fabrication of other periodic structures implementing a photonic-crystal Bragg-grating type structure, or other periodic gratings or other structures fabricated in a bulk material.
- a sub-pixel sub-division/guiding material-structure to form an array over the area of the macro-optic/structure element may be fabricated by assembly of constituent parts, such as optical fibers and other optical-element precursors, including by methods disclosed elsewhere by the author of the present disclosure, as well as methods proposed by Fink and Bayindir for fiber-device-structured preform assembly, or fused glass or composites assembly methods.
- The pixel/sub-pixel components of the total view-field array are provided to the viewer using the system of the present proposal via two differing, "branched" processing sequences and operative structures, en route to the final pixel presentation to the viewer. It is one of the first stages and requirements of the present compound structure and sequence(s) of operative processes that pixel-by-pixel, and sub-pixel-by-sub-pixel, light-path control is implemented, at the appropriate stages.
- the visible light channel(s), which have been UV- and IR-filtered and polarization mode-sorted (and optionally, filtered to knock down the overall intensity of the pass-through illumination), are frequency-shifted to IR or near-IR, in either case non-visible frequencies, implementing a "false color" range of the same proportional band positioning, width and intensity.
- the HVS would detect and see nothing after the photonic pixel-signal-processing method of frequency/wavelength modulation and down-shifting.
- the subsequent photonic pixel signal processing of these channels then is essentially the same as is proposed for the generated pixel-signal channels, as disclosed in the following section.
- the pass-through channels are not frequency/wavelength modulated and down-converted to invisible IR and/or near IR.
- the preferred default configuration and pixel-logic state of the pass-through channels is "on"; e.g., in the case that a conventional linear Faraday-rotation switching scheme for pixel-state encoding/modulation is employed, including input and output polarization filtering means for any given polarization mode-sorted sub-channel, the analyzer (or output polarization means) will be essentially identical to the input polarization means, such that when the operative linear Faraday-effect pixel-logic-state encoder is addressed and activated, the operation is to reduce the intensity of the pass-through channel. Details of some of the features and requirements of this embodiment are disclosed in subsequent sections, following the details provided for the operative function and structure of the generated channels.
- If polarization filtering is combined with this preferred embodiment and variation, rather than mode sorting and implementation of separate mode channels which are then combined into a consolidated channel by polarization rotation means (to preserve as much of the original pixelated pass-through illumination as possible, such as by passive components (e.g., half-wave plates) and/or active magneto-optic or other mode/polarization-angle modulation means), then the overall brightness of the pass-through illumination will be reduced by typically around 50%. In some instances this will be preferred, given the relative visible-range performance, as of the present writing, of magneto-optic materials as a preferred class and method.
- With the background pass-through illumination brightness maxima therefore reduced proportionally, it may be correspondingly easier for the sub-system which provides the "generated" (artificial, non-pass-through) sub-pixel channels, and related methods and apparatus, to match, integrate and harmonize the generated image elements within a generally comfortable and realistic overall illumination range for the "augmented reality" imagery and view.
- the pass-through channels can be configured in a default "off" configuration, such that, if employing the typical linear Faraday-rotator scheme, the input polarization means (polarizer) and output means (analyzer) are opposite or "crossed."
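The two default schemes (parallel polarizer/analyzer = default "on"; crossed = default "off") can be sketched with Malus-law transmission after the Faraday stage. The 90° rotation and the function shape below are illustrative assumptions, not device parameters from the disclosure:

```python
import math

def pixel_transmission(analyzer_offset_deg, faraday_rotation_deg):
    """Fraction of plane-polarized light passed by the analyzer after
    the Faraday stage rotates the plane of polarization (Malus' law)."""
    rel = math.radians(analyzer_offset_deg - faraday_rotation_deg)
    return math.cos(rel) ** 2

# Default-"on" scheme: analyzer parallel to polarizer (offset 0 deg).
print(pixel_transmission(0, 0))    # unaddressed pixel: full pass-through
print(pixel_transmission(0, 90))   # addressed: rotated 90 deg -> dark

# Default-"off" scheme: analyzer crossed (offset 90 deg).
print(pixel_transmission(90, 0))   # unaddressed pixel: blocked
print(pixel_transmission(90, 90))  # addressed: rotated into alignment -> bright
```

Partial rotations between 0° and 90° give intermediate grey levels in either scheme, which is how variable-intensity pass-through attenuation would work.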
- frequency-dependent MO materials or other photonic modulation means, to the extent that they employ frequency-dependent/performance-determined materials
- UV is also an included option and may in the future be employed in some cases to shift input visible illumination to a convenient non- visible spectral domain for intermediate processing before final output.
- ARRAYS First, we consider the image-generation pixel-signal component, or in other words the pixel-signal-processing structure and operative sequence, which is preferably a hybrid magneto-photonic, pixel-signal processing and photonic encoding system.
- the next structure, process and element in the sequence is an optical IR and/or near-IR planar illumination dispersion structure and pixel-signal processing stage.
- an optical surface and structure (a film deposited or mechanically laminated to a structural substrate, or a patterning or deposition of materials, or a combination of methods known to the art, applied to the substrate directly) distributes IR and/or near-IR illumination evenly across the full optical area of the 100+ FOV binocular lens or continuous visor-type form-factor.
- the IR and/or near-IR illumination is distributed evenly by such means as: 1) a combination of leaky fiber disposed on the X-Y plane of the structure, either all in the X or Y directions or in a grid.
- Leaky fiber, such as has been developed and is commercially available from companies such as Physical Optics, leaks illumination transmitted substantially through the fiber core transversely, in a substantially even fashion over a specified design distance; this is combined with a diffusion layer, such as a non-periodic 3D bump-structured film (embossed non-periodic micro-surface) commercially available from Luminit, Inc., and/or other diffusion materials and structures known to the art; 2) side illumination from IR and/or near-IR LED edge arrays or IR and/or near-IR edge laser arrays, such as VCSEL arrays, projecting to intercept, as bulk illumination, such planar sequential beam expander/spreader optics as planar periodic gratings structures, including holographic optical element (HOE) structures, such as are commercially available from Lumus, BAE and other commercial suppliers referenced herein and in the previously referenced pending applications, and other backplane diffusion structures, materials and means; and, in general, other display backplane illumination methods, means and structures known to the art or which may be developed in the future.
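For the leaky-fiber option, leaking illumination "in a substantially even fashion over a specified design distance" implies the local out-coupling fraction must grow along the fiber as the remaining guided power falls. A toy discretized model (my illustration, not from the disclosure):

```python
# Model a fiber divided into N segments that must each leak the same
# absolute power. If P0 enters and each segment i (0-based) should emit
# P0/N, the fraction of the *remaining* guided power it must out-couple
# is 1/(N - i): small near the input, approaching 1 at the far end.
N = 10
P0 = 1.0
remaining = P0
emitted = []
for i in range(N):
    frac = 1.0 / (N - i)           # required local leak fraction
    emitted.append(remaining * frac)
    remaining -= remaining * frac

print([round(e, 6) for e in emitted])  # each segment emits P0/N = 0.1
```

In practice this graded out-coupling is engineered into the fiber (e.g., by varying the cladding perturbation along its length); the uniform-emission requirement itself is what forces the 1/(N−i) profile.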
- The purpose of this stage/structure in the sequence of operations and pixel-signal processing is to launch IR and/or near-IR backplane illumination which is confined to the relative interior of the compound optical/materials structure as proposed thus far, with the IR and/or near-IR filter(s) reflecting the injected IR and/or near-IR illumination back to the illumination layer/structure.
- the illumination source of the IR and/or Near IR may be LED, laser (such as VCSEL array), or hybrid of both, or other means known to the art or which may be developed in the future.
- the injected IR and/or near-IR illumination is also of a single polarization mode, preferably plane-polarized light.
- a polarization harmonization means is implemented by splitting the IR and/or near-IR LED and/or laser and/or other illumination source(s) with a polarization splitter or filter/reflector sequence, such as a fiber-optic splitter, and passing one of the plane-polarized components through either a passive and/or active polarization rotation means, such as a bulk magneto-optic or magneto-photonic rotator, or a sequence of passive means, such as a combination of half-wave plates, or a hybrid of these.
- a polarization filter, such as an efficient grating or 2D or 3D periodic photonic crystal-type structure set at an angle to the incident light, may bounce the rejected light into the polarization rotation optical sequence and channel, which then re-combines with the unaltered portion of the original illumination.
- In a waveguide, planar or fiber-optic, in which the polarization modes (plane-polarized) are separated, one branch passes through the polarization harmonization means and then rejoins the other branch subsequently.
- the source illumination may also be constrained in its own structure to produce only light plane-polarized at a given angle or range.
- the light may be generated and/or harmonized locally, in the HMD, or remotely from the HMD (such as a wearable vest with electrical power storage means) and conveyed via fiber-optics to the HMD.
- the illumination and/or harmonization stage and structures/means may be immediately adjacent to the compound optical structure described, or somewhere else in the HMD and conveyed optically, by optical fiber if more remote and/or via planar waveguides if closer.
- the "on" state is encoded by rotating the angle of polarization of the incoming plane-polarized light, such that when that light passes through a later stage of the pixel-signal processing system, a subsequent and opposite polarization filtering means (known as an "analyzer,”), the light will pass through the analyzer.
- the light passes through a medium or structure and material subjected to a magnetic field, uniform/bulk or structured photonic crystal or meta-material, typically solid (although it may also pass through an encapsulated cavity containing a gas, rarified vapor, or liquid), which possesses an effective figure of merit measuring the efficiency of the medium or material/structure in enabling the rotation of the angle of polarization.
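For a bulk linear Faraday rotator, the rotation angle scales as θ = V·B·L (effective Verdet-type constant × applied field × path length). The numbers below are illustrative assumptions for a thick-film MO material, not device parameters from the disclosure:

```python
def faraday_rotation_deg(verdet_deg_per_T_cm, field_T, length_cm):
    # Linear Faraday effect: rotation angle theta = V * B * L.
    return verdet_deg_per_T_cm * field_T * length_cm

# Example: assume an effective rotation coefficient of 1000 deg/(T*cm)
# and a 0.05 cm film; find the field needed for a 45-degree rotation.
target_deg, V, L = 45.0, 1000.0, 0.05
B_required = target_deg / (V * L)
print(B_required)  # required field in tesla, under these assumed numbers
```

This is why the figure of merit (rotation per unit length, per unit absorption) matters: a higher-merit material achieves the needed rotation with a thinner film and/or a weaker, lower-power switching field.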
- the hybrid MPC pixel-signal-processing system implements a memory or "latching" (no power until the pixel-logic state requires changing) system. This is accomplished by means of the following tuning and implementation of magnetic "remanence" methods, known to the art, in which the magnetic materials are fabricated either in bulk processing (e.g., Integrated Photonics' commercially available latching LPE thick MO Bi-YIG film [REFERENCE pull from our other disclosures]); and/or by implementation of the Levy et al. permanent-domain-latching periodic 1D gratings [REFERENCE pull from our other disclosures]; or by composite magnetic materials, combining a relatively "harder" magnetic material in juxtaposition/mixing with an optimized MO material, such that an applied field latches the low-coercivity, rectilinear-hysteresis-curve material, which, as an intermediate, maintains the magnetization (latching) of the MO/MPC material.
- the intermediate material may surround the MO/MPC material, or it may
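The latching behavior described above amounts to a set-and-forget pixel: a brief field pulse flips the remanent magnetization, and no power is drawn between state changes. A toy state model of that logic (my illustration; no material parameters from the disclosure):

```python
class LatchingMOPixel:
    """Toy model of a remanent ('latching') magneto-optic sub-pixel:
    a field pulse sets the magnetization state, which then persists
    with zero applied power until the next pulse."""

    def __init__(self):
        self.magnetization = -1   # remanent state: -1 ("off") or +1 ("on")
        self.pulses_applied = 0   # energy is spent only on pulses

    def apply_field_pulse(self, polarity):
        # Only a state *change* requires a pulse; counting pulses shows
        # the power advantage for mostly-static imagery.
        if polarity != self.magnetization:
            self.magnetization = polarity
            self.pulses_applied += 1

    @property
    def logic_state(self):
        return "on" if self.magnetization > 0 else "off"

px = LatchingMOPixel()
for want in (+1, +1, +1, -1, -1, +1):   # frame-by-frame desired states
    px.apply_field_pulse(want)
print(px.logic_state, px.pulses_applied)  # -> on 3
```

Six frames of desired state cost only three field pulses here; a non-latching design would dissipate hold power in every frame regardless of whether the state changed.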
- a second essential aspect and element of the preferred pixel-signal-processing, pixel-logic-encoding stage and method is efficient generation of the magnetic field which switches the magnetic state of the sub-pixel (the sub-pixel being the fundamental primitive of color systems such as RGB; for convenience when discussing the conventional components of a final color pixel, the naming convention is retained more generally, and distinctions made when needed).
- It is preferred that the field-generation structure (e.g., "coil") be disposed in the path of the pixel transmission axis, rather than on the sides.
- Transparent materials may include such available materials as ITO and other newer and forthcoming conductive materials which are transparent to the relevant frequencies. And/or, other materials, which are not necessarily transparent in bulk but which, in a periodic structure of the appropriate periodic element size, geometry, and periodicity, such as metals, may also be deposited or formed in the modulation region/sub-pixel transmission path.
- a third significant element of the preferred hybrid MPC pixel-signal processing solution for the pixel- signal-processing sub-system is the method of addressing an array of the sub-pixels.
- the preferred method, as referenced in the preceding, is found in pending US
- Wireless Addressing and Power of Device Arrays may be sufficient to consolidate the powering of the wireless array (sub-pixel) element, given the low power requirements, dispensing with a wireless power method via low-frequency magnetic resonance, although micro-ring-resonators may be more efficient, depending on materials choices and design details, than powering through micro-antennas.
- Wireless powering of the HMD or wearable device as a whole is a preferred method of powering the overall unit while reducing head-mounted weight and bulk, especially when combined with local high-power-density meta-capacitor systems, or other capacitor technologies, that can be powered up by the wireless low-frequency pack.
- a basic low-frequency magnetic resonance solution is available from Witricity, Inc. For more complex systems, reference is made to the US Patent Application , Wireless Power Relay.
- Such other pixel-signal-processing / pixel-logic-encoding means, including Mach-Zehnder interferometer-based modulators, whose efficiencies are typically also frequency/materials-system based and most efficient in IR and/or near-IR, may also be employed, though less preferably, as well as any number of other pixel-signal-logic encoding means designed in a
- Following the pixel-logic-state-encoding stage of the operative structure and process is an optional signal gain stage. The cases in which this option is relevant will be covered at what will be an evident point in the following presentation.
- Wavelength/frequency shifting stage: for the present particular version of the preferred Hybrid MPC Pixel-signal Processing system, a frequency-upconverting stage follows, employing a preferred nano-phosphor and/or quantum-dot (e.g., QD Vision) augmented phosphor color system (although a periodically-poled device/materials system is also specified as an option in the referenced disclosures).
- Commercially available basic technologies are offered by suppliers such as GE, Cree, and a wide range of other vendors known to commercial practice.
- a virtue of employing the hybrid MPC pixel-signal processing method is the high speed of the native MPC modulation, which has been demonstrated as sub-10 ns for a significant period of time, and sub-ns is currently the relevant benchmark.
- the speed of the phosphor excitation-emission response is comparably fast, if not as fast, but in aggregate and net, the total full-color modulation speed is sub-15 ns and theoretically will be optimized to an even lower net time duration.
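The sub-15 ns aggregate figure implies an enormous per-sub-pixel modulation budget. A back-of-envelope check, where the 5 ns phosphor contribution and the 120 Hz frame rate are my illustrative assumptions (only the sub-10 ns MPC and sub-15 ns aggregate figures come from the text):

```python
mpc_switch_ns = 10      # MPC polarization switching (sub-10 ns per the text)
phosphor_ns = 5         # assumed phosphor excitation-emission contribution
total_ns = mpc_switch_ns + phosphor_ns
print(total_ns)  # -> 15 (ns, matching the "sub 15 ns" aggregate figure)

# Modulation events available per sub-pixel within one 120 Hz frame:
frame_s = 1 / 120
events_per_frame = frame_s / (total_ns * 1e-9)
print(int(events_per_frame))  # hundreds of thousands of state changes per frame
```

Even at these conservative numbers, switching speed is clearly not the bottleneck; addressing, drive power, and upconversion efficiency dominate the design trade-offs instead.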
- a variant on the proposed structure adds a band-filter to each of the IR and/or near-IR sub-pixel channels which will, at the end of the processing sequence, be either "on" or "off" for upconversion to R, G, or B.
- This variant, while adding the complexity of a filter element, may be preferred if: 1) the hybrid MPC stage itself, in composition of materials, is an array of tailored materials which respond more efficiently to different sub-bands in the IR and/or near-IR domain (even though this is not likely to be the case, due to the almost 100% transmission efficiency and very-low-power polarization rotation of even bulk LPE MO films commercially available in that wavelength domain); or, much more likely, 2) the efficiency differences among nano-phosphor and/or quantum-dot-augmented nano-phosphor/phosphor materials formulations are great enough that a more precisely bracketed IR and/or near-IR frequency band for each ultimate R, G and B sub-pixel constituent is merited.
- Following this color processing stage, a sub-pixel group realized from the initial IR and/or near-IR illumination source continues through the consolidated optical pixel channel.
- Depending on design choices for the modulation and color stage component dimensions, optional expansion of the output pixel, preferably by diffusion means, including those referenced and as disclosed in the referenced applications, may be necessary (pixel spot-size reduction being far less likely, which requires an optical focusing or other method, as known to the relevant arts and as disclosed in certain of the referenced applications, especially [2008]).
- collimating optical elements, including lenslet arrays; optical fiber arrays embedded in textile composites with the fibers disposed parallel to the optical transmission axis; "flat" or planar inverse-index meta-material structures; and other optical methods known to the art, are employed.
- all elements are fabricated or realized in composite layers on the macro- optical element/structure, rather than requiring additional bulk optical eyepiece elements/structures.
- Further questions of fiber-type methods vs. laminate composites or deposition-fabricated multi-layer structures, or combinations/hybrids of more than one, are treated in the following section under structural /mechanical systems.
- Each final pixel may include at least two pixel components (beyond the color-system RGB sub-pixels described in the foregoing): one, the components, disposed in an array, which generate the ab-initio video image, which may include simple text and digital graphics but, for the full purpose of the present system, is capable of generating a high-resolution image from either CGI or relatively remote live or archived digital imagery, or composites and hybrids of same. This is as described in the foregoing.
- FREQUENCY PASS-THROUGH (i.e., not down-converted to IR/near-IR): Returning to the transmission and processing of real-world, non-generated light rays from the field of view through the structured and operative optics and photonics structures and stages;
- optical channels convey the wave-front portions, with low loss of wave-front, by employing available efficient methods of division.
- Surface lenslet arrays or mirror- funnel arrays may be employed in combination with the proposed subdivision methods, enabling very close-to-edge-to-edge ray capture efficiency, such that the captured wave-front portion is then coupled efficiently to the relative "core" of the subdivided/pixelated guidance optic/array structure.
- the area of the pixelated array formation devoted to the coupling means will receive a minimized percentage of wave-front, minimizing loss.
- Efficient wave-front capture, routing, and guided/pixelated segmentation requires, for certain versions and operating modes of the present system, broadband optical elements that focus and/or reflect visible AND IR and/or near-IR frequencies - and, as will be seen, this is despite the proposal to implement the IR and/or near-IR filter as the initial and first optical filtering structure and means in the optical line-up and sequence.
- In the IR and near-IR illumination stage there will be, interspersed through that stage, guiding structures for the "pass-through" captured illumination which are transparent to IR and/or near-IR but provide visible-frequency light-guiding/path confinement, so that IR and/or near-IR can be evenly distributed while not interfering with the channelized "pass-through" pixel components.
- An option is MPC pixel-signal processing in which an energized gain material is pumped, either optically, electrically, sonically, mechanically, or magnetically, as detailed in the referenced applications, and by other methods as may be known to the art or devised in the future, to implement an energy gain in the gain medium, augmenting the intensity of the transmitted "pass-through" component of the final pixel as it passes through the gain medium. It is not preferred that this be a variable, addressable stage, but rather a blanket gain-increase setting, if this design option is chosen.
- In the variant in which the IR filter is removable, the goal is to pass IR and/or near-IR light from the incoming real-world wave-front to the active modulating array sequence, so that the incoming "real" IR is passed through the pixel-signal-processing modulator and directly generates, to the extent that IR output is visible in the field of view, an analogous (monochrome or false-color IR gradient) image for the viewer, without requiring the intermediation of a sensor array.
- a gain stage may be implemented to boost the intensity of the pass-through IR (+ near IR, if beneficial) to the wavelength/frequency shifting stage.
- a base IR and/or near-IR background illumination, modulated in intensity to set an appropriate base level, may be turned on during the normal full-color operating mode, to the degree that the input IR radiation does not reach the threshold needed to activate the wavelength/frequency shifting stage and media.
- the removal/deactivation of the IR filtering means may be implemented mechanically, if a passive optical element is deployed in a hinged or cantilevered-hinged device which can be "flipped up"; or as an active component which is de-activated, such as an electrophoretic-type-activated bulk, encapsulated layer in which (as proposed here) electro-static (mechanical) actuation rotates a plurality of relatively flat filtering micro-elements, such that light at the minimum angle of incidence is passed and the plurality of rotated elements no longer filters the IR.
- Other passive or active activation/removal methods may be employed.
- the IR filter and polarization filter may both be removed, depending on whether the generative system is employed "actively," not just to generate a threshold, and to superimpose data over some portions of the incident real IR wave-front portions in the pixelated array. If employed actively, the preferred digital pixel-signal processing system, to maximize efficiency of the generative source, requires the initial polarization filter to implement the optical switch/modulator which encodes the pixel-logic state in the signal.
- the disadvantage for the pass-through system is that it reduces the intensity of the incoming IR and/or near-IR.
- An alternative embodiment of the present system which is designed to address this problem, disposes a gain stage prior to the pixel-signal-processing, pixel-logic- state- encoding stage, to boost the incoming signal.
- a three-component system which includes component sub-channels for the generative means, an incident visible-light component, and an incident IR component which has not been polarization filtered.
- a pixelated polarization filter element, which leaves this third sub-channel/component without a polarization filter element, must be implemented to realize this variant.
- In sequence after the lenslet or alternative optical capture means for maximizing the capture of the incoming real wave-front, or integrated with the lenslet, is a frequency splitter.
- One method is to implement opposing filters: one band-filter rejecting visible light, allowing only IR and/or near-IR light, and an adjacent filter rejecting IR and/or near-IR light.
- Grating structures are a preferred method of implementing the dual-filter-splitter arrangement, but other methods are known to the art as well, based on bulk-materials formulations, which may be deposited, by various methods known to the art and to be developed, in sequential stages to implement the two filtering surfaces.
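The opposing-filter splitter can be thought of as a pair of complementary spectral masks over the same sub-pixel aperture: every incoming wavelength is routed to exactly one of the two channels. A minimal sketch, assuming a single 700 nm visible/IR boundary (the cutoff value is my assumption, not from the disclosure):

```python
VISIBLE_IR_CUTOFF_NM = 700  # assumed visible/near-IR boundary, illustration only

def split_channels(spectrum_nm):
    """Route each wavelength to exactly one of the two complementary
    filter outputs: the visible channel or the IR/near-IR channel."""
    visible = [w for w in spectrum_nm if w < VISIBLE_IR_CUTOFF_NM]
    infrared = [w for w in spectrum_nm if w >= VISIBLE_IR_CUTOFF_NM]
    return visible, infrared

vis, ir = split_channels([450, 550, 650, 850, 1310])
print(vis)  # -> [450, 550, 650]
print(ir)   # -> [850, 1310]
# Complementary masks lose nothing in principle: every input wavelength
# appears in exactly one output channel.
```

Real grating or bulk-film splitters have finite transition bands rather than a hard cutoff, but the channel-routing logic is the same.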
- NB that UV is filtered before this stage, but preferably after the IR.
- In some versions the IR filter and polarizer stages are first and second and the UV filter is third; in others, IR is followed by UV and then the polarizer.
- Different arrangements have different value for different use cases, and different impacts on fabrication cost and particular sequences of processes).
- the two component optical channels are, as has been indicated, co-located and output together preferably into/at a pixel harmonizing means (diffusion and/or other mixing method and as may be available by other methods known to the art, or which will be devised in the future), such that the generative source is combined with the pass-through source and, just as with RGB sub- pixels of a conventional color-vision artificial additive color display system, form a final composite pixel.
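The final pixel described here is an additive combination of the generative and pass-through components, analogous to RGB sub-pixel mixing. A simple additive-compositing sketch (the 0..1 intensity scale, the clipping, and the gain parameter are my conventions, not from the disclosure):

```python
def compose_pixel(generated_rgb, passthrough_rgb, passthrough_gain=1.0):
    """Additively combine the generative sub-pixel component with the
    real-world pass-through component into one output pixel (0..1 scale)."""
    return tuple(
        min(1.0, g + passthrough_gain * p)
        for g, p in zip(generated_rgb, passthrough_rgb)
    )

# A pass-through landscape pixel plus a bright generated green overlay:
real = (0.30, 0.25, 0.20)     # channelized real-world component
overlay = (0.00, 0.60, 0.00)  # generated (ab-initio) component
print(tuple(round(c, 2) for c in compose_pixel(overlay, real)))  # -> (0.3, 0.85, 0.2)
```

The `passthrough_gain` knob stands in for the variable-transmission pass-through control discussed below: setting it to 0 yields pure VR (generated-only) output, 1 yields full AR mixing.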
- a composite color component for the final integrated pixel this one formed by the "generative" pixel component, which begins as a non-visible IR and/or near-IR “interior,” "injected” rear illumination, which is turned on or off, for each sub-pixel, at sub 10-ns speeds (and currently, sub 1-ns). That IR and/or near-IR sub-pixel then activates a composite phosphor material/structure, employing best current materials and systems available for producing the widest possible gamut.
- the generative component is a high frame-rate, high dynamic range, low-power, broad color gamut pixel-switching technology.
- the second component of the composite pixel is the pass-through component, which begins as an efficient high-percentage of the sub-divided portion of the overall wave-front impinging on the forward optical surface of the present HMD, incoming from the facing direction of the wearer.
- These wave-front portions are filtered for UV and IR, in normal mode, as well as polarization sorted or filtered (which is chosen will depend on the design strategy selected: either reduced real-world illumination base or maximized base). With the reduced base, i.e., polarization filtering, this results in reducing the overall brightness of the visible field of view substantially (on the order of 1/3 to 1/2, depending on the composition of polarization modes incident and the efficiency of the polarizer).
- With the pass-through pixel component sub-channels designed in a default "off" scheme (i.e., polarizer and analyzer in the preferred polarization modulation form are "crossed" rather than the same), and conveying no pass-through wave-front portions, the mobile HMD, given calibration with the real landscape and motion tracking, can function in mobile VR mode. As will be seen, in combination with the proposed sensor and related processing systems, the HMD can function as Barrilleaux's "indirect view display," with the pass-through turned off.
- By the variable-transmission means of the pass-through system, the HMD can be augmented into being a direct-view system. Its disadvantage will be in dynamic range and, without a generative means to supplement it, a relatively low-light limitation by comparison;
- the optimized system is one which combines an efficient generative component with a variable intensity, but lower- light level overall, pass-through component.
- IR and near-IR, if desired, are passed through the pixel-state system without loss; with the optional gain stage boosting the IR signal strength, and/or the IR/near-IR interior-injected illumination component raising the threshold/base intensity, on top of which the incoming pixelated IR strength is added/superposed, the IR/near-IR passes through the
- wavelength/frequency shifting means (preferred phosphor-type system) and, with either the system set to monochrome or false-color, a direct- view low-light or night vision system is realized.
- The generative system can operate and add graphics and full imagery, compensating for the reduced intensity of the incoming IR either with a signal from an auxiliary sensor system (see following), or simply by adding a base level, as proposed in the other configuration, to ensure that the energy input into the wavelength/frequency shift is enough to produce a sufficient output.
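The base-plus-gain superposition described above can be sketched numerically: the drive into the wavelength shifter is the amplified incoming IR added to an injected base level, and a phosphor-style converter emits nothing until that drive clears its threshold. All function names, the linear-above-threshold response, and the numeric values here are illustrative assumptions, not taken from the patent:

```python
def shifter_input(ir_signal, gain=1.0, base=0.0):
    """Pixel-wise drive level into the wavelength/frequency-shifting stage:
    amplified incoming IR superposed on an interior-injected base level."""
    return base + gain * ir_signal

def visible_output(drive, threshold, efficiency):
    """Toy phosphor-style response: no emission below threshold,
    then linear conversion above it."""
    return max(0.0, drive - threshold) * efficiency

# Weak IR alone falls below the conversion threshold ...
print(visible_output(shifter_input(0.05), threshold=0.2, efficiency=0.6))  # 0.0
# ... but gain plus an injected base lifts it into the usable range.
print(visible_output(shifter_input(0.05, gain=2.0, base=0.25),
                     threshold=0.2, efficiency=0.6))                       # ~0.09
```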
- The Bayindir/Fink "optical fabric" camera developed at MIT is an example of validation of a particular physical method of implementing a distributed array.
- A distributed textile-composite camera array disposed in the structure of the HMD mechanical frame (and, as per the following, doing double duty by also contributing to the structural system solution, rather than serving as a non-contributing load on the system) is a preferred version of implementing the advantageous multi-device array system, which provides for parallel, distributed data capture.
- A multi-point miniature sensor array, which can include multiple miniature camera optics-sensor array devices, is another preferred implementation of multi-perspective systems.
- Auxiliary IR sensors, again preferably arranged in multiple lower-resolution device arrays, can, as has been indicated, either provide an override low-light/night-vision feed to the display system, or provide corrective and supplementary data to the generative system, working in harmony and coordination with the real IR pass-through.
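The two roles of the auxiliary IR arrays, full override versus corrective supplement, amount to a per-pixel selection or blend between the real pass-through and the sensor feed. A minimal sketch of that combining logic (the function, mode names, and weighting scheme are assumptions for illustration, not the patent's method):

```python
def display_ir_feed(ir_pass_through, aux_ir, mode="blend", alpha=0.5):
    """Combine the real IR pass-through with the auxiliary sensor-array feed:
    either override entirely (low-light/night-vision mode) or blend the
    auxiliary data in as a correction, weighted by alpha."""
    if mode == "override":
        return aux_ir
    return [(1 - alpha) * p + alpha * a for p, a in zip(ir_pass_through, aux_ir)]

passthrough = [0.10, 0.20, 0.05]   # weak real IR samples
aux         = [0.40, 0.30, 0.50]   # auxiliary sensor samples
print(display_ir_feed(passthrough, aux, mode="override"))   # aux feed wins outright
print(display_ir_feed(passthrough, aux, mode="blend", alpha=0.5))
```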
- A Lytro-type light-field system for the visible spectrum, based on the same general arrangement, may be employed for sensors in other frequency bands, which, depending on the application, can include not only low-light/night vision, but also field analytics for other applications and use-cases, such as UV or microwave.
- A spatial reconstruction from non-visible data, or from non-visible data supplemented by GPS/LiDAR reference data, may be generated, and other dimensional data-collection correlations obtained, when performing sensor scans of complex environments.
- Compact mass-spectrometry, now being realized in smaller and smaller form factors, can also be contemplated for integration into an HMD as miniaturization proceeds.
- One or more micro "light-probes" (a reflective sphere whose surface can be imaged to extract a compact global reflectance map), positioned for instance at key vertices of the HMD (the right and left corners, or solely the center, paired with multiple imagers to capture the entire reflected surface; alternatively, a concave reflective partial-hemispherical "hole" can also be utilized, alone or preferably in combination with a sphere, either held in place via magnetic fields, or on a strong spindle or mostly hidden mounting, to extract lighting data from a compact, compressed reflection surface), can provide a highly accelerated method, in conjunction with the other related methods from photogrammetry, to parameterize the lighting, materials and geometry of a space, not least to accelerate fast graphic integration (shading, lighting, perspective rendering, including occlusion, etc.) of live and generated CGI.
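Extracting a global reflectance map from a mirror sphere is a standard photogrammetry/image-based-lighting technique: each pixel on an orthographic image of the sphere corresponds, via the reflection of the viewing ray about the local surface normal, to one world direction of incident light. The sketch below shows that geometric mapping only; it is the textbook construction, not the patent's specific pipeline:

```python
import math

def probe_reflection(x, y):
    """Map a pixel on an orthographic image of a mirror sphere
    (x, y in [-1, 1], with x*x + y*y <= 1) to the world direction whose
    light it reflects toward the camera (camera viewing along -z)."""
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))   # surface normal n = (x, y, z)
    d = (0.0, 0.0, -1.0)                           # incoming view ray
    dot = d[2] * z                                 # d . n  (d has only a z component)
    # Reflect the view ray about the normal: r = d - 2 (d . n) n
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, (x, y, z)))

print(probe_reflection(0.0, 0.0))  # (0.0, 0.0, 1.0): sphere center reflects the camera itself
print(probe_reflection(1.0, 0.0))  # (0.0, 0.0, -1.0): rim reflects the scene behind the probe
```

Sweeping this mapping over every sphere pixel yields the compact global lighting map the text refers to.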
- One preferred embodiment of structural-functional integration, with benefits to weight, bulk, size, balance, ergonomics, and cost, is implementation of a textile-composite structure of tensioned thin films in combination with a flexible optical-structural substrate, in particular an HMD frame preferably formed of Corning Willow Glass, which is folded (and preferably sealed), with all processing and functional electronics that must be integrated into the HMD (which can include the power supply, in less preferred versions which do not use wireless powering) fabricated on the folded glass frame.
- A protective coating is applied/wrapped or otherwise added to the functional-optical-structural members, such as the shockwave-activated material D3O, which is soft and resilient when not shocked but solidifies when impacted, providing a protective barrier for the less durable (though appreciably durable) Willow Glass structural/functional system.
- The folded Willow Glass, with the interior surface being the location of the system-on-glass electronics, is shaped in a cylindrical or semi-cylindrical form for added strength, to better protect the electronics from shock, and thereby also to enable a thinner substrate.
- Optical-fiber data and illumination are delivered via a flexible, textile-wrapped and protected cable (preferably with D3O, or another shock-resistant composite component, as an outer composite layer) from illumination, powering (preferably wireless), and data-processing units in a pocket, or integrated into an intelligent textile-composite wearable article on the user's body, with the load thereby flattened, weight-distributed, and balanced.
- Where the optical-fiber (data, light, and optionally power) cable is integrated with the composite Willow Glass frame, the optical fiber is bonded as a composite (preferable to the more expensive and unnecessary thermal fusing) to the data input points for E-O data transfer, and to the illumination insertion points on the display face.
- The display-frame structural elements are, in this version, also Willow Glass or Willow-Glass-type materials systems with optional additional composite elements; but instead of solid glass or polymer lenses forming the optical-form-factor elements (binocular pair or continuous visor), these are thin-film composite layers, following a lens-type preform to help form the desired surface geometries; compression ribs may also be employed to implement appropriate curvatures.
- A preferred option is to implement optical channel elements, such as optical fibers, as part of an aerogel-tensioned membrane matrix.
- A hollow IR/near-IR rigid shell may be employed, with solid (or semi-flexible) optical channels for the IR pass-through to the IR generative channel and for the visible pass-through channel; infiltrating the hollow and the spaces in between with aerogel, including aerogel under positive pressure, will realize an extremely strong, low-density, lightweight reinforced structural system.
- Aerogel-filament composites have been commercially developed, and advances in this category of composite aerogel systems continue to be made, providing a wide range of materials options for silica and other aerogels, now fabricated via low-cost manufacturing methods (Cabot, Aspen Aerogels, etc.).
- A further option, which can also be employed in hybrid form with the Willow Glass, is a graphene-CNT (carbon nanotube) functional-structural system, alone or, again preferably, in composite with aerogels.
- Graphene, CNT, and preferably graphene-CNT combinations as compression elements provide preferred lightweight, integrated structural systems with superior substrate qualities.
- The semi-flexible Willow Glass, or similar glass products from Asahi, Schott, and others as they are likely to be developed, and also (though less preferably near-term) polymer or polymer-glass hybrids, may also serve as the depositional substrate.
- The elements shown in the drawings/figures can also be implemented in a more separated or integrated manner, or even removed or rendered inoperable in certain cases, as is useful in accordance with a particular application.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biophysics (AREA)
- Cardiology (AREA)
- General Health & Medical Sciences (AREA)
- Heart & Thoracic Surgery (AREA)
- Eye Examination Apparatus (AREA)
- Processing Or Creating Images (AREA)
Abstract
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2018568159A JP2019521387A (ja) | 2016-03-15 | 2017-03-15 | ハイブリッドフォトニックvr/arシステム |
| CN201780030255.4A CN109564748B (zh) | 2016-03-15 | 2017-03-15 | 混合光子vr/ar系统 |
| CN202211274817.9A CN115547275A (zh) | 2016-03-15 | 2017-03-15 | 混合光子vr/ar系统 |
| JP2022031648A JP2022081556A (ja) | 2016-03-15 | 2022-03-02 | ハイブリッドフォトニックvr/arシステム |
Applications Claiming Priority (16)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201662308361P | 2016-03-15 | 2016-03-15 | |
| US201662308687P | 2016-03-15 | 2016-03-15 | |
| US201662308825P | 2016-03-15 | 2016-03-15 | |
| US201662308585P | 2016-03-15 | 2016-03-15 | |
| US62/308,585 | 2016-03-15 | ||
| US62/308,361 | 2016-03-15 | ||
| US62/308,687 | 2016-03-15 | ||
| US62/308,825 | 2016-03-15 | ||
| US15/457,980 US20180031763A1 (en) | 2016-03-15 | 2017-03-13 | Multi-tiered photonic structures |
| US15/458,009 | 2017-03-13 | ||
| US15/457,967 | 2017-03-13 | ||
| US15/458,009 US20180122143A1 (en) | 2016-03-15 | 2017-03-13 | Hybrid photonic vr/ar systems |
| US15/457,991 | 2017-03-13 | ||
| US15/457,967 US20180035090A1 (en) | 2016-03-15 | 2017-03-13 | Photonic signal converter |
| US15/457,991 US9986217B2 (en) | 2016-03-15 | 2017-03-13 | Magneto photonic encoder |
| US15/457,980 | 2017-03-13 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2017209829A2 true WO2017209829A2 (fr) | 2017-12-07 |
| WO2017209829A3 WO2017209829A3 (fr) | 2019-02-28 |
Family
ID=60477756
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2017/022459 Ceased WO2017209829A2 (fr) | 2016-03-15 | 2017-03-15 | Systèmes rv/ra photoniques hybrides |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2017209829A2 (fr) |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103460256B (zh) * | 2011-03-29 | 2016-09-14 | 高通股份有限公司 | 在扩增现实系统中将虚拟图像锚定到真实世界表面 |
| US10203762B2 (en) * | 2014-03-11 | 2019-02-12 | Magic Leap, Inc. | Methods and systems for creating virtual and augmented reality |
- 2017-03-15: WO PCT/US2017/022459 patent/WO2017209829A2/fr not_active Ceased
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12046166B2 (en) | 2019-11-26 | 2024-07-23 | Telefonaktiebolaget Lm Ericsson (Publ) | Supply of multi-layer extended reality images to a user |
| CN115022666A (zh) * | 2022-06-27 | 2022-09-06 | 北京蔚领时代科技有限公司 | 一种虚拟数字人的互动方法及其系统 |
| CN115022666B (zh) * | 2022-06-27 | 2024-02-09 | 北京蔚领时代科技有限公司 | 一种虚拟数字人的互动方法及其系统 |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2017209829A3 (fr) | 2019-02-28 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20180122143A1 (en) | Hybrid photonic vr/ar systems | |
| US12277658B2 (en) | Systems and methods for mixed reality | |
| CN109564748B (zh) | 混合光子vr/ar系统 | |
| US10297071B2 (en) | 3D light field displays and methods with improved viewing angle, depth and resolution | |
| Hainich et al. | Displays: fundamentals & applications | |
| CN111869204B (zh) | 为基于积分成像的光场显示来渲染光场图像的方法 | |
| US11051006B2 (en) | Superstereoscopic display with enhanced off-angle separation | |
| KR100809479B1 (ko) | 혼합 현실 환경을 위한 얼굴 착용형 디스플레이 장치 | |
| KR101926577B1 (ko) | 확장형 3차원 입체영상 디스플레이 시스템 | |
| WO2018076661A1 (fr) | Appareil d'affichage tridimensionnel | |
| CN113302547A (zh) | 具有时间交错的显示系统 | |
| US20210294119A1 (en) | Display apparatus for rendering three-dimensional image and method therefor | |
| Kara et al. | Connected without disconnection: Overview of light field metaverse applications and their quality of experience | |
| CN112739438A (zh) | 娱乐环境中的显示系统 | |
| US20220163816A1 (en) | Display apparatus for rendering three-dimensional image and method therefor | |
| US10802281B2 (en) | Periodic lenses systems for augmented reality | |
| Zhang et al. | Design and implementation of an optical see-through near-eye display combining Maxwellian-view and light-field methods | |
| Hua | Past and future of wearable augmented reality displays and their applications | |
| WO2017209829A2 (fr) | Systèmes rv/ra photoniques hybrides | |
| Wu et al. | Backward compatible stereoscopic displays via temporal psychovisual modulation | |
| Ebner et al. | Off-axis layered displays: Hybrid direct-view/near-eye mixed reality with focus cues | |
| Hua | Advances in Head‐Mounted Light‐Field Displays for Virtual and Augmented Reality | |
| Pastoor et al. | Mixed reality displays | |
| WO2018142418A1 (fr) | Appareil, procédé et système de visualisation de réalité augmentée et mixte | |
| CN207625713U (zh) | 视觉显示系统以及头戴显示装置 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| ENP | Entry into the national phase |
Ref document number: 2018568159 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17807150 Country of ref document: EP Kind code of ref document: A2 |
|
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18/01/19) |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 17807150 Country of ref document: EP Kind code of ref document: A2 |