CN120188198A - Dynamically changing avatar bodies in virtual experiences

Dynamically changing avatar bodies in virtual experiences

Info

Publication number
CN120188198A
Authority
CN
China
Prior art keywords
avatar
cage
avatar body
virtual experience
computer
Prior art date
Legal status
Pending
Application number
CN202480004666.6A
Other languages
Chinese (zh)
Inventor
亚当·塔克·伯尔
陈思
卢卡斯·库津斯基
阿德里安·保罗·朗兰兹
大卫·雷德基
Current Assignee
Roblox Corp
Original Assignee
Roblox Corp
Priority date
Filing date
Publication date
Application filed by Roblox Corp
Publication of CN120188198A


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating three-dimensional [3D] models or images for computer graphics
    • G06T19/20 Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T13/00 Animation
    • G06T13/20 Three-dimensional [3D] animation
    • G06T13/40 Three-dimensional [3D] animation of characters, e.g. humans, animals or virtual beings
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20036 Morphological image processing
    • G06T2207/20044 Skeletonization; Medial axis transform
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/44 Morphing
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2021 Shape modification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Some embodiments relate to methods, systems, and computer-readable media for dynamically changing an avatar body at runtime, while the avatar associated with the avatar body is participating in a virtual experience. In some implementations, the method includes identifying a first avatar body having a first body cage, identifying a target avatar body having a target body cage, and performing interpolation between the first body cage and the target body cage to obtain a second body cage corresponding to a second avatar body, thereby providing a transformation of the first avatar body to the second avatar body. The avatar body may also be changed in a configuration environment. Changing the avatar body may involve interpolating between a pair of body cages, or directly manipulating the body cage of the avatar body.

Description

Dynamically changing avatar bodies in a virtual experience
Cross Reference to Related Applications
The present application claims priority from U.S. provisional application No. 63/532,556, entitled "DYNAMICALLY CHANGING AVATAR BODIES IN A VIRTUAL EXPERIENCE," filed on August 14, 2023, the contents of which are incorporated herein by reference in their entirety.
Technical Field
The present disclosure relates generally to computer graphics, and more particularly, but not exclusively, to methods, systems, and computer readable media for dynamically changing an avatar body (including a representation of apparel worn by the avatar body) in a three-dimensional (3D) virtual environment.
Background
A multi-user electronic game or other type of virtual experience environment may involve the use of avatars, where the avatars represent users in a virtual experience. Different three-dimensional (3D) avatars vary in geometry/shape. For example, the avatars may have different body shapes (e.g., tall, short, robust, thin, etc.), may have different types (e.g., male, female, human, animal, alien, etc.), may have any number and type of limbs, etc. The avatar may be customized in terms of the number of garments and/or accessories worn (e.g., shirt worn on the torso, jacket worn on the shirt, scarf worn outside the jacket, hat worn on the head, etc.).
When a user wishes to change the respective avatar body and/or some visual aspect of the clothing (including accessories) worn by the avatar body while participating in a virtual experience or other type of three-dimensional environment, it is difficult to obtain satisfactory results in a computationally efficient manner.
Some embodiments are provided in view of the above.
The background description provided herein is for the purpose of presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
Disclosure of Invention
Embodiments of the present disclosure relate to techniques for dynamically changing a visual aspect of a user avatar (e.g., the visual appearance of an avatar associated with a user while the user is engaged in a virtual experience). The entire (original) avatar body may be changed or otherwise integrally transformed into a new (different) avatar body, or only regions/portions of the original avatar body (e.g., only the head or other body parts) may be selectively changed while other regions/portions of the original avatar body remain unchanged. The various techniques also provide specific methods for implementing aspects of the dynamic changes, such as skin deformation and facial action coding system (FACS) pose deformation, in near real-time using techniques that efficiently manage computing resources.
A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination thereof installed on the system that, in operation, causes the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by a data processing apparatus, cause the apparatus to perform the actions.
According to one aspect, a computer-implemented method for modifying a three-dimensional (3D) avatar body is provided that includes identifying a first avatar body having a first body cage, identifying a target avatar body having a target body cage, and performing interpolation between the first body cage and the target body cage to obtain a second body cage corresponding to a second avatar body, thereby providing a transformation of the first avatar body to the second avatar body.
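For illustration only, the following is a minimal sketch of the interpolation described above, assuming that the first body cage and the target body cage have vertices in one-to-one correspondence and that a blend factor alpha controls the amount of transformation; the function and parameter names are hypothetical and not part of the claimed method.

```python
import numpy as np

def interpolate_body_cage(first_cage: np.ndarray,
                          target_cage: np.ndarray,
                          alpha: float) -> np.ndarray:
    """Blend two body cages whose vertices correspond one-to-one.

    alpha = 0.0 keeps the first cage unchanged, alpha = 1.0 exactly
    matches the target cage (a complete transformation), and values
    in between yield a partial transformation (a blend of the two).
    """
    assert first_cage.shape == target_cage.shape
    return (1.0 - alpha) * first_cage + alpha * target_cage
```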
Various embodiments of computer-implemented methods are described herein.
In some embodiments, performing interpolation includes performing interpolation to generate a second body cage that exactly matches the target body cage, thereby providing a complete transformation.
In some implementations, performing interpolation includes transforming the first avatar body into a second avatar body, the second avatar body being a mixture between the first avatar body and the target avatar body, thereby providing a partial transformation.
In some implementations, performing interpolation includes deforming a portion of the first avatar body that is less than an entirety of the first avatar body.
In some implementations, deforming a portion of the first avatar body that is less than the entirety of the first avatar body includes deforming a portion of the first avatar body to perform a partial transformation of the portion of the first avatar body.
In some implementations, the first avatar body is part of a virtual experience, interpolation is performed as the avatar participates in the virtual experience, and a target avatar body is selected from a plurality of target avatar bodies in the virtual experience.
In some implementations, interpolation is performed in the configuration environment and the target avatar body is selected from a plurality of target avatar bodies in a library in the configuration environment.
In some implementations, the configuration environment includes a transformation tool that enables a user to control an amount of transformation of the first avatar body to obtain the second avatar body, and the interpolation is performed based on the amount of transformation.
In some implementations, the computer-implemented method further includes identifying a binding of the first avatar body, the binding including a skeleton of the first avatar body and a skin of the first avatar body, updating the binding of the first avatar body to correspond to the second body cage after performing the interpolation, and animating the first avatar body by moving the updated bound skeleton and deforming the updated bound skin.
In some implementations, moving the updated bound skeleton and deforming the updated bound skin includes reusing skin weights from the skin of the first avatar body based on determining areas of the updated bound skin affected by bones in the skeleton of the first avatar body.
According to another aspect, a computer-implemented method for modifying a three-dimensional (3D) avatar body is provided, the computer-implemented method comprising identifying a first avatar body having a corresponding first body cage, and performing an operation on the first body cage to generate a second body cage corresponding to a second avatar body to provide a transformation of the first avatar body to the second avatar body, wherein the operation comprises repositioning portions of the first body cage.
Various embodiments of computer-implemented methods are described herein.
In some embodiments, the operations are performed in a configuration environment, and wherein the configuration environment includes a transformation tool that enables a user to control aspects of the operations of the first body cage to obtain the second body cage, and wherein the operations are performed based on the aspects of the operations.
In some implementations, the computer-implemented method further includes identifying a binding of the first avatar body, the binding including a skeleton of the first avatar body and a skin of the first avatar body, updating the binding of the first avatar body to correspond to the second body cage after performing the operation, and animating the first avatar body by moving the updated bound skeleton and deforming the updated bound skin.
In some implementations, moving the updated bound skeleton and deforming the updated bound skin includes reusing skin weights from the skin of the first avatar body based on determining areas of the updated bound skin affected by bones in the skeleton of the first avatar body.
In some implementations, transforming the first avatar body into the second avatar body includes performing interpolation between the first body cage and the second body cage.
In some implementations, transforming the first avatar body into the second avatar body includes deforming a portion of the first avatar body that is less than an entirety of the first avatar body.
According to another aspect, a system is disclosed that includes a memory having instructions stored thereon, and a processing device coupled to the memory for accessing the memory, wherein the instructions, when executed by the processing device, cause the processing device to perform operations including identifying a first avatar body having a first body cage, identifying a target avatar body having a target body cage, and performing interpolation between the first body cage and the target body cage to obtain a second body cage corresponding to the second avatar body, thereby providing a transformation of the first avatar body to the second avatar body.
Various embodiments of systems are described herein.
In some embodiments, performing interpolation includes performing interpolation to generate a second body cage that exactly matches the target body cage, thereby providing a complete transformation.
In some implementations, performing interpolation includes transforming the first avatar body into a second avatar body, the second avatar body being a mixture between the first avatar body and the target avatar body, thereby providing a partial transformation.
In some implementations, performing interpolation includes deforming a portion of the first avatar body that is less than an entirety of the first avatar body.
According to yet another aspect, portions, features, and implementation details of the systems, methods, and non-transitory computer-readable media may be combined to form other aspects including, but not limited to, omitting and/or modifying some or portions of individual components or features, including additional components or features, and/or other modified aspects, all of which are within the scope of the present disclosure.
Drawings
FIG. 1 is a diagram of an example system architecture including a 3D environment platform that may support a 3D avatar with apparel adapted thereon, according to some embodiments.
FIG. 2 illustrates an example body cage according to some embodiments.
FIG. 3 illustrates another example body cage according to some embodiments.
Fig. 4 illustrates examples of portions of a body cage grouped into respective body parts according to some embodiments.
Fig. 5 illustrates an example of a clothing layer deformed over a body cage according to some embodiments.
FIG. 6 illustrates an example of an outer cage formed based on portions of the clothing layer and body cage of FIG. 5, according to some embodiments.
FIG. 7 illustrates an example of interpolation between two body cages to obtain a new body cage, according to some embodiments.
FIG. 8 illustrates an example of generating a new body cage according to some embodiments.
FIG. 9 illustrates an example of a transformation of an avatar body during a virtual experience in accordance with some embodiments.
FIG. 10 illustrates another example of a transformation of an avatar body during a virtual experience in accordance with some embodiments.
FIG. 11 illustrates another example of a transformation of an avatar body during a virtual experience in accordance with some embodiments.
FIG. 12 illustrates an example of a transformation of an avatar in a configuration environment, according to some embodiments.
FIG. 13 illustrates an example of layered apparel for an avatar body in a virtual experience according to some embodiments.
Fig. 14-17 illustrate examples of transformations and animations of avatars according to some embodiments.
FIG. 18 is a flowchart illustrating a computer-implemented method for changing an avatar body, according to some embodiments.
FIG. 19 is a flowchart illustrating another computer-implemented method for changing a three-dimensional (3D) avatar body, according to some embodiments.
FIG. 20 is a flowchart illustrating a computer-implemented method for performing skin deformation, according to some embodiments.
Fig. 21 is a flowchart illustrating a computer-implemented method for performing facial action coding system (FACS) pose deformation, according to some embodiments.
FIG. 22 is a block diagram illustrating an example computing device according to some implementations.
Detailed Description
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, like numerals generally identify like components unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein. The aspects of the present disclosure as generally described herein and illustrated in the figures may be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are contemplated herein.
References in the specification to "one embodiment," "an embodiment," "one example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure, or characteristic is described in connection with an embodiment, such feature, structure, or characteristic may be implemented in connection with other embodiments whether or not explicitly described.
The present disclosure describes techniques for dynamically changing a visual aspect of a user avatar (e.g., a visual appearance of an avatar associated with a user while the user is engaged in a virtual experience). For example, while participating in a virtual experience, a user may change an avatar from a humanoid avatar body to an animal (non-humanoid) avatar body or other different avatar body. The entire (original) avatar body may be changed or otherwise integrally transformed to a new (different) avatar body, or only regions/portions of the original avatar body (e.g., only the head or other body parts) may be selectively changed, while other regions/portions of the original avatar body remain unchanged.
The transformation from the original avatar body to the target avatar body may be a complete transformation, wherein the original (first) avatar body is completely transformed into the target avatar body, which becomes the new (second) avatar body. The transformation may also be a partial transformation, wherein the new (second) avatar body is a combination (blend) or other type of mixture between the original (first) avatar body and the target avatar body.
According to various embodiments, a new avatar body may be obtained by performing interpolation between the body cage of the original (first) avatar body and the body cage of the target avatar body such that the obtained new avatar body has its own body cage, which is interpolated/generated from the body cages of the original avatar body and the target avatar body.
Dynamic editing/changing of the avatar body may be performed when there is no clothing on the avatar and/or may be performed when there is one or more layers of clothing on the avatar body. When a garment is present on the avatar's body, dynamically changing the avatar's body (e.g., changing the shape of the body) may also cause a corresponding change in one or more layers of the garment worn by the avatar.
For example, the original avatar may be a humanoid avatar wearing a baseball cap, giving the baseball cap a rounded appearance. If the user changes the head of the humanoid avatar to an alien head (e.g., an alien avatar with a tapered head), the baseball cap may dynamically deform accordingly, changing from its original rounded appearance to a more pointed appearance that matches the tapered head of the alien (target) avatar. The dynamically deformed clothing (including accessories) of the avatar may also be cage-based, as explained below.
According to various embodiments, during runtime that participates in a virtual experience, a user may select a target avatar body and its clothing. For example, the user may select a target avatar body by selecting (e.g., clicking on) another avatar in the virtual experience, by selecting a target avatar body from a library, by directly manipulating the original (current) avatar body (e.g., by changing the body cage) without selecting a target avatar body in the virtual experience or from a library, and so forth. An adjustment tool, such as a slider bar, may be provided on the user interface to enable the user to control the amount of transformation between the two avatar bodies.
In some implementations, the adjustment tool and/or some other transformation tool may be used to dynamically change/transform the current avatar body directly, without involving interpolation between two avatar bodies. That is, by way of example, a user may change the shape of the avatar's head from a human head to a geometric head (e.g., a block) using a transformation tool. In some embodiments, such a transformation may be performed by directly changing (e.g., moving or otherwise manipulating) the line segments and vertices of the cage of the human head using the transformation tool. This technique may neither require the presence or use of a body cage of a geometric head as a reference (target), nor require interpolation between such a reference body cage and the body cage of the humanoid head. Thus, the technique may be viewed as a "free-form" method of independently changing the appearance of an avatar, wherein the change in appearance is independent of any other avatar.
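As a rough sketch of this "free-form" approach, under the assumption that cage vertices are grouped into named regions, a tool could reposition the cage vertices of one region directly, without any target cage or interpolation; the data layout and names below are illustrative only.

```python
import numpy as np

def reshape_cage_region(cage: np.ndarray,
                        region_indices: np.ndarray,
                        affine_3x4: np.ndarray) -> np.ndarray:
    """Directly manipulate part of a body cage (no target cage involved).

    cage: (N, 3) cage vertex positions.
    region_indices: indices of cage vertices in the edited region
    (e.g., the head); this grouping is an assumption of the sketch.
    affine_3x4: a 3x4 matrix [R | t] applied only to that region.
    """
    edited = cage.copy()
    pts = edited[region_indices]
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    edited[region_indices] = pts_h @ affine_3x4.T
    return edited
```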
Various techniques described herein for dynamically changing an avatar body (with or without a clothing layer thereon) may be applied to avatars used in virtual experiences. Such virtual experiences are sometimes described herein in the context of electronic games. It is noted that these embodiments are described in the context of an electronic game for convenience purposes only to provide examples and illustrations.
The techniques described herein may be used for other types of virtual experiences in a three-dimensional (3D) environment that do not necessarily involve an electronic game having one or more players represented by avatars. Examples of virtual experiences may include Virtual Reality (VR) conferences, 3D sessions (e.g., online lectures or other types of presentations involving 3D avatars), augmented reality (augmented reality, AR) sessions, or in other types of 3D environments where one or more users are represented by one or more 3D avatars in the 3D environment.
For layered garments, an automatic cage-to-cage fitting technique may be used for 3D avatars. The technique allows any body geometry to be fitted with any garment geometry, including fitting each layer of clothing onto an underlying garment, thereby providing customization that is not limited by predefined geometries and does not require complex calculations to make a garment compatible with any avatar body shape or with other clothing items.
The cage-to-cage fitting is also performed using various techniques employed by the gaming platform or gaming software (or other virtual experience platform/software for providing a 3D environment), without requiring the avatar creator (also referred to as the avatar body creator or body creator) or the clothing item creator to perform complex calculations. The terms "clothing," "piece of clothing," or other similar terms as used herein should be understood to include graphical representations of garments and accessories, as well as any other items that may be placed on an avatar in relation to a particular portion of an avatar cage.
During the runtime of a game or other virtual experience session, a player/user accesses a body library to select a particular avatar body and accesses a clothing library to select clothing to be placed on the selected body. The 3D virtual environment platform rendering the avatar adjusts a piece of clothing using a cage-to-cage adaptation technique (by automatically determining the appropriate deformation) to adapt the shape of the body, thereby automatically adapting the piece of clothing to the body (and any intermediate layers that the avatar may wear).
When a piece of apparel is fitted to the body and/or underlying clothing of an avatar, the techniques described herein may be performed to more accurately fit the piece of apparel to the avatar, such as in terms of size (e.g., coordination), shape, etc., by deforming or fitting. The user may further select additional garments to fit onto the underlying garment, wherein the additional garments are deformed to match the geometry of the underlying garment.
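One simplified way to picture cage-to-cage fitting (not necessarily the platform's actual algorithm) is to transfer, for each garment cage point, the displacement of the nearest point on the inner cage between its authored shape and the shape it actually has on the current body or underlying layer; the nearest-point correspondence here is an assumption made for brevity.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_garment_cage(garment_cage: np.ndarray,
                     authored_inner_cage: np.ndarray,
                     actual_inner_cage: np.ndarray) -> np.ndarray:
    """Move each garment cage point by the displacement of the nearest
    inner-cage point between its authored and actual positions.

    All arrays are (N, 3) vertex positions; authored_inner_cage and
    actual_inner_cage must share the same vertex order.
    """
    tree = cKDTree(authored_inner_cage)
    _, nearest = tree.query(garment_cage)
    offset = actual_inner_cage[nearest] - authored_inner_cage[nearest]
    return garment_cage + offset
```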
The embodiments described herein are based on the concept of "cage" and "mesh". The body mesh (or rendering mesh) is the actual visible geometry of the avatar. The body mesh includes a graphical representation of body parts such as arms, legs, torso, head, etc., and may have any shape, size, and geometric topology. Similarly, a garment grid (or rendering grid) may be any grid that graphically represents a piece of garment (e.g., shirt, pants, hat, shoe, etc.) or portion thereof.
In contrast, a cage represents an envelope of feature points around the avatar body that is simpler than the body mesh and has a looser correspondence with corresponding vertices of the body mesh. As will be explained in further detail below, a cage may be used to represent not only a set of feature points on an avatar's body, but also a set of feature points on a piece of clothing.
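To make the cage/mesh distinction concrete, here is a minimal illustrative data layout (field names are assumptions, not the platform's schema): the render mesh is the dense visible geometry, while the cage is a much sparser envelope of feature points with a loose correspondence to regions of the mesh.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class RenderMesh:
    """Visible geometry of a body part or garment (any shape/topology)."""
    vertices: np.ndarray  # (V, 3) positions, typically dense
    faces: np.ndarray     # (F, 3) triangle indices

@dataclass
class Cage:
    """Sparser envelope of feature points around the body or garment."""
    vertices: np.ndarray  # (C, 3), with C much smaller than V
    faces: np.ndarray     # (Fc, 3)
    part_ids: np.ndarray  # (C,) body-part label per cage vertex (assumed)
```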
In some embodiments, a dynamic body part modification mechanism is implemented by expanding the layered clothing framework of the layered clothing system, allowing avatar body parts to be deformed using a user-specified cage. This core functionality can be applied in at least two ways. First, there may be a plugin for studio applications that enables a user to change the overall body shape of one avatar based on the overall body shape of another avatar, for example by adjusting a slider that interpolates between the two body shapes. Second, there may be a virtual experience in which the player can gradually update the avatar body by clicking on the body parts of other avatars in the virtual experience.
Some prior techniques for dynamically changing avatars include linear blend skinning (LBS), the facial action coding system (FACS), and affine skinning. These techniques underlie the present methods, including skin deformation and FACS pose deformation.
In linear blend skinning (LBS), the i-th deformed position p'_i is calculated in the vertex shader as p'_i = Σ_j w_{i,j} · M_j · p_i. In this equation, the skin weights w_{i,j} and the vertex bind positions p_i are constants cached on the GPU, and the sum runs over up to 4 bones; if fewer than 4 bones are used, the unused w_{i,j} may be 0. The 3x4 bone transformation M_j is computed on the CPU as M_j = P_j · B_j^{-1} and copied to the GPU frame by frame.
The inverse bind transformation B_j^{-1} is a cached constant, and the pose transformation P_j is updated every frame. The global transforms B_j and P_j are computed from a hierarchy (called the skeleton) of local transforms LB_j and LP_j: B_j = B_{j'} · LB_j and P_j = P_{j'} · LB_j · LP_j, where j' is the index of the parent (or root) of the j-th node. The skeletal hierarchy and transforms are provided by other engine systems (e.g., a physics engine for the body, or FACS for a dynamic head). These engine systems are the various animation sources that pose an avatar.
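The LBS equations above can be written as a short CPU-side reference (in practice the per-vertex sum runs in a vertex shader with w_{i,j}, p_i, and B_j^{-1} cached); the helper below builds global transforms from the local hierarchy and then applies p'_i = Σ_j w_{i,j} · M_j · p_i. It is a sketch, not the platform's shader code.

```python
import numpy as np

def accumulate_globals(locals_4x4, parent):
    """G_j = G_{j'} @ L_j down the skeleton hierarchy.

    parent[j] is the parent bone index (-1 for the root); assumes parents
    precede children in index order."""
    out = [None] * len(locals_4x4)
    for j, L in enumerate(locals_4x4):
        out[j] = L if parent[j] < 0 else out[parent[j]] @ L
    return out

def linear_blend_skinning(positions, weights, bone_ids, bind_globals, pose_globals):
    """p'_i = sum_j w_{i,j} * M_j * p_i  with  M_j = P_j @ B_j^{-1}.

    positions: (V, 3); weights, bone_ids: (V, 4) with unused slots weighted 0;
    bind_globals (B_j) and pose_globals (P_j) are lists of 4x4 matrices."""
    M = np.stack([P @ np.linalg.inv(B) for P, B in zip(pose_globals, bind_globals)])
    p_h = np.hstack([positions, np.ones((len(positions), 1))])  # (V, 4)
    skinned = np.einsum('vk,vkab,vb->va', weights, M[bone_ids], p_h)
    return skinned[:, :3]
```

Per the description above, bind_globals would be accumulated from the local bind transforms LB_j and pose_globals from LB_j · LP_j.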
The FACS system provides LBS data for a dynamic head mesh. The skin weights w, skeleton hierarchy j', and local bind transformations LB_j come from a content delivery network (CDN) mesh data structure that contains the mesh and ControlToJointDriver data, which is the mapping, for a given mesh, from FACS control values to joint positions and rotations. The local pose transform LP_j consists of a 3x3 rotation matrix R_{j,k} and a translation vector T_{j,k} per frame, LP_j = [R_{j,k} | T_{j,k}], where the rotation R_{j,k} is built from Euler angles r_{j,k} and the translation from a translation vector t_{j,k}. The Euler angles r_{j,k} and translation vectors t_{j,k} also come from the ControlToJointDriver structure of the mesh data structure, which defines, for the mesh, a mapping that converts FACS control values into the joint positions and rotations of the skin. The data structure may be an MxN matrix, where M is the number of FACS data channels and N is the number of joint transform values. The matrix is used to transform FACS controls into the transform values that drive the facial skeletal joints.
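The ControlToJointDriver mapping described above can be pictured as a single matrix multiply from FACS control activations to joint transform values (Euler angles and translations); the shapes and names below are assumptions for illustration.

```python
import numpy as np

def facs_to_joint_values(facs_controls: np.ndarray,
                         control_to_joint: np.ndarray) -> np.ndarray:
    """Map FACS control values to the values that pose the facial joints.

    facs_controls: (M,) activations for M FACS data channels.
    control_to_joint: (M, N) driver matrix; N is the number of joint
    transform values (Euler angles r_{j,k} and translations t_{j,k}).
    """
    return facs_controls @ control_to_joint  # (N,) joint transform values
```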
Rotations are interpolated in Euler coordinates, so R_{j,k} is rigid. Between 17 and 50 (17 <= n <= 50) shape weights s' are calculated as s' = applyCorrectives(s, C), where the applyCorrectives function expands the 17 original FACS pose weights s into up to 50 shape weights s' based on the CustomCorrections parameter C, which also comes from the ControlToJointDriver structure. For a selected combination of 2 or 3 original FACS weights, the default linear average shape may be replaced with a custom shape constructed for that combination. This serves to enhance artistic control and deformation quality.
Heretofore, both bind transformations and pose transformations were of the rigid CoordinateFrame type. A rigid transformation (also called a Euclidean transformation) is a geometric transformation of Euclidean space that preserves the Euclidean distance between every pair of points. This approach was applicable because each Data Model (DM) and FACS transformation is rigid. With this knowledge, a rotation can be inverted simply by transposing it. This can be a significant optimization, for example in a physics update where transform inversion is a critical cost. For LBS, however, the only inversion B_j^{-1} is cached as a constant, so this optimization has no significant gain for the technique.
Approximating elastic skin is a common use case for LBS, but skin does not deform rigidly. Elastic deformation is typically approximated by rigid bone transformations, which is natural when the deformation is driven by rigid bones. However, facial muscles are not rigid. Facial muscles can shear and scale non-uniformly, so there is no physical reason to limit facial "bones" to rigid transformations. Because affine transformations are a superset of rigid transformations, affine transformations can be used without changing existing bindings while adding flexibility that can be exploited in future bindings.
The weight map of the dynamic head divides the face into multiple regions, providing the degrees of freedom necessary to achieve the 17 to 50 poses. The pose transformations of the dynamic head move these regions in response to the 17 FACS controls. If the shape of the dynamic head changes and a good visual result is to be maintained, the bone transformations may need to change. Skin weights can typically be reused if the shape change does not change the semantics of the vertices, i.e., cheek vertices do not become nose vertices, and so on.
If skin weights are reused, the surface area influenced by each bone can be determined. Note also that a pose transformation animates a direction and amount that are related to the shape of the surface of the area affected by the pose. This influence includes not only a bone's own weights, but also the weights of any descendant bones in the bone hierarchy. For example, the influence of the head joint includes the influence of the lip joint, and so on, because the lip joint rotates with the neck when the neck rotates. In addition, because vertices affect the shape of the connected surface, these influence sets are expanded by one edge. Unlike in LBS, these sets of influence weights are not normalized.
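A rough sketch of collecting these influence sets is below: each bone gathers its own weighted vertices plus those of all descendant bones, and the set is then grown by one edge (any vertex sharing a face with the set); weights are summed without normalization, as noted above. The adjacency and hierarchy inputs are illustrative assumptions.

```python
import numpy as np

def bone_influence_sets(weights, bone_ids, children, faces, num_bones):
    """Per-bone influenced vertices with summed (unnormalized) weights."""
    # Direct influence: vertices this bone skins, with their weights.
    direct = [dict() for _ in range(num_bones)]
    for v in range(len(weights)):
        for w, b in zip(weights[v], bone_ids[v]):
            if w > 0:
                direct[b][v] = direct[b].get(v, 0.0) + float(w)

    # Include descendants' influence (children[j] lists child bone indices).
    def gather(b):
        acc = dict(direct[b])
        for c in children[b]:
            for v, w in gather(c).items():
                acc[v] = acc.get(v, 0.0) + w
        return acc
    influence = [gather(b) for b in range(num_bones)]

    # Expand each set by one edge: add vertices sharing a face with the set.
    for b in range(num_bones):
        members = set(influence[b])
        for f in faces:
            if any(v in members for v in f):
                for v in f:
                    influence[b].setdefault(int(v), 0.0)
    return influence
```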
For each bone, the skin deformation may compute a 3x4 affine transformation D_j that best fits the deformation of the point cloud it influences, using a linear-algebra calculation. This correction is applied to the bone transformation in the skin calculation, M_j = D_j · P_j · B_j^{-1}, where the corrected bone transformation is the matrix M_j. This approach can be effective, but these techniques are not intended to alter the skinning flow; rather, they are intended to update the binding. Thus, D_j is bubbled through the skeleton and incorporated into a new local bind transform LB'_j that generates the same result, LB'_j = B_j^{-1} · D_j · B_{j'} · LB_j. This is a simple algebraic operation that solves for a new local bind transformation LB'_j producing the same result as the previous formula while removing the separate correction term from the runtime calculation.
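For the best-fit 3x4 affine correction mentioned above, a standard weighted least-squares fit can serve as a sketch; the weighting by influence weights and the exact solver are assumptions made here.

```python
import numpy as np

def best_fit_affine_3x4(src_pts, dst_pts, point_weights):
    """Weighted least-squares 3x4 affine D with D @ [p; 1] ~= q for each
    influenced point p (original deformation) and q (new deformation)."""
    src_h = np.hstack([src_pts, np.ones((len(src_pts), 1))])  # (k, 4)
    w = np.sqrt(np.asarray(point_weights))[:, None]
    # Solve (w * src_h) X = (w * dst_pts) for X (4x3); D is its transpose (3x4).
    X, *_ = np.linalg.lstsq(w * src_h, w * dst_pts, rcond=None)
    return X.T
```

The resulting correction could then be folded into a new local bind transform as described above, so that no extra per-frame term is needed at runtime.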
Note that after this, LB'_j is no longer a rigid transformation. The result can be orthonormalized for any Data Model skeleton, but for dynamic heads all affine components are preserved, and the results are significantly improved. This relies on the affine skinning change discussed above.
With respect to FACS pose deformation, the present FACS binding contains many poses, and each pose includes a set of local transforms P j for each joint in the head binding. Skin deformation techniques update their binding transformations LB' j, changing the parent space of these local pose transformations.
The affine correction transformation D_j changes the translation of the resulting transformation M_j, including its direction and magnitude, but D_j does not change the rotation of M_j. To further improve the results, the rotation r_{j,k} of each joint in each pose is also updated. In this process, the translation t_{j,k} of each joint in each pose may also be further fine-tuned.
In some embodiments, the fine-tuning is performed one pose at a time. First, the original LBS-deformed mesh pose is calculated from the original head shape. For each joint in the pose, its skin-weighted points are projected onto the nearest points on the posed mesh, providing a new set of points. These points represent the locations on the original mesh closest to the "destination" of the joint in that pose. The same 3x4 affine fitting function can then be reused to calculate how these points are transformed between the original shape and the modified shape.
In some embodiments, the rigid component of this transformation is extracted and the final translation and rotation of the posed joint are fine-tuned. To minimize the change in Euler interpolation, appropriate changes are made to the Euler angles that match the rotation. For example, a Euler rotation can express the same rotation matrix in infinitely many ways, so an appropriate technique is used to alter the original angles as little as possible. These angles are calculated by decomposing the rotation matrices and inserting the corrections step by step: the original rotation is decomposed into Euler angles (x1, y1, z1), the correction rotation is decomposed into (x2, y2, z2), and then x = x1 + x2, y = y1 + y2, and z = z1 + z2.
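A sketch of the Euler-angle bookkeeping described above, using SciPy for the matrix-to-Euler decomposition, is shown below; the rotation order is an assumption, and selecting the equivalent Euler branch closest to the original angles (to change them as little as possible) is not shown.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def corrected_euler_angles(original_rot, correction_rot, order="xyz"):
    """Decompose the original pose rotation and the rigid correction into
    Euler angles and add them componentwise:
    x = x1 + x2, y = y1 + y2, z = z1 + z2 (as in the description above)."""
    x1, y1, z1 = R.from_matrix(original_rot).as_euler(order)
    x2, y2, z2 = R.from_matrix(correction_rot).as_euler(order)
    return np.array([x1 + x2, y1 + y2, z1 + z2])
```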
FIG. 1-System architecture
FIG. 1 is a diagram of an example system architecture including a 3D environment platform that may support a 3D avatar with apparel fitted thereon, according to some embodiments. FIG. 1 and the other figures use like reference numerals to identify like elements. A reference numeral followed by a letter, such as "110a", indicates that the text refers specifically to the element having that particular reference numeral. A reference numeral in the text without a following letter, such as "110", refers to any or all of the elements in the figures bearing that reference numeral (e.g., "110" in the text refers to reference numerals "110a", "110b", and/or "110n" in the figures).
The system architecture 100 (also referred to herein as a "system") includes an online virtual experience server 102, a data store 120, client devices 110a, 110b, and 110n (generally referred to herein as "client devices 110"), and developer devices 130a and 130n (generally referred to herein as "developer devices 130"). The virtual experience server 102, data store 120, client devices 110, and developer devices 130 are coupled via a network 122. In some implementations, the client devices 110 and the developer devices 130 may refer to the same or the same type of device.
Further, online virtual experience server 102 may include a virtual experience engine 104, one or more virtual experiences 106, and a graphics engine 108, among others. In some implementations, the graphics engine 108 may be a system, application, or module that allows the online virtual experience server 102 to provide graphics and animation functions. In some implementations, graphics engine 108 and/or virtual experience engine 104 may perform one or more operations described below in connection with the flowcharts shown in fig. 18-21. Client device 110 may include a virtual experience application 112 and an input/output (I/O) interface 114 (e.g., an input/output device). The input/output devices may include one or more of a microphone, speaker, headphones, display device, mouse, keyboard, game controller, touch screen, virtual reality console, and the like.
Developer device 130 may include a virtual experience application 132 and an input/output (I/O) interface 134 (e.g., an input/output device). The input/output devices may include one or more of a microphone, speaker, headphones, display device, mouse, keyboard, game controller, touch screen, virtual reality console, and the like.
The system architecture 100 is provided for illustration. In different embodiments, the system architecture 100 may include the same, fewer, more, or different elements configured in the same or different manner as shown in FIG. 1.
In some implementations, the network 122 may include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), a wired network (e.g., Ethernet), a wireless network (e.g., an 802.11 network or a wireless LAN (WLAN)), a cellular network (e.g., a 5G network, a long term evolution (LTE) network, etc.), a router, a hub, a switch, a server computer, or a combination thereof.
In some implementations, the data store 120 can be a non-transitory computer readable memory (e.g., random access memory), a cache, a drive (e.g., hard drive), a flash drive, a database system, or another type of component or device capable of storing data. The data store 120 can also include storage components (e.g., drives or databases) that can also span multiple computing devices (e.g., server computers). In some implementations, the data store 120 can include cloud-based storage.
In some implementations, the online virtual experience server 102 can include a server (e.g., cloud computing system, rack-mounted server, server computer, physical server cluster, etc.) having one or more computing devices. In some implementations, the online virtual experience server 102 may be a stand-alone system, may include multiple servers, or be part of another system or server.
In some implementations, online virtual experience server 102 may include one or more computing devices (e.g., rack-mounted servers, router computers, server computers, personal computers, mainframe computers, laptop computers, tablet computers, desktop computers, etc.), data storage areas (e.g., hard disks, memory, databases), networks, software components, and/or hardware components that may be used to perform operations on online virtual experience server 102 and provide users with access to online virtual experience server 102. The online virtual experience server 102 may also include a website (e.g., web page) or application back-end software that may be used to provide users with access to content provided by the online virtual experience server 102. For example, a user may access the online virtual experience server 102 using the virtual experience application 112 on the client device 110.
In some implementations, virtual experience session data is generated by online virtual experience server 102, virtual experience application 112, and/or virtual experience application 132 and stored in data store 120. The virtual experience session data may include related metadata, such as a virtual experience identifier, device data associated with the participant, crowd information for the participant, virtual experience session identifiers, chat records, session start time, session end time, and session duration for each participant, relative positions of participant avatars within the virtual experience environment, purchases of one or more participants in the virtual experience, accessories used by the participant, and the like, subject to permissions of the virtual experience participants.
In some implementations, the online virtual experience server 102 may be a social network that provides an inter-user connection, or a user-generated content system that allows users (e.g., end users or consumers) to communicate with other users on the online virtual experience server 102, where the communication may include voice chat (e.g., synchronous and/or asynchronous voice communications), video chat (e.g., synchronous and/or asynchronous video communications), or text chat (e.g., 1:1 and/or N: N synchronous and/or asynchronous text-based communications). A record of some or all of the user communications may be stored in the data store 120 or the virtual experience 106. The data store 120 may be used to store chat records (text, audio, images, etc.) exchanged between participants, requiring the appropriate permissions of the player and conforming to applicable regulations.
In some implementations, chat records are generated by virtual experience application 112 and/or virtual experience application 132 and stored in data store 120. The chat record may include chat content and related metadata, such as chat text content for each message with the respective sender and receiver, message formats (e.g., bold, italic, loud, etc.), message timestamps, relative positions of participant avatars within the virtual experience environment, accessories used by the virtual experience participants, and so forth. In some implementations, the chat record can include content in multiple languages, and messages in different languages for different sessions of the virtual experience can be stored in the data store 120.
In some implementations, chat records may be stored in the form of conversations between participants based on time stamps. In some implementations, chat records can be stored based on the originator of the message.
In some embodiments of the present disclosure, a "user" may be represented as a single individual. Other embodiments of the present disclosure contemplate that a "user" (e.g., a creative user) is an entity controlled by a set of users or an automated source. For example, a group of individual users joined together as a community or group in a user-generated content system may be considered a "user."
In some implementations, the online virtual experience server 102 may be a virtual game server. For example, the game server may provide single-player or multiplayer games to a community of users that may access or interact with a virtual experience using client devices 110 over the network 122 (the online virtual experience server 102, the data store 120, and the client devices 110 together being referred to herein as a "system"). In some implementations, the virtual experience (including virtual domains or worlds, virtual games, and other computer-simulated environments) may be, for example, a two-dimensional (2D) virtual experience, a three-dimensional (3D) virtual experience (e.g., a 3D user-generated virtual experience), a Virtual Reality (VR) experience, or an Augmented Reality (AR) experience. In some implementations, a user may participate in interactions (e.g., games) with other users. In some implementations, the virtual experience may be experienced in real time with other users of the virtual experience.
In some implementations, virtual experience participation (engagement) can refer to one or more participants interacting in a virtual experience (e.g., 106) using a client device (e.g., 110) or presenting interactions on a display or other output device (e.g., 114) of the client device 110. For example, virtual experience participation may include interacting with one or more participants in a virtual experience, or presenting interactions on a display of a client device.
In some implementations, the virtual experience 106 can include electronic files that can be executed or loaded using software, firmware, or hardware for presenting virtual experience content (e.g., digital media items) to an entity. In some implementations, virtual experience application 112 may be executed and virtual experience 106 rendered in conjunction with virtual experience engine 104. In some implementations, virtual experience 106 may have a common set of rules or common targets, and the environments of virtual experience 106 share the common set of rules or common targets. In some implementations, different virtual experiences may have different rules or goals than one another.
In some implementations, a virtual experience may have one or more environments (also referred to herein as "virtual experience environments" or "virtual environments"), where multiple environments may be connected. An example of an environment may be a three-dimensional (3D) environment. One or more environments of virtual experience 106 may be collectively referred to herein as a "world," virtual experience world, "" game world, "" virtual world, "or" universe. An example of a world may be a 3D world of virtual experience 106. For example, a user may build a virtual environment that may be connected to another virtual environment created by another user. Roles of virtual experiences may enter adjacent virtual environments across virtual boundaries.
It may be noted that graphics used by a 3D environment or 3D world use a three-dimensional representation of geometric data representing virtual experience content (or at least virtual experience content is displayed as 3D content whether or not a 3D representation of geometric data is used). Graphics used by a 2D environment or 2D world use a two-dimensional representation of geometric data representing virtual experience content.
In some implementations, online virtual experience server 102 may host one or more virtual experiences 106 and may allow users to interact with virtual experiences 106 using virtual experience application 112 of client device 110. A user of online virtual experience server 102 may play virtual experience 106, create virtual experience 106, interact with virtual experience 106, or build virtual experience 106, communicate with other users, and/or create and build objects (e.g., also referred to herein as "items," "virtual experience objects," or "virtual experience items") of virtual experience 106.
For example, in generating a user-generated virtual item, a user may create a character, decoration of a character, one or more virtual environments of an interactive virtual experience, or build a structure used in virtual experience 106, etc. In some implementations, the user may purchase, sell, or transact virtual experience objects, such as in-platform currency (e.g., virtual currency), with other users of the online virtual experience server 102. In some implementations, the online virtual experience server 102 can send the virtual experience content to the virtual experience application (e.g., 112). In some implementations, virtual experience content (also referred to herein as "content") may refer to any data or software instructions (e.g., virtual experience objects, virtual experiences, user information, videos, images, commands, media items, etc.) associated with the online virtual experience server 102 or virtual experience application. In some implementations, virtual experience objects (e.g., also referred to herein as "items," "objects," "virtual objects," or "virtual experience items") may refer to objects that are used, created, shared, or otherwise depicted in the virtual experience 106 of the online virtual experience server 102 or the virtual experience application 112 of the client device 110. For example, virtual experience objects may include parts, models, figures, accessories, tools, weapons, clothing, buildings, vehicles, currency, flora, fauna, components of the above objects (e.g., windows of a building), and the like.
It may be noted that an online virtual experience server 102 hosting a virtual experience 106 is provided for illustration. In some implementations, the online virtual experience server 102 can host one or more media items, which can include communication messages from one user to one or more other users. With user permissions and explicit user consent, the online virtual experience server 102 can analyze chat log data to improve the virtual experience platform. The media items may include, but are not limited to, digital video, digital movies, digital photos, digital music, audio content, melodies, website content, social media updates, electronic books, electronic magazines, digital newspapers, digital audio books, electronic journals, web blogs, really simple syndication (RSS) feeds, electronic comic books, software applications, and the like. In some implementations, the media items may be electronic files that may be executed or loaded using software, firmware, or hardware for presenting the digital media items to an entity.
In some implementations, the virtual experience 106 may be associated with a particular user or group of users (e.g., a private virtual experience), or be widely available to users (e.g., public virtual experiences) that may access the online virtual experience server 102. In some implementations, when online virtual experience server 102 associates one or more virtual experiences 106 with a particular user or group of users, online virtual experience server 102 can associate the particular user with virtual experience 106 using user account information (e.g., user account identifiers, such as a user name and password).
In some implementations, the online virtual experience server 102 or client device 110 can include a virtual experience engine 104 or a virtual experience application 112. In some implementations, virtual experience engine 104 may be used for development or execution of virtual experience 106. For example, virtual experience engine 104 may include a rendering engine ("renderer"), a physics engine, a collision detection engine (and collision response), a sound engine, a script function, an animation engine, an artificial intelligence engine, a network function, a stream function, a storage management function, a thread function, a scene graph function, or an animated video support, among other functions for 2D, 3D, VR, or AR graphics. Components of virtual experience engine 104 may generate commands (e.g., rendering commands, collision commands, physical commands, etc.) that help calculate and render the virtual experience. In some implementations, the virtual experience applications 112 of the client devices 110 may each work independently, in cooperation with the virtual experience engine 104 of the online virtual experience server 102, or in combination of both.
In some implementations, both the online virtual experience server 102 and the client device 110 can execute the virtual experience engine 104/virtual experience application 112. An online virtual experience server 102 using virtual experience engine 104 may perform some or all of the virtual experience engine functions (e.g., generate physical commands, render commands, etc.), or offload some or all of the virtual experience engine functions to virtual experience engine 104 of client device 110. In some implementations, the ratio between the virtual experience engine functions performed by each virtual experience 106 on the online virtual experience server 102 and the virtual experience engine functions performed on the client device 110 may be different. For example, virtual experience engine 104 of online virtual experience server 102 may be used to generate physical commands in the event of a collision between at least two virtual experience objects, while additional virtual experience engine functionality (e.g., generating rendering commands) may be offloaded to client device 110. In some implementations, the proportion of virtual experience engine functions executing on the online virtual experience server 102 and the client device 110 may change based on virtual experience participation conditions (e.g., dynamically). For example, if the number of users participating in a particular virtual experience 106 exceeds a threshold number, online virtual experience server 102 may perform one or more virtual experience engine functions previously performed by client device 110.
For example, a user may play virtual experience 106 on client device 110 and may send control instructions (e.g., user input such as right, left, up, down, user selection, or character position and speed information, etc.) to online virtual experience server 102. After receiving the control instructions from the client device 110, the online virtual experience server 102 can send experience instructions (e.g., location and speed information of the characters participating in the group experience, or commands, such as rendering commands, collision commands, etc.) to the client device 110 based on the control instructions. For example, online virtual experience server 102 may perform one or more logical operations on control instructions (e.g., using virtual experience engine 104) to generate experience instructions for client device 110. In other examples, online virtual experience server 102 may communicate one or more control instructions from one client device 110 to other client devices participating in virtual experience 106 (e.g., from client device 110a to client device 110 b). The client device 110 may use the experience instructions and render the virtual experience for presentation on a display of the client device 110.
In some implementations, control instructions may refer to instructions that indicate actions of a user character in a virtual experience. For example, the control instructions may include user inputs that control actions in the experience, such as right, left, up, down, user selections, gyroscope position and orientation data, force sensor data, and so forth. The control instructions may include character position and speed information. In some implementations, the control instructions are sent directly to the online virtual experience server 102. In other implementations, control instructions may be sent from client device 110 to another client device (e.g., from client device 110b to client device 110 n), where the other client device generates experience instructions using local virtual experience engine 104. The control instructions may include instructions to play a voice communication message or other sound of another user on an audio device (e.g., speaker, headphones, etc.), such as voice communication or other sound generated using audio spatialization techniques as described herein.
In some implementations, experience instructions may refer to instructions that enable client device 110 to render a virtual experience (such as a multi-participant virtual experience). Experience instructions may include one or more of user input (e.g., control instructions), character position and speed information, or commands (e.g., physical commands, rendering commands, collision commands, etc.).
In some implementations, a character (or, in general, a virtual experience object) is made up of components, where one or more of the components may be selected by a user, which are automatically connected together to assist the user in editing.
In some implementations, the character is implemented as a 3D model and includes a hierarchical set of surface representations (also known as skins or meshes) and interconnecting skeletons (also known as skeletons or bindings) for rendering the character. Binding can be used to animate a character, as well as to simulate the motions and actions of the character. The 3D model may be represented as a data structure and one or more parameters of the data structure may be modified to change various properties of the character, such as size (height, width, circumference, etc.), body shape, movement style, number/type of body parts, scale (e.g., shoulder-to-hip ratio), head size, etc.
One or more characters (also referred to herein as "avatars" or "models") may be associated with a user, wherein the user may control the characters to facilitate user interaction with virtual experience 106.
In some implementations, a character may include components such as body parts (e.g., hair, arms, legs, etc.) and accessories (e.g., t-shirts, eyeglasses, decorative images, tools, etc.). In some embodiments, customizable body parts of the character include head type, body part types (arms, legs, torso, and hands), face type, hair type, skin type, and the like. In some embodiments, customizable accessories include apparel (e.g., shirts, pants, hats, shoes, glasses, etc.), weapons, or other tools.
In some implementations, for some asset types (e.g., shirts, pants, etc.), the online virtual experience platform may provide the user with access to a simplified 3D virtual object model represented by a mesh with a low polygon count (e.g., between about 20 and 30 polygons).
In some implementations, the user can also control the size of the character (e.g., height, width, or depth) or the size of the components of the character. In some implementations, the user can control the proportions of the character (e.g., blocky, anatomical, etc.). It may be noted that in some implementations, a character may not include a character virtual experience object (e.g., body parts, etc.), but the user may still control the character (without the character virtual experience object) to facilitate user interaction with the virtual experience (e.g., an educational game in which there are no rendered character game objects, but the user still controls a character to control in-game actions).
In some embodiments, a component (such as a body part) may be a basic geometric shape such as a block, cylinder, or sphere, or may be some other basic shape such as a wedge, ring, tube, or channel. In some implementations, a creator module may publish a user's characters for viewing or use by other users of the online virtual experience server 102. In some implementations, a user can create, modify, or customize characters, other virtual experience objects, the virtual experience 106, or virtual experience environments using an I/O interface (e.g., a developer interface), with or without scripting (or with or without an application programming interface (API)). It should be noted that, for purposes of illustration, a character is described as having a humanoid form. It may also be noted that a character may have any form, such as a vehicle, animal, inanimate object, or other creative form.
In some implementations, the online virtual experience server 102 can store user-created characters in the data store 120. In some implementations, the online virtual experience server 102 maintains a catalog of characters and a catalog of virtual experiences that can be presented to the user. In some implementations, the virtual experience catalog includes images of virtual experiences stored on the online virtual experience server 102. In addition, a user may select a character (e.g., a character created by the user or by another user) from the character catalog to participate in the selected virtual experience. The character catalog includes images of characters stored on the online virtual experience server 102. In some implementations, one or more characters in the character catalog may have been created or customized by the user. In some implementations, the selected character can have character settings that define one or more components of the character.
In some implementations, a user's character (e.g., avatar) may include a configuration of components, where the configuration and appearance of the components, and more generally the appearance of the character, may be defined by character settings. In some implementations, the character settings of the user's character can be selected at least in part by the user. In other implementations, the user may select a character having default character settings or other character settings selected by the user. For example, the user may select a default character with predefined character settings from the character catalog, and the user may further customize the default character by changing some of the character settings (e.g., adding a shirt with a custom logo). The online virtual experience server 102 can associate character settings with a particular character.
In some implementations, the client device 110 may include a computing device, such as a personal computer (personal computer, PC), a mobile device (e.g., a laptop, mobile phone, smart phone, tablet, or netbook computer), an internet television, a game console, or the like. In some implementations, the client device 110 may also be referred to as a "user device". In some implementations, one or more client devices 110 may connect to the online virtual experience server 102 at any given moment. It should be noted that the number of client devices 110 is provided for illustration. In some implementations, any number of client devices 110 may be used.
In some implementations, each client device 110 can include an instance of the virtual experience application 112. In one implementation, the virtual experience application 112 may allow a user to use and interact with the online virtual experience server 102, e.g., to control a virtual character in a virtual experience hosted by the online virtual experience server 102, or to view or upload content (e.g., virtual experiences 106, images, video items, web pages, documents, etc.). In one example, the virtual experience application may be a web application (e.g., an application operating in conjunction with a web browser) that can access, retrieve, render, or navigate content (e.g., a virtual character in a virtual environment, etc.) served by a web server. In another example, the virtual experience application may be a native application (e.g., a mobile application, app, virtual experience program, or game program) that is installed on the client device 110, executes locally, and allows the user to interact with the online virtual experience server 102. The virtual experience application may render, display, or otherwise present content (e.g., web pages, media viewers) to the user. In an embodiment, the virtual experience application may also include an embedded media player (e.g., an HTML5 player) embedded in a web page.
According to aspects of the disclosure, the virtual experience application may be an online virtual experience server application for users to build, create, edit, upload content to the online virtual experience server 102, and interact with the online virtual experience server 102 (e.g., participate in a virtual experience 106 hosted by the online virtual experience server 102). Thus, online virtual experience server 102 may provide virtual experience applications to client device 110. In another example, the virtual experience application may be an application downloaded from a server.
In some implementations, each developer device 130 can include an instance of the virtual experience application 132. In one implementation, the virtual experience application 132 may allow developer users to use and interact with the online virtual experience server 102, such as controlling a virtual character in a virtual experience hosted by the online virtual experience server 102, or viewing or uploading content (e.g., virtual experiences 106, images, video items, web pages, documents, etc.). In one example, the virtual experience application may be a web application (e.g., an application operating in conjunction with a web browser) that can access, retrieve, render, or navigate content (e.g., a virtual character in a virtual environment, etc.) served by a web server. In another example, the virtual experience application may be a native application (e.g., a mobile application, app, virtual experience program, or game program) that is installed on the developer device 130, executes locally, and allows the user to interact with the online virtual experience server 102. The virtual experience application may render, display, or otherwise present content (e.g., web pages, media viewers) to the user. In an embodiment, the virtual experience application may also include an embedded media player (e.g., an HTML5 player) embedded in a web page.
According to aspects of the present disclosure, virtual experience application 132 may be an online virtual experience server application for users to build, create, edit, upload content to online virtual experience server 102, and interact with online virtual experience server 102 (e.g., provide and/or participate in virtual experience 106 hosted by online virtual experience server 102). Thus, online virtual experience server 102 can provide virtual experience applications to developer device 130. In another example, the virtual experience application 132 may be an application downloaded from a server. Virtual experience application 132 may be used to interact with online virtual experience server 102 and obtain access to user credentials, user currency, etc. of one or more virtual experiences 106 developed, hosted, or provided by a virtual experience developer.
In some implementations, a user may log into the online virtual experience server 102 through the virtual experience application. The user may access a user account by providing user account information (e.g., a username and password), where the user account is associated with one or more characters available to participate in one or more virtual experiences 106 of the online virtual experience server 102. In some implementations, with appropriate credentials, a virtual experience developer may obtain access to virtual experience objects (e.g., in-platform currency (e.g., virtual currency), avatars, special capabilities, accessories) owned by or associated with other users.
In general, the functions described in one embodiment as being performed by the online virtual experience server 102 may be performed by the client device 110 or server in other embodiments, if appropriate. Furthermore, the functions attributed to a particular component may be performed by different or multiple components operating together. The online virtual experience server 102 may also be accessed as a service provided to other systems or devices through a suitable Application Programming Interface (API), and thus is not limited to use in websites.
FIG. 2-example body cage
FIG. 2 illustrates an example body cage 200 according to some embodiments. The body cage 200 in the example of fig. 2 is an outer cage that surrounds or is superimposed over the outer surface/contour of the humanoid body shape that serves as a mannequin. The underlying humanoid body shape (manikin, not shown) enclosed by the body cage 200 may be represented by or consist of a body mesh comprising a plurality of polygons and their vertices. The polygons of the body mesh (and the polygons of the garment mesh) may be triangles, with the surface area of each triangle providing one face or one mesh face.
The body cage 200 includes a plurality of feature points 202, the feature points 202 defining or otherwise identifying or corresponding to the shape of the manikin. In some implementations, the feature points 202 are formed by vertices of line segments/edges 204 of multiple polygons (or other geometric shapes) on the manikin. According to various embodiments (although not so shown in fig. 2), the polygons may be triangles, each triangular surface area providing one face or one cage face. In some implementations, the feature points 202 may be discrete points, not necessarily formed by the vertices of any polygon.
The body cage 200 of fig. 2 is an example of a low resolution body cage having 642 feature points (or some other number of feature points) for a humanoid body geometry lacking fingers. Other examples may use a body cage with 475 feature points (or some other number of feature points). For example, a body cage including a human geometry of a finger may have 1164 feature points (or some other number of feature points). The high resolution body cage may include 2716 feature points (or some other number of feature points). These numbers of feature points (and their ranges) are just a few examples-the number of feature points may vary from one implementation to another, depending on factors such as resolution of preference, processing power of the 3D platform, user preference, size/shape of the manikin, etc.
FIG. 3-example body cage
Fig. 3 illustrates another example body cage 300 according to some embodiments. A cage may be provided for any arbitrary avatar body shape or clothing shape. The body cage 300 in the example of fig. 3 is an outer cage that surrounds or is superimposed over the outer surface/contour of the body mesh of a generic game avatar body shape.
The body cage 300 of fig. 3 may have the same number of feature points as the body cage 200 of fig. 2. In some implementations, the number of feature points of the body cage 300 may be different from the number of feature points of the body cage 200, such as having a fewer or greater number of feature points 302 due to different (simpler or more complex) geometries of the game avatar and/or based on other factors. Thus, the number of feature points for different body cages may be different, and the number of feature points may be selected based on different body shapes or other body attributes.
FIG. 4-body cage portions
Fig. 4 illustrates examples of portions of a body cage 400 grouped into respective body parts according to some embodiments.
In some embodiments, the number of feature points of the cage may be reduced to a lesser number than provided above, such as 475 feature points (or some other number of feature points), for bandwidth and performance/efficiency purposes or other reasons. Further, in some embodiments, the feature points (vertices) in the body cage may be arranged into a plurality of groups (e.g., 15 groups), each group representing a portion of the body shape.
More specifically, the 15 body parts (for a humanoid manikin) shown in FIG. 4 are the head, torso, hips, left foot, right foot, left calf, right calf, left thigh, right thigh, left hand, right hand, left lower arm, right lower arm, left upper arm, and right upper arm. The number of parts in any given body shape may be greater or fewer than the 15 body parts shown. For example, a "single arm" avatar character may have 12 (rather than 15) body parts because one hand, one lower arm, and one upper arm are omitted. In addition, other body shapes may involve a smaller or larger number of body parts depending on factors such as body geometry, preferred resolution, processing power, and type of avatar character (e.g., animal, alien, monster, etc.).
Each of the 15 groups/parts in fig. 4 includes feature points defining that part of the avatar body. Such sets of feature points may in turn be mapped to a corresponding piece of clothing. For example, because the graphical representation of a jacket is composed of garment meshes representing the left/right arms and torso of the jacket, which logically correspond to and fit the left/right arms and torso of the avatar body, the feature points defining the left/right lower arms, left/right upper arms, and torso in the body cage 400 may be used as an outer cage to be mapped to the inner cage of the jacket.
Furthermore, such separation into multiple sets (as shown in fig. 4) enables a garment to be custom fitted to an atypical body shape. For example, a 3D avatar may be in the form of a "single arm" avatar character lacking a left arm. Thus, the 3D avatar's body cage lacks feature point sets corresponding to the left hand, lower left arm, and upper left arm.
When the jacket is then selected for fitting onto the 3D avatar, the right lower arm, right upper arm, and torso of the jacket may deform to fit onto the corresponding right lower arm, right upper arm, and torso of the 3D avatar (body manikin), and because there is no left arm cage in the body manikin to deform, the left lower arm and left upper arm of the jacket do not deform (e.g., retain rigidity from the original form of their parent space).
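A minimal sketch of this per-part behavior follows (the helper names and the grouping of points into named parts are illustrative assumptions, not the platform's actual API): garment cage points grouped by body part are deformed only when the avatar's body cage actually contains that part, and parts with no counterpart stay rigid.

```python
import numpy as np

def fit_garment_to_body(garment_groups, body_groups, deform_part):
    """Deform each garment part toward the matching body-cage part.

    garment_groups / body_groups: dict mapping a body-part name (e.g.,
    "left_lower_arm", "torso") to an (N, 3) array of cage feature points.
    deform_part(points, target_points): deformation routine for one part.
    Parts with no counterpart in the body cage (e.g., a missing left arm)
    are left rigid, i.e., returned unchanged.
    """
    fitted = {}
    for part, garment_points in garment_groups.items():
        if part in body_groups:
            fitted[part] = deform_part(garment_points, body_groups[part])
        else:
            fitted[part] = garment_points.copy()  # keep original rigid form
    return fitted
```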
Figure 5-clothing layer deformed over body cage
Fig. 5 illustrates an example of a clothing layer 500 deformed over a body cage (e.g., the body cage 400 shown in fig. 4) according to some embodiments. Garment layer 500 is a graphical representation of a jacket (shown in gray shading in fig. 5) whose portions may be generated/rendered using a polygonal mesh 502 (e.g., a garment mesh), the polygonal mesh 502 being composed of a collection of vertices, edges, and faces (which may be triangular faces or other polygonal faces).
The clothing layer 500 includes an inner cage (not shown in fig. 5) with feature points corresponding to the feature points of the body cage 400. Specifically, the feature points of the inner cage of the clothing layer 500 are mapped to the feature points of the body cage 400 that make up the left lower arm, right lower arm, left upper arm, right upper arm, and torso.
In some embodiments, the mapping includes directly mapping the feature points of the inner cage of the clothing layer 500 to the coordinate locations of the corresponding feature points of the arms and torso of the body cage 400. When two cages have the same number of feature points, such mapping may involve a 1:1 correspondence. When the numbers of feature points differ, the mapping may be n:1 or 1:n (where n is an integer greater than 1), in which case multiple feature points in one cage may map to the same feature point of the other cage (and some feature points may remain unmapped).
The garment layer 500 also includes an outer cage having feature points spaced apart from, and connected to, corresponding feature points of the inner cage of the garment layer 500. The feature points of the outer cage of garment layer 500 are defined along, or otherwise positioned on, the outer surface contour/geometry of the jacket so as to define the features of the jacket (e.g., hood 504, cuffs 506, straight torso 508, etc.).
According to various embodiments, the spatial distance (e.g., the spatial distance between the feature points of the inner cage of the clothing layer 500 and the corresponding feature points of the outer cage of the clothing layer 500) remains constant during the fitting of the clothing layer 500 onto the outer cage of the existing layer (or avatar body). In this way, the feature points of the inner cage of the clothing layer 500 may be mapped to the feature points of the body cage 400 to "fit" the interior of the jacket to the torso and arms of the avatar.
Then, with the distances between the feature points of the inner cage of the clothing layer 500 and the corresponding feature points of the outer cage of the clothing layer 500 remaining unchanged, the outer contour of the jacket may also be deformed to match the shape of the avatar body, such that the visual appearance (graphical representation) of the hood, cuffs, torso, and other surface features of the jacket are at least partially maintained, while matching the shape of the avatar body as shown in fig. 5. In this way, the garment layer 500 may be deformed in any suitable manner to fit any arbitrary shape/size of the avatar body (body cage), such as tall, short, slim, strong, humanoid, animal, alien, etc.
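A hedged sketch of this constant-offset idea (array names are illustrative, and a 1:1 correspondence between inner and outer cage points is assumed): each outer-cage point is carried along with its inner-cage point so that the vector between them is preserved while the inner cage snaps onto the body cage.

```python
import numpy as np

def deform_garment_layer(inner_cage, outer_cage, body_cage):
    """Fit a garment layer onto a body cage while keeping the distances
    between corresponding inner-cage and outer-cage points constant, so
    surface features such as a hood or cuffs are preserved.

    inner_cage, outer_cage: (N, 3) arrays with 1:1 point correspondence.
    body_cage: (N, 3) array of the body-cage points the inner cage maps to.
    Returns the deformed (inner, outer) cages.
    """
    offsets = outer_cage - inner_cage   # inner-to-outer spacing stays fixed
    new_inner = body_cage.copy()        # inner cage snaps onto the body cage
    new_outer = new_inner + offsets     # outer contour follows the body shape
    return new_inner, new_outer
```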
FIG. 6-example outer cage
Fig. 6 illustrates an example of portions of the garment layer and body cage 400 of fig. 5 used to form an outer cage 600 in accordance with some embodiments. In some embodiments, additional garment layers may be placed on top of other garment layers (e.g., in response to user selection). More specifically, the feature points of the outer cage of the garment layer 500 of fig. 5 are now combined with the feature points of the body cage 400, resulting in a composite outer cage 600 consisting of the feature points of the exposed portion of the body cage 400 and the feature points along the outer surface of the jacket.
For example, the exposed outer surface 602 of the jacket (formed by the body, hood, and sleeves of the jacket) provides one set of feature points, and the exposed legs, hands, head, and portions of the chest of the body not covered by the jacket provide another set of feature points, and the two sets (in combination) of feature points provide the feature points of the outer cage 600.
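A small sketch of this combination step, under the assumption that the points hidden by the garment are known via a boolean mask (the function and parameter names are hypothetical):

```python
import numpy as np

def composite_outer_cage(body_cage_points, garment_outer_points, covered):
    """Combine exposed body-cage points with the garment's outer-cage points
    to form the composite outer cage that the next clothing layer maps onto.

    body_cage_points: (N, 3) array of avatar body-cage feature points.
    garment_outer_points: (M, 3) array from the worn garment's outer cage.
    covered: boolean array of length N marking body points hidden by the garment.
    """
    exposed_body = body_cage_points[~covered]
    return np.vstack([exposed_body, garment_outer_points])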
The feature points in the outer cage 600 in fig. 6 that correspond to and define the outer surface/shape of the jacket may be the same feature points of the outer cage of the garment layer 500 of fig. 5. In some embodiments, different feature points and/or additional feature points and/or fewer feature points may be used for the jacket region in the outer cage 600 in fig. 6 as compared to the feature points of the outer cage of the jacket (garment layer 500) of fig. 5.
For example, if the next layer of clothing over the outer cage 600 requires a higher resolution or more accurate fit, additional feature points of the outer cage 600 in the region enclosed by the jacket (as compared to the outer cage of the clothing layer 500 of fig. 5) may be calculated. Similarly, if the next layer of clothing over the outer cage 600 requires a lower resolution or less accurate fit, and/or due to other considerations (e.g., improving processing/bandwidth efficiency by using as few feature points as possible), fewer feature points of the outer cage 600 in the region enclosed by the jacket (as compared to the outer cage of the clothing layer 500) may be calculated.
In operation, if a user provides input to fit an additional clothing layer (e.g., a overcoat or other garment) over the jacket (clothing layer 500) and/or other portion of the avatar body, the feature points of the inner cage of such additional clothing layer are mapped to corresponding feature points of the outer cage 600. Thus, the deformation may be performed in a manner similar to that described with reference to fig. 5. According to some embodiments, radial basis function (radialbasis function, RBF) techniques and/or other similar interpolation techniques may be used to deform a piece of clothing that fits onto an underlying clothing or body part of an avatar.
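One way to realize an RBF-style deformation of this kind is sketched below, assuming SciPy's RBFInterpolator as the solver (the platform's actual implementation may differ): a smooth mapping is learned from the garment's inner-cage points to their target positions on the underlying cage and then applied to every garment-mesh vertex.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def rbf_deform_garment(garment_vertices, inner_cage_points, target_points):
    """Deform all garment-mesh vertices using a radial basis function (RBF)
    mapping learned from the garment's inner-cage points to their target
    positions on the underlying outer cage.

    garment_vertices: (V, 3) garment mesh vertices to deform.
    inner_cage_points: (P, 3) inner-cage feature points of the garment.
    target_points: (P, 3) corresponding points on the underlying cage.
    """
    # Fit a smooth R^3 -> R^3 deformation from the cage correspondences.
    rbf = RBFInterpolator(inner_cage_points, target_points,
                          kernel="thin_plate_spline")
    # Evaluate the learned deformation at every garment vertex.
    return rbf(garment_vertices)
```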
Thus, according to the layered-garment example of fig. 5 and 6, a first layer of clothing (garment layer 500) is wrapped around the body by matching the feature points of the "outer cage" of the avatar body (body cage 400) with the feature points of the "inner cage" of the first layer of clothing. Such matching may be performed in the UV space of the cages (e.g., a UV texture coordinate system) and thus does not have to rely on an exactly matching number of feature points between the inner and outer cages.
For example, the feature points may be vertices having both position and texture space coordinates. Texture space coordinates are typically in the range [0, 1] for each of the U and V coordinates. Texture space may be considered the "unwrapped," normalized coordinate space of the vertices. By establishing the correspondence between the two sets of vertices in UV space rather than using the 3D positions of the vertices, vertex-to-vertex correspondence can be accomplished in normalized space, eliminating the rigid requirement of an exact vertex-to-vertex index map.
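A minimal sketch of matching in UV space (a brute-force nearest-neighbor search is assumed here; a production system might use spatial indexing): each inner-cage vertex is paired with the outer-cage vertex whose (u, v) coordinates are nearest, so no exact vertex-index map is required.

```python
import numpy as np

def uv_correspondence(inner_uv, outer_uv):
    """For each inner-cage vertex, find the nearest outer-cage vertex in
    normalized UV texture space ([0, 1] x [0, 1]).

    inner_uv: (N, 2) UV coordinates of the inner-cage vertices.
    outer_uv: (M, 2) UV coordinates of the outer-cage vertices.
    Returns an array of length N with indices into the outer cage.
    """
    d2 = ((inner_uv[:, None, :] - outer_uv[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```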
According to the techniques described herein, each avatar body and clothing item is thus associated with an "inner cage" and an "outer cage". In the case of an avatar body, the inner cage represents a default "mannequin" (and different mannequins may be provided for different avatar body shapes), and the "outer cage" of the avatar body represents a housing surrounding the avatar body shape. For an item of clothing, an "inner cage" represents an inner envelope that defines how the item of clothing wraps around the underlying body (or around the body to which a previous layer of clothing has been fitted), while an "outer cage" represents the manner in which the next layer of clothing wraps around that particular item of clothing when worn on the avatar's body.
According to various embodiments, the various cages described herein may not be visible during runtime. For example, while an avatar is participating in a virtual experience (including traversing the virtual 3D environment of the virtual experience, having clothing placed on the avatar body, wearing clothing, being animated, and the like), the vertices and line segments of the cages may be invisible to the user and to other users/viewers of the 3D environment. Furthermore, during runtime, the avatar and its deformed clothing appear to have no cages, such that only the visual meshes of the deformed clothing, skin, avatar body parts, etc. are visible to the user. In practice, there may be one or more cages on the avatar used for the purposes described herein (deforming clothing, surrounding avatar body parts and clothing items, changing the avatar body, etc.), but these cages are not visible to the user during runtime. The cages may be made visible to the user in certain situations (e.g., via view/edit cage commands, at a configuration stage, etc.), such that the user can view and manipulate the cages as necessary to change the avatar body as described herein, to create cages, or for other purposes.
FIG. 7-interpolation between two body cages
Fig. 7 illustrates an example of interpolation between two body cages to obtain a new body cage 700, according to some embodiments. More specifically, the example interpolation of FIG. 7 may be performed where the user has a current avatar body and wishes to change/transform the current avatar body into some other (target) avatar body present in the virtual experience, a library, or the like. Interpolation and the corresponding change of the current avatar body may be performed during runtime while the avatar is participating in the virtual experience, or during a session in a studio or other configuration environment in which the user can create and edit graphical objects and the like.
In the example of fig. 7, the user's current avatar body may be a human body with the body cage 200 of fig. 2 for purposes of illustration and explanation only, and the user may wish to transform the current avatar body (or some portion of the current avatar body) into the target avatar body. In the example of fig. 7, the user selected target avatar body may be a geometric avatar body having the body cage 300 of fig. 3.
The new avatar body shown in fig. 7 has a body cage 700. The new avatar body may be a complete transformation from one or more parts of the original avatar body to one or more parts of the target avatar body. FIG. 7 illustrates a complete transformation and partial transformation from one or more parts of an original avatar body to one or more parts of a target avatar body.
As an example of a complete transformation, the shape of the torso 702 of the new avatar body has been transformed to completely match the rectangular shape of the torso of the target avatar (with body cage 300) -the curved/tapered torso of the original avatar body (with body cage 200) is no longer present in the new avatar body and has been completely deformed or otherwise transformed into a rectangular torso 702.
As an example of a partial transformation, the shape of the arm 704 of the new avatar body is a blend between the curved/tapered arms of the original avatar body (with body cage 200) and the rectangular arms of the target avatar body (with body cage 300). For example, the shape of the arm 704 is now more rectangular, resembling the arms of the body cage 300, but still retains some of the curvature and taper of the arms of the body cage 200.
Thus, the full transform may be a body transform in which all body parts are fully transformed, and the partial body transform may be a body transform in which one or more body parts are partially transformed or not transformed.
The body cage 700 of the new avatar body also represents a partial transformation because not every part of the entire avatar body is deformed. For example, only the torso 702 and one arm 704 undergo a transformation, while the shape of other parts of the new avatar body (e.g., head, other arm, leg, etc.) remain unchanged relative to the original avatar body. In various embodiments, different parts of the avatar body may undergo some or all of the transformations, while other parts do not undergo any transformations. In some implementations, the entire avatar body may undergo a partial or full transformation. In some implementations, deforming a portion of the first avatar body that is less than the entirety of the first avatar body includes deforming the portion of the first avatar body to perform a partial transformation of the portion of the first avatar body.
The new avatar body (with body cage 700) may be the same size or different size than the original avatar body (with body cage 200) and/or the target avatar body (with body cage 300). In the example of FIG. 7, the new avatar body (with body cage 700) is scaled down to be smaller in size than the original avatar body and the target avatar body.
To obtain a new avatar body (with body cage 700), in some embodiments, one or more interpolation operations 706 may be performed. For example, linear interpolation or non-linear interpolation may be performed between the body cage 200 and the body cage 300. Interpolation may be performed between the values/coordinates of vertices or line segments corresponding to the two body cages (body cage 200 and body cage 300) to obtain the resulting vertices/line segments of the new body cage 700.
Alternatively, or in addition, at least some of the values/coordinates of the vertices/line segments of the new body cage 700 may be calculated/generated as new values that have not been interpolated from other values. For example, if a new vertex/line segment of a new body cage 700 is to be created in a particular region of the avatar body, and there are no vertices/line segments near the two body cages 200 and 300 that can form the basis of interpolation, this operation may be performed.
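A hedged sketch of the interpolation at 706 follows, assuming the two cages share a per-part 1:1 vertex correspondence and using a per-part blend weight (the function signature is illustrative only):

```python
import numpy as np

def interpolate_cages(first_cage, target_cage, weights):
    """Interpolate between two body cages to produce the vertices of a new
    (second) body cage.

    first_cage, target_cage: dict of body-part name -> (N, 3) vertex arrays
    with 1:1 correspondence per part.
    weights: dict of body-part name -> blend factor in [0, 1]; 0 keeps the
    original part, 1 fully transforms it into the target part (e.g., 1.0 for
    the torso and 0.5 for one arm approximates the mix shown in fig. 7).
    """
    new_cage = {}
    for part, v0 in first_cage.items():
        w = float(weights.get(part, 0.0))
        new_cage[part] = (1.0 - w) * v0 + w * target_cage[part]
    return new_cage
```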
The example of fig. 7 corresponds to an embodiment in which a new avatar body (with a new body cage) is generated based on or relative to two other avatar bodies (with corresponding body cages). In some implementations, the new avatar body may be generated in a more free form manner, not necessarily based on the existing target avatar body as a reference.
FIG. 8-Generation of New body cage
FIG. 8 illustrates an example of generating a new body cage 800 according to some embodiments. For purposes of illustration and explanation only, the user's current avatar body may be a human body with the body cage 200 of fig. 2, and the user may wish to transform the current avatar body (or some portion of the current avatar body) into a target avatar body.
To perform the transformation, the user may manipulate the vertices and/or line segments of the body cage 200 using a transformation tool. For example, as shown at 800, the user may click and drag the vertex or line segment of the arm to a new position. As shown at 802, the user may click and drag the vertex or line segment of the torso to a new location. In addition to clicking and dragging existing vertices/line segments, the user may use transformation tools to delete or add vertices/line segments to the cage, draw/redraw portions of the cage, etc. in generating the new avatar body.
With respect to the cages, there are interrelationships and dependencies between the plurality of cages as described above with respect to fig. 2-6. For example, a body cage encompasses (fully encompasses) an avatar body (including a body mesh), an inner cage of a first article of apparel maps to the body cage, an outer cage encompasses the first article of apparel (including a garment mesh of the first article of apparel), an inner cage of a second article of apparel maps to an outer cage of the first article of apparel, an outer cage encompasses the second article of apparel (including a garment mesh of the second article of apparel), and so forth.
In view of these interrelationships and dependencies, in some embodiments, manipulation or other changes/transformations of at least one cage may cause automatic and corresponding changes/transformations of one or more other cages. For example, if the body cage of the current avatar body changes (as shown in figs. 7 and 8) to change the shape of the avatar body, and the avatar body is wearing clothing, the corresponding cages of one or more of the garments overlaid on the body will also automatically change/update to dynamically deform/adapt those garments to match the changed shape of the avatar body. The changes in the cages may in turn cause appropriate changes in the mesh, skin, and other visual aspects of the avatar body and/or its clothing, respectively.
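One way to express this propagation is sketched below (the ordering of layers and the refit_layer helper are assumptions for illustration): when the body cage changes, each worn garment layer is re-fitted in order, innermost to outermost, so that every dependent cage and mesh follows the new shape.

```python
def propagate_cage_change(new_body_cage, garment_layers, refit_layer):
    """Re-fit each worn garment layer after a body-cage change.

    garment_layers: list of garment objects ordered innermost to outermost.
    refit_layer(garment, underlying_cage): re-fits one garment onto the cage
    beneath it and returns that garment's updated outer cage.
    """
    underlying = new_body_cage
    for garment in garment_layers:
        underlying = refit_layer(garment, underlying)  # next layer maps onto this
    return underlying
```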
In other examples, to change the shape or other appearance of the avatar, the cage of the article of apparel may alternatively or additionally be operated while changing the body cage. Using the example of the current humanoid avatar wearing a round baseball cap and the target alien person with a tapered head described above, the user may operate the transformation tool to reshape the outer cage of the baseball cap from round to conical. This reshaping of the outer cage then correspondingly changes the visual appearance of the baseball cap from circular to conical, and also changes the inner cage of the baseball cap and the underlying body cage of the head of the humanoid avatar body, such that the new avatar body now has a tapered head.
According to various embodiments, deformation or other transformations of the avatar body (including its clothing) may be performed during the runtime of the virtual experience. In such an embodiment, the cage may not necessarily be visible while the user is engaged in the virtual experience. Further, the user may select a target avatar or other target graphical object in the virtual experience, and the client-side or server-side virtual experience engine or other related components may perform appropriate cage operations (as shown above with respect to fig. 6) in a manner transparent to the user (e.g., through a background process). Thus, the user is able to seamlessly view the changing/changing visual appearance of the avatar body during the virtual experience without actually viewing the cage itself being manipulated.
FIG. 9-transformation of avatar body during virtual experience
FIG. 9 illustrates an example of a transformation of an avatar body while the avatar participates in a virtual experience 900, as explained above, according to some embodiments. At 902, in the virtual experience 900, a user's current avatar 904 is a person whose individual body parts (e.g., head, arms, legs, torso, etc.) have a generally geometric/rectangular shape. The user has selected another avatar in the virtual experience 900 (e.g., by clicking with a mouse cursor or by some other input tool) as the target avatar 906.
If the user selects the target avatar 906, a transformation of the current/original avatar 904 to a new (changed/in-change) avatar 910 occurs at 908. At 908, and as compared to the original avatar 904, the new avatar 910 has a larger left arm, tapered waist, peaked shoulder, etc., similar to the corresponding body part of the target avatar 906. At 912, further morphing is performed such that the new avatar 910 is further modified to more closely match the target avatar 906 with a drooping head.
FIG. 10-transformation of avatar body during virtual experience
FIG. 10 illustrates another example of a transformation of an avatar body while the avatar participates in a virtual experience 1000, according to some embodiments. In particular, FIG. 10 illustrates that a user may select multiple avatars as targets for deforming the current avatar body. At 1002, in the virtual experience 1000, the user's current avatar 1004 has a slim/slender alien body. The first target avatar 1006 is a humanoid avatar with a robust torso shape. The second target avatar 1008 is a monster with a drooping, horned monster head.
At 1010, the user has selected the first target avatar 1006 and, thus, the original avatar 1004 has been transformed into a new avatar 1012, the new avatar 1012 having a robust torso like the first target avatar 1006. At 1014, the user has selected the second target avatar 1008 and thus the new avatar 1012 continues to transform/morph, having a drooping monster head, and being shorter, like the second target avatar 1008.
FIG. 11-transformation of avatar body during virtual experience
FIG. 11 illustrates another example of a transformation of an avatar's body as the avatar participates in a virtual experience 1100, according to some embodiments. As previously described, a portion of the avatar body may be changed (rather than the entire avatar body), and then the apparel, skin, accessories, etc. associated with the changed portion of the avatar body may be changed accordingly.
At 1102, in the virtual experience 1100, a user's current avatar 1104 has a human-shaped head with a nose, hair, lips, and lipstick, eyes, eyelashes, and the like. At 1106, the user has selected the head of another avatar in the virtual experience 1100, and thus the head of the current avatar 1104 begins to change from a human head to a different head shape (e.g., animal-like) of the new avatar 1108. Such deformation also affects the appearance (e.g., shape and size) of the nose, lips, eyes, etc.
At 1110, the head of the new avatar 1108 continues to be further deformed into an animal head. Thus, the head, hair, nose, lips, eyes, etc. in the new avatar 1108 have a more pronounced animal-like appearance.
FIG. 12-transformation of avatars in a configuration environment
FIG. 12 illustrates an example of a transformation of an avatar in a configuration environment 1200, according to some embodiments. Configuration environment 1200 may be a studio or other type of environment in which a user may configure an avatar outside of the runtime environment of the virtual experience.
Thus, configuration environment 1200 may be an auxiliary feature that is related to the virtual experience, but not within the virtual experience itself. Alternatively, or in addition, the configuration environment may be decoupled from any particular virtual experience, but the output of the configuration environment (including avatars and other graphical objects) may be used and applied to the virtual experience.
In the configuration environment 1200, there are multiple avatars available, a user's avatar 1202 (bazooka), a first target avatar 1204 (Model 9), and a second target avatar 1206 (roxie). In this example, the avatar body of the user's avatar 1202 resembles a short, strong alien in shape and a skeleton in form.
The user has selected the avatar 1206 as the target avatar. Thus, the user's avatar 1202 is transformed into a new avatar 1208, and the new avatar 1208 is rendered in the configuration environment 1200. The new avatar 1208 retains some of the skeleton-like characteristics of the original avatar 1202, but is now more humanoid in shape and taller, corresponding to the target avatar 1206.
According to various embodiments, the configuration environment 1200 may be provided with an adjustment tool and/or other types of transformation tools 1210. For example, the transformation tool 1210 is labeled "morphed body insert" and provides the instructions: "Select avatar A and then select B to morph A to B. This value is the extent to which A will change to B; when this value is 0, there is no change. When this value is 1, A will look like B."
In the example of FIG. 12, the transformation tool 1210 includes a slider bar or other similar control ("Morph Value") to control the amount of deformation (e.g., the morph value) between the current avatar and the target avatar. There may also be a button labeled "Current Deform Head" and a button labeled "Apply Morph and Value".
Thus, the new avatar may have a minimum deformation value of 0 (e.g., the current avatar does not change), a maximum deformation value of 1 (e.g., the new avatar is a complete transformation into the target avatar), or any other deformation value between the minimum and maximum (e.g., the new avatar is a blend between the original avatar and the target avatar, as shown in fig. 12).
Although not shown in fig. 12, some embodiments of the configuration environment 1200 enable a user to manipulate a cage (e.g., the body cage 200 of fig. 2) for the purpose of changing an avatar body. Various transformation tools may be provided in the configuration environment 1200 to enable a user to click and drag, draw, delete, modify, etc., the vertices of a cage, and so on.
FIG. 13-layered garment of avatar body in virtual experience
FIG. 13 illustrates an example of a layered garment of avatar bodies of an avatar participating in a virtual experience 1300, according to some embodiments. At 1302, the user's avatar 1304 is a new avatar transformed from a previous avatar (e.g., by the techniques described herein) and some clothing is worn on a body cage (not shown). The inner cage (not shown) of each outer garment of the avatar 1304 has been deformed in order to conform/adapt each outer garment to the changed/new avatar 1304.
In the virtual experience 1300, the avatar 1304 runs to one of a series of other clothing items (e.g., the coat 1306). When the avatar wears the coat 1306 at 1308, the coat 1306 deforms to fit and conform to the underlying clothing layers. The engine and/or some other component running the virtual experience 1300 may seamlessly perform the deformation (e.g., cage mapping, cage morphing, etc.) such that the fitting is performed seamlessly from the user's perspective (e.g., the user simply clicks on the coat 1306, and the coat 1306 automatically adapts to the avatar 1304).
In the foregoing embodiments and examples, the original avatar body is changed to another (new) avatar body as one or more parts of the original avatar body are deformed in the manner shown and described above. The skin/mesh and any clothing layers worn by the original avatar body also deform to fit the new avatar body in a conforming manner.
After changing/updating the geometry (e.g., avatar body, skin, clothing layers, etc.), the user may animate the avatar, e.g., by running, smiling, blinking, waving arms, etc. Examples are provided next in fig. 14 to 17.
Transformation and animation of the avatar of FIGS. 14-17
Fig. 14-17 illustrate examples of transformations and animations of an avatar 1400 according to some embodiments. Each of fig. 14-17 shows the head of the avatar 1400, and three different versions of the avatar 1400.
In fig. 14, the avatar 1400 and the other three avatars have not been deformed yet, and thus the other three avatars are identical to the avatar 1400. Avatar 1400 in fig. 14 is expressionless (e.g., neither smiling nor frowning) and open with both eyes, the other three avatars also having the same appearance.
In fig. 15, avatar 1400 is shown in its original form (undeformed form) for reference, with the other three avatars beginning to deform to different head shapes. Avatar 1400 is also animated to be smiling (lips separated and teeth visible). The corresponding smile can also be seen in the animation of the other three avatars.
In fig. 16, avatar 1400 is shown in its original (undeformed) form for reference, with the other three avatars continuing to deform to different head shapes and having smile animations corresponding to those of avatar 1400. In fig. 17, avatar 1400 is animated to partially blink and to smile with its lips, and the other three avatars also perform an animation showing the same expression.
According to various embodiments, the avatar may be provided with a skin such that the avatar's mesh is bound to the joints and bones of the avatar's skeleton. Thus, as shown in figs. 15-17 described above, movement of the avatar's joints/bones causes corresponding skin deformation during animation. In some implementations, the skeleton of the avatar may be an inferred skeleton of virtual joints and virtual bones.
According to various embodiments, when the geometry of the avatar body changes as described in the examples above (e.g., by changing the geometry/shape of arms, legs, torso, etc. for a new avatar), the skeleton is also updated for the new avatar. As shown in fig. 15 to 17, updating the skeleton (including updating its joints and bones, etc.) and updating the skin ensures that the animation effect of the new avatar is accurate.
In some implementations, such updating may be performed by interpolating the attachment locations (e.g., joints) for the new avatar body. Such interpolated attachment positions may be obtained by interpolating between attachment positions of the skeleton of the original avatar body and attachment positions of the skeleton of the target avatar body.
In some implementations, interpolation may be performed between vertices of the body cage of the original avatar body and vertices of the body cage of the target avatar body to obtain attachment points of the skeleton of the new avatar body. Thus, when the original avatar body is deformed by manipulating the body cage of the original avatar body, similar deformation may be performed on the skeleton of the original avatar body so as to reposition the joints and bones of the skeleton.
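A sketch of interpolating the attachment positions is shown below (the dictionary layout and handling of missing joints are illustrative assumptions); the blend factor would typically match the one used for the body-cage interpolation.

```python
import numpy as np

def interpolate_joint_positions(original_joints, target_joints, weight):
    """Interpolate skeleton attachment positions (joints) between the
    original and target avatar bodies.

    original_joints, target_joints: dict of joint name -> (3,) position.
    weight: blend factor in [0, 1]. Joints with no counterpart in the
    target skeleton keep their original position.
    """
    new_joints = {}
    for name, p0 in original_joints.items():
        p0 = np.asarray(p0, dtype=float)
        p1 = np.asarray(target_joints.get(name, p0), dtype=float)
        new_joints[name] = (1.0 - weight) * p0 + weight * p1
    return new_joints
```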
As an example, the body cage is associated with the mesh of the avatar body by surrounding the mesh of the avatar body. Thus, the body cage approximates the shape, size, contour, etc. of the body parts represented by the mesh. In view of the binding (rig or rigging), the mesh of the avatar is bound to the joints and bones of the avatar's skeleton; because deformation of the body cage causes deformation of the corresponding mesh, the same or a similar deformation will also apply to the skeleton (due to the mesh being bound to the skeleton).
Interpolation is performed between the vertices of a pair of cages having a direct 1:1 vertex correspondence, with the result of the interpolation providing the vertices of the body cage of the new avatar body. Furthermore, there may be vertices on the body cage of the original avatar body that have no correspondence to any vertex on the body cage of the target avatar body, and vice versa. In this case, the nearest vertices may be identified for interpolation using UV correspondence techniques or other types of techniques that map between graphical objects. Alignment of the coordinate systems of the pair of cages may also be performed to further improve the accuracy of the interpolation.
FIG. 18-changing three-dimensional avatar body
FIG. 18 is a flow chart illustrating a computer-implemented method 1800 of changing a three-dimensional (3D) avatar body, according to some embodiments. For simplicity, the various operations in method 1800 are described in the context of a virtual experience (VE) application of a client device performing the operations.
Further, as described below with reference to fig. 22, some operations of the method 1800 and/or any other method described herein may be performed, in whole or in part, by a VE engine located at a VE platform of a server, alternatively or additionally. The example method 1800 may include one or more operations illustrated by one or more blocks (e.g., blocks 1802-1806). The various blocks of method 1800 and/or any other process described herein may be combined into fewer blocks, divided into additional blocks, supplemented with other blocks, and/or eliminated, depending on the implementation.
The method 1800 of fig. 18 is described herein with reference to the elements shown in fig. 2-17 and other figures. In some implementations, the operations of method 1800 may be performed in a pipelined sequential manner. In other implementations, some operations may be performed out of order, in parallel, etc.
At block 1802, a first avatar body to be changed is identified. For example, the first avatar body may be the current avatar body of the user when the user is participating in a virtual experience or when the user is in a configuration environment such as a studio. The first avatar body has a corresponding first body cage. Block 1802 may be followed by block 1804.
At block 1804, a target avatar body is identified. For example, the user may identify the target avatar body in a virtual experience or in a configuration environment. The target avatar body is a body that the user may wish the user's current avatar body to become. The target avatar body has a corresponding target body cage. Block 1804 may be followed by block 1806.
At block 1806, a transformation of the first avatar body is performed. For example, interpolation may be performed between the first body cage and the target body cage to generate a second body cage corresponding to the second avatar body. Thus, the second avatar body may be a mixture between the first avatar body and the target avatar body, or completely transformed into the target avatar body. Additional details of how interpolation is performed are discussed with reference to fig. 20.
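Pulling blocks 1802-1806 together, a hedged sketch might look like the following (the avatar objects, the body_cage attribute, and the interpolate_cages helper from the earlier sketch are assumptions for illustration):

```python
def change_avatar_body(first_avatar, target_avatar, weights, interpolate_cages):
    """Sketch of method 1800: identify the first and target body cages and
    interpolate between them to obtain the second body cage."""
    first_cage = first_avatar.body_cage      # block 1802: first avatar body
    target_cage = target_avatar.body_cage    # block 1804: target avatar body
    second_cage = interpolate_cages(first_cage, target_cage, weights)  # block 1806
    return second_cage
```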
FIG. 19-changing three-dimensional avatar body
FIG. 19 is a flow chart illustrating another computer-implemented method 1900 for changing a three-dimensional (3D) avatar body, according to some embodiments. For example, method 1900 may be used for free form transformation of an avatar body without having to use the target avatar body as a reference.
At block 1902, a first avatar body to be changed is identified. For example, the first avatar body may be the user's current avatar body when the user is in a configuration environment such as a studio. In some implementations, the user's avatar may exist in the running virtual experience, and thus the avatar may be changed by pausing the virtual experience, by exiting the virtual experience to enter the configuration environment, or by changing the avatar in the running virtual experience itself. The first avatar body has a corresponding first body cage. Block 1902 may be followed by block 1904.
At block 1904, a transformation of the first avatar body is performed. For example, the transformation may be performed by manipulating the first body cage (e.g., repositioning vertices/line segments of the first body cage) to generate a second body cage corresponding to the second avatar body. The second body cage may be used to provide a transformation of the first avatar body to the second avatar body. The operations may include repositioning portions of the first body cage. Additional details of how the transformation is performed are discussed with reference to fig. 20.
Figure 20-performing skin deformation
FIG. 20 is a flowchart illustrating a computer-implemented method 2000 for performing skin deformation, according to some embodiments. For example, the method 2000 may be used to update the skinning of an avatar body after the shape of the avatar body has been changed. The method 2000 may begin at block 2002.
At block 2002, affine corrections for the skeleton are calculated. For each bone, linear algebra is used to calculate a 3x4 affine transformation that best fits the deformation of the point cloud affected by that bone. This may be the term D_j; D_j may change the skin calculation to better adapt to the change in mesh shape. Block 2002 may be followed by block 2004.
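One way to compute a best-fit 3x4 affine transformation for a bone's affected point cloud is an ordinary least-squares fit, sketched below (the platform's actual solver may differ):

```python
import numpy as np

def fit_affine_3x4(original_points, deformed_points):
    """Least-squares fit of a 3x4 affine transform D such that
    D @ [p, 1] approximates the deformed position of each original point p.

    original_points, deformed_points: (N, 3) arrays of a bone's affected
    point cloud before and after the body-shape change (N >= 4).
    """
    n = original_points.shape[0]
    homogeneous = np.hstack([original_points, np.ones((n, 1))])   # (N, 4)
    # Solve homogeneous @ D.T ~= deformed_points in the least-squares sense.
    d_transposed, *_ = np.linalg.lstsq(homogeneous, deformed_points, rcond=None)
    return d_transposed.T                                         # (3, 4)
```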
At block 2004, the affine corrections are applied to the bone transformations. This correction is applied to each bone transformation in the skin calculation. Block 2004 may be followed by block 2006.
At block 2006, the correction is bubbled up through the skeleton (e.g., propagated through the bone hierarchy). Block 2006 may be followed by block 2008.
At block 2008, the correction is merged into the local binding transformation. Bubbling and merging fold the correction into a new local binding transformation LB'_j that generates the same result: LB'_j = B_j^(-1) · D_j · B_j' · LB_j. This is an algebraic operation that uses the original LBS formula, removes the transform D_j, and replaces it with a new binding transform LB'_j that produces the same result. Block 2008 may be followed by block 2010.
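This merge step can be written directly from the formula above, assuming all transforms are expressed as 4x4 homogeneous matrices (the parameter names are illustrative):

```python
import numpy as np

def merge_correction(bind, correction, bind_prime, local_bind):
    """Fold an affine correction into a new local binding transform,
    following LB'_j = B_j^(-1) · D_j · B_j' · LB_j, so the original LBS
    formula can be reused unchanged. Inputs are 4x4 homogeneous matrices."""
    return np.linalg.inv(bind) @ correction @ bind_prime @ local_bind
```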
At block 2010, a new skinned mesh is created and/or published. Creation means that the skinned mesh exists as a mesh for local user viewing. Publishing means pushing the skinned mesh to the Roblox CDN so that other users can also see or purchase it. Using a new skinned mesh in this manner requires security audits and other procedures. Block 2010 may be followed by block 2012.
At block 2012, the translation and rotation are updated. FACS binding involves translation and rotation of each joint in each pose of the binding. After the shape changes, these values need to be updated so that the pose can continue to be used with the new shape. Such updating may be performed using method 2100, as described with respect to fig. 21.
Figure 21-FACS pose deformation
Fig. 21 is a flowchart illustrating a computer-implemented method 2100 for performing Facial Action Coding System (FACS) pose deformation, according to some embodiments. Method 2100 may begin at block 2102.
With respect to FACS pose deformation, a FACS rig (binding) contains many poses, and each pose includes a set of local transforms P_j for each joint in the head rig. The skin deformation technique updates the joints' binding transformations LB'_j, which changes the parent space of these local pose transforms. The affine correction transformation D_j changes the translation of the resulting transformation M_j, including the direction and magnitude of the translation, but D_j does not change the rotation of the resulting transformation M_j. To further improve the results, the rotation r_j,k of each joint in each pose is also updated. In this process, the translation t_j,k of each joint in each pose may also be further fine-tuned. Such a process may be performed one pose at a time.
At block 2102, a deformed head mesh pose is calculated from the original head mesh. Specifically, an original LBS-deformed mesh pose is calculated from the original head shape. A pose is defined as a collection of joint transformations. These joint transformations are used with techniques such as linear blend skinning (LBS) to construct the mesh shape in that pose. Block 2102 may be followed by block 2104.
At block 2104, the skin-weighted points are projected onto the nearest points to generate new points. For each joint in the pose, the skin-weighted points of the joint are projected to the closest points on the pose mesh, providing a new set of points. These points represent the locations on the original mesh closest to the joint's destination in the pose. For example, when a smiling pose is considered, the corners of the mouth may slide toward the cheek regions. It is helpful to understand how the cheeks are reshaped so that it can be determined whether and how to adjust the smiling pose accordingly. Block 2104 may be followed by block 2106.
At block 2106, affine corrections are applied to the new points to generate transformations. How the points are transformed between the original shape and the modified shape may be calculated using the same 3x4 affine adaptation function (used at block 2004). Block 2106 may be followed by block 2108.
At block 2108, the transformed rigid components are extracted. The transformed rigid component may then be extracted. Some systems, including some rigid body simulation systems, are not able to efficiently handle scaling and shearing, and therefore these components must be removed before corrections can be provided to these systems. Block 2108 may be followed by block 2110.
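One common way to extract the rigid (rotation-only) component from a 3x3 linear transform is a polar decomposition via SVD; the sketch below assumes that approach, which may differ from the platform's actual method:

```python
import numpy as np

def extract_rigid_rotation(linear_part):
    """Extract the closest pure rotation from a 3x3 linear transform,
    discarding scale and shear (polar decomposition via SVD)."""
    u, _, vt = np.linalg.svd(linear_part)
    rotation = u @ vt
    if np.linalg.det(rotation) < 0:      # avoid a reflection
        u[:, -1] *= -1
        rotation = u @ vt
    return rotation
```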
At block 2110, translation and/or rotation is fine tuned based on the extracted rigid components. Block 2110 may be followed by block 2112.
At block 2112, the Euler angles are calculated by decomposing the rotation matrix and inserting the correction. To minimize the change to the Euler interpolation, a minimal change is made to the Euler angles so that they match the corrected rotation. These angles are calculated by decomposing the Euler rotation matrix into separate x, y, and z rotation matrices (where capital letters denote transformation matrices and lowercase letters denote scalar angles), for example R = X(x) · Y(y) · Z(z). The correction rotation (index 2) is incorporated into the rotation calculation, for example as X(x_1) · X(x_2) · Y(y_1) · Y(y_2) · Z(z_1) · Z(z_2), so that the Euler angles from the correction matrix are added directly to the original rotation and produce the same result: x = x_1 + x_2, y = y_1 + y_2, and z = z_1 + z_2.
FIG. 22 - Example computing device
Fig. 22 is a block diagram illustrating an example computing device 2200 that may be used to implement one or more features described herein, in accordance with some implementations. In one example, computing device 2200 may be used to implement a computer device (e.g., 102 and/or 110 of Fig. 1) and perform the appropriate method implementations described herein. Computing device 2200 may be any suitable computer system, server, or other electronic or hardware device. For example, computing device 2200 may be a mainframe computer, desktop computer, workstation, portable computer, or electronic device (portable device, mobile device, cellular telephone, smart phone, tablet computer, television set-top box, personal digital assistant (PDA), media player, gaming device, wearable device, etc.). In some implementations, the computing device 2200 includes a processor 2202, memory 2204, an input/output (I/O) interface 2206, and an audio/video input/output device 2214.
The processor 2202 may be one or more processors and/or processing circuits to execute program code and control the basic operations of the computing device 2200. A "processor" includes any suitable hardware and/or software system, mechanism, or component that processes data, signals, or other information. A processor may include a system with a general-purpose central processing unit (CPU), multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a particular geographic location, nor have time constraints. For example, a processor may perform its functions in "real time," "offline," in "batch mode," etc. Portions of processing may be performed by different (or the same) processing systems at different times and at different locations. A computer may be any processor in communication with a memory.
Memory 2204 is typically provided in computing device 2200 for access by the processor 2202, and may be any suitable processor-readable storage medium, such as random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, etc., suitable for storing instructions for execution by the processor 2202 and located separate from processor 2202 and/or integrated with it. The memory 2204 may store software operated by the processor 2202 on the computing device 2200, including an operating system 2208, a virtual experience application 2210, a 3D avatar modification application 2212, and other applications (not shown). In some implementations, the virtual experience application 2210 and/or the 3D avatar modification application 2212 may include instructions that enable the processor 2202 to perform (or control) the functions described herein, e.g., some or all of the methods described with respect to Figs. 18-21.
For example, the virtual experience application 2210 may include a 3D avatar modification application 2212; as described herein, the 3D avatar modification application 2212 may dynamically change the 3D avatar within the online virtual experience server (e.g., 102). The software elements in memory 2204 may alternatively be stored on any other suitable storage location or computer-readable medium. Further, memory 2204 (and/or other connected storage devices) may store instructions and data used in the features described herein. Memory 2204 and any other type of storage (magnetic disk, optical disk, magnetic tape, or other tangible medium) may be considered a "storage area" or "storage device."
The I/O interface 2206 may provide functionality that enables the computing device 2200 to interface with other systems and devices. For example, network communication devices, storage devices (e.g., memory and/or data storage 120), and input/output devices may communicate over I/O interface 2206. In some implementations, the I/O interface may be connected to interface devices including input devices (keyboard, pointing device, touch screen, microphone, camera, scanner, etc.) and/or output devices (display device, speaker device, printer, motor, etc.).
The audio/video input/output devices 2214 can include user input devices (e.g., mice, etc.) that can be utilized to receive user input, display devices (e.g., screens, monitors, etc.) that can be utilized to provide graphical and/or visual output, and/or combined input and display devices.
For ease of illustration, FIG. 22 shows one block each for the processor 2202, the memory 2204, the I/O interface 2206, and for each of the software blocks: operating system 2208, virtual experience application 2210, and 3D avatar modification application 2212. These blocks may represent one or more processors or processing circuits, operating systems, memories, I/O interfaces, applications, and/or software engines. In other implementations, computing device 2200 may not have all of the components shown, and/or may have other elements, including other types of elements, instead of or in addition to those shown herein. Although online virtual experience server 102 is described as performing the operations described in some embodiments herein, any suitable component or combination of components of online virtual experience server 102 or a similar system, or any suitable processor or processors associated with such a system, may perform the operations described.
A user device may also implement and/or be used with the features described herein. An example user device may be a computer device that includes some components similar to computing device 2200 (e.g., a processor 2202, memory 2204, and I/O interface 2206). An operating system, software, and applications suitable for the client device may be provided in memory and used by the processor. The I/O interface for the client device may be connected to network communication devices, as well as to input and output devices, e.g., a microphone for capturing sound, a camera for capturing images or video, a mouse for capturing user input, a gesture device for recognizing user gestures, a touch screen for detecting user input, an audio speaker device for outputting sound, a display device for outputting images or video, or other output devices. For example, a display device within audio/video input/output device 2214 may be connected to (or included in) computing device 2200 to display pre-processed and post-processed images as described herein, where such a display device may include any suitable display device, e.g., an LCD, LED, or plasma display screen, CRT, television, monitor, touch screen, 3D display screen, projector, or other visual display device. Some implementations may provide an audio output device, e.g., voice output or synthesized speech that reads out text.
One or more of the methods described herein (e.g., methods 1800, 1900, 2000, and 2100) may be implemented by computer program instructions or code that can be executed on a computer. For example, the code may be implemented by one or more digital processors (e.g., microprocessors or other processing circuits), and may be stored on a computer program product including a non-transitory computer-readable medium (e.g., a storage medium), such as a magnetic, optical, electromagnetic, or semiconductor storage medium, including semiconductor or solid state memory, magnetic tape, a removable computer diskette, random access memory (RAM), read-only memory (ROM), flash memory, a rigid magnetic disk, an optical disk, a solid-state storage drive, and the like. The program instructions may also be contained in, and provided as, an electronic signal, for example in the form of software as a service (SaaS) delivered from a server (e.g., a distributed system and/or cloud computing system). Alternatively, one or more of the methods may be implemented in hardware (logic gates, etc.) or in a combination of hardware and software. Example hardware may be a programmable processor (e.g., a field-programmable gate array (FPGA) or complex programmable logic device), a general-purpose processor, a graphics processor, an application-specific integrated circuit (ASIC), and the like. One or more methods may be performed as part or component of an application running on a system, or as an application or software running in conjunction with other applications and an operating system.
One or more of the methods described herein may be run in a stand-alone program capable of running on any type of computing device, a program running on a web browser, a mobile application ("app") executing on a mobile computing device (e.g., a cell phone, a smartphone, a tablet, a wearable device (watch, arm band, jewelry, headwear, goggles, glasses, etc.), a notebook, etc.). In one example, a client/server architecture may be used, for example, a mobile computing device (as a client device) to send user input data to a server device and receive final output data from the server for output (e.g., for display). In another example, all of the computations are performed within a mobile application (and/or other application) on the mobile computing device. In another example, the computation may be split between the mobile computing device and one or more server devices.
Although the description has been described with respect to specific embodiments thereof, these specific embodiments are presented for purposes of illustration only and not limitation. The concepts illustrated in the examples may be applied to other examples and implementations.
The functional blocks, operations, features, methods, devices, and systems described in this disclosure may be integrated or partitioned into different combinations of systems, devices, and functional blocks as known to those of skill in the art. The routines of the particular embodiments may be implemented using any suitable programming language and programming technique. Different programming techniques may be employed, such as, for example, procedural or object oriented. The routines may execute on a single processing device or multiple processors. Although steps, operations, or computations may be presented in a specific order, the order may be changed in different specific implementations. In some embodiments, multiple steps or operations shown as sequential in this specification can be performed at the same time.

Claims (20)

1. A computer-implemented method for modifying a three-dimensional (3D) avatar body, the computer-implemented method comprising:
identifying a first avatar body having a first body cage;
identifying a target avatar body having a target body cage; and
performing interpolation between the first body cage and the target body cage to obtain a second body cage corresponding to a second avatar body, thereby providing a transformation of the first avatar body to the second avatar body.
2. The computer-implemented method of claim 1, wherein performing the interpolation comprises performing the interpolation to generate the second body cage that exactly matches the target body cage, thereby providing a complete transformation.
3. The computer-implemented method of claim 1, wherein performing the interpolation includes transforming the first avatar body into the second avatar body, the second avatar body being a mixture between the first avatar body and the target avatar body, thereby providing a partial transformation.
4. The computer-implemented method of claim 1, wherein performing the interpolation includes deforming a portion of the first avatar body that is less than an entirety of the first avatar body.
5. The computer-implemented method of claim 4, wherein deforming the portion of the first avatar body that is less than the entirety of the first avatar body comprises deforming the portion of the first avatar body to perform a partial transformation of the portion of the first avatar body.
6. The computer-implemented method of claim 1, wherein the first avatar body is part of a virtual experience, the interpolation is performed while the avatar is engaged in the virtual experience, and the target avatar body is selected from a plurality of target avatar bodies in the virtual experience.
7. The computer-implemented method of claim 1, wherein the interpolation is performed in a configuration environment and the target avatar body is selected from a plurality of target avatar bodies in a library in the configuration environment.
8. The computer-implemented method of claim 7, wherein the configuration environment includes a transformation tool that enables a user to control the transformed amount of the first avatar body to obtain the second avatar body, and wherein the interpolation is performed based on the transformed amount.
9. The computer-implemented method of claim 1, further comprising:
identifying a binding of the first avatar body, the binding including identifying a skeleton of the first avatar body and a skin of the first avatar body;
after performing the interpolation, updating the binding of the first avatar body to correspond to the second body cage; and
animating the first avatar body by moving the updated bound skeleton and deforming the updated bound skin.
10. The computer-implemented method of claim 9, wherein moving the updated bound skeleton and deforming the updated bound skin comprises reusing skin weights from the skin of the first avatar body based on determining areas of the updated bound skin affected by bones in the skeleton of the first avatar body.
11. A computer-implemented method for modifying a three-dimensional (3D) avatar body, the computer-implemented method comprising:
identifying a first avatar body having a corresponding first body cage; and
performing an operation on the first body cage to generate a second body cage corresponding to a second avatar body to provide a transformation of the first avatar body to the second avatar body, wherein the operation includes repositioning portions of the first body cage.
12. The computer-implemented method of claim 11, wherein the operations are performed in a configuration environment, and wherein the configuration environment includes a transformation tool that enables a user to control aspects of the operations of the first body cage to obtain the second body cage, and wherein the operations are performed based on the aspects of the operations.
13. The computer-implemented method of claim 11, further comprising:
identifying a binding of the first avatar body, the binding including identifying a skeleton of the first avatar body and a skin of the first avatar body;
after performing the operation, updating the binding of the first avatar body to correspond to the second body cage; and
animating the first avatar body by moving the updated bound skeleton and deforming the updated bound skin.
14. The computer-implemented method of claim 13, wherein moving the updated bound skeleton and deforming the updated bound skin comprises reusing skin weights from the skin of the first avatar body based on determining areas of the updated bound skin affected by bones in the skeleton of the first avatar body.
15. The computer-implemented method of claim 13, wherein transforming the first avatar body into the second avatar body comprises performing interpolation between the first body cage and the second body cage.
16. The computer-implemented method of claim 13, wherein transforming the first avatar body into the second avatar body comprises deforming a portion of the first avatar body that is less than an entirety of the first avatar body.
17. A system, comprising:
a memory having instructions stored thereon; and
a processing device coupled to the memory, the processing device being configured to access the memory and execute the instructions, wherein the instructions cause the processing device to perform operations comprising:
identifying a first avatar body having a first body cage;
identifying a target avatar body having a target body cage; and
performing interpolation between the first body cage and the target body cage to obtain a second body cage corresponding to a second avatar body, thereby providing a transformation of the first avatar body to the second avatar body.
18. The system of claim 17, wherein performing the interpolation includes performing the interpolation to generate the second body cage that exactly matches the target body cage, thereby providing a complete transformation.
19. The system of claim 17, wherein performing the interpolation includes transforming the first avatar body into the second avatar body, the second avatar body being a mixture between the first avatar body and the target avatar body, thereby providing a partial transformation.
20. The system of claim 17, wherein performing the interpolation includes deforming a portion of the first avatar body that is less than an entirety of the first avatar body.
CN202480004666.6A 2023-08-14 2024-08-13 Dynamically changing avatar bodies in virtual experiences Pending CN120188198A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US202363532556P 2023-08-14 2023-08-14
US63/532,556 2023-08-14
PCT/US2024/042103 WO2025038632A1 (en) 2023-08-14 2024-08-13 Dynamically changing avatar bodies in a virtual experience

Publications (1)

Publication Number Publication Date
CN120188198A (en) 2025-06-20

Family

ID=92763045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202480004666.6A Pending CN120188198A (en) 2023-08-14 2024-08-13 Dynamically changing avatar bodies in virtual experiences

Country Status (5)

Country Link
EP (1) EP4584754A1 (en)
JP (1) JP2026501495A (en)
KR (1) KR20250078549A (en)
CN (1) CN120188198A (en)
WO (1) WO2025038632A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120560518A (en) * 2025-07-30 2025-08-29 杭州秋果计划科技有限公司 Digital human dynamic interaction method, device and equipment based on scene perception

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11615601B2 (en) * 2021-03-15 2023-03-28 Roblox Corporation Layered clothing that conforms to an underlying body and/or clothing layer

Also Published As

Publication number Publication date
KR20250078549A (en) 2025-06-02
WO2025038632A1 (en) 2025-02-20
EP4584754A1 (en) 2025-07-16
JP2026501495A (en) 2026-01-16

Similar Documents

Publication Publication Date Title
KR102374307B1 (en) Modification of animated characters
US11645805B2 (en) Animated faces using texture manipulation
CN120188198A (en) Dynamically changing avatar bodies in virtual experiences
US20250148720A1 (en) Generation of three-dimensional meshes of virtual characters
US20250157152A1 (en) Automatic generation of avatar body models
US20250054257A1 (en) Automatic fitting and tailoring for stylized avatars
US12505635B2 (en) Determination and display of inverse kinematic poses of virtual characters in a virtual environment
US20250061673A1 (en) Normal-regularized conformal deformation for stylized three dimensional (3d) modeling
US20250299445A1 (en) Mesh retopology for improved animation of three-dimensional avatar heads
US20240378836A1 (en) Creation of variants of an animated avatar model using low-resolution cages
CN121336239A (en) Automatic skin migration and rigid automatic skin
WO2025226594A1 (en) Projecting radiance fields to mesh surfaces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination